# Maybe there’s no such thing as a random sequence
Peter G. Doyle, Dartmouth College
(Version 1.0, dated 17 March 2010)
No Copyright: The authors hereby waive all copyright and related or
neighboring rights to this work, and dedicate it to the public domain. This
applies worldwide.
###### Abstract
An infinite binary sequence is deemed to be random if it has all definable
properties that hold almost surely for the usual probability measure on the
set of infinite binary sequences. There are only countably many such
properties, so it would seem that the set of random sequences should have full
measure. But in fact there might be no random sequences, because for all we
know, there might be no undefinable sets.
_For Laurie and Jim_
## 1 What is a random sequence?
About 30 years ago now, my friend and mentor J. Laurie Snell got interested in
the question of what constitutes an infinite random sequence of $0$s and
$1$s. The model here is an infinite sequence of independent flips of a fair
coin, with heads counting as $1$ and tails as $0$. Or more technically, the
standard product measure on $\prod_{i=1}^{\infty}\{0,1\}$, which up to a
little fussing is the same as the standard Borel measure on the unit interval
$[0,1]$.
Some sequences are obviously not random:
* $0000000000\ldots$;
* $1111111111\ldots$;
* $0101010101\ldots$;
* $1101001000100001\ldots$.
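The last of these looks irregular but is generated by a trivial rule: a $1$ followed by $k$ zeros, for $k=0,1,2,\ldots$. A minimal Python sketch (the function name is mine, purely for illustration) makes the point that a few lines of code suffice to print it, so it falls under the computable sequences discussed next:

```python
def ones_with_growing_gaps(n_blocks):
    """Generate 1101001000100001...: for k = 0, 1, 2, ..., emit a 1
    followed by k zeros. A short program produces it, so it is
    computable and hence presumably non-random."""
    return "".join("1" + "0" * k for k in range(n_blocks))

assert ones_with_growing_gaps(6).startswith("1101001000100001")
```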
Other presumably non-random sequences are the binary expansion of $1/\pi$, or
$1/e$, or any sequence that can be printed out by a computer program. As there
are only a countable number of computer programs, this gives us only a
countable set; the complement is still an uncountable set of full measure.
Throwing out these computable sequences doesn’t come near to doing the job.
Many more sequences need to be weeded out. For example, a random sequence
should exhibit the strong law of large numbers: Asymptotically it should have
half $0$s and half $1$s. There are uncountably many sequences that fail this
test, e.g. all those of the form $00a00b00c00d00e\ldots$ (On Beyond Zebra!).
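That every such sequence fails the strong law is easy to check numerically; here is a small Python sketch (the function name is mine) building a long prefix with arbitrary bits in the free slots and confirming the frequency of $0$s is at least $2/3$:

```python
import random

def interleave_with_zeros(free_bits):
    """Build the prefix 00a00b00c...: two forced 0s before each free bit."""
    return "".join("00" + str(b) for b in free_bits)

random.seed(0)
free_bits = [random.randint(0, 1) for _ in range(10_000)]
s = interleave_with_zeros(free_bits)
# At least two of every three symbols are 0, so the frequency of 0s is
# at least 2/3 > 1/2: the strong law of large numbers fails, whichever
# bits a, b, c, ... are chosen -- an uncountable family of failures.
assert s.count("0") / len(s) >= 2 / 3
```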
And having half $0$s and half $1$s is far from enough. Really we want the
sequence to have all those properties dictated by probability theory, like
say, the law of the iterated logarithm. This is a relative of the central
limit theorem which states—well, you can look it up. What matters is that it
is a statement about the sequence that is either true or false, and which
according to probability theory is true _almost surely_.
Now, some people have felt that a test for randomness should be effective in
some sense. For example, maybe you should be able to make money from a non-
random sequence, as you could for example from a sequence of more than half
$1$s by betting on $1$ each time. Looking into this led Laurie into a thicket
of papers by Kolmogorov, Martin-Löf, Schnorr, and Chaitin, which are all very
interesting and answer the question of what is random from a certain
perspective. (This approach is by now a thriving industry: See Downey et al.
[4]; Nies [6]; and Downey and Hirschfeldt [3].)
At this point Jim Baumgartner, a logician who had been drawn into this morass
by Laurie, proposed the following:
###### Definition 1.
A sequence is _non-random_ if and only if it belongs to some definable set of
sequences having measure $0$. Here definable means uniquely definable by a
formula in the language of first order set theory having only one free
variable (no parameters). And by measure $0$ we mean outer measure $0$, in
case the set is not Borel-measurable.
Since there are only a countable number of formulas, we’re throwing out a
countable collection of sets of measure $0$, so the sequences that remain—the
_random_ sequences—should form a set of full measure. Almost every sequence
should be random.
I accepted this definition of random sequence for over 30 years. Then about a
month ago, thanks to a stimulating colloquium talk by Johanna Franklin and
subsequent discussions with Rebecca Weber and Marcia Groszek, I came to
realize that the argument that there are plenty of random sequences is bogus.
(Or perhaps it would be better to say that it is ‘suspect’, in light of the fact
that the same reasoning was used by Tarski—see section 5 below.) It is
possible that under this definition _there are no random sequences_! The
reason is that the standard axioms of set theory (assuming they are
consistent) do not rule out the possibility that _every set is definable_.
There are models of set theory, satisfying all the standard axioms, where
every set in the universe is definable. (Cf. section 4 below.) Of course these
are countable models. In these models, every sequence is definable, and hence
so is every singleton set consisting of a single sequence. Since a singleton set has
measure $0$, every sequence belongs to a definable set of measure $0$, hence
is non-random. So in such a model there are no random sequences.
Now we might have objected earlier, ‘Of course there are no random sequences:
Given any sequence $\sigma$, the singleton set $\{\sigma\}$ has measure $0$.
A random sequence cannot be equal to any particular sequence.’ We thought we
were avoiding this by only considering definable sets of sequences: There are
only a countable number of definable sequences, so there should be a full
measure set of sequences left after we’ve ruled out definable sequences. And
even after we’ve gone on to throw out all definable sets of sequences of
measure $0$, there should remain a full-measure set of random sequences.
But now we’re saying that in fact this objection might be justified, after
all: For all we know, it might be the case that _all sequences are definable_.
So maybe there really are no random sequences.
## 2 How can this be?
The answer is _Skolem’s paradox_. We’re dealing here with a countable model of
set theory. In any model of set theory, the collection of definable sets will
be countable from outside the model. If the model is countable, there is no
obvious impediment to having every set be definable. And in fact this turns
out to be possible.
Look, the real problem here is that you can’t define definability. There is no
formula in the language of set theory characterizing a definable set, because
there is no formula characterizing a true formula. (If there were, we’d be in
real trouble.) And so we can’t write a formula characterizing random
sequences. What we can do is write a formula characterizing random sequences
_within a given model of set theory_. Now, if the standard axioms of set
theory are consistent, then there is a model of set theory satisfying these
axioms, and in this model every set could be definable, which would mean in
particular that there are no random sequences in the model.
Note that we are not saying that it must be the case that there are no random
sequences. There certainly are models where not all sequences are definable
(assuming set theory is consistent). There presumably are models where the
random sequences have full measure. We’re just saying that it _may be_ that
there are no random sequences.
## 3 What to make of all this?
One sensible response would be that we have missed the boat. This definition
of random sequence is too restrictive. We should be less demanding. We should
climb into the boat with Kolmogorov, Martin-Löf, Schnorr, and Chaitin. And
then we can talk not just about infinite sequences, but finite sequences as
well. No finite sequences are completely random, of course, but clearly some
are more random than others.
We prefer to stick with Definition 1, and consider the possibility that there
really are no random sequences. Maybe the Old Man doesn’t play dice with the
universe of sets.
That is, assuming there really is a universe of sets. In this connection, I
can’t resist quoting Abraham Robinson [7, p. 230]:
> My position concerning the foundations of Mathematics is based on the
> following two main points or principles.
>
> (i) Infinite totalities do not exist in any sense of the word (i.e., either
> really or ideally). More precisely, any mention, or purported mention, of
> infinite totalities is, literally, _meaningless_.
>
> (ii) Nevertheless, we should continue the business of Mathematics “as
> usual,” i.e., we should act _as if_ infinite totalities really existed.
See also Cohen [2].
## 4 Models of set theory for which all sets are definable
I’ve found that Cohen’s book ‘Set Theory and the Continuum Hypothesis’ [1] is
a good place to look for general background on model theory. The book is
addressed to non-specialists, and ‘emphasizes the intuitive motivations while
at the same time giving as complete proofs as possible’.
Here, quoted verbatim from Cohen [1, pp. 104–105], are precise statements
about models where all sets are definable.
###### Theorem 1.
$\mathrm{ZF}+\mathrm{SM}$ implies the existence of a unique transitive model
$M$ such that if $N$ is any standard model there is an $\in$-isomorphism of
$M$ into $N$. $M$ is countable.
Here $\mathrm{SM}$ is the statement that $\mathrm{ZF}$ has a standard model,
meaning one where the membership relation in the model coincides with the
‘real world’ membership relation $\in$. The existence of a standard model is
not provable because it implies $\mathrm{Con}(\mathrm{ZF})$. Cohen [1, p. 79]
says that $\mathrm{SM}$ is ‘most probably “true”’, and gives an intuitive
argument for accepting it as an axiom. $\mathrm{SM}$ holds just if it is
possible to quit early in the transfinite induction that produces Gödel’s
constructible universe $L$, and still have a model of $\mathrm{ZF}$. To get
the minimal model $M$, we stop the construction at the earliest possible
ordinal. $M$ satisfies $V=L$ (cf. [1, p. 104]), hence also the axiom of
choice, so in $M$ we have a standard model of $\mathrm{ZFC}+(V=L)$.
###### Theorem 2.
For every element $x$ in $M$ there is a formula $A(y)$ in $\mathrm{ZF}$ such
that $x$ is the unique element in $M$ satisfying $A_{M}(x)$. Thus in $M$ every
element can be ‘named’.
Models where every set is definable are called ‘pointwise definable’. So
according to these results, if $\mathrm{ZF}$ has a standard model, then
$\mathrm{ZFC}$ has a pointwise definable standard model.
The requirement that $\mathrm{ZF}$ has a standard model can be dispensed with,
if we don’t care about winding up with a pointwise definable model that is
non-standard. John Steel points out that the definable sets within any model
$N$ of $\mathrm{ZFC}+(V=L)$ constitute an elementary submodel $H$. This
implies that $H$ is a model of $\mathrm{ZFC}+(V=L)$, and every set in $H$ is
definable in $H$ (not just in $N$). So, starting with any model for
$\mathrm{ZF}$, we can restrict to a model of $\mathrm{ZFC}+(V=L)$, and within
that find a pointwise definable model of $\mathrm{ZFC}+(V=L)$.
For much more about pointwise definable models of set theory, see Hamkins,
Linetsky, and Reitz [5].
## 5 What was wrong with the proof?
So, what was wrong with the proof that there are plenty of random sequences?
Let’s look at the proof, given by Tarski [8, p. 220] in 1931, that there exist
undefinable sets of real numbers. This same proof method would show that there
exist undefinable real numbers, or what is the same, undefinable random
sequences. Tarski’s original paper [8] is in French. Here, from Tarski [9, p.
119], is an English translation:
> Moreover it is not difficult to show that the family of all definable sets
> (as well as that of the functions which determine them) is only denumerable,
> while the family of _all_ sets of numbers is not denumerable. The existence
> of undefinable sets follows immediately.
If anyone but Tarski had written this, I think we would say that the author is
confusing the system with the metasystem. In the metasystem, we can talk about
the family $D$ of definable sets of reals of the system, and prove that it is
denumerable _in the metasystem_. In the system itself, we can’t talk about
$D$, but we can talk about the family $P$ of all sets of reals of the system,
and prove that it is not denumerable _within the system itself_. But from this
we cannot conclude that $D$ differs from $P$: $D$ is denumerable in the
metasystem; $P$ is not denumerable in the system. There is no contradiction
here.
Tarski goes on to give a second proof:
> ‘Plus encore’ [the translation has ‘Also’, but a better rendering might be
> ‘And not only that’], the definable sets can be arranged in an ordinary
> infinite sequence; by applying the diagonal procedure it is possible _to
> define in the metasystem a concrete set which would not be definable in the
> system itself_. In this there is clearly no trace of any antinomy, and this
> fact will not appear at all paradoxical if we take proper note of the
> relative character of the notion of definability.
If anyone but Tarski had written this, I think we would say that the author
has failed to take proper note of the relative character of _being a set_.
This concrete set that we’ve defined in the metasystem might not be a set in
the system itself. So we have failed to produce an undefinable set of the
system.
Now, this critique of Tarski’s reasoning has been written from the perspective
of model theory—a modern perspective based in large part on work that Tarski
did after writing down these proofs. We interpret the ‘metasystem’ as a formal
system in which statements about models (‘systems’) of set theory can be
formulated and proven. If we choose as the metasystem ZFC, the usual formal
system for set theory, then in this metasystem we can prove a statement to the
effect that there exist models of ZFC where every set is definable. This means
that Tarski’s arguments that there must be undefinable sets cannot be
correctly formalized in ZFC (assuming ZFC is consistent). And we have pointed
out where the problem lies in trying to formalize them.
So, does this mean that Tarski’s arguments are _wrong_? They are if it is fair
to recast them in the framework of model theory. But it is not really clear
that this is fair. It has been suggested, for example, that Tarski is thinking
of the ‘standard model’ rather than some arbitrary model. That’s all very
well, but what does it mean concretely? What special methods of proof apply to
the ‘standard model’?
If we can make sense of Tarski’s arguments, then presumably we can salvage the
argument that the set of random sequences from Definition 1 has full measure.
Maybe it is false that maybe there are no random sequences.
## Acknowledgement and disclaimer
I’m grateful for help I’ve received from Jim Baumgartner, Johanna Franklin,
Steven Givant, Marcia Groszek, Joel David Hamkins, Laurie Snell, John Steel,
and Rebecca Weber. None of them is responsible for my opinions, or whatever
errors may be on display here. Despite the impression I’ve been trying to
give, I’m aware that I know next to nothing about logic. I’ve written this
note because I find this whole business intriguing, and I believe other people
will too.
## References
* [1] Paul J. Cohen. Set theory and the continuum hypothesis. W. A. Benjamin, 1966.
* [2] Paul J. Cohen. Comments on the foundations of set theory. In Axiomatic Set Theory (Proc. Sympos. Pure Math., Vol. XIII, Part I, Univ. California, Los Angeles, Calif., 1967), pages 9–15. Amer. Math. Soc., Providence, R.I., 1971.
* [3] Rod Downey and Denis R. Hirschfeldt. Algorithmic Randomness and Complexity. Springer, 2010.
* [4] Rod Downey, Denis R. Hirschfeldt, André Nies, and Sebastiaan A. Terwijn. Calibrating randomness. Bull. Symbolic Logic, 12(3):411–491, 2006.
* [5] Joel David Hamkins, David Linetsky, and Jonas Reitz. Pointwise definable models of set theory. preprint, 2010.
* [6] André Nies. Computability and randomness, volume 51 of Oxford Logic Guides. Oxford University Press, Oxford, 2009.
* [7] Abraham Robinson. Formalism $64$. In Logic, Methodology and Philos. Sci. (Proc. 1964 Internat. Congr.), pages 228–246. North-Holland, 1965.
* [8] Alfred Tarski. Sur les ensembles définissables de nombres réels. Fundamenta Mathematicae, 17:210–239, 1931.
* [9] Alfred Tarski. On definable sets of real numbers. In Logic, Semantics, Metamathematics. Papers from 1923 to 1938, pages 110–142. Oxford at the Clarendon Press, 1956. Translated by J. H. Woodger.
# Intrinsic Alignment of Cluster Galaxies: the Redshift Evolution
Jiangang Hao¹, Jeffrey M. Kubo¹, Robert Feldmann¹,², James Annis¹, David E.
Johnston¹, Huan Lin¹, Timothy A. McKay³,⁴

¹ Center for Particle Astrophysics, Fermi National Accelerator Laboratory, Batavia, IL 60510
² Kavli Institute for Cosmological Physics, The University of Chicago, Chicago, IL 60637
³ Department of Physics, University of Michigan, Ann Arbor, MI 48109
⁴ Department of Astronomy, University of Michigan, Ann Arbor, MI 48109
###### Abstract
We present measurements of two types of cluster galaxy alignments based on a
volume limited and highly pure ($\geq$ 90%) sample of clusters from the GMBCG
catalog derived from SDSS DR7. We detect a clear BCG alignment (the alignment
of major axis of the BCG toward the distribution of cluster satellite
galaxies). We find that the BCG alignment signal becomes stronger as the
redshift and BCG absolute magnitude decrease, and becomes weaker as BCG
stellar mass decreases. No dependence of the BCG alignment on cluster richness
is found. We can detect a statistically significant ($\geq$ 3 sigma) satellite
alignment (the alignment of the major axes of the cluster satellite galaxies
toward the BCG) only when we use the isophotal fit position angles (PAs,
hereafter), and the satellite alignment depends on the apparent magnitudes
rather than the absolute magnitudes of the BCGs. This suggests that the detected
satellite alignment based on isophotal PAs from the SDSS pipeline is possibly
due to the contamination from the diffuse light of nearby BCGs. We caution
that this should not be simply interpreted as non-existence of the satellite
alignment, but rather that we cannot detect it with our current photometric
SDSS data. We perform our measurements on both SDSS $r$-band and $i$-band
data, but do not observe a passband dependence of the alignments.
###### Subject headings:
Galaxies: clusters: general – large-scale structure of universe
## 1\. Introduction
Galaxy orientations contain important information about the gravitational
environments in which galaxies reside. Brown (1938) pointed out that galaxy
orientations may not be isotropic due to the large scale gravitational
interaction. Hawley & Peebles (1975) reported weak evidence of anisotropy in
galaxy orientations. Galaxy orientation becomes especially interesting in
the vicinity of galaxy clusters, where the strong gravitational field may
produce detectable orientation preference for both central galaxies and
satellite galaxies. With the advent of modern sky surveys, such as the Sloan
Digital Sky Survey (SDSS) (York et al., 2000), the shapes and orientations of
galaxies can be measured to high precision. The large sky coverage
substantially increases the sample size to allow statistically significant
measurements on cluster galaxy alignments, from which our understanding of the
cluster formation process can be greatly improved.
The term galaxy alignment has been used extensively in the literature and
refers to alignments in different contexts. In the galaxy cluster environment,
there are two types of alignments that are of great interest. The first is the
alignment of the major axes of the cluster satellite galaxies towards the
cluster center, which we will call “satellite alignment”. In our case, as an
operational definition, we will consider the alignment between the major axes
of the satellite galaxies and the BCG.¹ The second type of alignment is the
alignment of the BCG’s major axis towards the distribution of satellite
galaxies in the cluster. We will call this alignment “BCG alignment” hereafter.

¹ Though the cluster center is well defined as the deepest gravitational
potential well, its determination from observational data, especially optical
data, is not unambiguous. The central galaxy in a cluster (the one which
resides near the bottom of the cluster potential well) is very often the
brightest galaxy (BCG) in the cluster. This BCG is then coincident with the
region with the deepest potential traditionally identified in theory as the
center of a cluster. Using the BCG as the cluster center simplifies precise
comparisons between observations and theory.
In addition to these two types of alignments, the possible alignments between
the major axes of the satellite galaxies and the cluster (Plionis et al.,
2003), and between the cluster shape and large scale structures (Paz et al.,
2008; Faltenbacher et al., 2009; Wang et al., 2009; Paz et al., 2011) have
been studied, but these are beyond the scope of this current paper.
There are extensive studies on these two types of alignments with both
simulations and observations. It is argued, based on simulations, that the
preferred accretion direction of satellite halos toward the host halo along
the filaments is largely responsible for the BCG alignment (Tormen, 1997;
Vitvitska et al., 2002; Knebe et al., 2004; Zentner et al., 2005; Wang et al.,
2005). The detections of this alignment from real data has been reported by
many teams (Sastry, 1968; Austin & Peach, 1974; Dressler, 1978; Carter &
Metcalfe, 1980; Binggeli, 1982; Brainerd, 2005; Yang et al., 2006; Azzaro et
al., 2007; Wang et al., 2008; Siverd et al., 2009; Niederste-Ostholt et al.,
2010), though non-detection of this alignment were also reported (Tucker &
Peterson, 1988; Ulmer et al., 1989). For the satellite alignment, tidal torque
is thought to play a major role in its formation (Ciotti & Dutta, 1994; Ciotti
& Giampieri, 1998; Kuhlen et al., 2007; Pereira et al., 2008; Faltenbacher et
al., 2008; Pereira & Bryan, 2010). Its detection based on SDSS data has been
reported by Pereira & Kuhn (2005); Agustsson & Brainerd (2006); Faltenbacher
et al. (2007), while non-detections of this alignment have also been reported based
on data from both 2dF Galaxy Redshift Survey (2dFGRS) (Colless et al., 2001;
Bernstein & Norberg, 2002) and SDSS (Siverd et al., 2009). In Table 1, we
summarize the previous work that reports the existence and non-existence of
these two types of alignments based on real data. On the other hand, the
intrinsic alignment of galaxies will contaminate gravitational lensing
measurements and therefore needs to be carefully modeled in lensing analysis.
Along these lines, Mandelbaum et al. (2006); Hirata et al. (2007) reported
correlations between intrinsic shear and the density field based on data from
SDSS and 2SLAQ (Croom et al., 2009).
In general, the cluster galaxy alignment signals are weak, and their
measurement requires high quality photometry and well measured galaxy PAs.
Moreover, a galaxy cluster catalog with high purity, well-determined BCGs and
satellite galaxies is important too. Since measuring the redshift evolution of
the alignment is crucial for understanding its origin, the cluster catalog
needs to be volume limited and maintain constant purity for a wide redshift
range. Most of the existing alignment measurements (see Table 1) are based on
galaxy clusters/groups selected from spectroscopic data. In SDSS data, due to
the high cost of obtaining spectra for a large population of galaxies, the
completeness of spectroscopic coverage is limited to $r$ band Petrosian
magnitude $r_{p}\leq$17.7, corresponding to a median redshift of 0.1 (Strauss
et al., 2002). This greatly limits the ability to look at the redshift
evolution of the alignments.
On the other hand, one can also measure the alignments by using
photometrically selected clusters. The advantage of photometrically selected
clusters lies in the large data sample as well as relatively deep redshift
coverage, allowing a study on the redshift evolution of the alignments.
However, there are clear disadvantages too. For example, the satellite
galaxies are prone to contamination from the projected field galaxies, which
will dilute the alignment signals. The level of this contamination may also
vary as redshift changes, complicating the interpretation of the alignment
evolution.
In this paper, we show our measurements of the two types of alignments based
on a volume limited and highly pure ($\geq$ 90%) subsample of clusters from
the GMBCG cluster catalog for SDSS DR7 (Hao et al., 2010). The large sample of
clusters allows us to examine the dependence of the alignments on cluster
richness and redshift with sufficient statistics. With this catalog, we detect
a BCG alignment that depends on redshift and the absolute magnitude of BCG,
but not on the cluster richness. We also observe that the satellite alignment
depends on the apparent brightness of the BCG and the method by which the
position angles are measured. We can only see a statistically significant satellite
alignment at low redshift when we use the isophotal fit PAs (see § 3.3 for
more details). Furthermore, we notice that the satellite alignment based on
isophotal fit PAs depends strongly on the apparent magnitude rather than the
absolute magnitude of the BCG. This suggests that the measured satellite
alignment is more likely due to the isophotal fit PAs, whose measurements are
prone to contamination from the diffuse light of the BCG.
The paper is organized as follows: in § 2, we introduce the two parameters
used to quantify the two types of alignments. In § 3, we introduce the data
used in this paper. In § 4, we present and discuss our measurement results. By
convention, we use a $\Lambda$CDM cosmology with $h=1.0$, $\Omega_{m}=0.3$ and
$\Omega_{\Lambda}=0.7$ throughout this paper. All angle measurements that
appear in this paper are in units of degrees.
Table 1. Alignment Measurements Summary

Data Source | BCG Alignment: Exist | BCG Alignment: Non-exist | Satellite Alignment: Exist | Satellite Alignment: Non-exist
---|---|---|---|---
Photometric Plates | Sastry (1968); Austin & Peach (1974); Dressler (1978); Carter & Metcalfe (1980); Binggeli (1982) | Tucker & Peterson (1988); Ulmer et al. (1989) | |
SDSS Spectroscopic | Brainerd (2005); Yang et al. (2006); Azzaro et al. (2007); Faltenbacher et al. (2007); Wang et al. (2008); Siverd et al. (2009) | | Pereira & Kuhn (2005); Agustsson & Brainerd (2006); Faltenbacher et al. (2007) | Siverd et al. (2009)
SDSS Photometric | Niederste-Ostholt et al. (2010); This Work | | Pereira & Kuhn (2005) | This Work
2dF Spectroscopic | | | | Bernstein & Norberg (2002)
## 2\. Alignment parameters
In this paper, we consider two types of alignments: (1) Satellite alignment;
(2) BCG alignment. Each of them is quantified by a corresponding alignment
parameter. For the satellite alignment, we follow Struble & Peebles (1985);
Pereira & Kuhn (2005) and use the following alignment parameter:
$\delta=\frac{\sum^{N}_{i=1}{\phi_{i}}}{N}-45$ (1)
where $\phi_{i}$ is the angle between the major axes of the satellite galaxies
and the lines connecting their centers to the BCGs, as illustrated in the left
panel of Figure 1, and $N$ is the number of satellite galaxies in the cluster.
For every cluster there is a unique measured alignment parameter $\delta$,
which is the mean angle $\phi$ over all cluster satellite galaxies minus 45.
If the major axes of satellite galaxies do not preferentially point to the
BCG of the cluster, the $\phi_{i}$ will be uniformly distributed between 0 and
90 degrees, leading to $\delta=0$. On the other hand, if the major axes of
satellite galaxies preferentially point to the BCG, then $\delta<0$. The
standard error of $\delta$ can be naturally calculated as
$\delta_{err}=\sqrt{\sum_{i=1}^{N}{(\phi_{i}-\delta-45)^{2}}}/N$.
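Equation (1) and its error estimate translate directly into code; here is a minimal Python sketch (the function name is mine, not from the paper):

```python
import math

def satellite_alignment(phis):
    """Satellite alignment parameter delta (Eq. 1) and its standard error.

    phis: angles phi_i in degrees, each in [0, 90], between a satellite's
    major axis and the line joining its center to the BCG.
    """
    n = len(phis)
    delta = sum(phis) / n - 45.0
    err = math.sqrt(sum((p - delta - 45.0) ** 2 for p in phis)) / n
    return delta, err

# Satellites whose major axes lean toward the BCG (phi < 45) give delta < 0.
delta, err = satellite_alignment([10.0, 20.0, 30.0, 40.0])
assert delta < 0.0  # here delta = -20
```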
For the BCG alignment, we will focus on the angle $\theta$ between the BCG’s PA
and the lines connecting the BCG to each satellite galaxy, as illustrated in
the right panel of Figure 1. In our cluster sample, each cluster has more than
15 satellite galaxies (see §3.2 for more details). Therefore, instead of
looking at the full distribution of $\theta$ from all clusters, we will focus
on the mean of $\theta$ measured for each cluster. In analogy to the satellite
alignment parameter $\delta$, we introduce a BCG alignment parameter $\gamma$
defined as:
$\gamma=\frac{\sum^{N}_{i=1}{\theta_{i}}}{N}-45$ (2)
where $\theta_{i}$ is the angle between the BCG’s PA and the line connecting
the BCG to the $i^{th}$ satellite galaxy (see the right panel of Figure 1),
and $N$ is the number of satellite galaxies in the cluster. Similarly, each
cluster will correspond to a BCG alignment parameter $\gamma$, which is the
mean of the angle $\theta$ minus 45. If the major axis of the BCG
preferentially aligns with the distribution of the majority of the satellite
galaxies, then $\gamma<0$. If no such preference exists, $\gamma=0$ is
expected. The uncertainty of $\gamma$ can be readily calculated as
$\gamma_{err}=\sqrt{\sum_{i=1}^{N}{(\theta_{i}-\gamma-45)^{2}}}/N$.
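Equation (2) can be sketched the same way; the snippet below (function name and sample size are my choices for illustration) also checks by simulation that an isotropic $\theta$ distribution gives $\gamma$ consistent with zero:

```python
import random

def bcg_alignment(thetas):
    """BCG alignment parameter gamma (Eq. 2) and its standard error.

    thetas: angles theta_i in degrees, each in [0, 90], between the BCG's
    PA and the line joining the BCG to a satellite galaxy.
    """
    n = len(thetas)
    gamma = sum(thetas) / n - 45.0
    err = (sum((t - gamma - 45.0) ** 2 for t in thetas) ** 0.5) / n
    return gamma, err

# With no preferred BCG orientation, theta is uniform on [0, 90], so
# gamma should scatter around 0 at the level of its standard error.
random.seed(1)
thetas = [random.uniform(0.0, 90.0) for _ in range(100_000)]
gamma, err = bcg_alignment(thetas)
assert abs(gamma) < 1.0  # consistent with zero for an isotropic sample
```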
Figure 1.— left: Illustration of the angle $\phi$ used in the definition of
satellite alignment parameter $\delta$; right: Illustration of the angle
$\theta$ used in the definition of BCG alignment parameter $\gamma$.
These two parameters quantify the two types of alignments and are easy to
measure. In what follows, we will focus on these two quantities and their
dependence on various cluster/BCG properties.
## 3\. Data
In this section, we describe the details of the galaxies, their PA
measurements, and the galaxy cluster sample used in our measurement.
### 3.1. Galaxy Catalog
The galaxies we use are from the Data Release 7 of the Sloan Digital Sky
Survey (York et al., 2000; Abazajian & Sloan Digital Sky Survey, 2008). The
SDSS is a multi-color digital CCD imaging and spectroscopic sky survey,
utilizing a dedicated 2.5-meter telescope at Apache Point Observatory, New
Mexico. It has recently completed mapping over one quarter of the sky in
$u$,$g$,$r$,$i$ and $z$ filters. DR7 marks the completion of the
original goals of the SDSS and the end of the phase known as SDSS-II. It
includes a total imaging area of 11663 square degrees with 357 million unique
objects identified.
This work focuses on the Legacy Survey area of SDSS DR7, which covers more
than 7,500 square degrees of the North Galactic Cap, and three stripes in the
South Galactic Cap totaling 740 square degrees (Abazajian & Sloan Digital Sky
Survey, 2008). The galaxies are selected from the PhotoPrimary view of the
SDSS Catalog Archive Server with object type tag set to 3 (galaxy) and
$i$-band magnitude less than 21.0. Moreover, we require that the galaxies did
not trigger the following error flags: SATURATED, SATUR_CENTER, BRIGHT,
AMOMENT_MAXITER, AMOMENT_SHIFT and AMOMENT_FAINT.
In addition to the above selection criteria, we also reject galaxies with
photometric errors greater than 10 percent in the $r$ and $i$ bands.
Additionally, we require the ellipticity222The ellipticity is defined as
$\sqrt{m_{e1}^{2}+m_{e2}^{2}}$, where
$m_{e1}=\frac{\langle\mathrm{col}^{2}\rangle-\langle\mathrm{row}^{2}\rangle}{\langle\mathrm{col}^{2}\rangle+\langle\mathrm{row}^{2}\rangle}$ and
$m_{e2}=\frac{2\langle\mathrm{col}\cdot\mathrm{row}\rangle}{\langle\mathrm{col}^{2}\rangle+\langle\mathrm{row}^{2}\rangle}$. Details of
estimating the moments $\langle\ldots\rangle$ can be found in Bernstein &
Jarvis (2002). of each galaxy in the $r$ and $i$ bands to be less than 0.8, in
order to remove edge-on galaxies whose colors are not well measured. This cut
retains about 95% of the galaxies. All the magnitudes used in this paper are
dust-extinction-corrected model magnitudes (Abazajian & Sloan Digital Sky
Survey, 2008).
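For reference, the ellipticity cut in the footnote can be computed from the second moments as follows (a sketch; the function names are ours, and the moments would come from the SDSS pipeline):

```python
import numpy as np

def ellipticity_components(col2, row2, colrow):
    """(m_e1, m_e2) from the second moments <col^2>, <row^2>, <col*row>."""
    denom = col2 + row2
    return (col2 - row2) / denom, 2.0 * colrow / denom

def ellipticity(col2, row2, colrow):
    """e = sqrt(m_e1^2 + m_e2^2); galaxies with e >= 0.8 are rejected."""
    e1, e2 = ellipticity_components(col2, row2, colrow)
    return np.hypot(e1, e2)

# A round galaxy (<col^2> = <row^2>, <col*row> = 0) has e = 0:
print(ellipticity(1.0, 1.0, 0.0))  # 0.0
```
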
### 3.2. Galaxy Clusters
In order to measure the two types of alignments, we need a galaxy cluster
catalog with well-determined member galaxies. The GMBCG cluster catalog for
SDSS DR7 is a large catalog of optically selected clusters constructed from
SDSS DR7 using the GMBCG algorithm, which detects the BCG plus the red
sequence feature that exists in most clusters. The cluster satellite galaxies
are those within 2$\sigma$ of the red sequence mean color detected using a
Gaussian Mixture Model. Since the red sequence of each cluster is measured
individually, this allows a more accurate satellite galaxy selection than a
universal red sequence model. We count the satellite galaxies down to 0.4L* at
the cluster's redshift. In Figure 2, we show the color distribution of
galaxies around a cluster in the GMBCG catalog. To study the redshift
evolution of the alignment, we choose the volume-limited sample of clusters
with redshift below 0.4, where the satellite galaxies are selected using
$g-r$ color. Also, to obtain high purity, we choose clusters with richness
equal to or greater than 15, which leads to a purity above 90% across the
redshift range. As a result, there are about 11,000 clusters with over 260,000
associated satellite galaxies. More details of the cluster catalog can be
found in Hao et al. (2010).
Figure 2.— left: Galaxy $g-r$ color distribution around a cluster overlaid
with a model constructed of a mixture of two Gaussian distributions. The red
curve corresponds to the red sequence component while the blue one corresponds
to the sum of background galaxies and blue cluster satellites. The green
vertical line indicates the color of the BCG. $\mu$ and $\sigma$ are the means
and standard deviations of the two Gaussian components. right: Color-magnitude
relation for the same galaxies. Galaxies within the 2$\sigma$ clip of the red
sequence component are shown as red points; the green line indicates the
best-fit slope and intercept of this red sequence.
### 3.3. PA Measurements
In the SDSS data reduction pipeline, the PAs of galaxies are measured with
several different methods (Stoughton et al., 2002). In this work, we use three
of them: the isophotal PA, the exponential fit PA333Since most satellite
galaxies are selected using the red sequence, the exponential fit may not be
suitable. In this paper, we want to demonstrate how the PA measurement affects
the alignment signal, and therefore include it in our discussion., and the De
Vaucouleurs fit PA.
To measure the isophotal PA, the SDSS pipeline measures out to the 25
magnitudes per square arcsecond isophote (in all bands). The radius of this
isophote as a function of angle is measured and Fourier expanded. From the
coefficients, the PA (isoPhi), the centroid (isoRowC, isoColC), and the major
and minor axes (isoA, isoB) are extracted.
For the exponential fit PA, the SDSS pipeline fits the intensity of the galaxy by
$I(R)=I_{0}\exp[-1.68(R/R_{eff})]$ (3)
where the profile is truncated outside of 3$R_{eff}$ and smoothly decreases to
zero at 4$R_{eff}$. After correcting for the PSF, the PA is calculated from
this fit and reported as expPhi in the CasJobs database. The De Vaucouleurs
fit PA follows the same procedure as the exponential fit PA, except that the
model in the fit is
$I(R)=I_{0}\exp[-7.67(R/R_{eff})^{1/4}]$ (4)
where the profile is truncated outside of 7$R_{eff}$ and smoothly decreases to
zero at 8$R_{eff}$. The PA from this fit is reported as devPhi. For more
details about these PA measurements, see Stoughton et al. (2002).
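The two model profiles in Equations 3 and 4 can be evaluated directly. The sketch below omits the truncation beyond 3$R_{eff}$/7$R_{eff}$ and the PSF correction applied by the pipeline; the constants 1.68 and 7.67 make $R_{eff}$ the half-light radius of the untruncated profiles:

```python
import numpy as np

def exp_profile(R, I0, Reff):
    """Exponential profile, Eq. 3: I(R) = I0 exp[-1.68 (R/Reff)]."""
    return I0 * np.exp(-1.68 * np.asarray(R, float) / Reff)

def dev_profile(R, I0, Reff):
    """De Vaucouleurs profile, Eq. 4: I(R) = I0 exp[-7.67 (R/Reff)^(1/4)]."""
    return I0 * np.exp(-7.67 * (np.asarray(R, float) / Reff) ** 0.25)
```
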
All the angles are in degrees East of North by convention. Comparing these
different PAs, the isophotal PA tends to trace the exterior shape of the
galaxy, while the two model fit PAs tend to trace its inner profile. At low
redshift, the BCG is very bright and its diffuse light may severely affect the
measurement of the outer parts of nearby galaxies. The isophotal PA is
therefore more susceptible to this artifact, leading to an "artificial"
orientation preference toward the BCG. To give a sense of the BCG diffuse
light effect, we show a low-redshift and a higher-redshift cluster in Figure 3.
Figure 3.— Left: a cluster at redshift 0.103; right: a cluster at redshift
0.310. The diffuse light from the BCG poses a risk to the proper measurement
of the PAs of the satellite galaxies, especially at low redshift.
### 3.4. Control Sample
To claim a detection of the alignment signals, we need a control sample for
comparison. Naively, one might directly compare the measured alignment signal
to what is expected for a non-detection. However, this would implicitly assume
that the data and the measurements are free of systematics. When assessing the
significance level of a detection, it is not sufficient to consider only the
statistical uncertainties; systematics (intrinsic scatter), if they exist,
must also be considered. Therefore, introducing an appropriate control sample
and applying to it the same measurements we apply to the cluster data helps
eliminate possible false detections resulting from potential systematics.
For this purpose, we prepare our control sample as follows: we shuffle the
BCGs by assigning random positions (RA and DEC) to them, but keep all other
information about the BCGs unchanged. Then, around each BCG (at its new random
position), we re-assign "cluster satellites" by choosing those galaxies that
fall within $R_{scale}$ of the BCG and whose $i$ band magnitudes lie in the
range 14 to 20. The $R_{scale}$, measured in Mpc, plays the role of the virial
radius (Hao et al., 2010). In addition to this non-cluster sample, we also
assign a random PA to every galaxy in both the true cluster sample and the
non-cluster sample by replacing the measured PA with a random angle uniformly
sampled between 0 and 180 degrees. This random PA serves as another control
sample to double-check for possible systematics in our measurements.
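The construction of the two controls can be sketched as follows (illustrative only: a real shuffle would restrict the random positions to the survey footprint, which is not modeled here):

```python
import numpy as np

rng = np.random.default_rng(42)

def shuffle_positions(n):
    """Random sky positions: RA uniform in [0, 360), Dec drawn uniform
    in sin(Dec) so points are uniform on the sphere."""
    ra = rng.uniform(0.0, 360.0, size=n)
    dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, size=n)))
    return ra, dec

def randomize_pa(n):
    """Random position angles, uniform in [0, 180) degrees."""
    return rng.uniform(0.0, 180.0, size=n)
```
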
## 4\. Results
### 4.1. Satellite Alignment
There are two basic questions concerning the formation of the satellite
alignment: (1) is it a residual feature of the initial conditions of cluster
formation? (2) or is it a dynamically evolving effect that varies as the
cluster evolves (Pereira & Kuhn, 2005), for example due to the tidal torque?
The two scenarios lead to different redshift dependences of $\delta$. If the
alignment is left over from the initial conditions, its strength should
decrease as redshift decreases. On the other hand, we should see a stronger
alignment signal at low redshift if it is a dynamically evolving effect
(Ciotti & Dutta, 1994; Catelan et al., 2001; Kuhlen et al., 2007; Pereira et
al., 2008; Pereira & Bryan, 2010). Therefore, the redshift dependence of the
measured $\delta$ is our primary interest. To study it, we first measure
$\delta$ for each cluster using 4 different PAs: the random PA, the
exponential fit PA, the De Vaucouleurs fit PA, and the isophotal PA in the $r$
band. We bin the clusters into redshift bins of size 0.05. In each bin, we
calculate the weighted mean of $\delta$ and the standard deviation of the
weighted mean, with weights $1/\delta_{err}^{2}$. Then, we perform the same
measurement on the control sample, and present the results in Figure 4 and
Figure 5.
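The per-bin statistic is an inverse-variance weighted mean. A minimal sketch of the averaging described above (the function name is ours):

```python
import numpy as np

def weighted_mean(delta, delta_err):
    """Inverse-variance weighted mean of the per-cluster delta values
    and the standard deviation of that weighted mean, with weights
    w_i = 1 / delta_err_i**2."""
    w = 1.0 / np.asarray(delta_err, float) ** 2
    mean = np.sum(w * np.asarray(delta, float)) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return mean, err

# Equal errors reduce to the ordinary mean:
print(weighted_mean([1.0, 3.0], [2.0, 2.0]))  # (2.0, 1.414...)
```
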
Figure 4.— Satellite alignment for clusters in redshift bins of size 0.05.
The legend entry random PA indicates the use of randomized PAs; exp indicates
exponential fit PAs; dev indicates De Vaucouleurs fit PAs; and iso indicates
isophotal PAs. $r$ indicates the SDSS $r$ filter. This legend convention
applies to the other figures in the paper as well. Figure 5.— Satellite
alignment measured from the random control sample. Figure 6.— The axis ratio
of the satellite galaxies. Here, the error bar is the standard deviation of
the mean in each bin. The legend entry satellite expAB_r refers to the $b/a$
ratio measured by fitting the satellite galaxy with an exponential profile in
the $r$ band; devAB_r and isoAB_r refer to the $b/a$ ratios from De
Vaucouleurs and isophotal profiles, respectively. This convention also applies
to the other figures in this paper.
Based on the results in Figure 4, the $\delta$ measured using isophotal PAs
deviates from that based on random PAs in a statistically significant way at
low redshift, though it approaches zero as redshift increases. In contrast,
the $\delta$ measured using the exponential fit PA and the De Vaucouleurs fit
PA is consistent with that measured using random PAs, except in the lowest
redshift bin. In Pereira & Kuhn (2005), the authors used isophotal fit PAs and
also considered a cluster sample (sample B) with satellite galaxies selected
using the red sequence. Limiting the $r$ band magnitude to less than 18, they
measured $\delta=-1.06\pm 0.37$, which is consistent with our result in the
lowest redshift bin using isophotal fit PAs. The results based on the random
control sample in Figure 5 show that $\delta$ measured using all types of PAs
is consistently zero across the redshift range. In Figure 6, we plot the axis
ratio of the satellite galaxies in different redshift bins. The axis ratio
determines how precisely the PAs can be measured. From Figure 6, the axis
ratio measured using the isophotal method does not vary much as redshift
increases, indicating that the diminishing satellite alignment based on
isophotal fit PAs is not due to decreasing S/N at high redshift.
There are two possible explanations for the measured $\delta$ in the low
redshift bins. The first is that the diffuse light from the BCGs affects the
measurement of the PAs of the cluster satellite galaxies, creating an
artificial preference in the major axes of the satellite galaxies. This
contamination is most severe when the PAs are measured using the isophotal
fit, but less prominent for the exponential and De Vaucouleurs fits. This is
because the isophotal PA is sensitive to the shape of the outer profile of
the galaxy, while the model fit PAs are determined more by the inner profile.
The second possible explanation is the twisting of the galaxy shape, which
leads to different PAs when we use different methods. The outer rim of the
galaxy is more susceptible to the tidal torque, so the alignment shows up when
we use the isophotal fit PAs. One way to distinguish these two explanations is
to look at how $\delta$ depends on the absolute and apparent magnitudes of the
BCG. To see this, we plot the measured $\delta$ for all the clusters against
the apparent and absolute magnitudes of the corresponding BCGs in Figure 7 and
Figure 8, respectively. One can see that $\delta$ shows a strong dependence on
the apparent magnitude but not on the absolute magnitude. Therefore, we
conclude that the $\delta$ in the low redshift bins in Figure 4 more likely
results from an artifact of the PA measurement. In Figure 9, we show the
absolute magnitude vs. redshift for the BCGs.
Figure 7.— The dependence of the satellite alignment $\delta$ on the $r$-band
apparent magnitudes of the corresponding BCGs, showing a strong dependence of
$\delta$ measured using isophotal PAs on the apparent magnitude of the BCGs.
Figure 8.— The dependence of the satellite alignment $\delta$ on the $r$-band
absolute magnitudes of the BCGs, showing very little dependence of $\delta$
measured using isophotal PAs on the $r$-band absolute magnitude of the BCGs.
Figure 9.— Photometric redshift vs. $r$-band absolute magnitude for BCGs. The
red overplotted dots are the means in each redshift bin of size 0.05.
### 4.2. BCG Alignment
#### 4.2.1 Redshift Dependence
Now, we consider the BCG alignment. We first look at the redshift dependence
of $\gamma$. Again, we perform our measurements on both the cluster sample and
the random control sample. The results are shown in Figure 10 and Figure 11.
Figure 10.— BCG alignment in redshift bins of size 0.05 using different PAs
measured from the cluster sample. Figure 11.— BCG alignment in redshift bins
of size 0.05 using different PAs measured from the random sample.
The results from the random control sample show that $\gamma$ is consistently
zero. From the cluster sample, we detect a clear BCG alignment whose strength
decreases as redshift increases. From the figures, we can also see that
$\gamma$ is almost the same no matter how the PAs of the BCGs are measured.
This has two important implications: 1. the PAs of BCGs are well measured by
both the isophotal fit and the model fits; 2. the diffuse light of satellite
galaxies does not affect the PA measurement of the BCG. Before we can draw
conclusions about the redshift dependence of $\gamma$, we need one more test:
if the axis ratios ($b/a$) of the BCG and cluster become systematically
smaller due to the decreased S/N at higher redshift, the measured strength of
$\gamma$ will decrease too.
To calculate the cluster $b/a$, we use the method described in Kim et al.
(2002); Niederste-Ostholt et al. (2010). For clusters, the axis ratio is
defined as $b/a=(1-\sqrt{Q^{2}+U^{2}})/(1+\sqrt{Q^{2}+U^{2}})$, with the
Stokes parameters $Q=M_{xx}-M_{yy}$ and $U=2M_{xy}$. The radius-weighted
second moments are given by $M_{xx}=\left<x^{2}/r^{2}\right>$,
$M_{yy}=\left<y^{2}/r^{2}\right>$ and $M_{xy}=\left<xy/r^{2}\right>$ with
$r^{2}=x^{2}+y^{2}$, where $x$ and $y$ are the distances between the satellite
galaxies and the BCG in the tangent plane. For BCGs, we use the $b/a$ measured
by the SDSS pipeline based on the isophotal, exponential, and De Vaucouleurs
fits. In Figure 12, we plot the $b/a$ in each redshift bin of size 0.05. From
the results, the axis ratio does not depend on redshift in a statistically
significant way. Furthermore, we also checked that both the BCG PAs and the
cluster PAs are distributed randomly, as shown in the right panel of Figure
13. Therefore, the evolution of $\gamma$ in Figure 10 should not result from
S/N variation in the PA measurements.
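The cluster axis-ratio calculation above can be sketched as follows (the function name is ours; x and y are the tangent-plane offsets of the satellites from the BCG):

```python
import numpy as np

def cluster_axis_ratio(x, y):
    """b/a from radius-weighted second moments and Stokes parameters:
    Q = Mxx - Myy, U = 2 Mxy,
    b/a = (1 - sqrt(Q^2 + U^2)) / (1 + sqrt(Q^2 + U^2))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r2 = x**2 + y**2
    Mxx = np.mean(x**2 / r2)
    Myy = np.mean(y**2 / r2)
    Mxy = np.mean(x * y / r2)
    e = np.hypot(Mxx - Myy, 2.0 * Mxy)
    return (1.0 - e) / (1.0 + e)

# An isotropic satellite distribution gives b/a = 1:
print(cluster_axis_ratio([1.0, 0.0, -1.0, 0.0], [0.0, 1.0, 0.0, -1.0]))  # 1.0
```
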
Figure 12.— The axis ratio $b/a$ of BCGs and clusters vs. redshift. Figure
13.— The distribution of BCGs’ and clusters’ PAs.
#### 4.2.2 Magnitude Dependence
We measure the BCG alignment vs. the $r$ band absolute magnitudes of the
BCGs. The results are presented in Figure 14.
Figure 14.— The dependence of BCG alignment on the absolute $r$ band
magnitudes of BCGs.
From the plot, we see that $\gamma$ depends strongly on the BCG's absolute
magnitude. To further show that this is not due to the S/N of the BCG shape
measurements, we plot the axis ratio $b/a$ of BCGs and clusters as a function
of the BCG $r$ band absolute magnitude in Figure 15. There is no dependence of
$b/a$ on the BCG absolute magnitude.
Figure 15.— The axis ratio $b/a$ of BCGs and clusters vs. the $r$ band
absolute magnitude of BCG.
From the above results, we see that $\gamma$ depends on both redshift and the
BCG absolute magnitude. To show this more clearly, we bin the cluster sample
into photo-z bins of size 0.05 and absolute magnitude bins of size 0.5. Then,
we calculate the mean $\gamma$ in each bin and plot the results in Figure 16.
The color in the plot indicates $\gamma$.
Figure 16.— Contours of $\gamma$ w.r.t. redshift and BCG absolute magnitude.
The crossover of the contour lines is mainly due to the large error bars of
the data points, which cannot be expressed in the contour plot.
From the plot, we can see that $\gamma$ increases (i.e., the absolute signal
decreases) as redshift increases and as the absolute magnitude of the BCG
increases (i.e., as the BCG becomes fainter). Since a galaxy's luminosity,
and hence its absolute magnitude, correlates with its mass, these results
indicate that more massive BCGs tend to be more aligned with the cluster
orientation. A subsample of the BCGs ($\sim$2800 BCGs) have their stellar
masses measured in the MPA-JHU value-added catalog for SDSS DR7 (Kauffmann et
al., 2003; Salim et al., 2007). This allows us to look directly at the trend
of $\gamma$ with respect to the BCG stellar mass (a proxy for the total mass)
and redshift. We choose three stellar mass bins and plot $\gamma$ vs.
redshift in each of them in Figure 17. We can see that $\gamma$ increases as
redshift increases in each stellar mass bin, and that the more massive BCGs
show more negative $\gamma$.
Figure 17.— Redshift dependence of $\gamma$ in three BCG stellar mass bins.
#### 4.2.3 Richness Dependence
Next, we check whether $\gamma$ depends on cluster richness. To do this, we
bin the clusters by richness in bins with edges at 15, 25, 35, 50, 65 and 90.
We do not go above richness 90 due to the small number of clusters in that
range. Then, we look at the mean $\gamma$ in each bin. Note that we do not
further split these bins by redshift, in order to keep the number of clusters
reasonably large. This does not affect our test for a richness dependence,
since clusters from different redshifts fall randomly into the different
richness bins. We plot the results in Figure 18. Based on the results, we do
not see a dependence on cluster richness.
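The binning above can be written with `np.digitize` (a sketch; the helper name is ours):

```python
import numpy as np

# Richness bin edges used in the text.
EDGES = np.array([15, 25, 35, 50, 65, 90])

def richness_bins(richness):
    """Bin index (0..4) for each cluster, or -1 outside [15, 90)."""
    richness = np.asarray(richness)
    idx = np.digitize(richness, EDGES) - 1
    idx[(richness < EDGES[0]) | (richness >= EDGES[-1])] = -1
    return idx

print(richness_bins([15, 24, 25, 89, 90, 10]).tolist())  # [0, 0, 1, 4, -1, -1]
```
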
Figure 18.— Dependence of BCG alignment on cluster richness. Here, we bin the
richness into bins with edges at 15, 25, 35, 50, 65 and 90. The results do not
show a statistically significant dependence of BCG alignment on cluster
richness.
### 4.3. Redshift Evolution of BCG Alignment Once Again
Measuring the redshift evolution of BCG alignment is very important for
understanding its origin. However, the redshift evolution of the measured
alignment signal needs to be interpreted with great caution, especially for
the cluster samples selected using photometric data. There are at least four
factors that will introduce systematic redshift dependence and complicate the
interpretation. (1) The S/N of the galaxy shape measurements will decrease as
redshift increases. (2) When we look at clusters of different redshift, we
need to make sure we are comparing the same population of satellite galaxies.
That is, the cluster catalog needs to be volume limited in the redshift range.
(3) The purity of clusters need to be consistently high across the redshift
range. The change of purity will lead to decreased mean alignment signal. (4)
The level of contamination from the projected field galaxies are prone to
redshift dependence. Higher level of contamination will dilute the alignment
signal. We have addressed (1) and (2) in previous sections, where we introduce
the results. In the follows, we will focus on the (3) and (4).
The alignment signal from falsely detected clusters should be consistent with
zero, decreasing the mean alignment of the whole sample. For the subsample of
GMBCG clusters with richness equal to or greater than 15, it has been shown
that the purity is consistently above 90% and does not vary by more than 10%
across the redshift range from 0.1 to 0.4 (Hao et al., 2010). To further show
that the purity variation cannot produce the observed redshift dependence of
$\gamma$, we choose another subsample of clusters with even higher purity:
clusters with richness greater than 25, which have a purity above 95% that
varies by less than 5% over the redshift range. In Figure 19, we plot the BCG
alignment parameter $\gamma$ for this subsample. Though the overall signal
level increases a little at low redshift, the trend with redshift does not
differ much from that of the full sample with the lower richness threshold of
15. So, the purity change cannot explain the measured strong redshift
dependence of $\gamma$.
Figure 19.— BCG alignment in redshift bins of size 0.05 based on a subsample
with higher purity and lower purity variation across the redshift range. The
results indicate that the redshift dependence is not affected by the purity
of the cluster sample we are using.
On the other hand, satellite galaxies selected using red sequence colors
suffer different levels of contamination from projected field galaxies as
redshift changes. This is mainly caused by the varying degree of overlap
between the red sequence population and the field galaxy population (e.g. see
Figure 14 in Hao et al. (2010)). There are two competing effects that
increase or decrease the contamination. The first is the separation between
the red sequence component and the field galaxy component: as redshift
increases, the two components separate farther, decreasing the projection
contamination in the red sequence. The second is the broadening of the
distributions of both the red sequence and the field galaxies: as redshift
increases, the measured width of the red sequence increases, mainly due to
photometric errors (Hao et al., 2009), which increases the chance of
projected field galaxies being identified as satellites when we select
satellite galaxies by color. Therefore, the actual contamination level is the
net result of these two effects.
We can describe the measured alignment parameter $\gamma$444Since we did not
see a significant $\delta$ signal, we consider only $\gamma$ in the
discussions hereafter, but the method described can also be applied to the
$\delta$ case. as a combination of alignment from real cluster satellites and
projected field galaxies. If we denote the alignment parameter from our
measurements as $\gamma_{m}$, we can decompose it into two parts as follows:
$\gamma_{m}=\frac{\sum_{i=1}^{N_{c}}\theta_{i}+\sum_{j=1}^{N_{f}}\theta_{j}}{N_{c}+N_{f}}-45$
(5)
where $N_{c}$ is the number of true cluster satellite galaxies and $N_{f}$ is
the number of projected field galaxies. We introduce the fraction of real
cluster satellites $f_{c}(z)=N_{c}/(N_{c}+N_{f})$, the BCG alignment from
true cluster members $\gamma_{c}=\sum_{i=1}^{N_{c}}\theta_{i}/N_{c}-45$, and
the BCG alignment from the projected field galaxies
$\gamma_{f}=\sum_{j=1}^{N_{f}}\theta_{j}/N_{f}-45$. Substituting these
definitions into Equation 5 and taking the ensemble average over clusters, we
obtain:
$\left<\gamma_{m}\right>=\left<f_{c}(z)\right>\left<\gamma_{c}\right>+\left[1-\left<f_{c}(z)\right>\right]\left<\gamma_{f}\right>$
(6)
where $\left<...\right>$ denotes the average over the cluster ensemble. Since
the mean alignment signal from the field is consistent with zero, the
alignment parameter $\gamma_{c}$ from the true cluster satellites is related
to the measured one through the redshift-dependent fraction $f_{c}(z)$. To
first order, we can separate $f_{c}(z)$ into two parts as
$f_{c}(z)=f_{const}\times f(z)$, where $f_{const}$ is the
redshift-independent component of the fraction, indicating the "intrinsic"
fraction of true satellites under color selection, and $f(z)$ is the
redshift-dependent part, corresponding to the effect described above. The
redshift dependence of the measured alignment $\gamma_{m}$ is then mainly
determined by $f(z)$.
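Equation 6 amounts to a simple mixture; a minimal numeric sketch (the function name is ours):

```python
def measured_gamma(f_c, gamma_c, gamma_f=0.0):
    """Eq. 6: <gamma_m> = <f_c> <gamma_c> + (1 - <f_c>) <gamma_f>.
    With gamma_f ~ 0 (random field), the measured signal is just the
    true-member signal diluted by the contamination fraction 1 - f_c."""
    return f_c * gamma_c + (1.0 - f_c) * gamma_f

# 20% field contamination dilutes a true signal of -5 degrees to -4:
print(measured_gamma(0.8, -5.0))  # -4.0
```
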
In the GMBCG catalog, we also measured a weighted richness, which takes into
account the varying degree of overlap between the red sequence and the field
galaxies at different redshifts (Hao et al., 2010). The difference between
the weighted richness and the direct member-count richness is a good
estimator of the number of projected galaxies due to the effect described
above. The contamination fraction can therefore be estimated as the ratio of
this difference to the direct member-count richness. In Figure 20, we plot
the contamination fraction ($1-\left<f(z)\right>$) as a function of redshift
in bins of size 0.05. The fraction is almost constant except in the lowest
redshift bin. Again, this cannot explain away the dependence of $\gamma$ on
redshift shown in Figure 10 and Figure 19. Therefore, after considering all
the systematics known to us, the measured redshift dependence of $\gamma$
still cannot be explained. In Niederste-Ostholt et al. (2010), the authors
also reported a difference in BCG alignment between a low redshift bin (0.08
- 0.26) and a high redshift bin (0.26 - 0.44), which is consistent with the
results we find here.
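The contamination estimate can be sketched as follows. This is a sketch under an assumption we make explicit: per the text, the direct member count exceeds the overlap-corrected weighted richness by the number of projected galaxies:

```python
def contamination_fraction(direct_count, weighted_richness):
    """Estimated fraction of projected field galaxies, 1 - <f(z)>:
    the (direct count - weighted richness) difference, taken here as
    the number of projected galaxies, divided by the direct count."""
    return (direct_count - weighted_richness) / direct_count

print(contamination_fraction(20, 18))  # 0.1
```
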
Figure 20.— Fraction of projected field galaxies in redshift bins of size
0.05. It remains constant except in the lowest redshift bin.
### 4.4. Conclusions and Discussions
We measure the satellite alignment and the BCG alignment based on a large
sample of photometrically selected galaxy clusters from the SDSS DR7. We
detect a satellite alignment only when we use the isophotal PAs. As we noted
in §3.3, the isophotal PA tends to trace the outer profile of the galaxy,
while the model fit PAs tend to trace its inner part. A direct interpretation
of the measurements could be that the outer part of a satellite galaxy is
more susceptible to the gravitational torque and thus shows an orientation
preference toward the BCG, while the inner part is not much affected by the
tidal torque and shows no such preference. The measured discrepancy of the
satellite alignment between different PAs could then be a manifestation of
the twisting of the galaxy shape from the inner part to the outer part.
However, another possibility is that the light from the BCG contaminates the
PA measurement based on the isophote fit to the outer region of the galaxy
and leads to an "artificial" alignment. By comparing the dependence of
$\delta$ on the BCG apparent and absolute magnitudes, we favor the latter
explanation. This means that, though the tidal torque within the galaxy
cluster may induce a satellite alignment, we are not yet able to detect it
with the current SDSS data. It will definitely be an interesting question to
address with forthcoming high-quality data such as that from the Dark Energy
Survey (The Dark Energy Survey Collaboration, 2005).
For the BCG alignment, by introducing the alignment parameter $\gamma$, we
detect strong dependences of the alignment on redshift and BCG absolute
magnitude. The redshift dependence cannot be explained by our known
systematics. This result implies that the BCG orientation is a dynamically
evolving process whose alignment grows stronger as the cluster system
evolves. As for the dependence of $\gamma$ on the absolute magnitude of the
BCG, our result is qualitatively consistent with the conclusion of
Niederste-Ostholt et al. (2010) that clusters with higher BCG dominance show
stronger BCG alignment. Furthermore, based on a subsample of BCGs whose
stellar masses are available, we show that the BCG alignment signal becomes
stronger as the BCG stellar mass increases. This indicates that more massive
BCGs (with lower absolute magnitude) are more likely to align with the major
axes of their clusters.
We must exercise great caution when interpreting the dependence of $\gamma$
on BCG absolute magnitude and stellar mass, since the purity of the cluster
sample may also depend on these quantities. As the cluster purity decreases,
the alignment signal will decrease too. The faintest two bins in Figure 14
show a null alignment signal, which may also be due to significantly
decreased cluster purity. Nevertheless, we can still see a trend of $\gamma$
increasing with BCG absolute magnitude by looking at the bright end of the
sample, where we are confident about the cluster purity. Evaluating the
purity variation w.r.t. BCG absolute magnitude turns out to be difficult
because it requires a mock galaxy catalog with BCG information properly built
in, and the way the mock catalog is constructed significantly impacts the
results. Therefore, we think the best way to check this purity variation
w.r.t. magnitude is to perform a similar analysis with deeper data in the
near future, such as the data from the upcoming Dark Energy Survey (The Dark
Energy Survey Collaboration, 2005).
## References
* Abazajian & Sloan Digital Sky Survey (2008) Abazajian, K., for the Sloan Digital Sky Survey. 2008, ArXiv e-prints
* Agustsson & Brainerd (2006) Agustsson, I., & Brainerd, T. G. 2006, ApJ, 644, L25
* Austin & Peach (1974) Austin, T. B., & Peach, J. V. 1974, MNRAS, 168, 591
* Azzaro et al. (2007) Azzaro, M., Patiri, S. G., Prada, F., & Zentner, A. R. 2007, MNRAS, 376, L43
* Bernstein & Jarvis (2002) Bernstein, G. M., & Jarvis, M. 2002, AJ, 123, 583
* Bernstein & Norberg (2002) Bernstein, G. M., & Norberg, P. 2002, AJ, 124, 733
* Binggeli (1982) Binggeli, B. 1982, A&A, 107, 338
* Brainerd (2005) Brainerd, T. G. 2005, ApJ, 628, L101
* Brown (1938) Brown, F. G. 1938, MNRAS, 98, 218
* Carter & Metcalfe (1980) Carter, D., & Metcalfe, N. 1980, MNRAS, 191, 325
* Catelan et al. (2001) Catelan, P., Kamionkowski, M., & Blandford, R. D. 2001, MNRAS, 320, L7
* Ciotti & Dutta (1994) Ciotti, L., & Dutta, S. N. 1994, MNRAS, 270, 390
* Ciotti & Giampieri (1998) Ciotti, L., & Giampieri, G. 1998, ArXiv Astrophysics e-prints
* Colless et al. (2001) Colless, M., et al. 2001, MNRAS, 328, 1039
* Croom et al. (2009) Croom, S. M., et al. 2009, MNRAS, 392, 19
* Dressler (1978) Dressler, A. 1978, ApJ, 226, 55
* Faltenbacher et al. (2008) Faltenbacher, A., Jing, Y. P., Li, C., Mao, S., Mo, H. J., Pasquali, A., & van den Bosch, F. C. 2008, ApJ, 675, 146
* Faltenbacher et al. (2007) Faltenbacher, A., Li, C., Mao, S., van den Bosch, F. C., Yang, X., Jing, Y. P., Pasquali, A., & Mo, H. J. 2007, ApJ, 662, L71
* Faltenbacher et al. (2009) Faltenbacher, A., Li, C., White, S. D. M., Jing, Y., Mao, S., & Wang, J. 2009, Research in Astronomy and Astrophysics, 9, 41
## Acknowledgments
JH thanks Scott Dodelson for helpful comments and Eric Switzer for helpful
conversations. Funding for the SDSS and SDSS-II has been provided by the
Alfred P. Sloan Foundation, the Participating Institutions, the National
Science Foundation, the U.S. Department of Energy, the National Aeronautics
and Space Administration, the Japanese Monbukagakusho, the Max Planck Society,
and the Higher Education Funding Council for England. The SDSS Web Site is
http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium for the
Participating Institutions. The Participating Institutions are the American
Museum of Natural History, Astrophysical Institute Potsdam, University of
Basel, University of Cambridge, Case Western Reserve University, University of
Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the
Japan Participation Group, Johns Hopkins University, the Joint Institute for
Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and
Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences
(LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for
Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico
State University, Ohio State University, University of Pittsburgh, University
of Portsmouth, Princeton University, the United States Naval Observatory, and
the University of Washington.
# The paleoclassical interpretation of quantum theory
I. Schmelzer ilja.schmelzer@gmail.com ilja-schmelzer.de
###### Abstract.
This interpretation establishes a completely classical ontology – only the
classical trajectory in configuration space – and interprets the wave
function as describing incomplete information (in the form of a probability
flow) about this trajectory. This combines basic ideas of de Broglie-Bohm
theory and Nelsonian stochastics about the trajectory with a Bayesian
interpretation of the wave function.
Various objections are considered and discussed. In particular, a regularity
principle for the zeros of the wave function makes it possible to meet the
Wallstrom objection.
Berlin, Germany
###### Contents
1. 1 Introduction
1. 1.1 The Bayesian character of the paleoclassical interpretation
2. 1.2 The realistic character of the paleoclassical interpretation
3. 1.3 The justification
4. 1.4 Objections
5. 1.5 Directions of future research
2. 2 A wave function variant of Hamilton-Jacobi theory
3. 3 The wave function describes incomplete information
1. 3.1 What about the more general case?
2. 3.2 The classical limit
4. 4 What follows from the interpretation
1. 4.1 The relevance of the information contained in the wave function
2. 4.2 The independence condition
3. 4.3 Appropriate reduction to a subsystem
4. 4.4 Summary
5. 5 Incorporating Nelsonian stochastics
6. 6 The character of the wave function
1. 6.1 Complexity of the wave function
2. 6.2 Time dependence of the wave function
3. 6.3 The contingent character of the wave function
4. 6.4 Conclusion
7. 7 The Wallstrom objection
1. 7.1 A solution for this problem
8. 8 A Popperian argument for preference of an information-based interpretation
9. 9 Open problems
1. 9.1 Other restrictions following from the interpretation
2. 9.2 Why is the Schrödinger equation linear?
10. 10 A theoretical possibility to test: The speedup of quantum computers
1. 10.1 The speed of quantum information as another boundary
11. 11 Conclusions
12. A Compatibility with relativity
13. B Pauli’s symmetry argument
14. C Problems with field theories
15. D Why we observe configurations, not wave packets
## 1\. Introduction
The interpretation presented here completely revives classical ontology:
Reality is described completely by a classical trajectory $q(t)$ in the
classical configuration space $Q$.
The wave function is also interpreted in a completely classical way – as a
particular description of a classical probability flow, defined by the
probability distribution $\rho(q)$ and average velocities $v^{i}(q)$. The
formula defining this connection was proposed by Madelung in 1926 [1]
and is therefore as old as quantum theory itself. It is the polar
decomposition
(1) $\psi(q)=\sqrt{\rho(q)}e^{\frac{i}{\hbar}S(q)},$
with the phase $S(q)$ being a potential of the velocity $v^{i}(q)$:
(2) $v^{i}(q)=m^{ij}\partial_{j}S(q),$
so that the flow is a potential one.111Here, $m^{ij}$ denotes a quadratic form
– a “mass matrix” – on configuration space. I assume in this paper that the
Hamiltonian is quadratic in the momentum variables, thus, (3)
$H(p,q)=\frac{1}{2}m^{ij}p_{i}p_{j}+V(q).$ This is quite sufficient for
relativistic field theory, see app. A.
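As a concrete illustration, the decomposition (1)–(2) can be checked numerically. The following sketch extracts the flow variables $\rho$ and $v$ from a given wave function; the Gaussian packet and all numerical values are illustrative choices of mine (units $\hbar=m=1$), not taken from the paper:

```python
import numpy as np

hbar = 1.0
m = 1.0  # single particle, so the "mass matrix" is just 1/m

# Illustrative wave function on a grid: a Gaussian packet with momentum p0.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
p0, sigma = 2.0, 1.0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2) + 1j * p0 * x / hbar)

# Polar decomposition (1): rho = |psi|^2, S = hbar * arg(psi)
rho = np.abs(psi) ** 2
S = hbar * np.unwrap(np.angle(psi))

# Guiding equation (2): v = (1/m) dS/dq
v = np.gradient(S, dx) / m

print(v[1000])  # at the packet center the flow velocity equals p0/m = 2.0
```

Here `np.unwrap` removes the $2\pi$ phase jumps so that the gradient of $S$ is well defined away from zeros of $\psi$.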
This puts the interpretation into the classical realist tradition of
interpretation of quantum theory, the tradition of de Broglie-Bohm (dBB)
theory [2], [3] and Nelsonian stochastics [4].
But there is a difference – the probability flow is interpreted as describing
incomplete information about the true trajectory $q(t)$. This puts the
interpretation into another tradition – the interpretation of probability
theory as the logic of plausible reasoning in situations with incomplete
information, as proposed by Jaynes [5]. This objective variant of the Bayesian
interpretation of probability follows the classical tradition of Laplace [6]
(to be distinguished from the subjective variant proposed by de Finetti [7]).
So this interpretation combines two classical traditions – classical realism
about trajectories and the classical interpretation of probability as the
logic of plausible reasoning.
There is also some aspect of novelty – a clear and certain program for
development of subquantum theory. Quantum theory makes sense only as an
approximation for potential probability flows. It has to be generalized to
non-potential flows, described by the flow variables $\rho(q),v^{i}(q)$.
Without a potential $S(q)$ there will also be no wave function $\psi(q)$ in
such a subquantum theory. And the flow variables are not fundamental fields
themselves; they, too, describe only incomplete information. The fundamental
theory has to be one for the classical trajectory $q(t)$ alone.
So this interpretation is also a step toward the development of a subquantum
theory. This is an aspect which invalidates prejudices against quantum
interpretations as pure philosophy leading nowhere. Nonetheless, even this
program for subquantum theory has a classical character, making subquantum
theory closer to a classical theory.
Thinking about how to name this interpretation of quantum theory, I have
played around with “neoclassical”, but rejected it, for a simple reason: There
is nothing “neo” in it. Instead, it deserves to be named
“paleo”.222Association with the “paleolibertarian” political direction is
welcome and not misleading – it also revives classical libertarian ideas
considered to be out of date for a long time.
And so I have decided to name this interpretation of quantum theory
“paleoclassical”.
### 1.1. The Bayesian character of the paleoclassical interpretation
The interpretation of the wave function is essentially Bayesian.
It is the objective (information-dependent) variant of Bayesian probability,
as proposed by Jaynes [5], which is used here. It has to be distinguished from
the subjective variant proposed by de Finetti [7], which is embraced by the
Bayesian interpretation of quantum theory proposed by Caves, Fuchs and Schack
[8].
The Bayesian interpretation of the wave function is in conflict with the
objective character assigned to the wave function in dBB theory. In
particular, Bell emphasized that the wave function has to be understood as a
real object.
But the arguments in favour of the reality of the wave function, even if
strong, appear insufficient: The effective wave function of small subsystems
depends on the configuration of the environment. This dependence is sufficient
to explain everything which makes the wave function similar to a really
existing object. It is only the wave function of a closed system, like the
whole universe, which is completely interpreted in terms of incomplete
information about the system itself.
Complexity and time dependence, even if they are typical properties of real
things, are characteristics of incomplete information as well.
The Bayesian aspects essentially change the character of the interpretation:
The dBB “guiding equation” (2), instead of guiding the configuration, becomes
part of the definition of the information about the configuration contained in
the wave function.
This has the useful consequence that there is no longer any “action without
reaction” asymmetry: While it is completely natural that the real
configuration does not have an influence on the information available about
it, there is also no longer any “guiding” of the configuration by the wave
function.
### 1.2. The realistic character of the paleoclassical interpretation
On the other hand, there is also a strong classical realistic aspect of the
paleoclassical interpretation: First of all, it is a realistic interpretation.
But, even more, its ontology is completely classical – the classical
trajectory $q(t)$ is the only beable.
This defines an important difference between the paleoclassical interpretation
and other approaches to interpreting the wave function in terms of information:
It answers the question “information about what” by explicitly defining the
“what” – the classical trajectory – and even the “how” – by giving explicit
formulas for the probability distribution $\rho(q)$ and the average velocity
$v^{i}(q)$.
The realistic character leads also to another important difference – that of
motivation. For the paleoclassical interpretation, there is no need to solve a
measurement problem – there is none already in dBB theory, and the dBB
solution of this problem – the explicit non-Schrödinger evolution of the
conditional wave function of the measured system, defined by the wave function
of system and measurement device and the actual trajectory of the measurement
device – can be used without modification.
And there is also no intention to save relativistic causality by getting rid
of the non-local collapse – the paleoclassical interpretation accepts a hidden
preferred frame, which is yet another aspect of its paleo character. Anyway,
because of Bell’s theorem, there is no realistic alternative. 333Here I, of
course, have in mind the precise meaning of “realistic” used in Bell’s
theorem, instead of the metaphorical one used in “many worlds”, which is, in
my opinion, not even a well-defined interpretation.
### 1.3. The justification
So neither the realistic approach of dBB theory, which remains influenced by
the frequentist interpretation of probability (an invention of the positivist
Richard von Mises [11]) and therefore tends to objectivize the wave function,
nor the Bayesian approach, which embraces the anti-realistic rejection of
unobservables and therefore rejects the preferred frame, is sufficiently
classical to see the possibility of a completely classical picture.
But it is one thing to see it, and possibly even to like it because of its
simplicity, and another thing to consider it as justified.
The justification of the paleoclassical interpretation presented here is based
on a reformulation of classical mechanics in terms of – a wave function. It is
a simple variant of Hamilton-Jacobi theory with a density added, but this
funny reformulation of classical mechanics appears to be the key to the
paleoclassical interpretation. The point is that its exact equivalence to
classical mechanics (in the domain of its validity), and even the very fact
that it soon becomes invalid (because of caustics), almost force us to
accept – for this classical variant – the interpretation in terms of
incomplete information. And it also provides all the details we use.
But, then, why should we change the ontological interpretation if all that
changes is the equation of motion? Moreover, if (as appears to be the case)
the Schrödinger equation is simply the linear part of the classical equation,
so that there is not a single new term which could invalidate the
interpretation? And if one equation is the classical limit of the other
one?
We present even more evidence of the conceptual equivalence between the two
equations: A shared explanation, in terms of information, that there should be
a global $U(1)$ phase shift symmetry, a shared explanation of the product rule
for independence, a shared explanation for the homogeneity of the equation. And
there is, of course, the shared Born probability interpretation. All these
things being equal, why should one even think about giving the two wave
functions a different interpretation?
### 1.4. Objections
There are a lot of objections against this interpretation to take care of.
Some of them are quite irrelevant, because they are already handled
appropriately from the point of view of de Broglie-Bohm theory, so I have
relegated them to appendices: the objection of incompatibility with relativity
(app. A), the Pauli objection that it destroys the symmetry between
configuration and momentum variables (app. B), and some doubts about the
viability of field-theoretic variants related to overlaps (app. C).
There are the arguments in favour of interpreting the wave function as
describing external real beables. These are quite strong but nonetheless
insufficient: The wave functions of small subsystems – and we have no access
to different ones – _depend_ on real beables external to the system itself,
namely the trajectories of the environment. And complexity and dynamics are
properties of incomplete information as well.
And there is the Wallstrom objection [9], in my opinion the most serious one.
How to justify, in terms of $\rho(q)$ and $v^{i}(q)$, the “quantization
condition”
(4) $\oint m_{ij}v^{i}(q)dq^{j}=\oint\partial_{j}S(q)dq^{j}=2\pi m\hbar,\qquad
m\in\mbox{$\mathbb{Z}$}.$
for closed trajectories around zeros of the wave function, which is, in
quantum theory, a trivial consequence of the wave function being uniquely
defined globally, but which is completely implausible if formulated in terms
of the velocity field?
Fortunately, I have found a solution of this problem in [10], based on an
additional regularity postulate. All that has to be postulated is that _$0
<\Delta\rho(q)<\infty$ almost everywhere where $\rho(q)=0$_. This gives the
necessary quantization condition. Moreover, there are sufficiently strong
arguments that it can be justified by a subquantum theory.
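The quantization condition (4) can be made concrete with a small numerical experiment. The two-dimensional vortex state below, with a single zero at the origin, is my own illustrative example (units $\hbar=1$), not taken from [9] or [10]; the circulation of the flow around the zero comes out as exactly one quantum $2\pi\hbar$:

```python
import numpy as np

hbar = 1.0

# Illustrative vortex state with a single zero at the origin:
# psi = (x + i y) * exp(-(x^2 + y^2)/2); its phase winds once around the zero.
theta = np.linspace(0.0, 2 * np.pi, 1001)
r = 0.5
x, y = r * np.cos(theta), r * np.sin(theta)
psi = (x + 1j * y) * np.exp(-(x**2 + y**2) / 2)

# The left-hand side of (4) is the phase accumulated along the closed loop.
S = hbar * np.unwrap(np.angle(psi))
circulation = S[-1] - S[0]
print(circulation / (2 * np.pi * hbar))  # ≈ 1: one quantum of circulation, m = 1
```

In terms of the velocity field alone nothing forces this integral to be an integer multiple of $2\pi\hbar$; here it is, only because the phase comes from a globally single-valued $\psi$.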
The idea that what we observe are localized wave packets instead of the
configurations themselves is rejected in app. D. So all the objections I know
of can be answered in a satisfactory way. Or at least I think so.
### 1.5. Directions of future research
Unlike many other interpretations of quantum theory, the
paleoclassical interpretation suggests a quite definite program of development
of a more fundamental, subquantum theory: It defines the ontology of
subquantum theory as well as the equation which can hold only approximately –
the potentiality condition for the velocity $v^{i}(q)$. The consideration of
the Wallstrom objection even identifies the domain where modifications of
quantum theory are necessary – the environment of the zeros of the wave
function.
One can also identify another domain where quantum predictions will fail in
subquantum theory – the reliability of quantum computers, in particular their
ability to reach exponential speedup in comparison with classical computers.
Another interesting question is what restrictions follow for
$\rho(q),v^{i}(q)$ from the interpretation as a probability flow for a more
fundamental theory for the trajectory $q(t)$ alone. An answer may be
interesting for finding answers to the “why the quantum” question – a
question which cannot be answered by an interpretation, which is restricted to
the “what is the quantum” question.
## 2\. A wave function variant of Hamilton-Jacobi theory
Do you know that one can reformulate classical theory in terms of a wave
function? With an equation for this wave function which is completely
classical, but, nonetheless, quite close to the Schrödinger equation?
In fact, this is a rather trivial consequence of the mathematics of Hamilton-
Jacobi theory and the insights of Madelung [1], de Broglie [2], and Bohm [3].
All one has to do is to look at them from another point of view. Their aim was
to understand quantum theory, by representing quantum theory in a known, more
comprehensible, classical form – a form resembling the classical Hamilton-
Jacobi equation
(5)
$\partial_{t}S(q)+\frac{1}{2}m^{ij}\partial_{i}S(q)\partial_{j}S(q)+V(q)=0.$
So, the real part of the Schrödinger equation, divided by the wave function,
gives
(6)
$\partial_{t}S(q)+\frac{1}{2}m^{ij}\partial_{i}S(q)\partial_{j}S(q)+V(q)+Q[\rho]=0,$
with only one additional term – the quantum potential
(7) $Q[\rho]=-\frac{\hbar^{2}}{2}\frac{\Delta\sqrt{\rho}}{\sqrt{\rho}}$
The imaginary part of the Schrödinger equation (also divided by $\psi(q)$) is
the continuity equation for $\rho$
(8) $\partial_{t}\rho(q,t)+\partial_{i}(\rho(q,t)v^{i}(q,t))=0$
Now, all we have to do is reverse the aim – instead of presenting the
Schrödinger equation as a classical equation, let's present the classical
Hamilton-Jacobi equation as a Schrödinger equation. There is almost nothing
to do – adding a density $\rho(q)$ together with a continuity equation (8) is
trivial. We use the same polar decomposition formula (1) to define the
classical wave function. It remains to apply the same procedure in the other
direction. The difference is the same – the quantum potential. So we obtain,
as the equation for the classical wave function, an equation I have named the
pre-Schrödinger equation:
(9)
$i\hbar\partial_{t}\psi(q,t)=-\frac{\hbar^{2}}{2}m^{ij}\partial_{i}\partial_{j}\psi(q,t)+(V(q)-Q[\rho])\psi(q,t)=\hat{H}\psi-Q[|\psi|^{2}]\psi.$
Of course, the additional term is a nasty, nonlinear one. But that's the
really funny point: We can now obtain quantum theory as the linear
approximation of classical theory – in other words, as a simplification.
But there is more than this in this classical equation for the classical wave
function. The point is that there is an exact equivalence between the
different formulations of classical theory, despite the fact that one is based
on a classical trajectory $q(t)$ only, and the other has a wave function
$\psi(q,t)$ together with the trajectory $q(t)$. And it is this exact
equivalence which can be used to identify the meaning of the classical wave
function.
Because of its importance, let’s formulate this in form of a theorem:
###### Theorem 1 (equivalence).
Assume $\psi(q,t),q(t)$ fulfill the pre-Schrödinger equation (9) for some
Hamilton function $H(p,q)$ of type (3), together with the guiding equation
(2).
Then, whatever the initial values $\psi_{0}(q),q_{0}$ for the wave function
$\psi_{0}(q)=\psi(q,t_{0})$ and the initial configuration $q_{0}=q(t_{0})$,
there exists an initial momentum $p_{0}$ such that the Hamilton equations for
$H(p,q)$ with initial values $q_{0},p_{0}$ give $q(t)$.
###### Proof.
The difficult part of this theorem is classical Hamilton-Jacobi theory, which is
presupposed here as known. The simple modifications of this theory used here –
adding the density $\rho(q)$, with continuity equation (8), and rewriting the
result in terms of the wave function $\psi(q)$ defined by the polar
decomposition (1), copying what has been done by Bohm [3] – do not endanger
the equivalence between Hamiltonian (or Lagrangian) and Hamilton-Jacobi
theory. ∎
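The content of the theorem can be illustrated numerically in a toy case. In the sketch below (free particle, $m=1$; the Hamilton-Jacobi function $S(q,t)=(q-a)^2/2t$ and all numerical values are my own illustrative choices), integrating the guiding equation (2) reproduces the Hamiltonian straight-line trajectory:

```python
# Free particle, m = 1. The Hamilton-Jacobi function S(q, t) = (q - a)^2 / (2 t)
# describes the bundle of trajectories emanating from q = a at t = 0.
a = 0.0

def v(q, t):
    # guiding equation (2): v = dS/dq = (q - a) / t
    return (q - a) / t

# Integrate dq/dt = v(q, t) with RK4 from (t0, q0) = (1, 3) to t = 2.
q, n = 3.0, 10000
dt = 1.0 / n
for i in range(n):
    t = 1.0 + i * dt
    k1 = v(q, t)
    k2 = v(q + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = v(q + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = v(q + dt * k3, t + dt)
    q += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Hamiltonian prediction: p0 = v(3, 1) = 3, so q(2) = q0 + p0 * (2 - 1) = 6.
print(q)  # ≈ 6.0
```

The guided trajectory coincides with the Newtonian one generated by the initial data $q_0$, $p_0=\partial_q S(q_0,t_0)$, as the theorem asserts.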
## 3\. The wave function describes incomplete information
So let’s evaluate now what follows from the equivalence theorem about the
physical meaning of the (yet classical) wave function.
First, it is the classical (Hamiltonian or Lagrangian) variant which is
preferable as a fundamental theory. There are three arguments to justify this:
* •
Simplicity of the set of fundamental variables: We need only a single
trajectory $q(t)$ instead of trajectory $q(t)$ together with a wave function
$\psi(q,t)$.
* •
Correspondence between fundamental variables and observables: It is only the
trajectory $q(t)$ in configuration space which is observable.
* •
Stability in time: The wave function develops caustics after a short period of
time and becomes invalid. The classical equations of motion do not have such
a problem.
So it is the classical variant which is preferable as a fundamental theory.
Thus, we can identify the true beable of classical theory with the classical
trajectory $q(t)$.
The next observation is that, once $q(t)$ is known and fixed, the wave
function contains many degrees of freedom which are unobservable in principle:
Many different wave functions define the same trajectory $q(t)$. So we can
conclude that these different wave functions do _not_ describe physically
different states, containing additional beables.
On the other hand, this is true only if $q$ is known. What if $q$ is not
known? In this case, the wave function defines simply a subset of all possible
classical trajectories, and a probability measure on this subset.
To illustrate this, a particularly important example of a Hamilton-Jacobi
function is useful: It is the function $S(q_{0},t_{0},q_{1},t_{1})$ defined by
(10) $S(q_{0},t_{0},q_{1},t_{1})=\int_{t_{0}}^{t_{1}}L(q(t),\dot{q}(t),t)dt,$
where the integral is taken over the classical solution $q(t)$ of the
corresponding minimum problem with initial and final values $q(t_{0})=q_{0}$,
$q(t_{1})=q_{1}$. This function fulfills the Hamilton-Jacobi equation in the
variables $q_{0},t_{0}$ as well as $q_{1},t_{1}$. In both versions, it can be
characterized as a Hamilton-Jacobi function $S(q,t)$ which is defined by a
subset of trajectories: The function $S(q_{0},t_{0},q,t)$ describes the subset
of trajectories going through $q_{0}$ at $t_{0}$, while the function
$S(q,t,q_{1},t_{1})$ describes the subset of trajectories going through
$q_{1}$ at $t_{1}$.
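For a free particle ($V=0$) this two-point function has the standard closed form $S(q_0,t_0,q_1,t_1)=m(q_1-q_0)^2/2(t_1-t_0)$, and one can verify numerically that it fulfills the Hamilton-Jacobi equation (5) in the variables $q_1,t_1$; the test point below is an arbitrary choice:

```python
m = 1.0

def S(q0, t0, q1, t1):
    # free-particle action along the straight line from (q0, t0) to (q1, t1)
    return m * (q1 - q0) ** 2 / (2 * (t1 - t0))

# Check eq. (5), S_t + (1/2m) S_q^2 = 0 with V = 0, in (q1, t1)
# by central differences at an arbitrary point.
q0, t0, q1, t1, h = 0.3, 0.0, 2.0, 1.5, 1e-6
S_t = (S(q0, t0, q1, t1 + h) - S(q0, t0, q1, t1 - h)) / (2 * h)
S_q = (S(q0, t0, q1 + h, t1) - S(q0, t0, q1 - h, t1)) / (2 * h)
print(abs(S_t + S_q ** 2 / (2 * m)))  # ≈ 0 up to finite-difference error
```

The same check with the roles of $(q_0,t_0)$ and $(q_1,t_1)$ exchanged works identically, since the function is symmetric up to a sign of the time difference.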
We can generalize this. The phase $S(q,t)$ tells us the value of $\dot{q}(t)$
given $q(t)$: _If_ $q(t)=q_{0}$ _then_ $\dot{q}(t)=v_{0}$, with $v_{0}$
defined by the guiding equation (2). So, $S(q,t)$ always distinguishes a
particular subset of classical trajectories.
Even more specifically, this subset described by $S(q,t)$ can be uniquely
described in terms of a subset of the possible initial values at a moment of
time $t_{0}$ – the configuration $q(t_{0})$ and the momentum $p(t_{0})=\nabla
S(q(t_{0}),t_{0})$ – that is, as a subset of the possible values of the
fundamental beables $q,p$ (or $q,\dot{q}$).
The other part – the density $\rho(q)$ – is nothing but a probability density
on this particular subset.
Of course, a subset is nothing but a special delta-like probability measure,
so that the wave function simply defines a probability density on the set of
possible initial values:
(11) $\rho(p,q)dpdq=\rho(q)\delta(p-\nabla S(q))dpdq$
The pre-Schrödinger equation is, therefore, nothing but the evolution equation
for this particular probability distribution.
So our classical, Hamilton-Jacobi wave function is nothing mystical, but
simply a very particular form of a standard classical probability density
$\rho(p,q)$ on the phase space.
In particular, the pre-Schrödinger equation for the wave function is nothing
but a particular case of the Liouville equation, the standard law of evolution
of standard classical probability distributions $\rho(p,q)$, for the
particular ansatz (11), and follows from the fundamental law of evolution for
the true, fundamental classical beables.
Moreover, the Liouville equation also defines the domain of applicability of
the equation for the wave function. This domain is, in fact, restricted. In
terms of $\rho(p,q)$, it will always remain a probability distribution on some
Lagrangian submanifold. But this Lagrangian submanifold will be, after some
time, no longer the graph of a function $p=\nabla S(q)$ on configuration space
– there may be caustics, and in this case there will be several values of
momentum for the same configuration $q$. If this happens, the wave function is
no longer an adequate description.
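The folding mechanism is easy to see numerically. In the sketch below (free particle, $m=1$, with the initial phase $S(q,0)=\cos q$, so $p_0(q)=-\sin q$; all choices are my own illustration, not from the paper), the map $q_0\mapsto q(t)=q_0-t\sin q_0$ is monotone before $t=1$ and folds afterwards:

```python
import numpy as np

# Free particle, m = 1, initial momentum field p0(q) = S'(q) = -sin(q)
# for S(q, 0) = cos(q).  Each trajectory is q(t) = q0 - t * sin(q0).
q0 = np.linspace(-2.0, 2.0, 100001)

def q_at(t):
    return q0 - t * np.sin(q0)

# Before the caustic (t < 1) the map q0 -> q is monotone increasing,
# so the momentum is a single-valued function p = S'(q) of the configuration.
print(np.all(np.diff(q_at(0.5)) > 0))  # True

# After the caustic (t > 1) the map has folded: dq/dq0 = 1 - t*cos(q0) < 0
# near q0 = 0, so several momenta occur at the same configuration q and the
# wave-function description breaks down.
print(np.all(np.diff(q_at(1.5)) > 0))  # False
```

The underlying trajectories, of course, continue perfectly well through the caustic; only the description by a single-valued $S(q)$ fails.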
Such an effect – restricted validity – is quite natural for the evolution of
information, but not for fundamental beables.
So the wave function variant of Hamilton-Jacobi theory almost forces us to
accept an interpretation of the wave function in terms of incomplete
information. Indeed,
* •
The parts of the wave function, $\rho(q)$ as well as $S(q)$, make sense as
describing a well-defined type of incomplete information about the classical
configuration, namely the probability distribution $\rho(p,q)$ defined by
(11).
* •
The alternative, to interpret the wave function as describing some other,
external beables, does not make sense, given the observational equivalence of
the theory with simple Lagrangian classical mechanics, with $q(t)$ as the only
observable. Additional beables should influence, at least in some
circumstances, the $q(t)$. They don’t.
So it looks like incomplete information, it behaves like incomplete
information, it becomes invalid like incomplete information – it is incomplete
information.
And, given that we know, in the case of the pre-Schrödinger equation, the
fundamental law of evolution of the beables themselves, it also makes no sense
to reify this particular probability distribution as objective. A probability
distribution describes a state of incomplete information about the real
configuration, that's all.
### 3.1. What about the more general case?
Of course, the considerations above have been based on the equivalence theorem
between the classical evolution and the evolution defined by the pre-
Schrödinger equation. It was this exact equivalence which was able to give us
some certainty, to remove all doubts that there is something else, some
information about other beables, hidden in the wave function.
But what about the more general case, the case where we do not have an exact
equivalence between an equation in terms of $q,\psi(q)$ and an equation purely
in terms of the classical trajectory $q(t)$? In such a situation, the case for
an interpretation of $\psi(q)$ in terms of incomplete information about the
$q(t)$ is, of course, a little bit weaker.
Nonetheless, given such an ideal correspondence for the pre-Schrödinger
equation, the interpretation remains certainly extremely plausible in a more
general situation too. Indeed, why should one change the interpretation, the
ontological meaning, of $\psi(q)$, if all that has changed is that we have
replaced the pre-Schrödinger equation by another equation? The evolution
equation is different, that’s all. In itself, a change in the evolution
equation does not give even a bit of motivation to change the interpretation.
And it should be noted that those who would reinterpret $\psi(q)$ have a hard
job. The similarity between the two variants of the Schrödinger equation does
not make it easier: In fact, the funny observation that the Schrödinger
equation is the linear part of the pre-Schrödinger equation becomes relevant
here. If the Schrödinger equation contained some new terms, this would open
the door for attempts to show that the new terms do not make sense in the
original interpretation. But there are no new terms in the linear part of the
pre-Schrödinger equation. All terms of the Schrödinger equation are already
contained in the pre-Schrödinger equation. So they all make
sense.444It has to be mentioned in this connection
that there is something present in the Schrödinger equation which is not
present in the pre-Schrödinger equation – the dependence on $\hbar$. So one
can at least try to base an argument on this additional dependence. But, as
described below, if one incorporates a Nelsonian stochastic process, the
dependence on $\hbar$ appears in a natural way as connected with the
stochastic process. So to use this difference to justify a modification of the
ontology remains quite nontrivial.
### 3.2. The classical limit
There is also another strong argument for interpreting both theories in the
same way – that classical theory appears as the classical limit of quantum
theory.
The immediate consequence of this is that both theories have the same
intention – the description, as accurate as possible, of the same reality.
So this is not a situation where the same mathematical apparatus is applied to
quite different phenomena, so that it would be natural to use a different
interpretation even though the same mathematical apparatus is applied. In our
case, we use the same mathematical formalism to describe the same thing. At
least in the classical limit, quantum theory has to describe the same thing –
with the same interpretation – as the classical theory.
## 4\. What follows from the interpretation
But it is not only the similarity between the equations, and the fact that
the object is essentially the same, which suggests using the same
interpretation for both variants of the wave function.
There are also some interesting consequences of the interpretation. And these
consequences, which have to be shared by all wave functions following this
interpretation, are indeed fulfilled by the quantum wave function.
### 4.1. The relevance of the information contained in the wave function
A first consequence of the interpretation can be found considering the
_relevance_ of the information contained in the wave function, as information
about the real beables. In fact, assume that the wave function $\psi(q)$
contains a lot of information which is irrelevant as information about the
$q$. This would open the door for some speculation about the nature of this
additional information. Once it cannot be interpreted as information about the
$q$, it has to contain information about some other real beables. So let’s
consider which of the information contained in the wave function is really
relevant, that is, tells us something about the true trajectory $q(t)$.
Now, knowledge about the probability distribution $\rho(q)$ is certainly
relevant if we don’t know the real value of $q$. And it is relevant in all of
its parts. Then, as we have seen, $S(q)$ gives us, via the “guiding equation”
(2), information about the value of $\dot{q}$ given the value of $q$. Given
that we don’t know the true value of $q$, this information is clearly
relevant. But, in contrast to $\rho(q)$, the function $S(q)$ also contains a
little bit more: Another function $S^{\prime}=S+c$ would give _exactly_ the
same information about the real beables.
As a consequence, the wave function $\psi(q)$ also contains a corresponding
piece of additional, irrelevant information. The polar decomposition (1)
defines what it is – a global constant phase factor.
So we find that all the information contained in $\psi(q)$ – _except for a
global constant phase factor_ – is relevant information about the real beables
$q$.
At first sight this seems trivial, but I think it is a really remarkable
property of the paleoclassical interpretation.
The irrelevance of the phase factor is a property of the interpretation of the
meaning of the wave function, and it follows from this interpretation: we have
considered all the ways of obtaining information about the beables from
$\psi(q)$, and we have found that all the information is relevant, except this
constant phase factor.
And now let's consider the equations. Both equations considered up to now –
the pre-Schrödinger equation as well as the Schrödinger equation – have the
same symmetry with respect to multiplication by a constant phase factor:
together with $\psi(q,t)$, $c\psi(q,t)$ is a solution of the equations too.
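This global phase symmetry is easy to check numerically. The following sketch (my own toy discretization, not from the paper) evolves a wave function with one Crank–Nicolson step of the free 1D Schrödinger equation and confirms that multiplying the initial state by a constant phase factor $c$ multiplies the solution by exactly the same factor, leaving $\rho=|\psi|^2$ unchanged.

```python
import numpy as np

# Free-particle Schrödinger equation on a periodic 1D grid (hbar = m = 1),
# one Crank-Nicolson step: (1 + i H dt/2) psi' = (1 - i H dt/2) psi.
N, L, dt = 256, 20.0, 0.01
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# Discrete Laplacian with periodic boundary conditions.
lap = (np.roll(np.eye(N), 1, axis=0) - 2 * np.eye(N)
       + np.roll(np.eye(N), -1, axis=0)) / dx**2
H = -0.5 * lap  # kinetic term only

def step(psi):
    A = np.eye(N) + 0.5j * dt * H
    B = np.eye(N) - 0.5j * dt * H
    return np.linalg.solve(A, B @ psi)

psi0 = np.exp(-x**2) * np.exp(1j * 0.5 * x)  # Gaussian packet with momentum
c = np.exp(1j * 1.2)                         # a global constant phase factor

psi_t = step(psi0)
psi_t2 = step(c * psi0)

# The evolved states differ by exactly the same constant phase ...
assert np.allclose(psi_t2, c * psi_t)
# ... so rho, and hence all information about q, is unchanged.
assert np.allclose(np.abs(psi_t2)**2, np.abs(psi_t)**2)
print("global phase symmetry confirmed")
```

The same check passes for any homogeneous-of-degree-one operator, which is the point developed in section 4.3.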
This is, of course, how it should be if the wave function describes incomplete
information about the $q$. If, instead, it described some different, external
beables, there would be no reason at all to expect such a symmetry. Indeed,
why should the values of some function at points far away from each other be
connected with each other by such a global symmetry?
So every interpretation which assigns the status of reality to $\psi(q)$ has,
in comparison, a problem explaining this symmetry. A popular solution is to
assign reality not to $\psi(q)$ but, instead, to the even more complicated
density matrix $|\psi\rangle\langle\psi|$. The flow variables are, obviously,
preferable on grounds of simplicity.
### 4.2. The independence condition
It is one of the great advantages of Jaynes' information-based approach to
plausible reasoning [5] that it contains common-sense principles to be applied
in situations with insufficient information. If we have no information which
distinguishes the probabilities of the six possible outcomes of throwing a
die, we _have_ to assign equal probability to them. Everything else would be
irrational. (The purely subjectivist, de Finetti approach differs here: it
does not make prescriptions about the initial probability distributions – all
that is required is that one updates the probabilities following the rules of
Bayesian updating. This difference is one of the reasons for my preference for
the objective approach proposed by Jaynes [5].)
Similar principles work if we consider the question of independence. From a
frequentist point of view, independence is a quite nontrivial physical
assumption, which has to be tested. And, in fact, there is no justification
for it, at least none coming from the frequentist interpretation. From the
point of view of the logic of plausible reasoning, the situation is much
better: as long as we have no _information_ which justifies any hypothesis of
a dependence between two statements $A$ and $B$, we _have_ to assume their
independence. There is no information which makes $A$ more or less plausible
given $B$, so we have to assign $P(A|B)=P(A)$. But this is just the condition
of independence, $P(AB)=P(A)P(B)$.
The different status of plausibilities in comparison with physical hypotheses
makes this obligatory character reasonable. Probabilities are not hypotheses
about reality, but logical conclusions, derived from the available information
using the logic of plausible reasoning.
Now, all this is explained much better in [5], so why discuss it here? The
point is that I want to obtain here a formula which appropriately describes
different independent subsystems. Subsystems are independent if we have no
information suggesting their dependence. This is a quite typical situation, so
independence is quite typical too.
So assume we have two subsystems, and we have no information suggesting any
dependence between them. What do we have to assume based on the logic of
plausible reasoning?
For the probability distribution itself, the answer is trivial:
(12) $\rho(q_{1},q_{2})=\rho_{1}(q_{1})\rho_{2}(q_{2}).$
But what about the phase function $S(q)$? We have to assume that the velocity
of one system is independent of the state of the other one. So the potential
of that velocity also should not depend on the state of the other system. So
we obtain
(13) $S(q_{1},q_{2})=S_{1}(q_{1})+S_{2}(q_{2}).$
Combined via the polar decomposition (1), this gives the following rule for
the wave function:
(14) $\psi(q_{1},q_{2})=\psi_{1}(q_{1})\psi_{2}(q_{2}).$
Of course, again, the rule is trivial. Nobody would have proposed any other
rule for defining a wave function in the case of independence. Nonetheless, I
consider it remarkable that it has been derived from the very logic of
plausible reasoning and the interpretation of the wave function.
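As a quick numerical consistency check (a sketch with arbitrary data of my own, in units with $\hbar=1$): if $\rho=\rho_1\rho_2$ as in (12) and $S=S_1+S_2$ as in (13), the polar decomposition $\psi=\sqrt{\rho}\,e^{iS}$ automatically yields the product wave function of (14).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Arbitrary single-system data on two grids (hbar = 1 units).
rho1, rho2 = rng.random(n) + 0.1, rng.random(n) + 0.1
S1, S2 = rng.random(n), rng.random(n)

psi1 = np.sqrt(rho1) * np.exp(1j * S1)
psi2 = np.sqrt(rho2) * np.exp(1j * S2)

# Independence rules (12), (13) on the product configuration space:
rho = np.outer(rho1, rho2)        # rho(q1, q2) = rho1(q1) * rho2(q2)
S = S1[:, None] + S2[None, :]     # S(q1, q2) = S1(q1) + S2(q2)

# Build the joint wave function from the polar decomposition ...
psi = np.sqrt(rho) * np.exp(1j * S)

# ... and it coincides with the product rule (14).
assert np.allclose(psi, np.outer(psi1, psi2))
print("polar decomposition maps (12), (13) to the product rule (14)")
```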
And, again, this property is shared by both variants, the quantum as well as
the classical one. And if one thinks about the applicability of this rule to
the classical variant of the wave function, one has to recognize that it is
nontrivial: from a classical point of view, the rule of combining $\rho(q)$
and $S(q)$ into a wave function $\psi(q)$ is quite arbitrary, and there is no
a priori reason to expect that such an arbitrary combination transforms the
rule for defining independent classical states into a simple rule for
independent wave functions, much less into the very one used in quantum
theory.
### 4.3. Appropriate reduction to a subsystem
While the pre-Schrödinger equation is non-linear, it shares with the
Schrödinger equation the weaker property of homogeneity of degree one: the
operator condition
(15) $\Omega(c\psi)=c\Omega(\psi)$
holds not only for the Schrödinger operator but also for the non-linear pre-
Schrödinger operator, and not only for $|c|=1$, as required by the $U(1)$
symmetry which we have already justified, but for arbitrary $c$, so that the
$U(1)$ symmetry is, in fact, a $GL(1)$ symmetry of the equations.
So one ingredient of linearity is shared by the pre-Schrödinger equation. This
suggests that there should be a more fundamental explanation for this
property, one which is shared by both equations. And, indeed, such an
explanation is possible. There should be a principle which allows an
appropriate reduction of the equation to subsystems, something like the
following
###### Principle 1 (splitting principle).
There should be a simple “no interaction” condition for the operators on a
system consisting of two subsystems, of the type
$\Omega=\Omega_{1}+\Omega_{2}$, such that the equation
$\partial_{t}\psi=\Omega(\psi)$ splits for independent product states
$\psi(q_{1},q_{2})=\psi_{1}(q_{1})\psi_{2}(q_{2})$ into independent equations
for the two subsystems.
Now, the time derivative splits nicely:
(16)
$\partial_{t}\psi(q_{1},q_{2})=\partial_{t}\psi_{1}(q_{1})\psi_{2}(q_{2})+\psi_{1}(q_{1})\partial_{t}\psi_{2}(q_{2}).$
It remains to insert the equations for the whole system
$\partial_{t}\psi=\Omega(\psi)$ as well as for the two subsystems into this
equation. This gives:
(17)
$\Omega(\psi_{1}(q_{1})\psi_{2}(q_{2}))=\Omega_{1}(\psi_{1}(q_{1}))\psi_{2}(q_{2})+\psi_{1}(q_{1})\Omega_{2}(\psi_{2}(q_{2})),$
where the $\Omega_{i}$ are subsystem operators acting only on the $q_{i}$, so
that functions of the other variable are simply constants for them. On the
other hand, in the splitting principle we have assumed
$\Omega=\Omega_{1}+\Omega_{2}$. Comparison gives
(18)
$\begin{split}\Omega_{1}(\psi_{1}(q_{1})\psi_{2}(q_{2}))&=\Omega_{1}(\psi_{1}(q_{1}))\psi_{2}(q_{2}),\\\
\Omega_{2}(\psi_{1}(q_{1})\psi_{2}(q_{2}))&=\psi_{1}(q_{1})\Omega_{2}(\psi_{2}(q_{2})).\end{split}$
This follows from the homogeneity condition (15). The weaker $U(1)$ symmetry
is not sufficient, because the values of $\psi_{2}(q_{2})$ may be arbitrary
complex numbers.
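The role of homogeneity in (18) can be illustrated with a toy nonlinear operator (my own example, not from the paper): $\Omega_1(\psi)=(\partial_{q_1}\psi)^2/\psi$ is homogeneous of degree one but not linear, and it nevertheless satisfies the first line of (18), precisely because a factor $\psi_2(q_2)$ is a constant for an operator acting only on $q_1$.

```python
import sympy as sp

q1, q2, c = sp.symbols('q1 q2 c')
psi1 = sp.Function('psi1')(q1)
psi2 = sp.Function('psi2')(q2)

# A toy nonlinear operator acting only on q1, homogeneous of degree one:
# Omega1(c * psi) = c * Omega1(psi) for any constant c, although Omega1
# is clearly not linear.
def Omega1(psi):
    return sp.diff(psi, q1)**2 / psi

# Homogeneity of degree one, condition (15):
assert sp.simplify(Omega1(c * psi1) - c * Omega1(psi1)) == 0

# First line of (18): on a product state, the factor psi2(q2) is a
# constant for Omega1 and passes through it.
lhs = Omega1(psi1 * psi2)
rhs = Omega1(psi1) * psi2
assert sp.simplify(lhs - rhs) == 0
print("homogeneity of degree one gives the splitting property (18)")
```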
Something similar to the splitting property is necessary for any equation
relevant for us – we do not have enough information to consider the equation
of the whole universe, and thus have to restrict ourselves to small
subsystems. And the equations of these subsystems should be at least
approximately independent of the state of the environment.
So the homogeneity of the equations – in itself a quite nontrivial property,
given the definition of the wave function by polar decomposition – can be
explained by this splitting property.
### 4.4. Summary
So we have found three points, each in itself quite trivial, but each
containing some nontrivial element of explanation based on the paleoclassical
principles: the global $U(1)$ phase shift symmetry of the wave function,
explained by the irrelevance of the phase as information about $q(t)$; the
product rule for independence, explained by the logic of plausible reasoning
applied to the information described by the wave function; and the homogeneity
of the equations, explained in terms of a splitting property giving
independent equations for subsystems.
All three points follow the same scheme – the interpretation in terms of
incomplete information allows us to derive a rule, and this rule is shared by
both theories, quantum theory as well as the wave function variant of
classical theory.
So all three points give additional evidence that the simple, straightforward
proposal to use the same interpretation of the wave function for both theories
is the correct one.
## 5\. Incorporating Nelsonian stochastics
The analogy between the pre-Schrödinger and Schrödinger equations is a useful
one, but it is useful precisely because the equations are nonetheless very
different. And one should not ignore these differences.
One should not, in particular, try to interpret the Schrödinger equation as
the linear approximation of the pre-Schrödinger equation: the pre-Schrödinger
equation does not depend on $\hbar$, while real physical effects do. And so
there should also be something in the fundamental theory, the theory in terms
of the trajectory $q(t)$, which depends on $\hbar$.
Here, Nelsonian stochastics comes to mind. In Nelsonian stochastics the
development of the configuration $q$ in time is described by a deterministic
drift term $b^{i}(q(t),t)dt$ and a stochastic diffusion term $dB^{i}_{t}$,
(19) $dq^{i}(t)=b^{i}(q(t),t)dt+dB^{i}_{t},$
where $B^{i}_{t}$ is a classical Wiener process with expectation $0$ and
variance
(20) $\langle dB^{i}_{t}m_{ij}dB^{j}_{t}\rangle=\hbar dt,$
so that we have an $\hbar$-dependence on the fundamental level. The
probability distribution $\rho(q(t),t)$ then has to fulfill the Fokker–Planck
equation:
(21) $\partial_{t}\rho+\partial_{i}(\rho
b^{i})-\frac{\hbar}{2}m^{ij}\partial_{i}\partial_{j}\rho=0$
For the average velocity $v^{i}$ one obtains
(22)
$v^{i}(q(t),t)=b^{i}(q(t),t)-\frac{\hbar}{2}\frac{m^{ij}\partial_{j}\rho(q(t),t)}{\rho(q(t),t)},$
which fulfills the continuity equation. The difference between flow velocity
$b^{i}$ and average velocity $v^{i}$, the osmotic velocity
(23)
$u^{i}(q(t),t)=b^{i}(q(t),t)-v^{i}(q(t),t)=\frac{\hbar}{2}\frac{m^{ij}\partial_{j}\rho(q(t),t)}{\rho(q(t),t)}=\frac{\hbar}{2}m^{ij}\partial_{j}\ln\rho(q(t),t)$
has a potential $\ln\rho(q)$. The average acceleration is given by
(24)
$a^{i}(q(t),t)=\partial_{t}v^{i}+v^{j}\partial_{j}v^{i}-\frac{\hbar^{2}}{2}m^{ij}\partial_{j}\left(\frac{m^{kl}\partial_{k}\partial_{l}\sqrt{\rho(q(t),t)}}{\sqrt{\rho(q(t),t)}}\right)$
For this average acceleration the classical Newtonian law
(25) $a^{i}(q(t),t)=-m^{ij}\partial_{j}V(q(t),t)$
is postulated. Putting this into the equation (24) gives
(26)
$\partial_{t}v^{i}+(v^{j}\partial_{j})v^{i}=-m^{ij}\partial_{j}\left(V-\frac{\hbar^{2}}{2}\frac{\Delta\sqrt{\rho}}{\sqrt{\rho}}\right)=-m^{ij}\partial_{j}(V+Q[\rho]).$
The next postulate is that the average velocity $v^{i}(q)$ has, at places
where $\rho>0$, a potential, that means, a function $S(q)$, so that (2) holds.
Then equation (26) can be integrated (putting the integration constant into
$V(q)$), which gives (6). Finally, one combines the equations for $S(q)$ and
$\rho(q)$ into the Schrödinger equation as in de Broglie-Bohm theory.
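The structure of equations (19)–(21) can be checked in a minimal simulation (a sketch with my own parameter choices: one dimension, $m_{ij}=\delta_{ij}$, $\hbar=1$). For a stationary Gaussian $\rho\propto e^{-q^2/2\sigma^2}$ with $v=0$, the drift is pure osmotic velocity, $b=u=\frac{\hbar}{2}\partial_q\ln\rho=-\frac{\hbar q}{2\sigma^2}$, and an Euler–Maruyama ensemble indeed relaxes to variance $\sigma^2$, the stationary solution of the Fokker–Planck equation (21).

```python
import numpy as np

hbar, sigma2 = 1.0, 2.0            # hbar = 1; target stationary variance sigma^2
dt, n_steps, n_part = 0.01, 5000, 20000
rng = np.random.default_rng(1)

def b(q):
    # With v = 0 the drift is the osmotic velocity u = (hbar/2) d/dq ln rho
    # for rho ~ exp(-q^2 / (2 sigma^2)), i.e. b(q) = -hbar q / (2 sigma^2).
    return -hbar * q / (2.0 * sigma2)

q = rng.normal(0.0, 3.0, n_part)   # start far from the stationary distribution
for _ in range(n_steps):
    # Equation (19): dq = b dt + dB,  with <dB^2> = hbar dt  (equation (20))
    q += b(q) * dt + rng.normal(0.0, np.sqrt(hbar * dt), n_part)

# The ensemble variance relaxes to sigma^2 (up to Monte-Carlo and
# time-discretization error).
assert abs(q.var() - sigma2) < 0.1
print(f"stationary variance ~ {q.var():.3f} (target {sigma2})")
```

This is only a consistency check of the stochastic kinematics; it involves neither the Newtonian postulate (25) nor the potentiality assumption discussed below.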
So much for the basic formulas of Nelsonian stochastics. Now, the beables of
Nelsonian stochastics are quite different from those of the paleoclassical
interpretation. The external flow $b^{i}(q,t)$ does not describe some state of
information, but some objective flow which carries the configuration away if
it is at $q$. The configuration is guided in a way similar to a swimmer in the
ocean: he can randomly swim in one or another direction, but wherever he is,
he is always driven by the flow of the ocean, a flow which exists
independently of the swimmer.
What would be the picture suggested for a stochastic process by the
paleoclassical interpretation? It would be different, closer to a spaceship
flying through the cosmos. If it changes its place because of some stochastic
process, there will be no predefined, independent velocity $b^{i}(q)$ at the
new place which could guide it. Instead, it has to follow its previous
velocity. Indeed, the velocity field $v^{i}(q)$, and therefore also
$b^{i}(q)$, at the new place describes only information, not a physical field
which could possibly influence the spaceship in its new position.
But what follows for the information if the stochastic process causes, with
some probability, the particle to change its location? It means that the old
average velocity has to be recomputed.
Of course, there is a necessity to recompute only if the velocity at the new
location is different from the old one. So, if
$\partial_{i}v^{k}=m^{jk}\partial_{i}p_{j}=0$, there would be no need for such
a recomputation at all. This condition is clearly too strong – it would force
the whole velocity field to be constant. But there is an interesting
subcondition, namely $\partial_{i}p_{j}-\partial_{j}p_{i}=0$, the condition
that the velocity field is a potential one. So the re-averaging of the average
velocity $v^{i}(q)$ caused by the stochastic process will decrease the curl.
This gives a first advantage in comparison with Nelsonian stochastics: the
potentiality assumption does not have to be postulated, without any
justification, for the external flow $b^{i}(q,t)$. (There it is postulated for
the average velocity, but their difference – the osmotic velocity – has a
potential anyway.) In the paleoclassical interpretation we have an independent
motivation for postulating it. But let's recognize that there is no
fundamental reason to postulate potentiality: on the fundamental level, the
curl may be nontrivial. All that is achieved is that the assumption of
potentiality is not completely unmotivated, but appears as a natural
consequence of the necessary re-averaging.
There is another strangeness connected with the external flow picture of
Nelsonian stochastics. The probability distribution $\rho(q)$ already
characterizes the information about the configuration itself – it is a
probability for the swimmer in the ocean, not for the ocean. Since they depend
on $\rho(q)$, the average velocity $v^{i}(q)$ as well as the average
acceleration $a^{i}(q)$ also describe information about the swimmer, not about
the ocean.
Then it is postulated that the average acceleration of the swimmer has to be
given by the Newtonian law. But because this average velocity is, essentially,
already defined by the very process – the flow of the ocean – together with
the initial value for the swimmer, this condition for the swimmer becomes an
equation for the ocean. This is conceptually unsound – as if the ocean had to
care that the swimmer is always correctly accelerated.
But this conceptual inconsistency disappears in the paleoclassical
interpretation. The drift field is now part of the incomplete information
about the configuration itself, as defined by the average velocity and osmotic
velocity. There is no longer any external drift field. And, so, it is quite
natural that a condition for the average acceleration of the configuration
gives an equation for the average velocity $v^{i}(q)$. So the paleoclassical
picture is internally much more consistent.
But is it viable? This is a quite non-trivial question discussed below in sec.
9.1.
## 6\. The character of the wave function
Let’s start with the consideration of the objections against the
paleoclassical interpretation. Given that the basic formulas are not new at
all, I do not have to wait for reactions to this paper – some quite strong
arguments are already well-known.
The first one is the evidence in favour of the thesis that the wave function
describes the behaviour of real degrees of freedom, degrees of freedom which
actually influence the things we can observe immediately. Here, Bell's
argumentation comes to mind first – an argumentation for a double ontology
which, I think, has impressed many of those who today support realistic
interpretations:
> Is it not clear from the smallness of the scintillation on the screen that
> we have to do with a particle? And is it not clear, from the diffraction and
> interference patterns, that the motion of the particle is directed by a
> wave? ([20] p. 191).
But what are the points which make this argument so impressive? What is it
that motivates us to accept some things as real? Here I see no way to express
this better than Brown and Wallace:
> From the corpuscles’ perspective, the wave-function is just a (time-
> dependent) function on their configuration space, telling them how to
> behave; it superficially appears similar to the Newtonian or Coulomb
> potential field, which is again a function on configuration space. No-one
> was tempted to reify the Newtonian potential; why, then, reify the wave-
> function?
>
> Because the wave-function is a very different sort of entity. It is
> contingent (equivalently, it has dynamical degrees of freedom independent of
> the corpuscles); it evolves over time; it is structurally overwhelmingly
> more complex (the Newtonian potential can be written in closed form in a
> line; there is not the slightest possibility of writing a closed form for
> the wave-function of the Universe.) Historically, it was exactly when the
> gravitational and electric fields began to be attributed independent
> dynamics and degrees of freedom that they were reified: the Coulomb or
> Newtonian ‘fields’ may be convenient mathematical fictions, but the Maxwell
> field and the dynamical spacetime metric are almost universally accepted as
> part of the ontology of modern physics.
>
> We don’t pretend to offer a systematic theory of which mathematical entities
> in physical theories should be reified. But we do claim that the decision is
> not to be made by fiat, and that some combination of contingency, complexity
> and time evolution seems to be a requirement. ([15] p. 12-13)
So, let’s consider the various points in favour of the reality of the wave
function:
### 6.1. Complexity of the wave function
The argument of complexity seems powerful. But, in fact, for an interpretation
in terms of incomplete information _this_ is not a problem at all.
Complexity is, in fact, a natural consequence of incompleteness of
information. The complete information about the truth of a statement is a
single bit: true or false. The incomplete information is much more complex: it
is a real number, the probability.
Likewise, the complete information about reality in this interpretation is
simple: a single trajectory $q(t)$. But incomplete information requires much
more: essentially, we need probabilities for all possible trajectories.
### 6.2. Time dependence of the wave function
Time dependence is, as well, a natural property of information – complete or
incomplete. The information about where the particle has been yesterday
transforms into some other information about where the particle is now.
This transformation is, moreover, quite nontrivial and complex.
It is also worth noting here that the law of transformation of information
derives from the real, physical law for the behaviour of the real beables. So
it necessarily has all the properties of a physical law.
We can, in particular, use the standard Popperian scientific method (making
hypotheses, deriving predictions from them, testing and falsifying them, and
inventing better hypotheses) to find these laws.
This is, conceptually, a quite interesting point: the laws of probability
themselves are best understood, following Jaynes [5], as laws of extended
logic, of the logic of plausible reasoning.
But, instead, the laws of _transformation_ of probabilities in time follow
from the laws of the original beables in time, and, therefore, have the
character of physical laws.
Or, in other words, incomplete information develops in time in a way
indistinguishable from the development in time of real beables. In particular,
we use the same objective scientific methods to find and test them.
So neither the simple fact that there is a nontrivial time evolution, nor the
thoroughly physical character of the dynamical laws – in all details, up to
the scientific method we use to find them – gives any argument against an
interpretation in terms of incomplete information. All this is quite natural
for the evolution of incomplete information too.
### 6.3. The contingent character of the wave function
There remains the most powerful argument in favour of the reality of the wave
function: Its contingent character.
There are different wave functions, and these different wave functions lead to
objectively different probability distributions of the observable results.
If we have, in particular, different preparation procedures, leading to
different interference pictures, we really observe different interference
pictures. It is completely implausible that these different interference
pictures – quite objective pictures – could be the result of different sets of
incomplete information about the same reality. The different interference
pictures are, clearly, the result of different things happening in reality.
But, fortunately, there is not even a reason to disagree with this. The very
point is that one has to distinguish the wave function of a small subsystem –
we have no access to any other wave functions – from the wave function of a
closed system. The latter is, in fact, only an object of purely theoretical
speculation, because there is no closed system in nature except the whole
universe, and we have no idea about the wave function of the whole universe.
For the wave function of a small subsystem, the situation is quite different.
It does not contain only incomplete information about the subsystem. In fact,
it is only an effective wave function, and there is a nice formula of dBB
theory, which can be used in our paleoclassical interpretation too: The
formula which defines the conditional wave function of a subsystem
$\psi^{S}(q_{S},t)$ in terms of the wave function of the whole system (say the
whole universe) $\psi(q_{S},q_{E},t)$ and the configuration of the environment
$q_{E}(t)$:
(27) $\psi^{S}(q_{S},t)=\psi(q_{S},q_{E}(t),t)$
This is a remarkable formula of dBB theory which contains, in particular, the
solution of the measurement problem: The evolution of $\psi^{S}(q_{S},t)$ is,
in general, not described by a Schrödinger equation – if there is interaction
with the environment, the evolution of $\psi^{S}(q_{S},t)$ is different, but,
nonetheless, completely well-defined. And this different evolution is the
collapse of the wave function caused by measurement.
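This effective-collapse behaviour can be illustrated in a toy model (entirely my own construction, not from the paper): take $\psi(q_S,q_E)=\psi_1(q_S)\phi_1(q_E)+\psi_2(q_S)\phi_2(q_E)$ with pointer packets $\phi_1,\phi_2$ of negligible overlap; if the actual environment configuration $q_E(t)$ sits inside the support of $\phi_1$, the conditional wave function (27) is, up to normalization, just the branch $\psi_1$.

```python
import numpy as np

qS = np.linspace(-5, 5, 400)

def packet(x, x0, k=0.0, w=0.5):
    # Gaussian packet centered at x0 with momentum k (a toy choice).
    return np.exp(-(x - x0)**2 / (2 * w**2)) * np.exp(1j * k * x)

# System branches and two well-separated pointer states of the environment.
psi1, psi2 = packet(qS, -1.0, k=2.0), packet(qS, +1.0, k=-1.0)
phi1 = lambda qE: packet(qE, -10.0)
phi2 = lambda qE: packet(qE, +10.0)

# The environment actually ended up inside the first pointer packet:
qE_actual = -10.3

# Formula (27): slice the joint state psi1*phi1 + psi2*phi2 at q_E(t).
psi_cond = psi1 * phi1(qE_actual) + psi2 * phi2(qE_actual)

# Up to a constant factor, the conditional wave function is the branch psi1:
ratio = psi_cond / psi1
assert np.allclose(ratio, ratio[0])
print("effective collapse: conditional wave function proportional to psi1")
```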
Let's note that the paleoclassical interpretation requires justifying this
formula in terms of the information about the subsystem. But this is not a
problem. Indeed, assume the trajectory of the environment $q_{E}(t)$ is known
– say by observation of a classical, macroscopic measurement device. Then the
combination of the knowledge described by the wave function of the whole
system with the knowledge of $q_{E}(t)$ gives exactly the same knowledge as
that described by $\psi^{S}(q_{S},t)$. Indeed, the probability distribution
gives
(28) $\rho^{S}(q_{S},t)=\rho(q_{S},q_{E}(t),t),$
and, similarly, the velocity field defined by $S(q)$ follows the same
reduction principle:
(29) $\nabla S^{S}(q_{S},t)=\nabla S(q_{S},q_{E}(t),t).$
So in the paleoclassical interpretation the dBB formula for the conditional
wave function of a subsystem is a logical necessity. This provides yet another
consistency check for the interpretation.
But the reason for considering this formula here was a different one: The
point is that the wave function of the subsystem in fact contains important
information about other real beables – the actual configuration of the whole
environment $q_{E}(t)$. So there are real degrees of freedom, different from
the configuration of the system $q_{S}(t)$ itself, which are distinguished by
different wave functions $\psi^{S}(q_{S},t)$.
And we do not have to object at all if one argues that the wave function
contains such additional degrees of freedom. That’s fine, it really contains
them. These degrees of freedom are those of the configuration of the
environment $q_{E}(t)$. And this is not an excuse, but a logical consequence
of the interpretation itself, a consequence of the definition of the
conditional wave function of the subsystem (27).
### 6.4. Conclusion
So the arguments in favour of a beable status for the wave function, even if
they seem strong and decisive at first sight, turn out to be in no conflict at
all with the interpretation of the wave function of a closed system in terms
of incomplete information about this system.
The most important point in understanding this is, of course, the very fact
that the conditional wave functions of the small subsystems of the universe we
can actually consider really have a different character – they depend on the
configuration of the environment. And so the argument that these conditional
wave functions describe real degrees of freedom, external to the system
itself, is accepted and even derived from the interpretation.
Nonetheless, the point that neither the complexity of the wave function of the
whole universe, nor its evolution in time, nor the physical character of the
laws of this evolution is in any conflict with an interpretation in terms of
incomplete information is an important insight too.
## 7\. The Wallstrom objection
Wallstrom [9] has made an objection against giving the fields of the polar
decomposition $\rho(q)$, $S(q)$ (instead of the wave function $\psi(q)$
itself) a fundamental role.
The first point is that around the zeros of the quantum mechanical wave
function, the flow no longer has a potential. The quantum flow is a potential
one only where $\rho(q)>0$, but in general there will be submanifolds of
dimension $n-2$ where the wave function is zero. And for a closed path $q(s)$
around such a zero submanifold one finds that
(30) $\oint m_{ij}v^{i}(q)dq^{j}\neq 0.$
This, in itself, is unproblematic for the interpretation: The condition of
potentiality is not assumed to be a fundamental one – the fundamental object
is not $S(q)$ but the $v^{i}(q)$. There will be some mechanism in subquantum
theory which locally reduces violations of potentiality, so we can assume that
the flow is a potential one only as an approximation.
It is also quite natural to assume that such a mechanism works more
efficiently for higher densities and fails near the zeros of the density.
So having the violations of potentiality localized at the zeros of the density
is quite fitting – not for a really fundamental equation, which is not
supposed to have any infinities, but if we consider quantum theory as being
only an approximation.
The really problematic part of the Wallstrom objection is a different one: it
is that the quantum flow has to fulfill a nontrivial _quantization condition_,
namely
(31) $\oint m_{ij}v^{i}(q)dq^{j}=\oint\partial_{j}S(q)dq^{j}=2\pi
m\hbar,\qquad m\in\mbox{$\mathbb{Z}$}.$
The point is, in particular, that the equations (6), (8) in flow variables are
not sufficient to derive this quantization condition. So, in fact, this set of
equations is _not_ empirically equivalent to the Schrödinger equation.
This is, of course, no wonder, given the fact that the equivalence holds only
for $\rho(q)=|\psi(q)|^{2}>0$. But, however natural, empirical inequivalence
is empirical inequivalence.
Moreover, this condition looks quite artificial in terms of the $v^{i}$. What
is a triviality in terms of the wave function – that it has to be globally
uniquely defined – becomes extremely artificial and strange when formulated in
terms of the $v^{i}(q)$. As Wallstrom [9] writes, to “the best of my
knowledge, this condition [(31)] has not yet found any convincing explanation
outside the context of the Schrödinger equation”.
### 7.1. A solution for this problem
Fortunately I have found a solution for this problem in [10]. I do not claim
that it is a complete one – there is a part which is beyond the scope of an
interpretation, which has to be left to particular proposals for a subquantum
theory. One has to check if the assumptions I have made about such a
subquantum theory are really fulfilled in that particular theory.
The first step of the solution is to recognize that, for empirical equivalence
with quantum theory, it is sufficient to recover only solutions with simple
zeros. Such simple zeros give $m=\pm 1$ in the quantization condition (31).
This is a consequence of the principle of general position: a small enough
modification of the wave function cannot be excluded by observation, but leads
to a wave function in general position, and in general position the zeros of
the wave function are non-degenerate.
The next step is a look at the actual solutions. For the simple, two-
dimensional, rotationally invariant, zero-potential case these solutions are
given by $S(q)=m\varphi$, $\rho(q)=r^{2|m|}$. And this extends to the general
situation, where $S(q)=m\varphi+\tilde{S}(q)$,
$\rho(q)=r^{2|m|}\tilde{\rho}(q)$, such that $\tilde{S}(q)$ is well-defined in
a whole neighborhood of the zero, and $\tilde{\rho}(0)>0$.
But that means we can replace the problem of justifying an integer $m$ in
$S(q)=m\varphi$, where all values of $m$ seem equally plausible, by the quite
different problem of justifying $\rho(q)=r^{2}$ (since we need only $m=\pm 1$)
in comparison with other $\rho(q)=r^{\alpha}$. This is already a quite
different perspective.
We make the natural conclusion and invent a criterion which prefers
$\rho(q)=r^{2}$ in comparison with other $r^{\alpha}$. This is quite easy:
###### Postulate 1 (regularity of $\Delta\rho$).
If $\rho(q)=0$, then $0<\Delta\rho(q)<\infty$ almost everywhere.
This postulate already solves the inequivalence argument. The equations (6),
(8) for $\rho(q)>0$, together with postulate 1, already define a theory
empirically equivalent to quantum theory (even if the equivalence is not
exact, because only solutions in general position are recovered).
It remains to invent a justification for this postulate.
The next step is to rewrite equation (6) for stable states in the form of a
balance of energy densities. In particular, we can rewrite the densitized
quantum potential as
(32) $Q[\rho]\rho=\frac{1}{2}\rho u^{2}-\frac{1}{4}\Delta\rho,$
with the “osmotic velocity” $u(q)=\frac{1}{2}\nabla\ln\rho(q)$ (in units with
$\hbar=1$). Then the energy balance reads
(33) $\frac{1}{2}\rho v^{2}+\frac{1}{2}\rho u^{2}+\rho V(q)=\frac{1}{4}\Delta\rho.$
So, the operator we have used in the postulate is not an arbitrary expression,
but a meaningful term which appears in an important equation – an energy
balance. This observation is already sufficient to justify the
$\Delta\rho(q)<\infty$ part of the condition. There may be, of course,
subquantum theories which allow for infinities in energy densities, but it is
hardly a problem for a subquantum theory to justify that expressions which
appear as energy densities in energy balances have to be finite.
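Identity (32) is a straightforward calculus exercise; here is a one-dimensional symbolic check (my own verification, in units with $\hbar=1$): with $Q[\rho]=-\frac{1}{2}\Delta\sqrt{\rho}/\sqrt{\rho}$, as in (26), and $u=\frac{1}{2}\partial_q\ln\rho$, one indeed gets $Q\rho=\frac{1}{2}\rho u^{2}-\frac{1}{4}\Delta\rho$.

```python
import sympy as sp

q = sp.Symbol('q', real=True)
rho = sp.Function('rho', positive=True)(q)

# Quantum potential from (26) in units hbar = 1, one dimension:
Q = -sp.Rational(1, 2) * sp.diff(sp.sqrt(rho), q, 2) / sp.sqrt(rho)

# Osmotic velocity u = (1/2) d/dq ln rho:
u = sp.Rational(1, 2) * sp.diff(sp.log(rho), q)

# Identity (32): Q * rho = (1/2) rho u^2 - (1/4) Delta rho.
lhs = Q * rho
rhs = sp.Rational(1, 2) * rho * u**2 - sp.Rational(1, 4) * sp.diff(rho, q, 2)
assert sp.simplify(lhs - rhs) == 0
print("identity (32) verified symbolically")
```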
Last but not least, subquantum theory has to allow for a nonzero curl
$\nabla\times v$, but has to suppress it to obtain a quantum limit. One way to
suppress it is to add a penalty term $U(\rho,\nabla\times v)$ which increases
with $|\nabla\times v|$. This would give
(34) $\frac{1}{2}\rho v^{2}+\frac{1}{2}\rho u^{2}+\rho V(q)+U(\rho,\nabla\times v)=\frac{1}{4}\Delta\rho.$
Moreover, subquantum theory has to regularize the infinities of $v$ and $u$ at
the zeros of the density. One can plausibly expect that this gives finite but
large values of $|\nabla\times v|$ at a zero which decrease sufficiently fast
with $r$. Now, a look at the energy balance shows that, if the classical
potential term $V(q)$ is neglected (for example assuming that it changes only
smoothly), the only term which can balance $U(\rho,\nabla\times v)$ at a zero is
the $\Delta\rho$ term, which, therefore, has to be finite but nonzero. Or, at
least, it would not be difficult to modify the definition of
$U(\rho,\nabla\times v)$ in such a way that the extremal value $\Delta\rho=0$
(we necessarily have $\Delta\rho\geq 0$ at the minima) is excluded.
So the postulate seems nicely justifiable. For some more details I refer to
[10]. What remains is the particular job of the particular proposals for
subquantum theories – they have to check whether the way to justify the
postulate really works in the theory in question, or whether it may be
justified in some other way. But this is beyond the scope of the
interpretation. What has to be done by the interpretation – in particular, to
obtain empirical equivalence with quantum theory – has been done.
## 8\. A Popperian argument for preference of an information-based
interpretation
One of Popper’s basic ideas was that we should prefer – as long as possible
without conflict with experience – theories which are more restrictive, make
more definite predictions, and depend on fewer parameters. And while this
criterion has been formulated for theories, it should, for the same reasons,
be applied to more general principles of constructing theories too.
This gives an argument for preferring an interpretation in terms of
incomplete information.
Indeed, let’s consider, from this point of view, the difference between
interpretations of fields $\rho(q)$, $v^{i}(q)$ in terms of a probability for
some real trajectories $q(t)$, and interpretations which reify them as
describing some external reality, different from $q(t)$, which influences the
trajectory $q(t)$.
It is quite clear and obvious which of the two approaches is more restrictive.
Designing theories of the first type, we are restricted, for the real physics,
to theories for single trajectories $q(t)$. Then, given that we have
identified the connection between the fields $\rho(q)$, $v^{i}(q)$ and $q(t)$
as that of a probability flow, everything else follows. There is no longer
any freedom of choice for the field equations. If we have fixed the
Hamiltonian evolution for $p(t),q(t)$, the Liouville field equation for
$\rho(p,q)$ is simply a logical consequence. Similarly, the continuity
equation (8) is a law of logic: it cannot be modified and is no longer a
subject of theoretical speculation. It is fixed by the interpretation of
$\rho(q)$, $v^{i}(q)$ as a probability flow, in the same way as $\rho(q)\geq 0$ is fixed.
In the second case, we have much more freedom – the full freedom of
speculation about field theories in general. In particular, the continuity
equation can be modified, introducing, say, some creation and destruction
processes, which are quite natural if $\rho(q)$ describes a density of some
external objects.
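The “law of logic” character of the first approach can be made concrete for the Liouville case: for an arbitrary Hamiltonian, the phase-space flow field is divergence-free, so the continuity equation for $\rho(p,q)$ is automatically the Liouville equation. A minimal symbolic check (a sketch of mine, not part of the paper’s derivation):

```python
import sympy as sp

q, p = sp.symbols('q p', real=True)
H = sp.Function('H')(q, p)  # an arbitrary, unspecified Hamiltonian

# Hamiltonian flow field: (dq/dt, dp/dt) = (dH/dp, -dH/dq).
# Its divergence vanishes by the equality of mixed partial derivatives,
# so the flow preserves phase-space volume (Liouville's theorem).
div = sp.diff(sp.diff(H, p), q) + sp.diff(-sp.diff(H, q), p)
assert sp.simplify(div) == 0
```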
The derivation of the $U(1)$ global phase shift symmetry is another particular
example of such an additional logical law following from the interpretation.
So there are some consequences of the interpretation which have purely logical
character, including the continuity equation, $\rho(q)\geq 0$, and the $U(1)$
global phase shift symmetry. But these will not be the only consequences. The
other equations will be restricted, in comparison with field theories, too,
but in a less clear and obvious way. There is, last but not least, a large
freedom of choice for the equations of the real beables $q(t)$, which
corresponds to a similarly large freedom of choice for the resulting
equations for $\rho(q)$, $v^{i}(q)$. But this freedom of choice will
nonetheless be much smaller than the complete arbitrariness of a general field
theory.
This consideration strongly indicates that we have to prefer the
interpretation in terms of incomplete information until it has been falsified,
until it appears incompatible with observation.
The immediate, sufficiently trivial logical consequences we have found so far
are compatible with the Schrödinger equation and therefore with observation.
So we should prefer this interpretation.
## 9\. Open problems
Instead of using such a Popperian argumentation, I could just as well have
used Ockham’s razor: don’t multiply entities without necessity. Since, given
this interpretation, there is no necessity for more than a single classical
trajectory $q(t)$, one should not introduce other real entities like really
existing wave functions.
But the Popperian consideration has the advantage that it implicitly defines
an interesting research program.
### 9.1. Other restrictions following from the interpretation
In fact, given the restrictive character of the interpretation, there may be
other, additional, more subtle restrictions on the equations for probability
flows $\rho(q)$, $v^{i}(q)$ – restrictions which we have not yet identified,
but which plausibly exist.
So what are these additional restrictions for equations for probability flows
$\rho(q)$, $v^{i}(q)$ in comparison with four general, unspecific fields
fulfilling a continuity equation? I have no answer.
This is clearly an interesting question for future research. It is certainly
also a question interesting in itself, from the point of view of pure
mathematics, for a better understanding of probability theory.
The consequences may be fatal for this approach – it may be that we find that
the Schrödinger equation does not fit into this set of restrictions. This
possibility of falsification is, of course, the very point of the Popperian
consideration. I’m nonetheless not afraid that this will happen, but this is
only a personal opinion.
The situation may, indeed, be much better: this subclass of theories may
contain the Schrödinger equation, but turn out to be heavily restricted by
some additional, yet unknown, conditions. Then all of these additional
restrictions give us partial answers to the “why the quantum” question.
### 9.2. Why is the Schrödinger equation linear?
The most interesting question which remains open is why the Schrödinger
equation is linear. We have found only part of the answer – an explanation of
the global $U(1)$ phase symmetry based on the informational content, and of
the homogeneity based on the reduction of the equation to subsystems.
But, given that the pre-Schrödinger equation is non-linear, but interpreted in
the same way, the linearity of the Schrödinger equation cannot follow from the
interpretation taken alone. Some other considerations are necessary to obtain
linearity.
One idea is to justify linearity as an approximation. Last but not least,
linearization is a standard way to obtain approximations.
The problem of stability in time may also be relevant here. The pre-
Schrödinger equation becomes invalid after a short period of time, when the
first caustic appears. There is no such problem in quantum theory, which has a
lot of stable solutions. But there should be not only stable solutions but
also slowly changing ones: it doesn’t even matter whether the fundamental time
scale is the Planck time or something much larger – even if it is only the
time scale of strong interactions, all the things changing around us change
extremely slowly in comparison with this fundamental time scale. And the
linear character of the Schrödinger equation gives us a way to obtain
solutions slowly changing in time by combining different stable solutions with
close energies.
## 10\. A theoretical possibility to test: The speedup of quantum computers
There is an idea which suggests, at least in principle, a way to distinguish
observationally the paleoclassical interpretation (or, more accurately, the
class of all more fundamental theories compatible with the paleoclassical
interpretation) from the minimal interpretation.
The idea is connected with the theory of quantum computers. If quantum
computers really work as predicted by quantum theory, their capabilities will
provide fascinating tests of the accuracy of quantum theory. In the case of
Simon’s algorithm, the speed-up is exponential over any classical algorithm.
It may be a key for the explanation of this speed-up that the state space
(phase space) of a composite classical system is the Cartesian product of the
state spaces of its subsystems, while the state space of a composite quantum
system is the tensor product of the state spaces of its subsystems. For $n$
qubits, the quantum state space has $2^{n}$ instead of $n$ dimensions. So the
information required to represent a general state increases exponentially with
$n$ (see, for example, [14]). There is also the idea “that a quantum
computation is something like a massively parallel classical computation, for
all possible values of a function. This appears to be Deutsch’s view, with the
parallel computations taking place in parallel universes.” [14].
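The $n$ versus $2^{n}$ scaling can be made concrete with a few lines of arithmetic. The byte count assumes one complex double-precision amplitude (16 bytes) per dimension; the numbers are illustrative, not claims about any concrete hardware:

```python
# Memory needed to store a general n-qubit state vector: one complex128
# amplitude (16 bytes) per dimension of the 2^n-dimensional state space.
# A classical register of the same size needs only n bits.
def state_vector_bytes(n_qubits: int) -> int:
    return 2**n_qubits * 16

for n in (10, 30, 50):
    print(f"{n} qubits -> {state_vector_bytes(n):.3e} bytes")
```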
It is this _exponential_ speedup which suggests that the predictions of
standard QM may differ from those of the paleoclassical interpretation. An
exact quantum computer would have all beables contained in the wave function
as degrees of freedom. A quantum computer in the paleoclassical interpretation
has only the resources provided by its beables. But these beables are,
essentially, only the classical states of the universe. Given the exponential
difference between them – $n$ vs. $2^{n}$ dimensions for qubits instead of
classical bits – an exact quantum computer, realizable at least in principle
in a laboratory on Earth, can have more computational resources than the
corresponding computer of the paleoclassical interpretation, which can use
only the classical degrees of freedom, even if these are the classical degrees
of freedom of the whole universe.
But if we distort a quantum computer, even slightly, the result will be fatal
for the computation. In particular, if this distortion is of the type of the
paleoclassical interpretation, which replaces an exact computer with a
$2^{n}$-dimensional state space by an approximate one with only $N$
dimensions, then even for quite large $N\gg n$ the approximate computer will
be simply unable to do the exact computations, even in principle. There simply
are no parallel universes in the paleoclassical interpretation to make the
necessary parallel computations.
So, roughly speaking, the prediction of the paleoclassical interpretation is
that a sufficiently large quantum computer will fail to give the promised
exponential speedup. The exponential speedup will work only up to a certain
limit, defined by the logarithm of the relation between the size of the whole
universe and the size of the quantum computer.
Of course, we do not know the size of the universe. It may be much larger than
the size of the observable universe, or even infinite. Nonetheless, this
argument, applied to any finite model of the universe, shows that the true
theory, the theory in the configurational beables alone, cannot be exactly
quantum theory. This is in my opinion the most interesting conclusion.
But let’s see if we can, nonetheless, make even testable (at least in
principle) predictions. So let’s presuppose that the universe is finite, and,
moreover, let’s assume that its size is not too many orders larger than that
of its observable part. This would already be sufficient to obtain some
numbers for the number of qubits beyond which the $2^{n}$ exponential speedup
is no longer possible. This number will be sufficiently small – small enough
that a quantum computer in a laboratory on Earth will be sufficient to reach
this limit.
And, given the logarithmic dependence on $N$, increasing $N$ does not help
much. If it is possible to build a quantum computer with $n$ qubits, why not
with $2n$? This would already move the size of the universe into completely
implausible regions.
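The orders of magnitude behind this argument can be sketched in a few lines. The values of $N$ below are purely illustrative assumptions about the number of classical degrees of freedom of the universe, not data from the text:

```python
import math

def max_qubits(n_classical_dof: float) -> float:
    """Largest n with 2^n <= N: a 2^n-dimensional state space cannot be
    carried faithfully by fewer than 2^n classical degrees of freedom."""
    return math.log2(n_classical_dof)

for N in (1e80, 1e122, 1e180):  # assumed universe sizes, for illustration
    print(f"N = {N:.0e} -> at most {max_qubits(N):.0f} qubits")
```

The logarithm is the point: going from $n$ to $2n$ qubits squares the required $N$, which is why increasing the assumed size of the universe does not help much.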
### 10.1. The speed of quantum information as another boundary
Instead of caring about the size of the universe, it may be more reasonable to
care about the size of the region which can causally influence us. Here I do
not have in mind the limits given by relativity, by the speed of light.
Given the violation of Bell’s inequality, there has to be (from a realist’s
point of view) a hidden preferred frame where some other sort of information –
quantum information – is transferred with a speed much larger than the speed
of light. But if we assume that the true theory has some locality properties,
even if only in terms of a much larger maximal speed, the region which may be
used by a quantum computer for its computational speedup decreases in
comparison with the size of the universe.
So if we assume that there is such a speed limit for quantum information, then
we obtain in the paleoclassical interpretation even more restrictive limits
for the speedup reachable by quantum computers, limits which depend
logarithmically on the speed limit for quantum information.
Nonetheless, I personally don’t believe that quantum computers will really
reach large speedups. I think the general inaccuracy of human devices will
prevent us from constructing quantum computers which can really use the full
$2^{n}$ power for large enough $n$. I would guess that the accuracy
requirements necessary to obtain a full $2^{n}$ speedup will also grow
exponentially. So I guess that quantum computers will fail already on a much
smaller scale.
## 11\. Conclusions
So it’s time to summarize:
* •
The unknown, true theory of the whole universe is a theory defined on the
classical configuration space $Q$, with the configuration $q(t)$ evolving in
absolute time $t$ as a complete description of all beables.
* •
The wave function of the whole universe is interpreted as a consistent set of
incomplete information about these fundamental beables.
* •
In particular, $\rho(q)$ defines not some “objective” probability, but an
incomplete set of information about the real position $q$, described, as
required by the logic of plausible reasoning, by a probability distribution
$\rho(q)dq$.
* •
The phase $S(q)$ describes, via the “guiding equation”, the expectation value
$\langle\dot{q}\rangle$ of the velocity given the actual configuration $q$
itself. So the “guiding equation” is not a physical equation, but has to be
interpreted as part of the definition of $S(q)$, which describes which
information about $q$ is contained in $S(q)$.
* •
Only a constant phase factor of $\psi(q)$ does not contain any relevant
information about the trajectory $q(t)$. Therefore, the equations for
$\psi(q)$ should not depend on such a factor.
* •
The Schrödinger equation is interpreted as an approximate equation. More is
not to be expected, given that it describes the evolution of an incomplete set
of information.
* •
The linear character of the Schrödinger equation is interpreted as an
additional hint that it is only an approximate equation.
* •
The interpretation can be used to reinterpret Nelsonian stochastics. The
resulting picture is conceptually more consistent than the original proposal.
* •
The Wallstrom objection appears much less serious than expected. The
quantization condition for simple zeros (which is sufficient because it is the
general position) can be derived from the much simpler regularity postulate
that $0<\Delta\rho(q)<\infty$ if $\rho(q)=0$. While a final justification of
this condition has to be left to a more fundamental theory, it is, as shown in
[10], plausible that this is not a problem for such theories.
* •
If the true theory of the universe is defined on classical configurations, and
the whole universe is finite, quantum computers can give their promised
exponential speedup only up to an upper bound on the number of qubits, which
is much smaller than the number of classical degrees of freedom available in
the universe. This argument shows that the Schrödinger equation has to be
approximate.
* •
The dBB problem of the “action without reaction” asymmetry is solved: for
effective wave functions, the collapse defines the back-reaction; for the wave
function of the whole universe there should be no such back-reaction – it is
only an equation about incomplete information about reality, not about reality
itself.
* •
The wave functions of small subsystems obtain a seemingly objective, physical
character only because they, as conditional wave functions, depend on the
physical beables of the environment.
From the point of view of simplicity, the paleoclassical interpretation is
superior to all alternatives. The identification of the fundamental beables
with the classical configuration space trajectory $q(t)$ is sufficient for
this point.
It also has the additional advantage that it leads to strong restrictions on
the properties of a more fundamental, sub-quantum theory: it has to be a
theory completely defined on the classical configuration space. Moreover, it
has to be a theory which, in its statistical variant, leads to Fokker-Planck-
like equations for the probability flow defined by the classical flow
variables $\rho(q)$ and $v^{i}(q)$.
## Appendix A Compatibility with relativity
Most physicists consider the problem of compatibility with relativity as the
major problem of dBB-like interpretations – sufficient to reject them
completely. But I have different, completely independent reasons for accepting
a preferred frame, so that I don’t worry about this.
There are two parts of this compatibility problem, a physical and a
philosophical one, which should not be mingled:
The physical part is that we need a dBB version of relativistic quantum field
theories, in particular of the standard model of particle physics – versions
which do not have to change the fundamental scheme of dBB, and, therefore, may
have a hidden preferred frame.
The philosophical part is the incompatibility of a hidden preferred frame with
relativistic metaphysics.
The physical problem is heavily overestimated, in part because of the way dBB
theory is often presented: as a theory of many particles. I think it should be
forbidden to introduce dBB theory in such a way. The appropriate way is to
present it as a general theory in terms of an abstract configuration space
$Q$, and to recognize that field theories as well as their lattice
regularizations fit into this scheme. The fields are, of course, fields on
three-dimensional space $\mbox{$\mathbb{R}$}^{3}$ changing in time, and their
lattice regularizations live on three-dimensional spatial lattices
$\mbox{$\mathbb{Z}$}^{3}$, not four-dimensional space-time lattices. But this
violation of manifest relativistic symmetry is already part of the second,
philosophical problem.
The simple, seemingly non-relativistic Hamiltonian (3), with $p^{2}$ instead
of $\sqrt{p^{2}+m^{2}}$, is also misleading: For relativistic field theories
the quadratic Hamiltonian is completely sufficient. Indeed, a relativistic
field Lagrangian is of type
(35)
$\mathscr{L}=\frac{1}{2}((\partial_{t}\varphi)^{2}-(\partial_{i}\varphi)^{2})-V(\varphi).$
This gives momentum fields $\pi=\partial_{t}\varphi$ and the Hamiltonian
(36)
$\mathscr{H}=\frac{1}{2}(\pi^{2}+(\partial_{i}\varphi)^{2})+V(\varphi)=\frac{1}{2}\pi^{2}+\tilde{V}(\varphi)$
quadratic in $\pi$, thus, the straightforward field generalization of the
standard Hamiltonian (3). And for a lattice regularization, the Hamiltonian is
already exactly of the form (3). So, whatever one thinks about the dBB
problems with other relativistic fields, it is certainly not relativity itself
which causes the problem.
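The Legendre transform leading from (35) to (36) can be checked symbolically, treating the field values at a point as plain symbols; `grad_phi` below stands in for the spatial gradient $\partial_{i}\varphi$ (the symbol names are my own):

```python
import sympy as sp

pi_, dphi_t, grad_phi, V = sp.symbols('pi_ dphi_t grad_phi V', real=True)

# Lagrangian density (35): L = 1/2 ((d_t phi)^2 - (d_i phi)^2) - V(phi)
L = sp.Rational(1, 2) * (dphi_t**2 - grad_phi**2) - V

# Canonical momentum field: pi = dL/d(d_t phi) = d_t phi
assert sp.diff(L, dphi_t) == dphi_t

# Legendre transform H = pi * d_t phi - L, evaluated at d_t phi = pi
H = (pi_ * dphi_t - L).subs(dphi_t, pi_)
H36 = sp.Rational(1, 2) * (pi_**2 + grad_phi**2) + V  # Hamiltonian (36)
assert sp.expand(H - H36) == 0
```

The result is quadratic in `pi_`, i.e. exactly the field generalization of the standard Hamiltonian (3), as stated in the text.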
The problem with fermions and gauge fields is certainly more subtle. Here, my
proposal is described in [25]. It heavily depends on a preferred frame, but
for completely different reasons – interpretations of quantum theory are not
even mentioned. Nonetheless, fermion fields are obtained from field theories
of type (35), and gauge-equivalent states are interpreted as fundamentally
different beables, so that no BRST factorization procedure is necessary.
Another part of the physical problem is compatibility with relativistic
gravity. Here I argue that it is the general-relativistic concept of
background-freedom which is incompatible with quantum theory and has to be
given up. I use a quantum variant of the classical hole argument for this
purpose [27]. As a replacement, I propose a theory of gravity with background
and preferred frame [26].
So there remains only the philosophical part. But here the violation of Bell’s
inequality gives a strong argument in favour of a preferred frame: Every
realistic interpretation needs it. Moreover, the notion of metaphysical
realism presupposed by “realistic interpretation” is so weak that Norsen [24]
has titled a paper “against realism”, arguing that one should not mention
realism at all in this context. Indeed, the metaphysical notion of realism
used there is so weak that giving it up does not save Einstein locality at all
– it is presupposed in this notion too.
## Appendix B Pauli’s symmetry argument
There is also another symmetry argument against dBB theory, which goes back to
Pauli [12], which deserves to be mentioned:
> …the artificial asymmetry introduced in the treatment of the two variables
> of a canonically conjugated pair characterizes this form of theory as
> artificial metaphysics. ([12], as quoted by [13]),
>
> “…the Bohmian corpuscle picks out by fiat a preferred basis (position) …”
> [15]
Here my counterargument is presented in [28]. I construct there an explicit
counterexample, based on the KdV equation, showing that the Hamilton operator
alone, without a specification of which operator measures position, is not
sufficient to fix the physics. It follows that the canonical operators have to
be part of the complete definition of a quantum theory and so have to be
distinguished by the interpretation as something special, different from the
other similar pairs of operators.
The Copenhagen interpretation makes such a difference – this is one of the
roles played by the classical part. But attempts to get rid of the classical
part of the Copenhagen interpretation, without adding something else as a
replacement, are not viable [29]. One has to introduce a replacement.
Recognizing that the configuration space has to be part of the definition of
the physics gives more power to an old argument in favour of the pilot wave
approach, made already by de Broglie at the Solvay conference 1927:
> “It seems a little paradoxical to construct a configuration space with the
> coordinates of points which do not exist.” [2].
## Appendix C Problems with field theories
It has been argued that fields are problematic as beables for dBB theory in
general, a point which would concern the paleoclassical interpretation too.
In particular, the equivalence proof between quantum theory and dBB theory
depends on the fact that the overlap of the wave function for different
macroscopic states is irrelevant. But it turns out in field theory that for
one-particle states there is always a non-trivial overlap, even if these field
states are localized far away from each other.
But, as I have shown in [30], the overlap decreases sufficiently fast
(approximately exponentially) with increasing particle number.
## Appendix D Why we observe configurations, not wave packets
In the many worlds community there is a quite popular argument against dBB
theory – that it is many worlds in denial (for example, see [15]). But this
argument depends on the property of dBB theory that the wave function is a
beable, a really existing object. So it cannot be applied against the
paleoclassical interpretation, where the wave function is no longer a beable.
But in fact it is invalid even as an argument against dBB theory: already in
dBB theory it is the configuration $q(t)$ which is observable, not the wave
function.
This fact is sometimes not presented in a clear enough way, so that
misrepresentations become possible. For example Brown and Wallace [15] find
support for another interpretation even in Bohm’s original paper [3]:
> …even in his hidden variables paper II of 1952, Bohm seems to associate the
> wavepacket chosen by the corpuscles as the representing outcome of the
> measurement – the role of the corpuscles merely being to point to it. ([15]
> p. 15)
and support their claim with the following quote from Bohm
> Now, the packet entered by the apparatus variable $y$ determines the actual
> result of the measurement, which the observer will obtain when he looks at
> the apparatus. ([3] p. 118)
This quote may, indeed, lead to misunderstandings about this issue. So, maybe
we observe only the wave packet containing the configuration, instead of the
configuration itself?
My answer is a clear no. I don’t believe in the existence of wave packets
sufficiently localized to construct some effective reality out of them, as
assumed by many-worlders.
Today they use decoherence to justify their belief that wave packets will be
sufficiently localized. But decoherence presupposes another structure – a
decomposition of the world into systems. Only from the point of view of such a
subdivision of $q$ into, say, $(x,y)$, does a non-localized wave function like
$e^{-(x-y)^{2}/2}$ look similar to a superposition, over different $a$, of
product states localized in $x$ and $y$ like $e^{-(x-a)^{2}}\cdot
e^{-(y-a)^{2}}$.
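This can be illustrated numerically: relative to the $(x,y)$ subdivision, the Schmidt rank (the number of significant singular values of $\psi(x,y)$ on a grid) distinguishes a single localized product state from the non-localized $e^{-(x-y)^{2}/2}$, which needs a whole sum of such products. The grid parameters are arbitrary choices of mine:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 200)
X, Y = np.meshgrid(x, x, indexing='ij')

entangled = np.exp(-(X - Y)**2 / 2)      # psi(x,y) from the text
product = np.exp(-X**2) * np.exp(-Y**2)  # one localized product state

def schmidt_rank(psi, rel_tol=1e-10):
    """Number of singular values above rel_tol times the largest one."""
    s = np.linalg.svd(psi, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))

print(schmidt_rank(product))    # a product state has Schmidt rank 1
print(schmidt_rank(entangled))  # the Gaussian kernel needs many terms
```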
But where does this subdivision into systems come from? The systems around us
– observers, planets, measurement devices – cannot be used for this purpose.
They do not exist on the fundamental level, as a predefined structure on the
configuration space. But the subdivision into systems would have to exist
there, once we need it to construct localized objects. Otherwise, the whole
construction would be circular.
So one would have to postulate something else as a fundamental subdivision
into systems. This something else is undefined in the interpretations
considered here, so an interpretation based on it is simply another
interpretation, with another, additional fundamental structure – a fundamental
subdivision into systems.
## References
* [1] E. Madelung, Quantentheorie in hydrodynamischer Form, Z. Phys. 40, 322-326 (1926)
* [2] de Broglie, L., La nouvelle dynamique des quanta, in J. Bordet, Gauthier-Villars (eds.), Electrons et Photons: Rapports et Discussions du Cinquieme Conseil de Physique, Paris, 105-132 (1928), English translation in: Bacciagaluppi, G., Valentini, A.: “Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference”, Cambridge University Press, and arXiv:quant-ph/0609184 (2006)
* [3] Bohm, D: A suggested interpretation of the quantum theory in terms of “hidden” variables, Phys. Rev. 85, 166-193 (1952)
* [4] E. Nelson, Derivation of the Schrödinger Equation from Newtonian Mechanics, Phys.Rev. 150, 1079-1085 (1966)
* [5] E. T. Jaynes. Probability Theory: The Logic of Science, Cambridge University Press (2003), online at bayes.wustl.edu
* [6] Laplace, Théorie analytique des probabilités (1812)
* [7] B. de Finetti, Theory of Probability, Wiley, New York (1990)
* [8] C. M. Caves, C. A. Fuchs, R. Schack, Quantum probabilities as Bayesian probabilities, Phys. Rev. A 65, 022305 (2002), arXiv:arXiv:quant-ph/0106133v2
* [9] T. C. Wallstrom, Phys. Rev. A 49, 1613 (1994)
* [10] Schmelzer, I.: A solution for the Wallstrom problem of Nelsonian stochastics, arXiv:1101.5774v2
* [11] R. von Mises, Probability, Statistics and Truth, 2nd edition. New York: The Macmillan Company, 1957
* [12] Wolfgang Pauli, Remarques sur le problème des paramètres cachés dans la mécanique quantique et sur la théorie de l’onde pilote, in André George, ed., Louis de Broglie – physicien et penseur (Paris, 1953), 33-42
* [13] Freire Jr., O.: Science and exile: David Bohm, the hot times of the Cold War, and his struggle for a new interpretation of quantum mechanics, Historical Studies on the Physical and Biological Sciences 36(1), 1-34, arXiv:quant-ph/0508184 (2005)
* [14] Jeffrey Bub, Quantum Information and Computation, arXiv:quant-ph/0512125v2 (2005)
* [15] Brown, H.R., Wallace, D.: Solving the measurement problem: de Broglie-Bohm loses out to Everett, Foundations of Physics, Vol. 35, No. 4, 517 (2005) arXiv:quant-ph/0403094
* [16] Wallace, D.: The quantum measurement problem: state of play, arXiv:0712.0149 (2007)
* [17] Goldstein, S. and S. Teufel (2000). Quantum spacetime without observer: ontological clarity and the conceptual foundations of quantum gravity. In C. Callender and N. Huggett (Eds.), Physics meets philosophy at the Planck scale. Cambridge: Cambridge University Press; pp. 275–289. arXiv:quant-ph/9902018
* [18] Dürr, D., S. Goldstein, and N. Zanghi (1996). Bohmian Mechanics and the Meaning of the Wave Function, in R. S. Cohen, M. Horne, and J. Stachel (Eds.), Experimental Metaphysics: quantum mechanical studies in honour of Abner Shimony. Dordrecht: Kluwer; pp. 25–38, arXiv:quant-ph/9512031
* [19] J. Butterfield, On Hamilton-Jacobi Theory as a Classical Root of Quantum Theory, in: Elitzur, A.C., Dolev, S., Kolenda, N. (eds.), Quo Vadis Quantum Mechanics? Possible Developments in Quantum Theory in the 21st Century, New York: Springer, 2004, arXiv:quant-ph/0210140v2
* [20] J. S. Bell, Speakable and unspeakable in quantum mechanics, Cambridge University Press 1987
* [21] V. I. Arnold, Mathematical methods of classical mechanics, 2nd ed. Springer, New York, 1989
* [22] A. Valentini, Signal-locality, uncertainty, and the subquantum H-theorem: I and II. Phys. Lett. A 156 5–11; 158 1–8, 1991
* [23] A. Valentini, H. Westman, Dynamical Origin of Quantum Probabilities, Proc. R. Soc. A 461 253–272, 2005, arXiv:quant-ph/0403034v2
* [24] T. Norsen, Against ‘realism’, arXiv:quant-ph/0607057v2
* [25] Schmelzer, I.: A Condensed Matter Interpretation of SM Fermions and Gauge Fields, Found. Phys. vol. 39, 1, p. 73-107 (2009), arXiv:0908.0591
* [26] Schmelzer, I.: A theory of gravity with preferred frame and condensed matter interpretation, arXiv:1003.1446v2
* [27] Schmelzer, I.: The background as a quantum observable: Einstein’s hole argument in a quasiclassical context arXiv:0909.1408v2
* [28] Schmelzer, I.: Why the Hamilton operator alone is not enough, Found. Phys. vol.39, 5, 486-498 (2009), arXiv:0901.3262
* [29] Schmelzer, I.: Pure quantum interpretations are not viable, Found Phys vol. 41, 2, 159-177 (2011), arXiv:0903.4657v2
* [30] Schmelzer, I.: Overlaps in pilot wave field theories, Found. Phys. vol. 40, 3, 289–300 (2010), arXiv:0904.0764
# Sub-nanosecond Electro-optic Modulation of Triggered Single Photons from a
Quantum Dot
Matthew T. Rakher (matthew.rakher@gmail.com; current address: Departement
Physik, Universität Basel, Klingelbergstrasse 82, CH-4056 Basel, Switzerland)
Kartik Srinivasan (Center for Nanoscale Science and Technology, National
Institute of Standards and Technology, Gaithersburg, MD 20899, USA)
###### Abstract
Control of single photon wave-packets is an important resource for developing
hybrid quantum systems which are composed of different physical systems
interacting via photons. Here we extend this control to triggered photons
emitted by a quantum dot, temporally shaping single photon wave-packets on
timescales fast compared to their radiative decay by electro-optic modulation.
In particular, telecommunications-band single photons resulting from the
recombination of an exciton in a quantum dot with exponentially decaying wave-
packets are synchronously modulated to create Gaussian-shaped single photon
wave-packets. We explore other pulse-shapes and investigate the feasibility of
this technique for increasing the indistinguishability of quantum dot
generated single photons.
###### pacs:
78.67.Hc, 42.50.Ar, 42.50.Dv
Single photons are an integral part of many protocols in quantum information
science such as quantum key distribution Gisin et al. (2002) and quantum
computation Knill et al. (2001); Raussendorf and Briegel (2001). Among the
most promising sources of single photons are single, self-assembled quantum
dots (QDs) Shields (2007). Because they can be formed in commonly-used
optoelectronic materials, they also offer the ability to control their
emission properties by designing monolithic cavity Santori et al. (2001);
Strauf et al. (2007) or waveguide micro-structures Claudon et al. (2010);
Davanço and Srinivasan (2009). The temporal shape of the wave-packet of these
photons is determined by the recombination process of carriers confined in the
QDs which usually results in an exponentially-decaying amplitude over a
timescale of 0.5 ns to 2 ns. However, this pulse shape is not ideal for
interacting with other two-level quantum systems such as atoms, ions, or other
QDs Cirac et al. (1997); Gorshkov et al. (2007) as part of a large-scale
quantum network Kimble (2008). In this Letter, we perform amplitude modulation
of single photons from a single quantum dot in the telecommunications-band to
create Gaussian-shaped wave-packets and other non-trivial pulse shapes by
synchronized, sub-nanosecond, electro-optic modulation. Previously,
electro-optic amplitude and phase modulation have been performed on
single photons emitted by an atomic ensemble Kolchin et al. (2008) and by a
trapped atom in a cavity Specht et al. (2009), but in both cases the photon
wave-packet was $\approx$150 ns in width. In addition, control of single
photon waveforms has been demonstrated using a lambda-type level system in an
atom McKeever et al. (2004) and an ion Keller et al. (2004), but this technique
would be difficult to apply to a QD. We extend electro-optic modulation to QD-
generated single photons and reduce the required modulation timescales by more
than two orders of magnitude, leading to a robust and flexible single photon
source capable of efficient interactions in a quantum network.
Figure 1: (a) Schematic of the experimental setup for the generation and
manipulation of triggered signal photons from a quantum dot. Definition of
acronyms: VOA=variable optical attenuator, FTW=fiber taper waveguide,
EOM=electro-optic modulator, TBF=tunable bandpass filter, SPAD=single photon
counting avalanche photodiode, EF=edgepass filter, PL = photoluminescence. (b)
PL spectrum of the emission of the single QD used in this work. The spectrum
was taken for an integration time of 60 s under an excitation power of 10 nW.
Single photons at 1.3 $\mu$m are generated by photoluminescence (PL) from a
single InAs quantum dot. The QDs used in this work are grown by molecular beam
epitaxy where they are made to self-assemble in an InGaAs quantum well
contained in a 256 nm thick GaAs device layer. The sample is etched to create
isolated mesas of $\approx$2.5 $\mu$m diameter in the device layer by a
nanofabrication process consisting of electron-beam lithography, optical
lithography, and wet and dry etching. This isolation enables efficient, near-
field optical coupling to a fiber taper waveguide (FTW) Srinivasan et al.
(2007). For this work, the FTW is used for efficient excitation and collection
of emission from QDs into single mode fiber, which has been estimated to be
$\approx$0.1 $\%$ in previous experiments Srinivasan et al. (2007); Rakher et
al. (2010). As shown in Fig. 1(a), the sample containing QDs resides in a
liquid-Helium flow cryostat at a temperature of $\approx$7 K. Optical
excitation by a 50 ps, 780 nm pulsed laser diode operating at 50 MHz is
introduced into the cryostat by fiber-optic feedthroughs after attenuation by
a variable optical attenuator (typical excitation powers were $\approx$10 nW).
Subsequent carrier relaxation and recombination inside the QDs generates
photoluminescence (PL) which is efficiently captured by the FTW. As depicted
in Fig. 1(a), the fiber output of the cryostat can then be connected to a
monochromator with a liquid N2-cooled InGaAs array after spectral filtering by
an edge pass filter to remove excess excitation light. This enables PL
spectroscopy to identify a candidate QD that is both spectrally and spatially
separated from nearby QDs. Figure 1(b) depicts such a PL spectrum of the QD
used in this work and shows two sharp transitions, corresponding to the
positively charged exciton, $X^{+}$, near 1301.5 nm and the neutral exciton,
$X^{0}$, near 1302.5 nm. This identification is based on polarization-resolved
spectroscopy and is not shown here. Furthermore, in Ref. Rakher et al., 2010
this QD was explicitly shown through photon antibunching experiments to emit
single photons and the measurement is not repeated here.
Electro-optic modulation is performed by directing the single photon PL into a
fiber-coupled, LiNbO3 electro-optic intensity modulator (EOM) after
polarization manipulation. The DC bias of the EOM was controlled by a low-
noise DC supply while the rf input was connected to an externally-triggered
pulse generator capable of producing pulses as short as $\tau_{mod}\approx$300
ps with sufficient amplitude to reach the $V_{\pi}\approx 4$ V of the EOM. Of
critical importance is the synchronization of the EOM to the incoming single
photon pulses. While in Ref. Kolchin et al., 2008 this was done through
detection of a heralded photon of a biphoton pair, here the repetition period,
$T_{rep}$, of the 780 nm pulsed laser excitation source determines the single
photon arrival times as shown schematically in Fig. 2(a). Thus, the timing of
the EOM pulse is set to overlap with the arrival of the incoming photon at a
specified time by using a delay generator triggered by the electronic
synchronization signal of the excitation laser. The delay generator serves as
a master clock (Fig. 1), allowing the electro-optic modulation to be
controllably delayed with respect to the incoming photon by an amount $\Delta
T_{mod}$. It also ensures synchronization with respect to the 50 ns detection
window of the telecommunications-band InGaAs/InP single photon counting
avalanche photodiode (SPAD), which must be operated in a gated detection mode
to avoid strong afterpulsing Hadfield (2009) in contrast to Si-based SPADs
used for shorter wavelength detection. The SPAD gate is triggered at 5 MHz by
a signal from the delay generator so as to coincide with the arrival of every
tenth optical pulse, which at this point has been modulated and spectrally
isolated using a tunable bandpass filter so that only the single photon
emission from the $X^{0}$ transition at 1302.5 nm is present. Using an
electronic pulse from the delay generator and the electronic pulse from the
SPAD ($\approx 250$ ps timing jitter), a time resolved histogram of detection
events can be formed using a time-correlated single photon counting system.
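As a rough illustration of the trigger arithmetic in this scheme, the following Python sketch (our own, with hypothetical helper names, not part of the experimental control software) reproduces the 20 ns repetition period and the one-in-ten SPAD gating:

```python
# Illustrative timing arithmetic for the synchronization scheme described
# above. The constants come from the text; the helper is hypothetical.

LASER_REP_RATE_HZ = 50e6   # 780 nm pulsed diode at 50 MHz
SPAD_GATE_RATE_HZ = 5e6    # InGaAs/InP SPAD gated at 5 MHz
T_REP_NS = 1e9 / LASER_REP_RATE_HZ   # repetition period T_rep = 20 ns

def event_times_ns(n_pulses, delta_t_mod_ns):
    """Return (photon, EOM trigger, SPAD gate) times in ns for each pulse.

    The EOM fires on every laser pulse, delayed by Delta T_mod; the SPAD
    gate opens only on every tenth pulse (50 MHz / 5 MHz = 10).
    """
    gate_every = int(LASER_REP_RATE_HZ / SPAD_GATE_RATE_HZ)
    events = []
    for i in range(n_pulses):
        t_photon = i * T_REP_NS
        t_eom = t_photon + delta_t_mod_ns
        t_gate = t_photon if i % gate_every == 0 else None
        events.append((t_photon, t_eom, t_gate))
    return events

events = event_times_ns(20, delta_t_mod_ns=1.6)
gated = [e for e in events if e[2] is not None]
assert T_REP_NS == 20.0
assert len(gated) == 2    # pulses 0 and 10 out of 20 fall within a gate
```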
Figure 2: (a) Schematic of the timing sequence of the experiment. $T_{rep}$
corresponds to the repetition period of the excitation laser, $\tau_{sp}$ is
the spontaneous emission lifetime, $\tau_{mod}$ is the width of the electro-
optic modulation, $\Delta T_{mod}$ is the delay of the modulation with respect
to the incoming photon, and $T_{gate}$ is the width of the detection window.
(b) Unmodulated and modulated single photon waveforms for different modulation
delays. The temporal profile of the modulation is shown in the inset and
corresponds to a full-width half-maximum of 720 ps $\pm$ 18 ps. (c) Same
traces shown in (b) but separated for clarity. The modulation delays are
$\Delta T_{mod}=\\{$0.0, 0.8, 1.6, 2.4, 3.2, 4.0$\\}$ ns respectively. The
unmodulated single photon waveform is shown for comparison in blue. All traces
are integrated for 1200 s.
Before directing the QD single photons into the modulation setup, we first
synchronize the electronics and characterize the temporal width of the
modulation by using a continuous-wave laser at 1302.5 nm, the same wavelength
as the neutral exciton transition. This laser was attenuated and passed
through the EOM and measured by the SPAD. The temporal histogram of the
resulting waveform is shown in the inset of Fig. 2(b) as measured by the time
correlation system. The extracted full-width-half-maximum (with uncertainty
given by the 95 $\%$ confidence level from fit) of the Gaussian-like pulse for
this rf setting was 720 ps $\pm$ 18 ps with an extinction ratio of $\gtrsim
20$ dB. Next, the temporal shape of the QD single photon wave-packet was
measured under no modulation. In this case, the pulse generator was turned off
and the DC bias was set to maximum transmission. The trace was integrated for
1200 s and is shown in blue in Fig. 2(b) and Fig. 2(c). The curve is a single
exponential decay with time constant 1.4 ns $\pm$ 0.1 ns.
To modulate the exponentially-decaying single photon wave-packet, the EOM was
set up exactly as in the inset of Fig. 2(b). The delay between the arrival of
the QD single photon and the triggering of the EOM, $\Delta T_{mod}$, was
varied in intervals of 0.8 ns and the resulting waveforms were integrated for
1200 s. The histograms are shown together in Fig. 2(b) and separated in Fig.
2(c) in red, corresponding to temporal delays of $\\{$0.0, 0.8, 1.6, 2.4, 3.2,
4.0$\\}$ ns respectively. The modulated waveform heights nicely follow the
contour of the exponential decay shown in blue. These data clearly demonstrate
the modulation of triggered single photon wave-packets from an exponentially-
decaying amplitude to a more convenient Gaussian-shaped pulse.
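A minimal numerical model of this measurement, assuming an ideal exponential wave-packet with the measured 1.4 ns lifetime and an ideal unit-peak Gaussian gate with the measured 720 ps FWHM (this is our own illustration, not the authors' analysis code), shows the same behavior:

```python
import math

TAU_SP_NS = 1.4    # measured spontaneous emission lifetime
FWHM_NS = 0.72     # measured FWHM of the Gaussian modulation window
SIGMA_NS = FWHM_NS / (2 * math.sqrt(2 * math.log(2)))

def wavepacket(t_ns):
    """Unmodulated single-photon intensity: exponential decay from t = 0."""
    return math.exp(-t_ns / TAU_SP_NS) if t_ns >= 0 else 0.0

def modulated(t_ns, delay_ns):
    """Intensity after a unit-peak Gaussian EOM gate centred at delay_ns."""
    gate = math.exp(-((t_ns - delay_ns) ** 2) / (2 * SIGMA_NS ** 2))
    return wavepacket(t_ns) * gate

# The carved-out pulse heights track the exponential contour, as in Fig. 2(b):
delays = (0.0, 0.8, 1.6, 2.4, 3.2, 4.0)
heights = [modulated(d, d) for d in delays]
assert all(b < a for a, b in zip(heights, heights[1:]))
```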
Figure 3: (a) Single photon modulated (red) waveform for a modulation width,
$\tau_{mod}$, of 520 ps $\pm$ 13 ps along with the unmodulated waveform (blue)
for comparison. The temporal profile of the modulation is shown in the inset.
(b) Single photon modulated waveforms (red) for the inverted Gaussian
modulation profile ($\tau_{mod}$=770 ps $\pm$ 19 ps) shown in the inset of (i)
for modulation delays of $\Delta T_{mod}=\\{$0.0, 0.8, 1.6, 2.4$\\}$ ns
respectively. The unmodulated waveform (blue) is shown for comparison.
The flexibility of our setup also allowed other waveforms to be generated
simply by changing the settings of the pulse generator. Figure 3(a) shows the modulated
(red) and unmodulated (blue) single-photon waveforms for the amplitude
modulation shown in the inset. In this case, the modulation profile was a
Gaussian with width $\tau_{mod}$=520 ps $\pm$ 13 ps and each trace was
integrated for 1200 s. The modulation profile could also be inverted, as shown
in the inset of the first panel of Fig. 3(b), to create a Gaussian-shaped
notch of width $\tau_{mod}=$770 ps $\pm$ 19 ps. The resulting single photon
waveforms under this modulation are shown in red in the panels of Fig. 3(b)
for modulation delays of $\Delta T_{mod}=\\{$0.0, 0.8, 1.6, 2.4$\\}$ ns
respectively. More complex waveforms are possible provided the pulse generator
and electro-optic modulator are capable of producing them. In this work, the
timing was limited by the pulse generator to $\approx$300 ps but the 10 GHz
EOM could in principle reach pulse widths on the order of 100 ps and 40 GHz
EOMs are readily available at 1300 nm. Such waveforms could be useful to
encode more information in a single photon Broadbent et al. (2009) or to hide
and recover a single photon in a noisy signal Belthangady et al. (2010).
It is important to note that single photons produced by a QD are generally not
transform-limited, so that subsequent photon events are not completely
indistinguishable. The additional dephasing leads to a photon coherence time
that is $\approx$280 ps for these QDs emitting at 1.3 $\mu$m Srinivasan et al.
(2008), yielding a photon indistinguishability of $\approx 10.0$ $\%$ Bylander
et al. (2003). In our experiments, this timescale implies that while each
photon is modulated by the EOM, they are not all modulated in the same way as
in Ref. Kolchin et al., 2008 or Specht et al., 2009. While this caveat is
important to correctly interpret the data presented here, the methods and
techniques would work just as well for a QD whose lifetime has been reduced by
Purcell enhancement Santori et al. (2002); Varoutsis et al. (2005) in order to
restore the indistinguishability completely. Furthermore, electro-optic
modulation could itself be used to improve the indistinguishability at the
cost of losing some of the photons by post-selection Patel et al. (2008), but
without the narrow spectral bandwidth characteristic of the Purcell Effect. In
fact for the QD used in this work, with a lifetime of 1.4 ns and a photon
coherence time of $\approx$280 ps, the indistinguishability could be increased
to near-unity by using a Gaussian-shaped modulation with $\tau_{mod}\approx
140$ ps. The fraction of photons that would be lost due to the small temporal
width would be $\approx$90 $\%$, yielding a maximum single photon count rate
of $\approx 6.8\times 10^{4}$ s$^{-1}$ after accounting for the collection
efficiency of the FTW (see supporting material). For a QD with a longer photon
coherence time (580 ps was measured in Ref. Flagg et al., 2010) the
transmitted count rate can be much higher, resulting in a significant amount
of identical single photons with sub-nanosecond pulse durations all in a
single mode fiber.
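The figures quoted above are mutually consistent under the simple relation I = T_coh/(2 tau_sp) and an ideal Gaussian post-selection gate. The sketch below is our own back-of-the-envelope check (the 0.2 ns gate position is an assumption we introduce), reproducing the ~10 % indistinguishability and the ~90 % photon loss:

```python
import math

TAU_SP_NS = 1.4    # radiative lifetime quoted above
T_COH_NS = 0.28    # photon coherence time (~280 ps)

# Indistinguishability consistent with the quoted figures,
# using I = T_coh / (2 * tau_sp):
indist = T_COH_NS / (2 * TAU_SP_NS)
assert abs(indist - 0.10) < 1e-9

# Fraction of photons surviving a Gaussian gate of FWHM ~140 ps placed
# near the start of the wave-packet (gate centre of 0.2 ns is our choice):
FWHM_NS = 0.14
SIGMA_NS = FWHM_NS / (2 * math.sqrt(2 * math.log(2)))
GATE_CENTRE_NS = 0.2

def transmitted_fraction(n=200_000, t_max_ns=10.0):
    """Midpoint-rule integral of gate(t) * emission pdf over the packet."""
    dt = t_max_ns / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        gate = math.exp(-((t - GATE_CENTRE_NS) ** 2) / (2 * SIGMA_NS ** 2))
        pdf = math.exp(-t / TAU_SP_NS) / TAU_SP_NS
        total += gate * pdf * dt
    return total

frac = transmitted_fraction()
assert 0.07 < frac < 0.12   # roughly 10 % kept, i.e. ~90 % of photons lost
```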
In conclusion, we have clearly demonstrated sub-nanosecond modulation of
triggered single photons from a quantum dot with a variety of waveforms. Given
the need for efficient interaction between single photons and local quantum
systems, such single photon control will be a useful resource for quantum
networks. In addition, amplitude modulation may also prove to be useful for
increasing the indistinguishability of triggered single photons from a quantum
dot.
The authors acknowledge technical assistance from Alan Band at the CNST and
useful discussions with S. E. Harris at Stanford University.
## References
* Gisin et al. (2002) N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Rev. Mod. Phys. 74, 145 (2002).
* Knill et al. (2001) E. Knill, R. Laflamme, and G. J. Milburn, Nature 409, 46 (2001).
* Raussendorf and Briegel (2001) R. Raussendorf and H. J. Briegel, Phys. Rev. Lett. 86, 5188 (2001).
* Shields (2007) A. J. Shields, Nature Photonics 1, 215 (2007).
* Santori et al. (2001) C. Santori, M. Pelton, G. Solomon, Y. Dale, and Y. Yamamoto, Phys. Rev. Lett. 86, 1502 (2001).
* Strauf et al. (2007) S. Strauf, N. G. Stoltz, M. T. Rakher, L. A. Coldren, P. M. Petroff, and D. Bouwmeester, Nature Photonics 1, 704 (2007).
* Claudon et al. (2010) J. Claudon, J. Bleuse, N. S. Malik, M. Bazin, P. Jaffrennou, N. Gregersen, C. Sauvan, P. Lalanne, and J. Gérard, Nature Photonics 4, 174 (2010).
* Davanço and Srinivasan (2009) M. Davanço and K. Srinivasan, Opt. Lett. 34, 2542 (2009).
* Cirac et al. (1997) J. I. Cirac, P. Zoller, H. J. Kimble, and H. Mabuchi, Phys. Rev. Lett. 78, 3221 (1997).
* Gorshkov et al. (2007) A. V. Gorshkov, A. André, M. Fleischhauer, A. S. Sørensen, and M. D. Lukin, Phys. Rev. Lett. 98, 123601 (2007).
* Kimble (2008) H. J. Kimble, Nature (London) 453, 1023 (2008).
* Kolchin et al. (2008) P. Kolchin, C. Belthangady, S. Du, G. Y. Yin, and S. E. Harris, Phys. Rev. Lett. 101, 103601 (2008).
* Specht et al. (2009) H. P. Specht, J. Bochmann, M. Mücke, B. Weber, E. Figueroa, D. L. Moehring, and G. Rempe, Nature Photonics 3, 469 (2009).
* McKeever et al. (2004) J. McKeever, A. Boca, A. D. Boozer, R. Miller, J. R. Buck, A. Kuzmich, and H. J. Kimble, Science 303, 1992 (2004).
* Keller et al. (2004) M. Keller, B. Lange, K. Hayaska, W. Lange, and H. Walther, Nature 431, 1075 (2004).
* Srinivasan et al. (2007) K. Srinivasan, O. Painter, A. Stintz, and S. Krishna, Appl. Phys. Lett. 91, 091102 (2007).
* Rakher et al. (2010) M. T. Rakher, L. Ma, O. Slattery, X. Tang, and K. Srinivasan, Nature Photonics 4, 786 (2010).
* Hadfield (2009) R. H. Hadfield, Nature Photonics 3, 696 (2009).
* Broadbent et al. (2009) C. J. Broadbent, P. Zerom, H. Shin, J. C. Howell, and R. W. Boyd, Phys. Rev. A 79, 033802 (2009).
* Belthangady et al. (2010) C. Belthangady, C.-S. Chuu, I. A. Yu, G. Y. Yin, J. M. Kahn, and S. E. Harris, Phys. Rev. Lett. 104, 223601 (2010).
* Srinivasan et al. (2008) K. Srinivasan, C. P. Michael, R. Perahia, and O. Painter, Phys. Rev. A 78, 033839 (2008).
* Bylander et al. (2003) J. Bylander, I. Robert-Philip, and I. Abram, Eur. Phys. J. D 22, 295 (2003).
* Santori et al. (2002) C. Santori, D. Fattal, J. Vuckovic, G. Solomon, and Y. Yamamoto, Nature 419, 594 (2002).
* Varoutsis et al. (2005) S. Varoutsis, S. Laurent, P. Kramper, A. Lemaître, I. Sagnes, I. Robert-Philip, and I. Abram, Phys. Rev. B 72, 041303 (2005).
* Patel et al. (2008) R. B. Patel, A. J. Bennett, K. Cooper, P. Atkinson, C. A. Nicoll, D. A. Ritchie, and A. J. Shields, Phys. Rev. Lett. 100, 207405 (2008).
* Flagg et al. (2010) E. B. Flagg, A. Muller, S. V. Polyakov, A. Ling, A. Migdall, and G. S. Solomon, Phys. Rev. Lett. 104, 137401 (2010).
|
arxiv-papers
| 2011-03-18T10:32:12 |
2024-09-04T02:49:17.763782
|
{
"license": "Public Domain",
"authors": "Matthew T. Rakher and Kartik Srinivasan",
"submitter": "Matthew Rakher",
"url": "https://arxiv.org/abs/1103.3593"
}
|
1103.3629
|
# Extraction of Electron Self-Energy and Gap Function in the Superconducting
State of Bi2Sr2CaCu2O8 Superconductor via Laser-Based Angle-Resolved
Photoemission
Wentao Zhang1, Jin Mo Bok2, Jae Hyun Yun2, Junfeng He1, Guodong Liu1, Lin
Zhao1, Haiyun Liu1, Jianqiao Meng1, Xiaowen Jia1, Yingying Peng1, Daixiang
Mou1, Shanyu Liu1, Li Yu1, Shaolong He1, Xiaoli Dong1, Jun Zhang1, J. S. Wen3,
Z. J. Xu3, G. D. Gu3, Guiling Wang4, Yong Zhu4, Xiaoyang Wang4, Qinjun Peng4,
Zhimin Wang4, Shenjin Zhang4, Feng Yang4, Chuangtian Chen4, Zuyan Xu4, H.-Y.
Choi2, C. M. Varma5 and X. J. Zhou 1,∗
1National Laboratory for Superconductivity, Beijing National Laboratory for
Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences,
Beijing 100080, China
2Department of Physics and Institute for Basic Science Research, SungKyunKwan
University, Suwon 440-746, Korea.
3Condensed Matter Physics and Materials Science Department, Brookhaven
National Laboratory, Upton, New York 11973, USA
4Technical Institute of Physics and Chemistry, Chinese Academy of Sciences,
Beijing 100080, China
5Department of Physics and Astronomy, University of California, Riverside,
California 92521
(March 18, 2011)
###### Abstract
Super-high resolution laser-based angle-resolved photoemission measurements
have been performed on a high temperature superconductor Bi2Sr2CaCu2O8. The
band back-bending characteristic of the Bogoliubov-like quasiparticle
dispersion is clearly revealed at low temperature in the superconducting
state. This makes it possible for the first time to experimentally extract the
complex electron self-energy and the complex gap function in the
superconducting state. The resultant electron self-energy and gap function
exhibit features at $\sim$54 meV and $\sim$40 meV, in addition to the
superconducting gap-induced structure at lower binding energy and a broad
featureless structure at higher binding energy. This information will provide
key insights into, and constraints on, the origin of electron pairing in high
temperature superconductors.
###### pacs:
74.72.Gh, 74.25.Jb, 79.60.-i, 74.20.Mn
The mechanism of high temperature superconductivity in the copper-oxide
compounds (cuprates) remains an outstanding issue in condensed matter physics
more than two decades after its discovery Bednorz . It has been
established that, in cuprate superconductors, electrons are paired with
opposite spins and opposite momenta to form a spin-singlet state CEGough , as
in the Cooper pairing of conventional superconductors BCSTheory . It has been
further established that the superconducting gap in the cuprate
superconductors has a predominantly d-wave symmetry TunnelingJunction ,
distinct from the s-wave form in conventional superconductors. The center
of the debate lies in the origin of the electron pairing in the cuprate
superconductors. In conventional superconductors, it is known from the BCS
theory that the exchange of phonons gives rise to the
formation of Cooper pairs BCSTheory . In the cuprate superconductors, the
question becomes whether the electron-electron interaction alone can
cause the pairing AndersonScience , or whether a distinct collective mode (a glue)
remains essential in mediating the pairing, as in conventional
superconductors, and, if so, what the nature of the glue is Scalapino ; ZXShen ; Aji .
In the conventional superconductors, the extraction of the electron self-
energy $\Sigma(\omega)$ and gap function $\phi(\omega)$ in the superconducting
state played a critical role in proving that phonons provide the glue for
Cooper pairs ScalapinoPark . Experimentally, these two fundamental quantities
were extracted from tunneling experiments IGiaever . On the one hand,
theoretical models were examined by simulating the two quantities and comparing
with the experimentally extracted ones SchriefferSimu . On the other hand, the
underlying bosonic spectral function associated with the pairing glue was
directly inverted from the gap function via the Eliashberg
equations McMillanInversion . The striking resemblance between the bosonic
spectral function thus extracted and the phonon density of states directly
measured by neutron scattering provided overwhelming evidence that
phonons are the pairing glue in conventional superconductors ScalapinoPark .
It is natural to ask whether similar procedures can be applied in high
temperature cuprate superconductors, which necessitates reliable extraction of
the electron self-energy and gap function in the superconducting state Vekhter .
However, ordinary tunneling experiments cannot be used for this purpose
due to complications from the strong anisotropy of the electronic
structure and the d-wave superconducting gap HTSCTunneling ; DavisSTM .
Figure 1: Observation of two branches of Bogoliubov quasiparticle-like
dispersions in the superconducting state of the slightly underdoped Bi2212
(Tc=89 K). (a). Simulated single particle spectral function A(k,$\omega$) in
the superconducting state, taking a superconducting gap of $\Delta$=10 meV and
a linewidth broadening of $\Gamma$=5 meV. The simulated A(k,$\omega$)
multiplied by the Fermi distribution function f($\omega$) at a temperature of
70 K is shown in (b) and at 20 K is shown in (c). (e1-e5). Photoemission
images taken at different temperatures along the momentum cut marked as in
(d). This cut nearly points towards ($\pi$, $\pi$) with an angle $\theta$ as
defined. (f1-f5). Photoemission images in (e1-e5) divided by a Fermi
distribution function at the corresponding temperatures.
In this paper, we report the first experimental extraction of the electron
self-energy $\Sigma(\omega)$ and gap function $\phi(\omega)$ in the
superconducting state of a high temperature superconductor Bi2Sr2CaCu2O8
(Bi2212) from angle-resolved photoemission (ARPES) measurements. This is
accomplished by carrying out super-high resolution laser-based ARPES
measurements on Bi2212 which clearly reveals the band back-bending behavior of
the Bogoliubov-like quasiparticle dispersion at low temperature. The extracted
$\Sigma(\omega)$ and $\phi(\omega)$ show clear features at $\sim$54 meV and
$\sim$40 meV, in addition to the superconducting gap-induced structure at
lower energies, and a broad background at higher energies. The accurate and
detailed determination of $\Sigma(\omega)$ and $\phi(\omega)$ will provide key
information on the pairing mechanism of high temperature superconductivity.
The angle-resolved photoemission measurements were carried out on our vacuum
ultra-violet (VUV) laser-based ARPES system LiuIOP . The photon energy of the
laser is 6.994 eV with a bandwidth of 0.26 meV. The energy resolution of the
electron energy analyzer (Scienta R4000) was set at 1 meV, giving rise to an
overall energy resolution of $\sim$1.2 meV, significantly improved from the
7$\sim$15 meV of some previous ARPES measurements HMatsui_Bi2223 ;
AVB_BogoAngle ; WSLee ; JC_Bendingback ; HBYang_CooperPair . The angular
resolution is $\sim$0.3$^{\circ}$, corresponding to a momentum resolution of
$\sim$0.004 $\AA^{-1}$ at the photon energy of 6.994 eV, more than a factor of
two better than the 0.009 $\AA^{-1}$ obtained at a typical photon energy of
21.2 eV for the same angular resolution.
The Fermi level is referenced by measuring a clean piece of polycrystalline
gold that is electrically connected to the sample. Slightly underdoped Bi2212
(Tc=89 K) single crystals were cleaved in situ and measured in vacuum with a
base pressure better than 5 $\times$ 10-11 Torr.
According to the BCS theory of superconductivityBCSTheory , in the
superconducting state, the low energy excitations are Bogoliubov
quasiparticles representing the coherent mixture of electron and hole
components. The BCS spectral function can be written as:
$A_{BCS}(k,\omega)=\frac{1}{\pi}[\frac{|u_{k}|^{2}\Gamma}{(\omega-
E_{k})^{2}+\Gamma^{2}}+\frac{|v_{k}|^{2}\Gamma}{(\omega+E_{k})^{2}+\Gamma^{2}}],$
where $u_{k}$ and $v_{k}$ are coherence factors, $\Gamma$ is a linewidth
parameter, and $E_{k}=(\epsilon_{k}^{2}+|\Delta(k)|^{2})^{1/2}$, where
$\epsilon_{k}$ is the normal state band dispersion and $\Delta(k)$ is the gap
function. Two prominent characteristics are noted in the simulated spectral
function in the superconducting state (Fig. 1a). The first is the existence of
two branches of Bogoliubov quasiparticle dispersions above and below the Fermi
level $E_{F}$,
separated by the superconducting gap 2$|\Delta|$ and satisfying a relation
A(k,$\omega$)=A(-k,-$\omega$). The second is that, for a given branch, there
is a band back-bending right at the Fermi momentum $k_{F}$. Since ARPES measures a
single particle spectral function A(k,$\omega$) weighted by the photoemission
matrix element and the Fermi distribution function f($\omega$,T):
$I(\mathbf{k},\omega)=I_{0}(\mathbf{k},\nu,A)f(\omega,T)A(\mathbf{k},\omega),$
it probes mainly the occupied states due to $f(\omega,T)$. When the
temperature (T) is sufficiently low, the upper branch of the dispersion is
suppressed (Fig. 1c). On the other hand, when the temperature is relatively
high in the superconducting state, a few electronic states above the Fermi
level can be thermally populated and become visible (Fig. 1b).
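A minimal numerical sketch of the BCS spectral function above, using the $\Delta$ = 10 meV and $\Gamma$ = 5 meV of the Fig. 1 simulation (the function names are ours), reproduces both characteristics:

```python
import math

DELTA_MEV = 10.0   # superconducting gap used in the Fig. 1 simulation
GAMMA_MEV = 5.0    # linewidth broadening used in the Fig. 1 simulation
K_B_MEV_PER_K = 0.08617   # Boltzmann constant in meV/K

def a_bcs(eps_k, omega):
    """The BCS spectral function quoted above: Lorentzians at +/- E_k."""
    e_k = math.hypot(eps_k, DELTA_MEV)          # E_k = sqrt(eps^2 + |Delta|^2)
    u2 = 0.5 * (1 + eps_k / e_k)                # coherence factor |u_k|^2
    v2 = 0.5 * (1 - eps_k / e_k)                # coherence factor |v_k|^2
    def lor(x):
        return GAMMA_MEV / (x * x + GAMMA_MEV ** 2)
    return (u2 * lor(omega - e_k) + v2 * lor(omega + e_k)) / math.pi

def fermi(omega, t_kelvin):
    """Fermi distribution f(omega, T) with omega in meV."""
    return 1.0 / (1.0 + math.exp(omega / (K_B_MEV_PER_K * t_kelvin)))

# First characteristic: A(k, omega) = A(-k, -omega).
assert abs(a_bcs(5.0, 7.0) - a_bcs(-5.0, -7.0)) < 1e-12
# Second: at k_F the branches peak at +/- Delta; cooling from 70 K to 20 K
# suppresses the thermally populated upper branch, as in Fig. 1(b, c).
assert fermi(DELTA_MEV, 20.0) < fermi(DELTA_MEV, 70.0)
```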
Figure 2: Observation of band back-bending in the lower branch of the
Bogoliubov quasiparticle dispersion in the superconducting state of Bi2212.
(a). Photoemission image taken at 16 K along the cut shown in Fig. 1d. (b).
Photoemission image taken at 16 K divided by the image taken at 107 K along
the same cut. (c1-c5). Representative MDCs in the normal state at 107 K (red
circles) and superconducting state at 16 K (blue circles) at five different
binding energies. The solid lines represent fitted results. (d1-d5). MDCs at
16 K divided by MDCs at 107 K at different binding energies (black circles).
The first characteristic of the Bogoliubov quasiparticle, i.e., the existence
of two dispersion branches above and below the Fermi level in the
superconducting state, has been reported in the previous ARPES measurements on
cuprate superconductorsHMatsui_Bi2223 ; AVB_BogoAngle ; WSLee ;
HBYang_CooperPair . It shows up more clearly in our super-high resolution
ARPES measurements (Fig. 1(e1-e5)). As seen in Fig. 1e1, at low temperature
(16 K), the spectral weight above the Fermi level is almost invisible.
However, at relatively high temperatures while the sample remains in the
superconducting state, there are clear features present above the Fermi level,
as seen from the 70 K (Fig. 1e2) and 80 K (Fig. 1e3) measurements. By dividing
out the corresponding Fermi distribution function from the measured
photoemission data in Fig. 1(e1-e5), part of the upper dispersion branch near
the Fermi level can be recovered, as seen in the 70 K (Fig. 1f2) and 80 K
(Fig. 1f3) data. Indeed, the upper branch and the lower branch are nearly
centro-symmetric with respect to the Fermi momentum at the Fermi level. Above
Tc, the band recovers to the normal state dispersion with no gap opening (97 K
data in Figs. 1e4 and 1f4, and 107 K data in Figs. 1e5 and 1f5).
Figure 3: Momentum dependence of the lower branch of the Bogoliubov
quasiparticle-like dispersion in Bi2212. (a1-a6). Photoemission images
measured at 16 K for different momentum cuts with their location shown in the
upper-right inset. (b1-b6). Photoemission images measured at 16 K divided by
the corresponding images measured at 107 K. The white arrows in (b5) and (b6)
mark an additional feature that is likely due to umklapp bands in Bi2212.
One major result of the present work is the revelation of the other
characteristic of the Bogoliubov quasiparticles in the superconducting state,
i.e., the band back-bending expected in the lower branch at the Fermi momentum
(Fig. 1c). In this case, a momentum distribution curve (MDC) (red thick line
in Fig. 1c) at an energy slightly below the gap $\Delta$ (red thin line in
Fig. 1c) is expected to exhibit two peaks: a strong peak from the original
main band, and the other weak peak (or shoulder, marked by black arrow in Fig.
1c) from the superconductivity-induced additional band. Indeed, as seen from
Fig. 2a which was taken on Bi2212 at 16 K in the superconducting state, a band
back-bending behavior can be clearly observed. The MDC at a typical binding
energy of 20 meV shows a clear shoulder in addition to the main peak (Fig.
2c2). To highlight the superconductivity-induced change across the
superconducting transition, we also show in Fig. 2b the photoemission image
taken at 16 K divided by the one taken at 107 K in the normal state. The
superconductivity-induced band back-bending becomes more pronounced (Fig. 2b)
and the corresponding MDCs at some binding energies like 20 meV show two clear
peaks (Fig. 2d2). To the best of our knowledge, this is the first time that
this second characteristic of the Bogoliubov quasiparticles in the
superconducting state is revealed so clearly, mainly due to the much improved
resolution of our laser-based ARPES measurements.
Figure 4: The electron self-energy and gap function obtained from the ARPES
measurements of Bi2212 in the superconducting state. (a) and (b) show real
part and imaginary part, respectively, of the electron self-energy
$\Sigma(\omega)$ obtained for three different momentum cuts (Cuts $\\#$3,
$\\#$4 and $\\#$5 in upper-right inset of Fig. 3); (c) and (d) show the real
part and imaginary part, respectively, of the gap function $\phi(\omega)$ for
the three momentum cuts.
Fig. 3(a1-a6) show the momentum dependence of the photoemission images
measured at 16 K; the corresponding photoemission images at 16 K divided by
those at 107 K are shown in Figs. 3(b1-b6) to highlight the change across the
superconducting transition. The band back-bending behavior is clearly observed
and appears to get more pronounced as the momentum cuts move away from the
nodal region towards the antinodal region. The energy position of the bending
points moves to a larger binding energy as the momentum cuts move towards the
antinodal region; this is consistent with the increasing superconducting gap
size because the location of the band-bending top corresponds to the
superconducting gap $\Delta$ at the Fermi momentum (Fig. 1a).
The observation of the two major characteristics of the Bogoliubov
quasiparticle-like behavior in Bi2212 indicates that its superconducting state
is qualitatively consistent with the BCS formalism. More importantly, the
clear revelation of the band back-bending behavior at low temperature offers a
new opportunity to go beyond the BCS approximation by using the Eliashberg
formalism to directly extract the electron self-energy and the gap function
from the ARPES data. The single particle spectral function A(k,$\omega$),
measured by ARPES, is proportional to the imaginary part of the Green’s
function
$A(\mathbf{k},\omega)=-\frac{1}{\pi}Im\\{G(\mathbf{k},\omega)\\}$ (1)
Considering a cylindrical Fermi surface in the superconducting state, it can
be written as
$G(k,\omega)=\frac{Z(\omega)\omega+\epsilon_{k}}{(Z(\omega)\omega)^{2}-\epsilon^{2}_{k}-\phi^{2}(\theta,\omega)}$
(2)
where $\epsilon_{k}$ is the bare band, $Z(\omega)$ is a renormalization
parameter, and $\phi(\theta,\omega)=\phi(\omega)\cos(2\theta)$ is the d-wave
gap function. This form is applicable when the momentum cut points towards
$(\pi,\pi)$, as is the case for our measurements (see Fig. 1d and inset of
Fig. 3). $Z(\omega)$ and $\phi(\omega)$ are extracted by fitting the MDCs at
different binding energies using Eqs. (1) and (2), as shown in Fig. 2c. In the fitting
procedure, the bare bands $\epsilon_{k}$ are taken from the tight binding
modelRSM_TightBinding , which are the same as those used beforeHYPRB . A different
selection of the bare bands would affect the absolute value of the fitted
quantities (Fig. 4) but has little effect on the main features that will be
discussed below. All the MDCs are well fitted by the combined Eqs. (1) and
(2), as shown in Fig. 2c for MDCs (16 K, blue lines and circles) at several
typical binding energies. The same equations also fit the normal state data by
taking $\phi(\omega)=0$ in Eq. (2), as shown in Fig. 2c for the 107 K
dataHYPRB .
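As an illustration of how Eqs. (1) and (2) combine in the MDC fitting, the sketch below evaluates the superconducting-state spectral function numerically. This is not the authors' fitting code: the bare-band scan, $Z$, $\phi$, and broadening values are placeholders chosen only to show the expected MDC peak structure.

```python
import math

def spectral_function(eps_k, omega, Z, phi, eta=1e-3):
    """A(k, w) = -(1/pi) Im G(k, w), with G from Eq. (2) and a small
    broadening eta pushing omega into the upper half plane."""
    w = complex(omega, eta)
    G = (Z * w + eps_k) / ((Z * w) ** 2 - eps_k ** 2 - phi ** 2)
    return -G.imag / math.pi

# An illustrative MDC: fix the binding energy and scan the bare band eps_k.
omega = -0.030            # 30 meV below E_F (energies in eV; placeholder)
Z, phi = 2.0, 0.015       # renormalization and gap amplitude (placeholders)
eps_grid = [i * 1e-3 for i in range(-100, 101)]
mdc = [spectral_function(e, omega, Z, phi) for e in eps_grid]

# The MDC peaks where (Z w)^2 = eps_k^2 + phi^2, i.e. on the Bogoliubov
# quasiparticle dispersion; here the dominant peak sits near
# eps_k ~ -sqrt((Z omega)^2 - phi^2).
peak_eps = eps_grid[mdc.index(max(mdc))]
```

With these placeholder values the peak falls near $\epsilon_{k}\approx-58$ meV, i.e. at $-\sqrt{(Z\omega)^{2}-\phi^{2}}$, illustrating how the fitted MDC peak positions constrain $Z(\omega)$ and $\phi(\omega)$.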
Fig. 4 shows the obtained real and imaginary parts, $\Sigma^{\prime}(\omega)$
and $\Sigma^{\prime\prime}(\omega)$, of the electron self-energy, and the real
and imaginary parts, $\phi_{1}(\omega)$ and $\phi_{2}(\omega)$, of the gap
function for three typical momentum cuts (cuts $\\#$3, $\\#$4, and $\\#$5 in
upper-right inset of Fig. 3). While the gap function $\phi(\omega)$ is
obtained directly from the above fitting procedure, the electron self-energy
$\Sigma(\omega)$ is obtained from the fitted renormalization parameter
$Z(\omega)$ by $\Sigma(\omega)=[1-Z(\omega)]\omega$. Since the
superconductivity-induced change occurs most obviously in a small energy range
near the Fermi level (Fig. 3b), we confine our fitting results to a 100 meV
energy window near the Fermi level. The features below $\sim$20 meV are mainly
related to the opening of the superconducting gap (for these three cuts, the
corresponding superconducting gap is between 10 and 15 meV). At higher
energies, two main features can be identified: one at $\sim$54 meV, appearing as
a robust hump in the real part of the electron self-energy (Fig. 4a), and the
other at $\sim$40 meV, appearing as a dip in both the imaginary part of the
electron self-energy (Fig. 4b) and the imaginary part of the gap function
(Fig. 4d). We note that the $\sim$54 meV feature is close to the bosonic mode
observed in the tunneling experimentDavisSTM and is also close to the energy
scale of the well-known nodal dispersion kink in cupratesNodalKink . The 40
meV feature is close to the antinodal kink found in Bi2212, with its energy
close to either the resonance mode or the B1g phonon modeTCuk . Further work
is needed to pin down the exact origin of these energy scales and their role
in causing superconductivity. We also note that $\Sigma(\omega)$ at higher
energies (above 50 meV) shows a featureless background that is also observed in
the normal stateHYPRB .
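The conversion from the fitted renormalization parameter to the self-energy is a one-liner; the complex $Z$ value below is a placeholder, included only to show that a single fitted $Z(\omega)$ yields both $\Sigma^{\prime}(\omega)$ and $\Sigma^{\prime\prime}(\omega)$.

```python
def self_energy(omega, Z):
    # Sigma(w) = [1 - Z(w)] * w; a complex Z(w) gives Sigma' and Sigma''.
    return (1.0 - Z) * omega

w = -0.030                  # binding energy in eV (placeholder)
Z = complex(2.5, -1.0)      # fitted renormalization parameter (placeholder)
sigma = self_energy(w, Z)
sigma_re, sigma_im = sigma.real, sigma.imag   # Sigma'(w) and Sigma''(w)
```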
In summary, by taking advantage of the high precision ARPES measurements on
Bi2212, we have resolved clearly both characteristics of the Bogoliubov
quasiparticle-like dispersions in the superconducting state. In particular,
the revelation of the band back-bending behavior of the lower dispersion
branch at low temperature makes it possible for the first time to extract the
complex electron self-energy and complex gap function of the Bi2212 superconductor
in the superconducting state. The experimental extraction of the electron
self-energy and the gap function in the superconducting state will provide key
information and constraints on the pairing mechanism in high temperature
superconductors. First, as in conventional superconductors, various pairing
theories can be examined by computing these two quantities and comparing them
with the experimentally determined ones. Second, also as in conventional
superconductors, if it is possible to directly invert these two quantities to
obtain the underlying bosonic spectral function that is responsible for
superconductivity, it may provide
key information on the nature of the electron pairing mechanism. We hope our
present work will stimulate further efforts along these directions.
XJZ thanks the funding support from NSFC (Grant No. 10734120) and the MOST of
China (973 program No: 2011CB921703).
∗Corresponding author (XJZhou@aphy.iphy.ac.cn)
## References
* (1) J. G. Bednorz et al., Z. Phys. B 64, 189 (1986).
* (2) C. E. Gough et al., Nature (London) 326, 855 (1987).
* (3) J. Bardeen et al, Phys. Rev. 108, 1175 (1957).
* (4) D. J. Van Harlingen, Rev. Mod. Phys. 67, 515 (1995); C. C. Tusei et al., Rev. Mod. Phys. 72, 969 (2000).
* (5) P. W. Anderson, Science 316, 1705 (2007).
* (6) D. J. Scalapino, Phys. Reports 250, 329 (1995).
* (7) Z. X. Shen et al., Philosophical Magazine B 82, 1349 (2002).
* (8) V. Aji, et al., Phys. Rev. B 81, 064515 (2010).
* (9) D. J. Scalapino, in Superconductivity, R. D. Parks, Ed. (Dekker, New York, 1969), pp.449-560.
* (10) I. Giaever et al., Phys. Rev. 126, 941 (1962); J. M. Rowell et al., Phys. Rev. Lett. 10, 334 (1963).
* (11) J. R. Schrieffer et al., Phys. Rev. Lett. 10, 336 (1963).
* (12) W. L. McMillan et al., Phys. Rev. Lett. 14, 108 (1965).
* (13) I. Vekhter et al., Phys. Rev. Lett. 90, 237003 (2003).
* (14) J. F. Zasadzinski et al., Phys. Rev. Lett. 96, 017004 (2006).
* (15) J. Lee et al., Nature 442, 546 (2006).
* (16) G. D Liu et al., Rev. Sci. Instruments 79, 023105 (2008).
* (17) H. Matsui et al., Phys. Rev. Lett. 90, 217002 (2003).
* (18) A. V. Balatsky et al., Phys. Rev. B. 79, 020505(2009).
* (19) W. S. Lee et al., Nature (London) 450, 81 (2007).
* (20) J. C. Compuzano et al., Phys. Rev. B. 53, 14737 (1996).
* (21) H. B. Yang et al., Nature (London) 456, 77 (2008).
* (22) R. S. Markiewicz et al., Phys. Rev. B 72, 054519 (2005).
* (23) J. M. Bok et al., Phys. Rev. B 81, 174516 (2010).
* (24) A. Lanzara et al., Nature(London) 412,510(2001).
* (25) A. D. Gromko et al., Phys. Rev. B 68, 174520(2003); T. Cuk et al., Phys. Rev. Lett. 93, 117003 (2004).
|
arxiv-papers
| 2011-03-18T13:54:01 |
2024-09-04T02:49:17.769485
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Wentao Zhang, Jin Mo Bok, Jae Hyun Yun, Junfeng He, Guodong Liu, Lin\n Zhao, Haiyun Liu, Jianqiao Meng, Xiaowen Jia, Yingying Peng, Daixiang Mou,\n Shanyu Liu, Li Yu, Shaolong He, Xiaoli Dong, Jun Zhang, J. S. Wen, Z. J. Xu,\n G. D. Gu, Guiling Wang, Yong Zhu, Xiaoyang Wang, Qinjun Peng, Zhimin Wang,\n Shenjin Zhang, Feng Yang, Chuangtian Chen, Zuyan Xu, H.-Y. Choi, C. M. Varma\n and X. J. Zhou",
"submitter": "Xingjiang Zhou",
"url": "https://arxiv.org/abs/1103.3629"
}
|
1103.3643
|
# Scattering Lens Resolves sub-$100$ nm Structures with Visible Light
E.G. van Putten Complex Photonic Systems, Faculty of Science and Technology
and MESA+ Institute for
Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede, The
Netherlands D. Akbulut Complex Photonic Systems, Faculty of Science and
Technology and MESA+ Institute for
Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede, The
Netherlands J. Bertolotti University of Florence, Dipartimento di Fisica,
50019 Sesto Fiorentino, Italy Complex Photonic Systems, Faculty of Science
and Technology and MESA+ Institute for
Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede, The
Netherlands W.L. Vos Complex Photonic Systems, Faculty of Science and
Technology and MESA+ Institute for
Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede, The
Netherlands A. Lagendijk Complex Photonic Systems, Faculty of Science and
Technology and MESA+ Institute for
Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede, The
Netherlands FOM Institute for Atomic and Molecular Physics (AMOLF), Science
Park 104, 1098 XG Amsterdam, The Netherlands A.P. Mosk Complex Photonic
Systems, Faculty of Science and Technology and MESA+ Institute for
Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede, The
Netherlands
###### Abstract
The smallest structures that conventional lenses are able to optically resolve
are of the order of $200$ nm. We introduce a new type of lens that exploits
multiple scattering of light to generate a scanning nano-sized optical focus.
With an experimental realization of this lens in gallium phosphide we have
succeeded in imaging gold nanoparticles at $97$ nm optical resolution. Ours is
the first lens that provides resolution in the nanometer regime at
visible wavelengths.
Many essential structures in nanoscience and nanotechnology, such as cellular
organelles, nanoelectronic circuits, and photonic structures, have spatial
features on the order of $100$ nm. The optical resolution of conventional
lenses is limited to approximately $200$ nm by their numerical aperture, and
therefore they cannot resolve nanostructures. With fluorescence-based imaging
methods it is possible to reconstruct an image of objects that are a
substantial factor smaller than the focus size by exploiting the photophysics
of extrinsic fluorophores.Hell and Wichmann (1994); Dyba and Hell (2002);
Betzig et al. (2006); Rust et al. (2006); Hell (2007) Their resolution
strongly depends on the shape of the optical focus, which is determined by
conventional lens systems. This dependence makes them vulnerable to focal
distortion by scattering. Moreover, it is not always feasible or desirable to
dope the object under study. Other imaging methods improve their resolution by
reconstructing the evanescent waves that decay exponentially with distance
from the object. Intricate near field microscopes bring fragile nano-sized
probes into close proximity of the object, where the evanescent field is still
measurable.Pohl and Courjon (1993) With this technique it is hard to quantify
the interaction between the short-lived tip and the structure. Metamaterials,
which are meticulously nanostructured artificial composites, can be engineered
to access the evanescent waves and image sub-wavelength structuresPendry
(2000) as demonstrated with superlensesFang et al. (2005) and hyperlensesLiu
et al. (2007) in the UV. These materials physically decrease the focus size,
which opens the possibility of improving both linear and non-linear
imaging techniques. In the especially relevant visible range of the spectrum,
plasmonic metamaterials can be used to produce nano-sized isolated hot
spotsStockman et al. (2002); Aeschlimann et al. (2007); Kao et al. (2011) but
the limited control over their position makes them unsuitable for imaging. Up
to now, a freely scannable nano-sized optical focus has not been demonstrated.
Figure 1: (A) Principle of coupling light to high transversal k-vectors in a
high-index material. Without scattering, refraction would strongly limit the
angular range into which light could be coupled. By exploiting strong scattering
at the interface, incident light $k_{\text{in}}$ is coupled to all outgoing
angles $k_{\text{out}}$ in the high index material. (B) Schematic of a HIRES-
lens that uses light scattering to achieve a high optical resolution. This
HIRES-lens consists of a slab of gallium phosphide (GaP) on top of a strongly
scattering porous layer. By controlling the incident wavefront, a small focus
is made in the object plane of the HIRES-lens. (C) Overview of the setup. A
$\lambda_{0}=561$ nm laser beam is expanded and illuminates a phase-only
spatial light modulator. The modulated reflected beam is first imaged onto a
two-axis steering mirror and then onto the porous surface of the GaP HIRES-
lens. A variable aperture controls the extent of the illuminated area and a
light stop places the setup in a dark field configuration by blocking the
center of the light beam. We image the object plane onto a CCD camera using an
oil immersion microscope objective.
In this Letter we introduce a new type of lens that generates a scanning nano-
sized optical focus. We used this lens to image a collection of gold
nanoparticles at 97 nm optical resolution. The lens exploits multiple
scattering of light in a porous high refractive index material to increase the
numerical aperture of the system; a principle we name High Index Resolution
Enhancement by Scattering (HIRES).
A HIRES-lens consists of a homogeneous slab of high-index material on top of a
strongly disordered scattering layer. The disordered layer breaks the
translational invariance of the interface, which enables incident light to be
coupled to all propagating angles inside the high-refractive-index material as
is shown in Fig. 1A. Yet multiple scattering also scrambles the wavefront,
creating a speckle-like pattern in the object plane that by itself cannot be
used for imaging. Therefore we manipulate the incident wavefront in order to force
constructive interference of the scattered light at a position in the object
plane of our HIRES-lens. The wavefront is controlled using a feedback based
methodVellekoop and Mosk (2007) that is conceptually related to phase
conjugationLeith and Upatnieks (1966) and time reversalFink et al. (2000). As
a result, a perfectly spherical wave emerges from the porous layer and
converges towards the object plane to form a sharp optical focus (Fig. 1B).
Whereas in conventional optics (e.g. solid immersion lensesWu et al. (1999) or
total internal reflection microscopesAxelrod et al. (1984)) any inevitable
surface roughness causes a distortion of the wavefront and a concomitant loss
of resolution, its inherently random nature makes a HIRES-lens robust against
these aberrations. Any wavefront error is distributed randomly over all outgoing
directions, slightly reducing the contrast but not the resolutionVellekoop et
al. (2010). In order to use the HIRES-lens for high resolution imaging, the
focus is easily moved around in the object plane by steering the incident
wavefront, directly exploiting the angular correlations in the scattered light,
an effect well known as the optical memory effect.Feng et al. (1988); Freund
et al. (1988); Vellekoop and Aegerter (2010) By raster scanning the focus
across an object we acquire an aberration-free high resolution image. The
robust scanning high resolution focus makes the HIRES-lens excellently suited
for optical imaging of nanostructures.
To demonstrate an experimental implementation of our HIRES-lens we fabricate
it in gallium phosphide (GaP). GaP is transparent in a large part of the
visible spectrum ($\lambda_{0}>550$ nm) and has a maximum refractive index of
$n=3.41$, higher than that of any other transparent material in this wavelength
range.Aspnes and Studna (1983) Electrochemically etching GaP with sulfuric
acid (H2SO4) creates macroporous networks resulting in one of the strongest
scattering photonic structures ever observed.Schuurmans et al. (1999) Using
this etching process we create a $d=2.8\leavevmode\nobreak\ \mu$m thick porous
layer on one side of a crystalline GaP wafer. This layer is thick enough to
completely randomize the incident wavefront and to suppress any unscattered
background light.
The optical memory effect allows us to shift the scattered light in the object
plane of the HIRES-lens over a distance $r\approx 1.8L\lambda/(2\pi n^{2}d)$
before the intensity correlation decreases to $1/e$Feng et al. (1988), where
$L=400\leavevmode\nobreak\ \mu$m is the thickness of the wafer. The loss of
correlation only affects the intensity in the focus (not its shape), making it
easy to correct for this effect without losing resolution. Due to the high
refractive index contrast on the flat GaP-air interface, a large fraction of
the light is internally reflected. The reflected light interferes with the
light that comes directly from the porous layer. This interference causes a
background signal that is $3$ times larger than the focus intensity. We have
therefore strongly suppressed the internal reflections by depositing an
approximately $200$ nm thick anti-internal-reflection coating of amorphous
silicon on the surface. The amorphous silicon is nearly index matched with the
GaP and strongly absorbs the light that would otherwise be internally
reflected. As a result of this layer, the background signal is significantly
reduced to only $0.04$ times the focus intensitySup (2011). The resulting
field of view of our coated HIRES-lens is measured to be $r=1.7\pm
0.1\leavevmode\nobreak\ \mu$m in radius; $85\%$ of the theoretical limit
determined by the optical memory effect. In the center of the surface we
created a small window of about $10\leavevmode\nobreak\ \mu$m in diameter by
locally removing the anti-internal-reflection coating. We use this window to
place objects onto our HIRES-lens. As a test sample we have deposited a random
configuration of gold nanoparticles with a specified diameter of $50$ nm
inside this window.
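Plugging the quoted values into the memory-effect estimate $r\approx 1.8L\lambda/(2\pi n^{2}d)$ reproduces the numbers in the text (taking $\lambda$ as the vacuum wavelength, which is our reading of the formula):

```python
import math

lam = 561e-9   # vacuum wavelength (m)
L = 400e-6     # GaP wafer thickness (m)
n = 3.41       # GaP refractive index
d = 2.8e-6     # porous layer thickness (m)

# Memory-effect range before the intensity correlation drops to 1/e.
r = 1.8 * L * lam / (2 * math.pi * n ** 2 * d)   # ~2.0 micrometers

# Measured field of view: 1.7 um in radius, i.e. ~85% of this limit.
fraction = 1.7e-6 / r
```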
An overview of our setup is shown in Fig. 1C. We use a CW laser with a
wavelength of $\lambda_{0}=561$ nm just below the GaP bandgap of $2.24$ eV
($550$ nm) where the refractive index is maximal and the absorption is still
negligible.Aspnes and Studna (1983) We spatially partition the wavefront into
square segments of which we independently control the phase using a spatial
light modulator (SLM). The SLM is first imaged onto a two-axis fast steering
mirror and then onto the porous surface of the HIRES-lens. With a variable
aperture we set the radius $R_{\text{max}}$ of the illuminated surface area
between $0\leavevmode\nobreak\ \mu$m and $400\leavevmode\nobreak\ \mu$m. The
visibility of the gold nanoparticles is maximized by blocking the central part
of the illumination ($R\leavevmode\nobreak\ <\leavevmode\nobreak\
196\leavevmode\nobreak\ \mu$m), placing the system in a dark field
configuration. At the back of the HIRES-lens a high-quality oil immersion
microscope objective (NA = $1.49$) images the object plane onto a CCD camera.
This objective is used to efficiently collect all the light scattered from the
object plane and to obtain a reference image, which is used as a comparison
for our HIRES-lens. Notice that in our scheme the resolution is determined by
the HIRES-lens itself and does not depend on the imaging optics at the back.
We first synthesize the wavefront that, after being scattered, creates a focus
in the object plane. We use light scattered from one of the gold nanoparticles
in the object plane as a feedback signal to obtain a set of complex amplitudes
that describe the propagation from different incident positions on the porous
layer towards the nanoparticleVellekoop and Mosk (2007). By reversing the
phase of these complex amplitudes we force the light waves to interfere
constructively at the exact location of the nanoparticle. The focus is moved
around in the image plane by rotating every contributing k-vector over a
corresponding angle. We apply these rotations by adding a deterministic phase
pattern to the incident wavefront. In the paraxial limit, a simple tilt of the
wavefront would suffice to displace the focus.Vellekoop and Aegerter (2010);
Hsieh et al. (2010) For our high resolution focus, which lies beyond this
limit, an additional position dependent phase correction is required that we
apply using the SLM.Sup (2011) The addition of this correction is essential
for a proper displacement of the focus.
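The feedback-based focusing step can be caricatured with a toy random-phasor model (our construction, not the experiment's actual transmission data): each controlled segment reaches the target with an unknown scattering phase, and setting each segment to the conjugate of its measured phase forces constructive interference at the target.

```python
import cmath
import math
import random

random.seed(1)
N = 256   # number of independently controlled wavefront segments

# Random transmission phases: a stand-in for multiple scattering in the
# porous layer between each segment and the target nanoparticle.
t = [cmath.rect(1.0, random.uniform(0.0, 2.0 * math.pi)) for _ in range(N)]

# Unshaped (flat) wavefront: the fields add with random phases.
I_flat = abs(sum(t)) ** 2

# Shaped wavefront: each segment carries the conjugate phase, so every
# contribution arrives at the target in phase (cf. phase conjugation).
I_focus = abs(sum(ti * cmath.rect(1.0, -cmath.phase(ti)) for ti in t)) ** 2
```

Here `I_focus` equals $N^{2}$ exactly, while `I_flat` is of order $N$ on average, so the focus stands out against the speckle background by a factor of roughly $N$.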
Figure 2: Experimental imaging demonstration with a GaP HIRES-lens. (A) A
reference image taken with a conventional oil immersion microscope
($\text{NA}=1.49$). The image shows a blurred collection of gold
nanoparticles. The scale bar represents $300$ nm. (B) A high resolution image
acquired with our GaP HIRES-lens. The image was obtained by scanning a small
focus over the objects while monitoring the amount of scattered light and
deconvolved with Eq. 1Sup (2011). (C) A vertical cross section through the
center of the left sphere in A and B shows the increase in resolution. The
dashed lines are Gaussian fits to the data points.
In Fig. 2 we show the imaging capabilities of the GaP HIRES-lens. First a
reference image was acquired with the high-quality microscope behind the
HIRES-lens (Fig. 2A). Because the size of the gold nanoparticles is much
smaller than the resolution limit of this conventional oil immersion
microscope, the image of the nanoparticles is blurred. Next we used our HIRES-
lens to construct a high-resolution image. By manipulating the wavefront a
focus was generated on the leftmost nanoparticle. We raster scanned the focus
across the object plane while we constantly monitored the amount of scattered
light. In Fig. 2B the result of the scan is shownSup (2011). A cross section
through the center of the left sphere (Fig. 2C) clearly shows the improvement
in resolution we obtained with our HIRES-lens, confirming our expectations
that the resolution of this image is far better than that of the conventional
high-quality detection optics.
Figure 3: Optical resolution of a GaP HIRES-lens for different radii,
$R_{\text{max}}$, of the illumination area. Red circles: measured resolutions
of the HIRES-lens. Solid blue line: expected theoretical resolution deduced
from Eq. 1. Green squares: measured resolution of the oil immersion
microscope. Dashed green line: mean measured resolution. Black arrow: expected
resolution for an infinitely large illumination area. By increasing the
illumination area the effective numerical aperture of the lens increases
thereby improving the resolution.
For a more quantitative study of the obtained resolution, we study the shape
of the focus in the HIRES-lens. The radial intensity distribution of the focus
is directly calculated from a plane wave decomposition of the contributing
waves,
$I(r)=I_{0}\left[k_{\text{max}}^{2}\frac{J_{1}(k_{\text{max}}r)}{k_{\text{max}}r}-k_{\text{min}}^{2}\frac{J_{1}(k_{\text{min}}r)}{k_{\text{min}}r}\right]^{2}$
(1)
where $J_{1}$ is a Bessel function of the first kind. The minimum and maximum
coupled transversal k-vectors, $k_{\text{min}}$ and $k_{\text{max}}$, are
directly related to the inner and outer radius, $R_{\text{min}}$ and
$R_{\text{max}}$, of the illuminated area:
$k_{\text{max}}=nk_{0}\left(1+L^{2}/R_{\text{max}}^{2}\right)^{-\frac{1}{2}}$
(and similarly for $k_{\text{min}}$). To confirm this dependence, we imaged the
objects for different values of the illumination radius $R_{\text{max}}$. For
each measurement the resolution is determined by modeling the resulting image
of a single $50$ nm gold nanoparticle with Eq. 1. Since it is hard to quantify
the resolution from the width of a non-Gaussian focal shape, we use Sparrow’s
criterion, which defines the resolution as the minimal distance at which two
separate objects are still discernible, see e.g. Hecht (1998). In Fig. 3 the
measured resolution versus $R_{\text{max}}$ is shown. As a reference we also
plotted the measured resolution of the high-quality oil immersion microscope.
We see that the resolution improves as we increase the illuminated area. The
measured resolutions are in excellent agreement with the expected resolution
obtained from the calculated intensity profile. The resolution of the
HIRES-lens is much better than that of the high-quality conventional oil
immersion microscope. The highest resolution we measured is $97\pm 2$ nm, which
demonstrates imaging in the nanometer regime with visible wavelengths.
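Eq. 1 and the relation $k_{\text{max}}=nk_{0}(1+L^{2}/R_{\text{max}}^{2})^{-1/2}$ can be evaluated directly. The sketch below uses the geometry quoted in the text; the power-series implementation of $J_{1}$ is our own, used only to keep the example self-contained.

```python
import math

def j1(x, terms=40):
    """Bessel function J1 via its power series (fine for moderate x)."""
    half = x / 2.0
    return sum((-1) ** m * half ** (2 * m + 1)
               / (math.factorial(m) * math.factorial(m + 1))
               for m in range(terms))

def intensity(r, k_min, k_max):
    """Radial focus intensity of Eq. 1 (J1(kr)/(kr) -> 1/2 as r -> 0)."""
    def lobe(k):
        return k ** 2 * (0.5 if r == 0.0 else j1(k * r) / (k * r))
    return (lobe(k_max) - lobe(k_min)) ** 2

# Geometry quoted in the text.
n, lam0, L = 3.41, 561e-9, 400e-6
k0 = 2.0 * math.pi / lam0
def k_trans(R):
    return n * k0 / math.sqrt(1.0 + (L / R) ** 2)

k_min, k_max = k_trans(196e-6), k_trans(400e-6)   # annular illumination
```

For $R_{\text{max}}=400\ \mu$m this gives $k_{\text{max}}\approx 2.4\,k_{0}$, an effective numerical aperture well beyond the free-space limit of $1$, which is what pushes the focus size below $100$ nm.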
A GaP HIRES-lens has the potential to reach even better optical resolutions,
down to $72$ nm. It would then be possible to resolve objects placed in each
other's near field, at distances of $\lambda_{0}/2\pi$. To achieve these resolutions a wider
area of the scattering porous layer has to be illuminated and as a result
light has to be scattered at increasingly higher angles from the porous layer.
Here advances could benefit from investigations in the field of thin film
solar cells where high angle scattering is beneficial for optimal light
harvestingYablonovitch and Cody (1982).
Our results open the way to improve resolution in a wide range of optical
imaging techniques. The robustness of a HIRES-lens against distortion and
aberration, together with its ease of manufacture, makes it ideal for the
imaging of fluorescently labeled biological samples or for the efficient
coupling to metamaterialsFang et al. (2005); Liu et al. (2007) and plasmonic
nanostructuresStockman et al. (2002); Aeschlimann et al. (2007); Kao et al.
(2011). Recent developments in spatio-temporal control of waves in disordered
materialsAulbach et al. (2011); Katz et al. (2011); McCabe et al. (2011)
suggest the possibility for HIRES-lenses to create ultrashort pulses in a
nano-sized focus. The fact that a HIRES-lens is a linear technique opens the
possibility of using it to improve the resolution of a large range of existing
linear and non-linear imaging techniques, such as confocal microscopy,
STEDDyba and Hell (2002), PALMBetzig et al. (2006), and STORMRust et al.
(2006).
## I Acknowledgements
The authors would like to acknowledge Hannie van den Broek, Cock Harteveld,
Léon Woldering, Willem Tjerkstra, Ivo Vellekoop, Christian Blum, and Vinod
Subramaniam for their support and insightful discussions. This work is part of
the research program of the “Stichting voor Fundamenteel Onderzoek der Materie
(FOM)”, which is financially supported by the “Nederlandse Organisatie voor
Wetenschappelijk Onderzoek (NWO)”. JB is partially financed by the FIRB-MIUR
”Futuro in Ricerca” project RBFR08UH60. WLV thanks NWO-Vici and APM is
supported by a Vidi grant from NWO.
## References
* Hell and Wichmann (1994) S. W. Hell and J. Wichmann, Opt. Lett. 19, 780 (1994).
* Dyba and Hell (2002) M. Dyba and S. W. Hell, Phys. Rev. Lett. 88, 163901 (2002).
* Betzig et al. (2006) E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, Science 313, 1642 (2006).
* Rust et al. (2006) M. J. Rust, M. Bates, and X. Zhuang, Nat. Meth. 3, 793 (2006), ISSN 1548-7091.
* Hell (2007) S. W. Hell, Science 316, 1153 (2007).
* Pohl and Courjon (1993) D. Pohl and D. Courjon, _Near Field Optics_ (Kluwer, Dordrecht, 1993).
* Pendry (2000) J. B. Pendry, Phys. Rev. Lett. 85, 3966 (2000).
* Fang et al. (2005) N. Fang, H. Lee, C. Sun, and X. Zhang, Science 308, 534 (2005).
* Liu et al. (2007) Z. Liu, H. Lee, Y. Xiong, C. Sun, and X. Zhang, Science 315, 1686 (2007).
* Stockman et al. (2002) M. I. Stockman, S. V. Faleev, and D. J. Bergman, Phys. Rev. Lett. 88, 067402 (2002).
* Aeschlimann et al. (2007) M. Aeschlimann, M. Bauer, D. Bayer, T. Brixner, F. J. Garcia de Abajo, W. Pfeiffer, M. Rohmer, C. Spindler, and F. Steeb, Nature 446, 301 (2007), ISSN 0028-0836.
* Kao et al. (2011) T. S. Kao, S. D. Jenkins, J. Ruostekoski, and N. I. Zheludev, Phys. Rev. Lett. 106, 085501 (2011).
* Vellekoop and Mosk (2007) I. M. Vellekoop and A. P. Mosk, Opt. Lett. 32, 2309 (2007).
* Leith and Upatnieks (1966) E. N. Leith and J. Upatnieks, J. Opt. Soc. Am. 56, 523 (1966).
* Fink et al. (2000) M. Fink, D. Cassereau, A. Derode, C. Prada, P. Roux, M. Tanter, J.-L. Thomas, and F. Wu, Rep. Prog. Phys. 63, 1933 (2000).
* Wu et al. (1999) Q. Wu, G. D. Feke, R. D. Grober, and L. P. Ghislain, Appl. Phys. Lett. 75, 4064 (1999).
* Axelrod et al. (1984) D. Axelrod, T. P. Burghardt, and N. L. Thompson, Annu. Rev. Biophys. Bioeng. 13, 247 268 (1984).
* Vellekoop et al. (2010) I. Vellekoop, A. Lagendijk, and A. Mosk, Nat Photon 4, 320 (2010).
* Feng et al. (1988) S. Feng, C. Kane, P. A. Lee, and A. D. Stone, Phys. Rev. Lett. 61, 834 (1988).
* Freund et al. (1988) I. Freund, M. Rosenbluh, and S. Feng, Phys. Rev. Lett. 61, 2328 (1988).
* Vellekoop and Aegerter (2010) I. Vellekoop and C. Aegerter, Opt. Lett. 35, 1245 (2010).
* Aspnes and Studna (1983) D. E. Aspnes and A. A. Studna, Phys. Rev. B 27, 985 (1983).
* Schuurmans et al. (1999) F. J. P. Schuurmans, D. Vanmaekelbergh, J. van de Lagemaat, and A. Lagendijk, Science 284, 141 (1999).
* Sup (2011) _Details on materials and methods are forthcoming._ (2011).
* Hsieh et al. (2010) C.-L. Hsieh, Y. Pu, R. Grange, G. Laporte, and D. Psaltis, Opt. Express 18, 20723 (2010).
* Hecht (1998) E. Hecht, _Optics_ (Addison Wesley Longman, Inc., 1998).
* Yablonovitch and Cody (1982) E. Yablonovitch and G. D. Cody, IEEE Trans. Electron Devices 29, 300 (1982).
* Aulbach et al. (2011) J. Aulbach, B. Gjonaj, P. M. Johnson, A. P. Mosk, and A. Lagendijk, Phys. Rev. Lett. 106, 103901 (2011).
* Katz et al. (2011) O. Katz, Y. Bromberg, E. Small, and Y. Silberberg, arXiv:1012.0413 (2011).
* McCabe et al. (2011) D. J. McCabe, A. Tajalli, D. R. Austin, P. Bondareff, I. A. Walmsley, S. Gigan, and B. Chatel, arXiv:1101.0976 (2011).
|
arxiv-papers
| 2011-03-18T14:59:20 |
2024-09-04T02:49:17.774204
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "E.G. van Putten, D. Akbulut, J. Bertolotti, W.L. Vos, A. Lagendijk,\n and A.P. Mosk",
"submitter": "E.G. van Putten",
"url": "https://arxiv.org/abs/1103.3643"
}
|
1103.3690
|
# Investigation of the field-induced ferromagnetic phase transition in spin
polarized neutron matter: a lowest order constrained variational approach
G.H. Bordbar 1,2 111Corresponding author. E-mail: bordbar@physics.susc.ac.ir,
Z. Rezaei1 and Afshin Montakhab1 1Department of Physics, Shiraz University,
Shiraz 71454, Iran222Permanent address,
and
2Research Institute for Astronomy and Astrophysics of Maragha,
P.O. Box 55134-441, Maragha, Iran
###### Abstract
In this paper, the lowest order constrained variational (LOCV) method has been
used to investigate the magnetic properties of spin polarized neutron matter
in the presence of a strong magnetic field at zero temperature, employing the
$AV_{18}$ potential. Our results indicate that a ferromagnetic phase
transition is induced by a strong magnetic field with strength greater than
$10^{18}\ G$, leading to a partial spin polarization of the neutron matter. It
is also shown that the equation of state of neutron matter in the presence of
a magnetic field is stiffer than in its absence.
###### pacs:
21.65.-f, 26.60.-c, 64.70.-p
## I INTRODUCTION
The magnetic field of neutron stars most probably originates from the
compression of magnetic flux inherited from the progenitor star Reisen . Using
this point of view, Woltjer has predicted a magnetic field strength of order
$10^{15}\ G$ for the neutron stars Woltjer . The field can be distorted or
amplified by some mixture of convection, differential rotation and magnetic
instabilities Tayler ; Spruit . The relative importance of these ingredients
depend on the initial field strength and rotation rate of the star. For both
convection and differential rotation, the field and its supporting currents
are not likely to be confined to the solid crust of the star, but distributed
in most of the stellar interior which is mostly a fluid mixture of neutrons,
protons, electrons, and other more exotic particles Reisen . Thompson et al.
Thompson argued that newborn neutron stars probably combine vigorous
convection and differential rotation making it likely that a dynamo process
might operate in them. They expected fields up to $10^{15}-10^{16}\ G$ in
neutron stars with initial periods of a few milliseconds. On the other hand,
according to the scalar virial theorem, which is based on Newtonian gravity,
magnetic field strengths up to $10^{18}\ G$ are allowed in the
interior of a magnetar Lai . However, general relativity predicts the allowed
maximum value of neutron star magnetic field to be about $10^{18}-10^{20}\ G$
shap . By comparing with the observational data, Yuan et al. Yuan obtained a
magnetic field strength of order $10^{19}\ G$ for neutron stars.
The strong magnetic field could have important influences on the interior
matter of a neutron star. Many works have studied the magnetic
properties and equation of state of the neutron star matter zhang ; Brod0 ;
SUH ; Chen ; Yue6 ; Brod2 ; Chakra7 ; Isayev ; Isayev1 ; Garcia8 ; Garcia9 ;
Garcia10 and quark star matter ANAND ; Ghosh ; Chakra6 ; Gupta ; Bandyo ;
bord-pey in the presence of strong magnetic fields. Some authors have
considered the influence of strong magnetic fields on the neutron star matter
within the mean field approximation zhang ; Chen . Yuan et al. zhang , using
the nonlinear $\sigma-\omega$ model, showed that the equation of state of
neutron star matter becomes softer as the magnetic field increases. Also,
Broderick et al. Brod0 employing a field theoretical approach in which the
baryons interact via the exchange of $\sigma-\omega-\rho$ mesons, observed
that the softening of the equation of state caused by Landau quantization is
overwhelmed by stiffening due to the incorporation of the anomalous magnetic
moments of the nucleons. It has been shown that the strong magnetic field
shifts $\beta$-equilibrium and increases the proton fraction in the neutron
star matter Brod0 ; SUH ; Chen . Yue et al. Yue6 have studied the neutron
star matter in the presence of strong magnetic field using the quark-meson
coupling (QMC) model. Their results indicate that the Landau quantization of
charged particles causes a softening in the equation of state, whereas the
inclusion of nucleon anomalous magnetic moments leads to a stiffer equation of
state. The effects of the magnetic field on the neutron star structure,
through its influence on the metric, has been studied by Cardall et al. Cardall
. Their results show that the maximum mass, in a static configuration for
neutron star with magnetic field, is larger than the maximum mass obtained by
uniform rotation. Through a field theoretical approach (at the mean field
level) in which the baryons interact via the exchange of $\sigma-\omega-\rho$
mesons, Broderick et al. Brod2 have considered the effects of magnetic field
on the equation of state of dense baryonic matter in which hyperons are
present. They found that when the hyperons appear, the pressure becomes
smaller than the case of pure nucleonic matter for all fields. Within a
relativistic Hartree approach in the linear $\sigma-\omega-\rho$ model, the
effects of magnetic field on cold symmetric nuclear matter and the nuclear
matter in $\beta$-equilibrium have been investigated by Chakrabarty et al.
Chakra7 . Their results suggest that the neutron star mass is practically
insensitive to the effects of the magnetic fields, whereas the radius
decreases in intense fields.
In some studies, the neutron star matter was approximated by a pure neutron
matter. Isayev et al. Isayev considered the neutron matter in a strong
magnetic field with the Skyrme effective interaction and analyzed the
resultant self-consistent equations. They found that the thermodynamically
stable branch extends from the very low densities to the high density region
where the spin polarization parameter is saturated, and neutrons become
totally spin polarized. Perez-Garcia et al. Garcia8 ; Garcia9 ; Garcia10
studied the effects of a strong magnetic field on the pure neutron matter with
effective nuclear forces within the framework of the nonrelativistic Hartree-
Fock approximation. They showed that in the Skyrme model there is a
ferromagnetic phase transition at $\rho\sim 4\rho_{0}$ ($\rho_{0}=0.16\ fm^{-3}$
is the nuclear saturation density), whereas it is forbidden in the $D1P$ model
Garcia8 . Besides this, they found that the neutrino opacity of magnetized
matter decreases compared to the nonmagnetized case for the magnetic field
greater than $10^{17}\ G$ Garcia9 . However, more realistically, for the
problem of the neutron star matter in the astrophysical context, it is necessary to
consider the finite temperature Isayev1 ; Garcia8 ; Chakra6 ; Gupta and
finite proton fraction effects Brod2 ; Chakra7 ; zhang ; Brod0 ; SUH ; Chen ;
Yue6 . Isayev et al. Isayev1 have shown that the influence of finite
temperatures on spin polarization remains moderate in the Skyrme model, at
least up to temperatures relevant for protoneutron stars. It has also been
shown that for the $SLy4$ effective interaction, even a small admixture of protons
to neutron matter leads to a considerable shift of the critical density of the
spin instability to lower values. For the $SkI5$ force, however, a small admixture
of protons to neutron matter does not considerably change the critical density
of the spin instability and even increases its value Isayev2 .
In our previous works, we have studied the spin polarized neutron matter
Bordbar75 , symmetric nuclear matter Bordbar76 , asymmetric nuclear matter
Bordbar77 , and neutron star matter Bordbar77 at zero temperature using the LOCV
method with a realistic strong interaction in the absence of magnetic field.
We have also investigated the thermodynamic properties of the spin polarized
neutron matter Bordbar78 , symmetric nuclear matter Bordbar80 , and asymmetric
nuclear matter Bordbar81 at finite temperature with no magnetic field. In the
above calculations, our results do not show any spontaneous ferromagnetic
phase transition for these systems. In the present work, we study the magnetic
properties of spin polarized neutron matter at zero temperature in the
presence of a strong magnetic field using the LOCV technique employing the $AV_{18}$
potential.
## II LOCV formalism for spin polarized neutron matter
We consider a pure homogeneous spin polarized neutron matter composed of the
spin-up $(+)$ and spin-down $(-)$ neutrons. We denote the number densities of
spin-up and spin-down neutrons by $\rho^{(+)}$ and $\rho^{(-)}$, respectively.
We introduce the spin polarization parameter ($\delta$) by
$\displaystyle\delta=\frac{\rho^{(+)}-\rho^{(-)}}{\rho},$ (1)
where $-1\leq\delta\leq 1$, and $\rho=\rho^{(+)}+\rho^{(-)}$ is the total
density of the system.
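As a minimal illustration of Eq. (1), the polarization parameter and its inverse relation to the two species densities can be coded directly (the densities below are hypothetical, chosen only for the check):

```python
# Spin polarization parameter of Eq. (1) and its inverse.

def polarization(rho_up, rho_dn):
    """delta = (rho(+) - rho(-)) / rho, with rho = rho(+) + rho(-)."""
    rho = rho_up + rho_dn
    return (rho_up - rho_dn) / rho

def split_density(rho, delta):
    """Recover (rho(+), rho(-)) from the total density and delta."""
    return 0.5 * rho * (1.0 + delta), 0.5 * rho * (1.0 - delta)

# Round trip at rho = 0.2 fm^-3, delta = -0.3 (illustrative values).
rho_up, rho_dn = split_density(0.2, -0.3)
assert abs(polarization(rho_up, rho_dn) - (-0.3)) < 1e-12
```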
In order to calculate the energy of this system, we use the LOCV method as
follows: we consider a trial many-body wave function of the form
$\displaystyle\psi=F\phi,$ (2)
where $\phi$ is the uncorrelated ground-state wave function of $N$ independent
neutrons, and $F$ is a proper $N$-body correlation function. Using the Jastrow
approximation Jastrow , $F$ can be replaced by
$\displaystyle F=S\prod_{i>j}f(ij),$ (3)
where $S$ is a symmetrizing operator. We consider a cluster expansion of the
energy functional up to the two-body term,
$\displaystyle
E([f])=\frac{1}{N}\frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle}=E_{1}+E_{2}\cdot$
(4)
Now, we calculate the energy per particle up to the two-body term for the two
cases, in the absence and in the presence of the magnetic field, in two separate
sections.
### II.1 Energy calculation for the spin polarized neutron matter in the
absence of magnetic field
The one-body term $E_{1}$ for spin polarized neutron matter in the absence of
magnetic field $(B=0)$ is given by
$\displaystyle
E_{1}^{(B=0)}=\sum_{i=+,-}\frac{3}{5}\frac{\hbar^{2}k_{F}^{(i)^{2}}}{2m}\frac{\rho^{(i)}}{\rho},$
(5)
where $k_{F}^{(i)}=(6\pi^{2}\rho^{(i)})^{\frac{1}{3}}$ is the Fermi momentum
of a neutron with spin projection $i$.
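Equation (5) is straightforward to evaluate numerically. The sketch below assumes the approximate value $\hbar^{2}/2m_{n}\approx 20.72\ MeV\,fm^{2}$ and checks the standard $2^{2/3}$ ratio between the fully polarized and unpolarized kinetic energies:

```python
import math

HBAR2_OVER_2M = 20.72  # hbar^2 / (2 m_n) in MeV fm^2 (approximate value)

def kinetic_energy_per_particle(rho, delta):
    """One-body energy E1 of Eq. (5), in MeV, at total density rho (fm^-3)
    and polarization delta, with k_F^(i) = (6 pi^2 rho^(i))^(1/3)."""
    e1 = 0.0
    for sign in (+1, -1):
        rho_i = 0.5 * rho * (1.0 + sign * delta)
        if rho_i > 0.0:
            kf = (6.0 * math.pi ** 2 * rho_i) ** (1.0 / 3.0)
            e1 += 0.6 * HBAR2_OVER_2M * kf ** 2 * rho_i / rho
    return e1

# Fully polarized matter costs 2^(2/3) times the unpolarized kinetic energy.
ratio = kinetic_energy_per_particle(0.2, 1.0) / kinetic_energy_per_particle(0.2, 0.0)
assert abs(ratio - 2.0 ** (2.0 / 3.0)) < 1e-9
```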
The two-body energy $E_{2}$ is
$\displaystyle E_{2}^{(B=0)}$ $\displaystyle=$
$\displaystyle\frac{1}{2N}\sum_{ij}\langle ij\left|\nu(12)\right|ij-
ji\rangle,$ (6)
where
$\nu(12)=-\frac{\hbar^{2}}{2m}[f(12),[\nabla_{12}^{2},f(12)]]+f(12)V(12)f(12).$
In the above equation, $f(12)$ and $V(12)$ are the two-body correlation
function and nuclear potential, respectively. In our calculations, we employ
the $AV_{18}$ two-body potential Wiringa ,
$\displaystyle V(12)$ $\displaystyle=$
$\displaystyle\sum^{18}_{p=1}V^{(p)}(r_{12})O^{(p)}_{12}.$ (7)
where
$\displaystyle O^{(p=1-18)}_{12}$ $\displaystyle=$ $\displaystyle
1,\sigma_{1}.\sigma_{2},\tau_{1}.\tau_{2},(\sigma_{1}.\sigma_{2})(\tau_{1}.\tau_{2}),S_{12},S_{12}(\tau_{1}.\tau_{2}),$
(8)
$\displaystyle\textbf{L}.\textbf{S},\textbf{L}.\textbf{S}(\tau_{1}.\tau_{2}),\textbf{L}^{2},\textbf{L}^{2}(\sigma_{1}.\sigma_{2}),\textbf{L}^{2}(\tau_{1}.\tau_{2}),\textbf{L}^{2}(\sigma_{1}.\sigma_{2})(\tau_{1}.\tau_{2}),$
$\displaystyle(\textbf{L}.\textbf{S})^{2},(\textbf{L}.\textbf{S})^{2}(\tau_{1}.\tau_{2}),\textbf{T}_{12},(\sigma_{1}.\sigma_{2})\textbf{T}_{12},S_{12}\textbf{T}_{12},(\tau_{z1}+\tau_{z2}).$
In the above equation,
$S_{12}=[3(\sigma_{1}.\hat{r})(\sigma_{2}.\hat{r})-\sigma_{1}.\sigma_{2}]$
is the tensor operator and
$\textbf{T}_{12}=[3(\tau_{1}.\hat{r})(\tau_{2}.\hat{r})-\tau_{1}.\tau_{2}]$
is the isotensor operator. The above $18$ components of the $AV_{18}$ two-body
potential are denoted by the labels $c$, $\sigma$, $\tau$, $\sigma\tau$, $t$,
$t\tau$, $ls$, $ls\tau$, $l2$, $l2\sigma$, $l2\tau$, $l2\sigma\tau$, $ls2$,
$ls2\tau$, $T$, $\sigma T$, $tT$, and $\tau z$, respectively Wiringa . In the
LOCV formalism, the two-body correlation function $f(12)$ is considered as
follows Owen ,
$\displaystyle f(12)$ $\displaystyle=$
$\displaystyle\sum^{3}_{k=1}f^{(k)}(r_{12})P^{(k)}_{12},$ (9)
where
$\displaystyle P^{(k=1-3)}_{12}$ $\displaystyle=$
$\displaystyle(\frac{1}{4}-\frac{1}{4}O_{12}^{(2)}),\
(\frac{1}{2}+\frac{1}{6}O_{12}^{(2)}+\frac{1}{6}O_{12}^{(5)}),\
(\frac{1}{4}+\frac{1}{12}O_{12}^{(2)}-\frac{1}{6}O_{12}^{(5)}).$ (10)
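As a quick consistency check on Eq. (10), the three channel projectors should sum to the identity. Treating $O_{12}^{(2)}$ and $O_{12}^{(5)}$ as formal symbols, the coefficients confirm this:

```python
from fractions import Fraction as F

# Each projector written as (constant, coefficient of O2, coefficient of O5),
# following Eq. (10).
P1 = (F(1, 4), F(-1, 4), F(0))
P2 = (F(1, 2), F(1, 6), F(1, 6))
P3 = (F(1, 4), F(1, 12), F(-1, 6))

total = tuple(a + b + c for a, b, c in zip(P1, P2, P3))
assert total == (F(1), F(0), F(0))  # P1 + P2 + P3 = 1 (identity operator)
```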
The operators $O_{12}^{(2)}$ and $O_{12}^{(5)}$ are given in Eq. (8). Using
the above two-body correlation function and potential, after doing some
algebra, we find the following equation for the two-body energy:
$\displaystyle E_{2}^{(B=0)}$ $\displaystyle=$
$\displaystyle\frac{2}{\pi^{4}\rho}\left(\frac{\hbar^{2}}{2m}\right)\sum_{JLSS_{z}}\frac{(2J+1)}{2(2S+1)}[1-(-1)^{L+S+1}]\left|\left\langle\frac{1}{2}\sigma_{z1}\frac{1}{2}\sigma_{z2}\mid
SS_{z}\right\rangle\right|^{2}$ (11) $\displaystyle\int
dr\left\\{\left[{f_{\alpha}^{(1)^{{}^{\prime}}}}^{2}{a_{\alpha}^{(1)}}^{2}(k_{f}r)\right.\right.\left.\left.+\frac{2m}{\hbar^{2}}(\\{V_{c}-3V_{\sigma}+V_{\tau}-3V_{\sigma\tau}+2(V_{T}-3V_{\sigma
T})-2V_{\tau z}\\}{a_{\alpha}^{(1)}}^{2}(k_{f}r)\right.\right.$
$\displaystyle\left.\left.+[V_{l2}-3V_{l2\sigma}+V_{l2\tau}-3V_{l2\sigma\tau}]{c_{\alpha}^{(1)}}^{2}(k_{f}r))(f_{\alpha}^{(1)})^{2}\right]+\sum_{k=2,3}\left[{f_{\alpha}^{(k)^{{}^{\prime}}}}^{2}{a_{\alpha}^{(k)}}^{2}(k_{f}r)\right.\right.$
$\displaystyle\left.\left.+\frac{2m}{\hbar^{2}}(\\{V_{c}+V_{\sigma}+V_{\tau}+V_{\sigma\tau}+(-6k+14)(V_{t\tau}+V_{t})-(k-1)(V_{ls\tau}+V_{ls})\right.\right.$
$\displaystyle\left.\left.+2[V_{T}+V_{\sigma T}+(-6k+14)V_{tT}-V_{\tau
z}]\\}{a_{\alpha}^{(k)}}^{2}(k_{f}r)\right.\right.$
$\displaystyle\left.\left.+[V_{l2}+V_{l2\sigma}+V_{l2\tau}+V_{l2\sigma\tau}]{c_{\alpha}^{(k)}}^{2}(k_{f}r)+[V_{ls2}+V_{ls2\tau}]{d_{\alpha}^{(k)}}^{2}(k_{f}r)){f_{\alpha}^{(k)}}^{2}\right]\right.$
$\displaystyle\left.+\frac{2m}{\hbar^{2}}\\{V_{ls}+V_{ls\tau}-2(V_{l2}+V_{l2\sigma}+V_{l2\sigma\tau}+V_{l2\tau})-3(V_{ls2}+V_{ls2\tau})\\}b_{\alpha}^{2}(k_{f}r)f_{\alpha}^{(2)}f_{\alpha}^{(3)}\right.$
$\displaystyle\left.+\frac{1}{r^{2}}(f_{\alpha}^{(2)}-f_{\alpha}^{(3)})^{2}b_{\alpha}^{2}(k_{f}r)\right\\},$
where $\alpha=\\{J,L,S,S_{z}\\}$ and the coefficients ${a_{\alpha}^{(1)}}^{2}$,
etc., are defined as
$\displaystyle{a_{\alpha}^{(1)}}^{2}(x)=x^{2}I_{L,S_{z}}(x),$ (12)
$\displaystyle{a_{\alpha}^{(2)}}^{2}(x)=x^{2}[\beta I_{J-1,S_{z}}(x)+\gamma
I_{J+1,S_{z}}(x)],$ (13) $\displaystyle{a_{\alpha}^{(3)}}^{2}(x)=x^{2}[\gamma
I_{J-1,S_{z}}(x)+\beta I_{J+1,S_{z}}(x)],$ (14) $\displaystyle
b_{\alpha}^{(2)}(x)=x^{2}[\beta_{23}I_{J-1,S_{z}}(x)-\beta_{23}I_{J+1,S_{z}}(x)],$
(15) $\displaystyle{c_{\alpha}^{(1)}}^{2}(x)=x^{2}\nu_{1}I_{L,S_{z}}(x),$ (16)
$\displaystyle{c_{\alpha}^{(2)}}^{2}(x)=x^{2}[\eta_{2}I_{J-1,S_{z}}(x)+\nu_{2}I_{J+1,S_{z}}(x)],$
(17)
$\displaystyle{c_{\alpha}^{(3)}}^{2}(x)=x^{2}[\eta_{3}I_{J-1,S_{z}}(x)+\nu_{3}I_{J+1,S_{z}}(x)],$
(18)
$\displaystyle{d_{\alpha}^{(2)}}^{2}(x)=x^{2}[\xi_{2}I_{J-1,S_{z}}(x)+\lambda_{2}I_{J+1,S_{z}}(x)],$
(19)
$\displaystyle{d_{\alpha}^{(3)}}^{2}(x)=x^{2}[\xi_{3}I_{J-1,S_{z}}(x)+\lambda_{3}I_{J+1,S_{z}}(x)],$
(20)
with
$\displaystyle\beta=\frac{J+1}{2J+1},\ \gamma=\frac{J}{2J+1},\
\beta_{23}=\frac{2J(J+1)}{2J+1},$ (21) $\displaystyle\nu_{1}=L(L+1),\
\nu_{2}=\frac{J^{2}(J+1)}{2J+1},\ \nu_{3}=\frac{J^{3}+2J^{2}+3J+2}{2J+1},$
(22) $\displaystyle\eta_{2}=\frac{J(J^{2}+2J+1)}{2J+1},\
\eta_{3}=\frac{J(J^{2}+J+2)}{2J+1},$ (23)
$\displaystyle\xi_{2}=\frac{J^{3}+2J^{2}+2J+1}{2J+1},\
\xi_{3}=\frac{J(J^{2}+J+4)}{2J+1},$ (24)
$\displaystyle\lambda_{2}=\frac{J(J^{2}+J+1)}{2J+1},\
\lambda_{3}=\frac{J^{3}+2J^{2}+5J+4}{2J+1},$ (25)
and
$\displaystyle I_{J,S_{z}}(x)=\int dq\ q^{2}P_{S_{z}}(q)J_{J}^{2}(xq)\cdot$
(26)
In the last equation $J_{J}(x)$ is the Bessel function and $P_{S_{z}}(q)$ is
defined as
$\displaystyle P_{S_{z}}(q)$ $\displaystyle=$
$\displaystyle\frac{2}{3}\pi[(k_{F}^{\sigma_{z1}})^{3}+(k_{F}^{\sigma_{z2}})^{3}-\frac{3}{2}((k_{F}^{\sigma_{z1}})^{2}+(k_{F}^{\sigma_{z2}})^{2})q$
(27) $\displaystyle-$
$\displaystyle\frac{3}{16}((k_{F}^{\sigma_{z1}})^{2}-(k_{F}^{\sigma_{z2}})^{2})^{2}q^{-1}+q^{3}]$
for
$\frac{1}{2}|k_{F}^{\sigma_{z1}}-k_{F}^{\sigma_{z2}}|<q<\frac{1}{2}|k_{F}^{\sigma_{z1}}+k_{F}^{\sigma_{z2}}|$,
$\displaystyle P_{S_{z}}(q)=\frac{4}{3}\pi
\min((k_{F}^{\sigma_{z1}})^{3},(k_{F}^{\sigma_{z2}})^{3})$ (28)
for $q<\frac{1}{2}|k_{F}^{\sigma_{z1}}-k_{F}^{\sigma_{z2}}|$, and
$\displaystyle P_{S_{z}}(q)=0$ (29)
for $q>\frac{1}{2}|k_{F}^{\sigma_{z1}}+k_{F}^{\sigma_{z2}}|$, where
$\sigma_{z1}$ or $\sigma_{z2}=+1,-1$ for spin up and down, respectively.
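The piecewise function of Eqs. (27)-(29) is continuous at both break points, which a direct implementation makes easy to verify (the Fermi momenta below are illustrative):

```python
import math

def P_overlap(q, kf1, kf2):
    """P_{S_z}(q) of Eqs. (27)-(29) for Fermi momenta kf1, kf2 (fm^-1)."""
    lo = 0.5 * abs(kf1 - kf2)
    hi = 0.5 * (kf1 + kf2)
    if q > hi:
        return 0.0
    if q < lo:
        return (4.0 / 3.0) * math.pi * min(kf1, kf2) ** 3
    return (2.0 / 3.0) * math.pi * (
        kf1 ** 3 + kf2 ** 3
        - 1.5 * (kf1 ** 2 + kf2 ** 2) * q
        - (3.0 / 16.0) * (kf1 ** 2 - kf2 ** 2) ** 2 / q
        + q ** 3
    )

kf1, kf2, eps = 1.2, 0.8, 1e-9   # lo = 0.2, hi = 1.0
assert P_overlap(1.0 + eps, kf1, kf2) == 0.0                # vanishes above hi
assert abs(P_overlap(0.2 - eps, kf1, kf2)
           - P_overlap(0.2 + eps, kf1, kf2)) < 1e-6         # continuous at lo
assert abs(P_overlap(1.0 - eps, kf1, kf2)) < 1e-6           # continuous at hi
```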
### II.2 Energy calculation of spin polarized neutron matter in the presence
of magnetic field
Now we consider the case in which the spin polarized neutron matter is under
the influence of a strong magnetic field. Taking the uniform magnetic field
along the $z$ direction, $B=B\widehat{k}$, the spin up and down particles
correspond to parallel and antiparallel spins with respect to the magnetic
field. Therefore, the contribution of magnetic energy of the neutron matter is
$\displaystyle E_{M}=-M_{z}B,$ (30)
where $M_{z}$ is the magnetization of the neutron matter which is given by
$\displaystyle M_{z}=N\mu_{n}\delta.$ (31)
In the above equation, $\mu_{n}=-1.9130427(5)$ is the neutron magnetic moment
(in units of the nuclear magneton). Consequently, the energy per particle up
to the two-body term in the presence of magnetic field can be written as
$\displaystyle E([f])=E_{1}^{(B=0)}+E_{2}^{(B=0)}-\mu_{n}B\delta,$ (32)
where $E_{1}^{(B=0)}$ and $E_{2}^{(B=0)}$ are given by Eqs. (5) and (11),
respectively. It should be noted that in usual thermodynamic treatments the
external magnetic field energy ($\frac{1}{8\pi}\int dV\ B^{2}$) is usually
left out since it does not affect the thermodynamic properties of matter
callen . In fact the magnetic field energy arises only from the magnetostatic
energy in the absence of matter, but we are interested in the contribution of
internal energy which excludes the energy of magnetic field. Therefore, the
magnetic field contribution, $E_{mag}=\frac{B^{2}}{8\pi}$, which is the
_energy density_ (or “magnetic pressure”) of the magnetic field in the absence
of matter is usually omitted callen ; Isayev .
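With the $B=0$ energy curve in hand, Eq. (32) reduces finding the ground state to a one-dimensional minimization over $\delta$. The sketch below uses a hypothetical quadratic stand-in for the LOCV curve $E_{1}+E_{2}$ and the conversion $\mu_{N}\approx 3.152\times 10^{-18}\ MeV/G$ for the nuclear magneton; it reproduces the qualitative behavior reported for Fig. 1:

```python
MU_N = -1.9130427 * 3.152e-18   # neutron magnetic moment in MeV/G (approximate)

def total_energy(delta, e0, B):
    """Eq. (32): E(delta) = [E1 + E2](delta) - mu_n * B * delta."""
    return e0(delta) - MU_N * B * delta

# Hypothetical symmetric B = 0 curve standing in for the LOCV result (MeV).
e0 = lambda d: 20.0 + 15.0 * d * d
deltas = [i / 1000.0 for i in range(-1000, 1001)]

d_min = min(deltas, key=lambda d: total_energy(d, e0, 1.0e18))
assert -1.0 < d_min < 0.0        # symmetry broken: minimum at negative delta

d_sat = min(deltas, key=lambda d: total_energy(d, e0, 1.8e19))
assert d_sat == -1.0             # a strong enough field saturates delta at -1
```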
Now, we minimize the two-body energy with respect to the variations in the
function $f_{\alpha}^{(i)}$ subject to the normalization constraint Bordbar57
,
$\displaystyle\frac{1}{N}\sum_{ij}\langle
ij\left|h_{S_{z}}^{2}-f^{2}(12)\right|ij\rangle_{a}=0,$ (33)
where in the case of spin polarized neutron matter, the function
$h_{S_{z}}(r)$ is defined as follows,
$\displaystyle h_{S_{z}}(r)$ $\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{ll}\left[1-9\left(\frac{J_{J}^{2}(k_{F}^{(i)}r)}{k_{F}^{(i)}r}\right)^{2}\right]^{-1/2}&;~{}~{}S_{z}=\pm
1\\\ \\\ 1&;~{}~{}S_{z}=0\end{array}\right.$ (37)
From minimization of the two-body cluster energy, we get a set of coupled and
uncoupled differential equations which are the same as those presented in Ref.
Bordbar57 , with the coefficients replaced by those indicated in Eqs.
(12)$-$(20). By solving these differential equations, we can obtain
correlation functions to compute the two-body energy.
## III RESULTS and DISCUSSION
Our results for the energy per particle of spin polarized neutron matter
versus the spin polarization parameter for different values of the magnetic
field at $\rho=0.2\ fm^{-3}$ are shown in Fig. 1. We have found that for
the values of magnetic field below $10^{18}\ G$, the corresponding energies of
different magnetic fields are nearly identical. This shows that the effect of
magnetic field below $B\sim 10^{18}\ G$ is nearly insignificant. From Fig. 1,
we can see that the spin polarization symmetry is broken when the magnetic
field is present and a minimum appears at $-1<\delta<0$. By increasing the
magnetic field strength from $B\sim 10^{18}\ G$ to $B\sim 10^{19}\ G$, the
value of spin polarization corresponding to the minimum point approaches $-1$.
We also see that by increasing the magnetic field, the energy per particle at
minimum point (ground state energy) decreases, leading to a more stable system.
For each density, we have found that above a certain value of the magnetic
field, the system reaches a saturation point and the minimum energy occurs at
$\delta=-1$. For example at $\rho=0.2\ fm^{-3}$, for $B\gtrsim 1.8\times
10^{19}\ G$, the minimum energy occurs at $\delta=-1$. However, this threshold
value of the magnetic field increases by increasing the density. In Fig. 2, we
have presented the ground state energy per particle of spin polarized neutron
matter as a function of the density for different values of the magnetic
field. For each value of the magnetic field, it is shown that the energy per
particle increases monotonically by increasing the density. However, the
increasing rate of energy versus density increases by increasing the magnetic
field. This indicates that at higher magnetic fields, the increasing rate of
the contribution of magnetic energy versus density is more than that at lower
magnetic fields. In order to clarify this behavior, we have presented the
energy contribution of spin polarized neutron matter up to the two-body term
in the cluster expansion ($E_{1}+E_{2}$), and the magnetic energy contribution
($E_{M}$) separately, as a function of density in Fig. 3. This figure shows
that for the spin polarized neutron matter, the difference between the
magnetic energy contributions ($E_{M}$) of different magnetic fields is
substantially larger than that for the energy contribution ($E_{1}+E_{2}$).
Fig. 4 shows the ground state energy per particle of spin polarized neutron
matter as a function of the magnetic field for different values of density. We
can see that by increasing the magnetic field up to a value about $10^{18}\
G$, the energy per particle slowly decreases, and then it rapidly decreases
for the magnetic fields greater than this value. This indicates that above
$B\sim 10^{18}\ G$, the effect of the magnetic field on the energy of
the spin polarized neutron matter becomes more important.
In Fig. 5, the spin polarization parameter corresponding to the equilibrium
state of the system is plotted as a function of density for different values
of the magnetic field. It is seen that at each magnetic field, the magnitude
of spin polarization parameter decreases by increasing the density. Fig. 5
also shows that for the magnetic fields below $10^{18}\ G$, at high densities,
the system nearly becomes unpolarized. However, for higher magnetic fields,
the system has a substantial spin polarization, even at high densities. In
Fig. 6, we have plotted the spin polarization parameter at the equilibrium as
a function of the magnetic field at different values of density. This figure
shows that below $B\sim 10^{18}\ G$, no anomaly is observed and the neutron
matter can only be partially polarized. This partial polarization is maximized
at lower densities and amounts to about $14\%$ of its maximum possible value
of $-1$. From Fig. 6, we can also see that below $B\sim 10^{17}\ G$, the spin
polarization parameter is nearly zero. This clearly confirms the absence of
the magnetic ordering for the neutron matter up to $B\sim 10^{17}\ G$. For the
magnetic fields greater than about $10^{18}\ G$, it is shown that the
magnitude of spin polarization rapidly increases by increasing the magnetic
field. This shows a ferromagnetic phase transition in the presence of a strong
magnetic field. For each density, we can see that at high magnetic fields, the
value of spin polarization parameter is close to $-1$. The corresponding value
of the magnetic field increases by increasing the density.
The magnetic susceptibility ($\chi$) which characterizes the response of a
system to the magnetic field, is defined by
$\displaystyle\chi(\rho,B)={\left(\frac{\partial M_{z}(\rho,B)}{\partial
B}\right)_{\rho}}$ (38)
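Since $M_{z}=N\mu_{n}\delta$ with $\mu_{n}<0$, Eq. (38) reduces per particle to $\chi/(N|\mu_{n}|)=-\partial\delta/\partial B$, which can be estimated by finite differences. The equilibrium curve $\delta(B)$ below is a hypothetical smoothed step, not the LOCV result; it illustrates how the susceptibility peaks at the transition field:

```python
import math

def chi_over_N_mu(delta_of_B, B, h=None):
    """Finite-difference estimate of chi/(N |mu_n|) = -d(delta)/dB (Eq. 38).
    delta_of_B: equilibrium polarization as a function of field strength (G)."""
    h = h or 1e-3 * B
    return -(delta_of_B(B + h) - delta_of_B(B - h)) / (2.0 * h)

# Hypothetical transition field Bm and width w (in G), toy delta(B).
Bm, w = 1.0e18, 3.0e17
delta_eq = lambda B: -0.5 * (1.0 + math.tanh((B - Bm) / w))

fields = [Bm * f for f in (0.25, 0.5, 1.0, 2.0, 4.0)]
chis = [chi_over_N_mu(delta_eq, B) for B in fields]
assert max(chis) == chis[2]   # susceptibility peaks at the transition field Bm
```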
In Fig. 7, we have plotted the ratio $\chi/N|\mu_{n}|$ for the spin polarized
neutron matter versus the magnetic field at three different values of the
density. As can be seen from Fig. 7, for each density, this ratio shows a
maximum at a specific magnetic field. This result confirms the existence of
the ferromagnetic phase transition induced by the magnetic field. We see that
the magnetic field at phase transition point, $B_{m}$, depends on the density
of the system. Fig. 8 shows the phase diagram for the spin polarized neutron
matter. We can see that with increasing density, $B_{m}$ grows
monotonically, which means that at higher densities the phase
transition occurs at higher values of the magnetic field.
From the energy of spin polarized neutron matter, at each magnetic field, we
can evaluate the corresponding pressure ($P_{kinetic}$) using the following
relation,
$\displaystyle P_{kinetic}(\rho,B)=\rho^{2}{\left(\frac{\partial
E(\rho,B)}{\partial\rho}\right)_{B}}$ (39)
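Equation (39) can be evaluated with a central finite difference once $E(\rho,B)$ is tabulated. The toy energy curve below is hypothetical (it only mimics the qualitative $\rho$-dependence) and serves to validate the derivative against the analytic answer:

```python
def pressure(E, rho, B, h=1e-4):
    """Eq. (39): P = rho^2 * (dE/drho)_B via a central finite difference.
    E(rho, B) in MeV, rho in fm^-3, so P comes out in MeV fm^-3."""
    dEdrho = (E(rho + h, B) - E(rho - h, B)) / (2.0 * h)
    return rho * rho * dEdrho

# Hypothetical stand-in energy curve, E = a*rho^(2/3) + b*rho.
E_toy = lambda rho, B: 34.0 * rho ** (2.0 / 3.0) + 50.0 * rho

rho = 0.2
P_num = pressure(E_toy, rho, 0.0)
P_exact = rho ** 2 * ((2.0 / 3.0) * 34.0 * rho ** (-1.0 / 3.0) + 50.0)
assert abs(P_num - P_exact) < 1e-4
```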
Our results for the kinetic pressure of spin polarized neutron matter versus
the density for different values of the magnetic field are shown in Fig.
9. It is obvious that with increasing density, the difference between the
pressures of spin polarized neutron matter at different magnetic fields becomes
more appreciable. Fig. 9 shows that the equation of state of the spin
polarized neutron matter becomes stiffer as the magnetic field strength
increases. This stiffening is due to the inclusion of neutron anomalous
magnetic moments. This is in agreement with the results obtained in Refs.
Brod0 ; Yue6 . It should be noted here that to find the total pressure relevant
to the neutron star structure, the contribution from the magnetic field,
$P_{mag}=\frac{B^{2}}{8\pi}$, should be added to the kinetic pressure Brod0 ;
Brod2 . However, in this work we are not interested in the neutron star
structure and have thus omitted the contribution of “magnetic pressure” in our
calculations for neutron matter Isayev . This term, if included, simply adds a
constant amount to the curves depicted in Fig. 9.
## IV Summary and Concluding Remarks
We have recently calculated several properties of the spin polarized neutron
matter in the absence of magnetic field using the lowest order constrained
variational method with $AV_{18}$ potential. In this work, we have generalized
our calculations for spin polarized neutron matter in the presence of strong
magnetic field at zero temperature using this method. We have found that the
effect of magnetic fields below $B\sim 10^{18}\ G$ is almost negligible. It
was shown that in the presence of magnetic field, the spin polarization
symmetry is broken and the energy per particle shows a minimum at
$-1<\delta<0$, depending on the strength of the magnetic field. We have shown
that the ground state energy per particle decreases by increasing the magnetic
field. This leads to a more stable system. It is seen that the increasing rate
of energy versus density increases by increasing the magnetic field. Our
calculations show that above $B\sim 10^{18}\ G$, the effect of magnetic field
on the properties of neutron matter becomes more important. In the study of
spin polarization parameter, we have shown that for a fixed magnetic field,
the magnitude of spin polarization parameter at the minimum point of energy
decreases with increasing density. At strong magnetic fields with strengths
greater than $10^{18}\ G$, our results show that a field-induced ferromagnetic
phase transition occurs for the neutron matter. By investigating the magnetic
susceptibility of the spin polarized neutron matter, it is clear that as the
density increases, the phase transition occurs at higher values of the
magnetic field. Through the calculation of pressure as a function of density
at different values of the magnetic field, we observed the stiffening of the
equation of state in the presence of the magnetic field.
Finally, we would like to address the question of thermodynamic stability of
such neutron stars at ultra-high magnetic fields. One may wonder whether, if the
magnetic pressure, $P_{mag}=\frac{B^{2}}{8\pi}$, which we have omitted
here, is added to the kinetic pressure $P_{kinetic}$, the system might become
gravitationally unstable at ultra-strong magnetic fields due to
excessive outward pressure. For the fields considered in this work (up to
$10^{20}\ G$), this scenario does not seem likely shap . We note that the
increase of magnetic field leads to stiffening of the equation of state (Fig.
9) which in turn leads to larger mass and radius for the neutron star
bordbarnew . This in turn increases the effect of gravitational energy,
offsetting the increased pressure. We also note that the existence of a well-
defined thermodynamic energy minimum for all fields considered in our work
indicates the thermodynamic stability of our system. The existence of such a
well-defined minimum energy is unaffected by the addition of the magnetic energy.
The detailed analysis of such situations along with accompanying change in
proton fraction is a possible avenue for future research.
###### Acknowledgements.
We would like to thank two anonymous referees for constructive criticisms.
This work has been supported by Research Institute for Astronomy and
Astrophysics of Maragha. We wish to thank Shiraz University Research Council.
## References
* (1) A. Reisenegger, Astron. Nachr. 328, 1173 (2007).
* (2) L. Woltjer, Astrophys. J. 140, 1309 (1964).
* (3) R.J. Tayler, MNRAS 161, 365 (1973).
* (4) H. Spruit, Astron. Astrophys. 381, 923 (2002).
* (5) C. Thompson and R. C. Duncan, Astrophys. J. 408, 194 (1993).
* (6) D. Lai and S. L. Shapiro, Astrophys. J. 383, 745 (1991).
* (7) S. Shapiro and S. Teukolsky, _Black Holes, White Dwarfs and Neutron Stars_ , (Wiley-New York, 1983).
* (8) Y. F. Yuan and J. L. Zhang , Astron. Astrophys. 335, 969 (1998).
* (9) Y. F. Yuan and J. L. Zhang, Astrophys. J. 525, 950 (1999).
* (10) A. Broderick, M. Prakash and J. M. Lattimer, Astrophys. J. 537, 351 (2000).
* (11) I.S. Suh and G. J. Mathews, Astrophys. J. 546, 1126 (2001).
* (12) W. Chen, P. Q. Zhang and L. G. Liu, Mod. Phys. Lett. A 22, 623 (2007).
* (13) P. Yue and H. Shen, Phys. Rev. C 74, 045807 (2006).
* (14) A. Broderick, M. Prakash, and J. M. Lattimer, Phys. Lett. B 531, 167 (2002).
* (15) S. Chakrabarty, D. Bandyopadhyay, and S. Pal, Phys. Rev. Lett. 78, 2898 (1997).
* (16) A. A. Isayev and J. Yang, Phys. Rev. C 80, 065801 (2009).
* (17) A. A. Isayev and J. Yang, J. Korean Astronom. Soc. 43, 161 (2010).
* (18) M. A. Perez-Garcia, Phys. Rev. C 77, 065806 (2008).
* (19) M. A. Perez-Garcia, Phys. Rev. C 80, 045804 (2009).
* (20) M. A. Perez-Garcia, J. Navarro, and A. Polls, Phys. Rev. C 80, 025802 (2009).
* (21) J. D. Anand, N. Chandrika Devi, V. K. Gupta, and S. Singh, Astrophys. J. 538, 870 (2000).
* (22) S. Ghosh and S. Chakrabarty, Pramana 60, 901 (2002).
* (23) S. Chakrabarty, Phys. Rev. D 54, 1306 (1996).
* (24) V.K.Gupta, A. Gupta, S.Singh and J.D.Anand, Int. J. Mod. Phys. D 11, 545 (2002).
* (25) D. Bandyopadhyay, S. Chakrabarty and S. Pal, Phys. Rev. Lett. 79, 2176 (1997).
* (26) G. H. Bordbar and A. Peyvand (2010) submitted for publication.
* (27) C.Y. Cardall, M. Prakash and J.M. Lattimer, Astrophys. J. 554, 322 (2001).
* (28) A. A. Isayev, Phys. Rev. C 74, 057301 (2006).
* (29) G. H. Bordbar and M. Bigdeli, Phys. Rev. C 75, 045804 (2007).
* (30) G. H. Bordbar and M. Bigdeli, Phys. Rev. C 76, 035803 (2007).
* (31) G. H. Bordbar and M. Bigdeli, Phys. Rev. C 77, 015805 (2008).
* (32) G. H. Bordbar and M. Bigdeli, Phys. Rev. C 78, 054315 (2008).
* (33) M. Bigdeli, G. H. Bordbar and Z. Rezaei, Phys. Rev. C 80, 034310 (2009).
* (34) M. Bigdeli, G. H. Bordbar and A. Poostforush, Phys. Rev. C 82, 034309 (2010).
* (35) J. W. Clark, Prog. Part. Nucl. Phys. 2, 89 (1979).
* (36) R. B. Wiringa, V. Stoks, and R. Schiavilla, Phys. Rev. C 51, 38 (1995).
* (37) J. C. Owen, R. F. Bishop, and J. M. Irvine, Nucl. Phys. A 277, 45 (1977).
* (38) H. B. Callen, _Thermodynamics and an Introduction to Thermostatistics_ , (John Wiley $\&$ Sons, Inc, 1985).
* (39) G. H. Bordbar and M. Modarres, Phys. Rev. C 57, 714 (1998).
* (40) G. H. Bordbar and M. Hayati, Int. J. Mod. Phys. A 21, 1555 (2006).
Figure 1: The energy per particle versus the spin polarization parameter
$(\delta)$ for different values of the magnetic field ($B$) at $\rho=0.2\
fm^{-3}$.
Figure 2: The ground state energy per particle as a function of the density at
different values of the magnetic field ($B$).
Figure 3: The energy contribution of spin polarized neutron matter in the
cluster expansion up to the two body term ($E_{1}+E_{2}$) for the magnetic
fields $B=10^{18}\ G$ (solid curve) and $B=10^{19}\ G$ (dashed dotted curve),
and the contribution of magnetic energy ($E_{M}$) for magnetic fields
$B=10^{18}\ G$ (dashed curve) and $B=10^{19}\ G$ (dashed dotted dotted curve).
Figure 4: The ground state energy per particle as a function of the magnetic
field ($B$) at different values of the density ($\rho$).
Figure 5: The spin polarization parameter at the equilibrium state of the
system as a function of the density at different values of the magnetic field
($B$).
Figure 6: The spin polarization parameter corresponding to the equilibrium
state of the system as a function of the magnetic field ($B$) at different
values of the density ($\rho$).
Figure 7: The magnetic susceptibility ($\chi/N|\mu_{n}|$) as a function of
the magnetic field ($B$) at different values of the density ($\rho$).
Figure 8: Phase diagram for the spin polarized neutron matter in the presence
of strong magnetic field.
Figure 9: The equation of state of spin polarized neutron matter for different
values of the magnetic field ($B$).
|
arxiv-papers
| 2011-03-18T19:10:13 |
2024-09-04T02:49:17.779418
|
{
"license": "Public Domain",
"authors": "G.H. Bordbar, Z. Rezaei and Afshin Montakhab",
"submitter": "Gholam Hossein Bordbar",
"url": "https://arxiv.org/abs/1103.3690"
}
|
1103.3738
|
# A Path Algorithm for Constrained Estimation
Hua Zhou
Department of Statistics
North Carolina State University
Raleigh, NC 27695-8203
Phone: 919-515-2570
E-mail: hua_zhou@ncsu.edu
Kenneth Lange
Departments of Biomathematics,
Human Genetics, and Statistics
University of California
Los Angeles, CA 90095-1766
Phone: 310-206-8076
E-mail: klange@ucla.edu
###### Abstract
Many least squares problems involve affine equality and inequality
constraints. Although there are a variety of methods for solving such problems,
most statisticians find constrained estimation challenging. The current paper
proposes a new path following algorithm for quadratic programming based on
exact penalization. Similar penalties arise in $l_{1}$ regularization in model
selection. Classical penalty methods solve a sequence of unconstrained
problems that put greater and greater stress on meeting the constraints. In
the limit as the penalty constant tends to $\infty$, one recovers the
constrained solution. In the exact penalty method, squared penalties are
replaced by absolute value penalties, and the solution is recovered for a
finite value of the penalty constant. The exact path following method starts
at the unconstrained solution and follows the solution path as the penalty
constant increases. In the process, the solution path hits, slides along, and
exits from the various constraints. Path following in lasso penalized
regression, in contrast, starts with a large value of the penalty constant and
works its way downward. In both settings, inspection of the entire solution
path is revealing. Just as with the lasso and generalized lasso, it is
possible to plot the effective degrees of freedom along the solution path. For
a strictly convex quadratic program, the exact penalty algorithm can be framed
entirely in terms of the sweep operator of regression analysis. A few well
chosen examples illustrate the mechanics and potential of path following.
Keywords: exact penalty, $l_{1}$ regularization, shape restricted regression
## 1 Introduction
When constraints appear in maximum likelihood or least squares estimation,
statisticians typically resort to sophisticated commercial
software or craft specific optimization algorithms for specific problems. In
this article, we develop a simple path algorithm for a general class of
constrained estimation problems, namely quadratic programs with affine
equality and inequality constraints. Besides providing constrained estimates,
our new algorithm also delivers the whole solution path between the
unconstrained and the constrained estimates. This is particularly helpful when
the goal is to locate a solution between these two extremes based on criteria
such as prediction error in cross-validation.
In recent years several path algorithms have been devised for specific $l_{1}$
regularization problems. The solution paths generated vividly illustrate the
tradeoffs between goodness of fit and sparsity. For example, a modification of
the least angle regression (LARS) procedure can handle lasso penalized
regression (Efron et al., 2004). Rosset and Zhu (2007) give sufficient
conditions for a solution path to be piecewise linear and expand its
applications to a wider range of loss and penalty functions. Friedman (2008)
derives a path algorithm for any objective function defined by the sum of a
convex loss and a separable penalty (not necessarily convex). The separability
restriction on the penalty term excludes many of the problems studied here.
Tibshirani and Taylor (2011) devise a path algorithm for generalized lasso
problems. Their formulation is similar to ours, but there are two fundamental
differences. First, inequality constraints are excluded in their formulation.
Our new path algorithm handles both equality and inequality constraints
gracefully. Second, they pass to the dual problem and then translate the
solution path of the dual problem back to the solution path of the primal
problem. In our view, attacking the primal problem directly leads to a simpler
algorithm, indeed one driven entirely by the classical sweep operator of
regression analysis. These gains in conceptual clarity and implementation ease
constitute major pluses for statisticians. As we will show, the degrees of
freedom formula derived for the lasso (Efron et al., 2004; Zou et al., 2007)
and generalized lasso (Tibshirani and Taylor, 2011) applies equally well in the
presence of inequality constraints.
Our object of study will be minimization of the quadratic function
$\displaystyle f(\boldsymbol{x})$ $\displaystyle=$
$\displaystyle\frac{1}{2}\boldsymbol{x}^{t}\boldsymbol{A}\boldsymbol{x}+\boldsymbol{b}^{t}\boldsymbol{x}+c$
(1)
subject to the affine equality constraints
$\boldsymbol{V}\boldsymbol{x}=\boldsymbol{d}$ and the affine inequality
constraints $\boldsymbol{W}\boldsymbol{x}\leq\boldsymbol{e}$. Throughout our
discussion we assume that the feasible region is nontrivial and that the
minimum is attained. If the symmetric matrix $\boldsymbol{A}$ has a negative
eigenvalue $\lambda$ and corresponding unit eigenvector $\boldsymbol{u}$, then
$\lim_{r\to\infty}f(r\boldsymbol{u})=-\infty$ because the quadratic term
$\frac{1}{2}(r\boldsymbol{u})^{t}\boldsymbol{A}(r\boldsymbol{u})=\frac{\lambda}{2}r^{2}$
dominates the linear term $r\boldsymbol{b}^{t}\boldsymbol{u}$. To avoid such
behavior, we initially assume that all eigenvalues of $\boldsymbol{A}$ are
positive. This makes $f(\boldsymbol{x})$ strictly convex and coercive and
guarantees a unique minimum point subject to the constraints. In linear
regression $\boldsymbol{A}=\boldsymbol{X}^{t}\boldsymbol{X}$ for some design
matrix $\boldsymbol{X}$. In this setting $\boldsymbol{A}$ is positive definite
provided $\boldsymbol{X}$ has full column rank. The latter condition is only
possible when the number of cases equals or exceeds the number of predictors.
If $\boldsymbol{A}$ is positive semidefinite and singular, then adding a small
amount of ridge regularization $\epsilon\boldsymbol{I}$ to it can be helpful
(Tibshirani and Taylor, 2011). Later we indicate how path following extends to
positive semidefinite or even indefinite matrices $\boldsymbol{A}$.
In multi-task models in machine learning, the response is a $d$-dimensional
vector $\boldsymbol{Y}\in\mathbb{R}^{d}$, and one minimizes the squared
Frobenius deviation
$\displaystyle\frac{1}{2}\|\boldsymbol{Y}-\boldsymbol{X}\boldsymbol{B}\|_{\text{F}}^{2}$
(2)
with respect to the $p\times d$ regression coefficient matrix
$\boldsymbol{B}$. When the constraints take the form
$\boldsymbol{V}\boldsymbol{B}=\boldsymbol{D}$ and
$\boldsymbol{W}\boldsymbol{B}\leq\boldsymbol{E}$, the problem reduces to
quadratic programming as just posed. Indeed, if we stack the columns of
$\boldsymbol{Y}$ with the vec operator, then the problem reduces to minimizing
$\frac{1}{2}\|\text{vec}(\boldsymbol{Y})-(\boldsymbol{I}\otimes\boldsymbol{X})\text{vec}(\boldsymbol{B})\|_{2}^{2}$.
Here the identity
$\text{vec}(\boldsymbol{X}\boldsymbol{B})=(\boldsymbol{I}\otimes\boldsymbol{X})\text{vec}(\boldsymbol{B})$
comes into play involving the Kronecker product and the identity matrix
$\boldsymbol{I}$. The same identity allows us to rewrite the constraints as
$(\boldsymbol{I}\otimes\boldsymbol{V})\text{vec}(\boldsymbol{B})=\text{vec}(\boldsymbol{D})$
and
$(\boldsymbol{I}\otimes\boldsymbol{W})\text{vec}(\boldsymbol{B})\leq\text{vec}(\boldsymbol{E})$.
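The vec/Kronecker reduction above is easy to sanity-check numerically. The following sketch (NumPy, with small arbitrary dimensions) verifies that stacking columns turns the multi-task problem into an ordinary least squares problem:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, d = 5, 3, 2
X = rng.standard_normal((n, p))   # design matrix
B = rng.standard_normal((p, d))   # regression coefficient matrix

# vec() stacks columns; NumPy flattens row-major by default, so use order="F".
vec = lambda M: M.flatten(order="F")

# Check vec(X B) = (I kron X) vec(B).
lhs = vec(X @ B)
rhs = np.kron(np.eye(d), X) @ vec(B)
assert np.allclose(lhs, rhs)
```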
As an illustration, consider the classical concave regression problem
(Hildreth, 1954). The data consist of a scatter plot $(x_{i},y_{i})$ of $n$
points with associated weights $w_{i}$ and predictors $x_{i}$ arranged in
increasing order. The concave regression problem seeks the estimates
$\theta_{i}$ that minimize the weighted sum of squares
$\displaystyle\sum_{i=1}^{n}w_{i}(y_{i}-\theta_{i})^{2}$ (3)
subject to the concavity constraints
$\displaystyle\frac{\theta_{i}-\theta_{i-1}}{x_{i}-x_{i-1}}$
$\displaystyle\geq$
$\displaystyle\frac{\theta_{i+1}-\theta_{i}}{x_{i+1}-x_{i}},\hskip
10.84006pti=2,\ldots,n-1.$ (4)
The consistency of concave regression is proved by Hanson and Pledger (1976);
the asymptotic distribution of the estimates and their rate of convergence are
studied in subsequent papers (Mammen, 1991; Groeneboom et al., 2001). Figure 1
shows a scatter plot of 100 data points. Here the $x_{i}$ are uniformly
sampled from the interval [0,1], the weights are constant, and
$y_{i}=4x_{i}(1-x_{i})+\epsilon_{i}$, where the $\epsilon_{i}$ are i.i.d.
normal with mean 0 and standard deviation $\sigma=0.3$. The left panel of
Figure 1 gives four snapshots of the solution path. The original data points
$\hat{\theta}_{i}=y_{i}$ provide the unconstrained estimates. The solid line
shows the concavity constrained solution. The dotted and dashed lines
represent intermediate solutions between the unconstrained and constrained
solutions. The degrees of freedom formula derived in Section 6 is a vehicle
for model selection based on criteria such as $C_{p}$, AIC, and BIC. For
example, the $C_{p}$ statistic
$\displaystyle C_{p}(\hat{\boldsymbol{\theta}})$ $\displaystyle=$
$\displaystyle\frac{1}{n}\|\boldsymbol{y}-\hat{\boldsymbol{\theta}}\|_{2}^{2}+\frac{2}{n}\sigma^{2}\text{df}(\hat{\boldsymbol{\theta}})$
is an unbiased estimator of the true prediction error (Efron, 2004) under the
estimator $\hat{\boldsymbol{\theta}}$. The right panel shows the $C_{p}$
statistic along the solution path. In this example the design matrix is a
diagonal matrix. As we will see in Section 7, postulating a more general
design matrix or other kinds of constraints broadens the scope of applications
of the path algorithm and the estimated degrees of freedom.
Figure 1: Path solutions to the concave regression problem. Left: the
unconstrained solution (original data points), two intermediate solutions
(dotted and dashed lines), and the concavity constrained solution (solid
line). Right: the $C_{p}$ statistic as a function of the penalty constant
$\rho$ along the solution path.
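To make the concave regression example concrete, here is a minimal sketch that simulates data as described above (with $n$ reduced to 30 for speed) and computes the fully constrained fit. SciPy's generic SLSQP solver stands in for the path algorithm; the matrix `W` encodes the concavity constraints (4) as $\boldsymbol{W}\boldsymbol{\theta}\leq{\bf 0}$:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 30
x = np.sort(rng.uniform(size=n))
y = 4 * x * (1 - x) + rng.normal(scale=0.3, size=n)

# Row i-1 of W encodes
# (theta_{i+1}-theta_i)/(x_{i+1}-x_i) - (theta_i-theta_{i-1})/(x_i-x_{i-1}) <= 0.
W = np.zeros((n - 2, n))
for i in range(1, n - 1):
    h1, h2 = x[i] - x[i - 1], x[i + 1] - x[i]
    W[i - 1, i - 1] = 1.0 / h1
    W[i - 1, i] = -1.0 / h1 - 1.0 / h2
    W[i - 1, i + 1] = 1.0 / h2

# Generic QP solve (SLSQP "ineq" constraints require fun(theta) >= 0).
res = minimize(lambda t: 0.5 * np.sum((t - y) ** 2), x0=y,
               jac=lambda t: t - y, method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda t: -W @ t,
                             "jac": lambda t: -W}])
theta = res.x
assert np.all(W @ theta <= 1e-4)   # fitted values satisfy concavity
```

Unlike the path algorithm, this returns only the endpoint of the solution path, not the intermediate solutions plotted in Figure 1.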
Here is a roadmap to the remainder of the current paper. Section 2 reviews the
exact penalty method for optimization and clarifies the connections between
constrained optimization and regularization in statistics. Section 3 derives
in detail our path algorithm. Its implementation via the sweep operator and the
QR decomposition is described in Sections 4 and 5. Section 6 derives the degrees
of freedom formula. Section 7 presents various numerical examples. Finally,
Section 8 discusses the limitations of the path algorithm and hints at future
generalizations.
## 2 The Exact Penalty Method
Exact penalty methods minimize the function
$\displaystyle{\cal E}_{\rho}(\boldsymbol{x})$ $\displaystyle=$ $\displaystyle
f(\boldsymbol{x})+\rho\sum_{i=1}^{r}|g_{i}(\boldsymbol{x})|+\rho\sum_{j=1}^{s}\max\\{0,h_{j}(\boldsymbol{x})\\},$
where $f(\boldsymbol{x})$ is the objective function, $g_{i}(\boldsymbol{x})=0$
is one of $r$ equality constraints, and $h_{j}(\boldsymbol{x})\leq 0$ is one
of $s$ inequality constraints. It is interesting to compare this function to
the Lagrangian function
$\displaystyle{\cal L}(\boldsymbol{x})$ $\displaystyle=$ $\displaystyle
f(\boldsymbol{x})+\sum_{i=1}^{r}\lambda_{i}g_{i}(\boldsymbol{x})+\sum_{j=1}^{s}\mu_{j}h_{j}(\boldsymbol{x})$
that captures the behavior of $f(\boldsymbol{x})$ at a constrained local
minimum $\boldsymbol{y}$. By definition the Lagrange multipliers satisfy the
conditions $\nabla{\cal L}(\boldsymbol{y})={\bf 0}$ and $\mu_{j}\geq 0$ and
$\mu_{j}h_{j}(\boldsymbol{y})=0$ for all $j$. In the exact penalty method we
take
$\displaystyle\rho$ $\displaystyle>$
$\displaystyle\max\\{|\lambda_{1}|,\ldots,|\lambda_{r}|,\mu_{1},\ldots,\mu_{s}\\}.$
(5)
This choice creates the majorization $f(\boldsymbol{x})\leq{\cal
E}_{\rho}(\boldsymbol{x})$ with $f(\boldsymbol{z})={\cal
E}_{\rho}(\boldsymbol{z})$ at any feasible point $\boldsymbol{z}$. Thus,
minimizing ${\cal E}_{\rho}(\boldsymbol{x})$ forces $f(\boldsymbol{x})$
downhill. Much more than this is going on, however. As the next proposition
proves, minimizing ${\cal E}_{\rho}(\boldsymbol{x})$ effectively minimizes
$f(\boldsymbol{x})$ subject to the constraints.
###### Proposition 2.1.
Suppose the objective function $f(\boldsymbol{x})$ and the constraint
functions are twice differentiable and satisfy the Lagrange multiplier rule at
the local minimum $\boldsymbol{y}$. If inequality (5) holds and
$\boldsymbol{v}^{*}d^{2}{\cal L}(\boldsymbol{y})\boldsymbol{v}>0$ for every
vector $\boldsymbol{v}\neq{\bf 0}$ satisfying
$dg_{i}(\boldsymbol{y})\boldsymbol{v}=0$ and
$dh_{j}(\boldsymbol{y})\boldsymbol{v}\leq 0$ for all active inequality
constraints, then $\boldsymbol{y}$ furnishes an unconstrained local minimum of
${\cal E}_{\rho}(\boldsymbol{x})$. If $f(\boldsymbol{x})$ is convex, the
$g_{i}(\boldsymbol{x})$ are affine, the $h_{j}(\boldsymbol{x})$ are convex,
and Slater’s constraint qualification holds, then $\boldsymbol{y}$ is a
minimum of ${\cal E}_{\rho}(\boldsymbol{x})$ if and only if $\boldsymbol{y}$
is a minimum of $f(\boldsymbol{x})$ subject to the constraints. In this convex
programming context, no differentiability assumptions are needed.
Proof: The conditions imposed on the quadratic form
$\boldsymbol{v}^{*}d^{2}{\cal L}(\boldsymbol{y})\boldsymbol{v}>0$ are well-
known sufficient conditions for a local minimum. Theorems 6.9 and 7.21 of the
reference (Ruszczyński, 2006) prove all of the foregoing assertions. ∎
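A one-dimensional toy problem illustrates the exactness phenomenon. For $f(x)=x^{2}$ subject to $x=1$, the Lagrange multiplier is $\lambda=-2$, so by inequality (5) any $\rho>2$ should make the penalized minimum exactly feasible. A minimal sketch using SciPy's scalar minimizer:

```python
from scipy.optimize import minimize_scalar

f = lambda x: x ** 2          # objective
g = lambda x: x - 1.0         # equality constraint g(x) = 0

sols = {}
for rho in (1.0, 3.0):
    res = minimize_scalar(lambda x, r=rho: f(x) + r * abs(g(x)),
                          bounds=(-5.0, 5.0), method="bounded")
    sols[rho] = res.x

# rho = 1 < |lambda| leaves the constraint violated (minimum at x = 0.5);
# rho = 3 > |lambda| recovers the constrained solution x = 1 exactly.
assert abs(sols[1.0] - 0.5) < 1e-4
assert abs(sols[3.0] - 1.0) < 1e-4
```

This is the key contrast with the classical squared penalty $\rho\,g(x)^{2}$, which satisfies the constraint only in the limit $\rho\to\infty$.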
## 3 The Path Following Algorithm
We now resume our study of minimizing the objective function (1) subject to
the affine equality constraints $\boldsymbol{V}\boldsymbol{x}=\boldsymbol{d}$
and the affine inequality constraints
$\boldsymbol{W}\boldsymbol{x}\leq\boldsymbol{e}$. The corresponding penalized
objective function takes the form
$\displaystyle{\cal E}_{\rho}(\boldsymbol{x})$ $\displaystyle=$
$\displaystyle\frac{1}{2}\boldsymbol{x}^{t}\boldsymbol{A}\boldsymbol{x}+\boldsymbol{b}^{t}\boldsymbol{x}+c+\rho\sum_{i=1}^{r}|\boldsymbol{v}_{i}^{t}\boldsymbol{x}-d_{i}|+\rho\sum_{j=1}^{s}(\boldsymbol{w}_{j}^{t}\boldsymbol{x}-e_{j})_{+}.$
(6)
Our assumptions on $\boldsymbol{A}$ render ${\cal E}_{\rho}(\boldsymbol{x})$
strictly convex and coercive and guarantee a unique minimum point
$\boldsymbol{x}(\rho)$. The generalized lasso problem studied in (Tibshirani
and Taylor, 2011) drops the last term and consequently excludes inequality
constrained applications.
According to the rules of the convex calculus (Ruszczyński, 2006), the unique
optimal point $\boldsymbol{x}(\rho)$ of the function ${\cal
E}_{\rho}(\boldsymbol{x})$ is characterized by the stationarity condition
$\displaystyle{\bf 0}$ $\displaystyle=$
$\displaystyle\boldsymbol{A}\boldsymbol{x}(\rho)+\boldsymbol{b}+\rho\sum_{i=1}^{r}s_{i}\boldsymbol{v}_{i}+\rho\sum_{j=1}^{s}t_{j}\boldsymbol{w}_{j}$
(7)
with coefficients
$\displaystyle
s_{i}\in\begin{cases}\\{-1\\}&\boldsymbol{v}_{i}^{t}\boldsymbol{x}-d_{i}<0\\\
[-1,1]&\boldsymbol{v}_{i}^{t}\boldsymbol{x}-d_{i}=0\\\
\\{1\\}&\boldsymbol{v}_{i}^{t}\boldsymbol{x}-d_{i}>0\end{cases},\hskip
36.135ptt_{j}\in\begin{cases}\\{0\\}&\boldsymbol{w}_{j}^{t}\boldsymbol{x}-e_{j}<0\\\
[0,1]&\boldsymbol{w}_{j}^{t}\boldsymbol{x}-e_{j}=0\\\
\\{1\\}&\boldsymbol{w}_{j}^{t}\boldsymbol{x}-e_{j}>0\end{cases}.$ (8)
Assuming the vectors
$\left(\cup_{i}\\{\boldsymbol{v}_{i}\\}\right)\cup\left(\cup_{j}\\{\boldsymbol{w}_{j}\\}\right)$
are linearly independent, the coefficients $s_{i}$ and $t_{j}$ are uniquely
determined. The sets defining the possible values of $s_{i}$ and $t_{j}$ are
the subdifferentials of the functions $|s_{i}|$ and
$(t_{j})_{+}=\max\\{0,t_{j}\\}$.
The solution path $\boldsymbol{x}(\rho)$ is continuous when $\boldsymbol{A}$
is positive definite. This also implies that the coefficient paths
$\boldsymbol{s}(\rho)$ and $\boldsymbol{t}(\rho)$ are continuous. For a
rigorous proof, note that the representation
$\displaystyle\boldsymbol{x}(\rho)$ $\displaystyle=$
$\displaystyle-\boldsymbol{A}^{-1}\Big{(}\boldsymbol{b}+\rho\sum_{i=1}^{r}s_{i}\boldsymbol{v}_{i}+\rho\sum_{j=1}^{s}t_{j}\boldsymbol{w}_{j}\Big{)}$
entails the norm inequality
$\displaystyle\|\boldsymbol{x}(\rho)\|$ $\displaystyle\leq$
$\displaystyle\|\boldsymbol{A}^{-1}\|\Big{(}\|\boldsymbol{b}\|+\rho\sum_{i=1}^{r}\|\boldsymbol{v}_{i}\|+\rho\sum_{j=1}^{s}\|\boldsymbol{w}_{j}\|\Big{)}.$
Thus, the solution vector $\boldsymbol{x}(\rho)$ is bounded whenever $\rho\geq
0$ is bounded above. To prove continuity, suppose that it fails for a given
$\rho$. Then there exists an $\epsilon>0$ and a sequence $\rho_{n}$ tending to
$\rho$ such that $\|\boldsymbol{x}(\rho_{n})-\boldsymbol{x}(\rho)\|\geq\epsilon$
for all $n$. Since $\boldsymbol{x}(\rho_{n})$ is bounded, we can pass to a
subsequence if necessary and assume that $\boldsymbol{x}(\rho_{n})$ converges
to some point $\boldsymbol{y}$. Taking limits in the inequality ${\cal
E}_{\rho_{n}}[\boldsymbol{x}(\rho_{n})]\leq{\cal
E}_{\rho_{n}}(\boldsymbol{x})$ demonstrates that ${\cal
E}_{\rho}(\boldsymbol{y})\leq{\cal E}_{\rho}(\boldsymbol{x})$ for all
$\boldsymbol{x}$. Because $\boldsymbol{x}(\rho)$ is unique, we reach the
contradictory conclusions
$\|\boldsymbol{y}-\boldsymbol{x}(\rho)\|\geq\epsilon$ and
$\boldsymbol{y}=\boldsymbol{x}(\rho)$. Continuity is inherited by the
coefficients $s_{i}$ and $t_{j}$. Indeed, let $\boldsymbol{V}$ and
$\boldsymbol{W}$ be the matrices with rows $\boldsymbol{v}_{i}^{t}$ and
$\boldsymbol{w}_{j}^{t}$, and let $\boldsymbol{U}$ be the block matrix
$\begin{pmatrix}\boldsymbol{V}\\\ \boldsymbol{W}\end{pmatrix}$. The
stationarity condition can be restated as
$\displaystyle{\bf 0}$ $\displaystyle=$
$\displaystyle\boldsymbol{A}\boldsymbol{x}+\boldsymbol{b}+\rho\boldsymbol{U}^{t}\begin{pmatrix}\boldsymbol{s}\\\
\boldsymbol{t}\end{pmatrix}.$
Multiplying this equation by $\boldsymbol{U}$ and solving give
$\displaystyle\rho\begin{pmatrix}\boldsymbol{s}\\\
\boldsymbol{t}\end{pmatrix}$ $\displaystyle=$
$\displaystyle-(\boldsymbol{U}\boldsymbol{U}^{t})^{-1}\boldsymbol{U}\Big{[}\boldsymbol{A}\boldsymbol{x}(\rho)+\boldsymbol{b}\Big{]},$
(9)
and the continuity of the left-hand side follows from the continuity of
$\boldsymbol{x}(\rho)$. Finally, dividing by $\rho$ yields the continuity of
the coefficients $s_{i}$ and $t_{j}$ for $\rho>0$.
We next show that the solution path is piecewise linear. Along the path we
keep track of the following index sets determined by the constraint residuals:
$\displaystyle{\cal N}_{\text{E}}$
$\displaystyle=\\{i:\boldsymbol{v}_{i}^{t}\boldsymbol{x}-d_{i}<0\\},\hskip
36.135pt{\cal
N}_{\text{I}}=\\{j:\boldsymbol{w}_{j}^{t}\boldsymbol{x}-e_{j}<0\\}$
$\displaystyle{\cal Z}_{\text{E}}$
$\displaystyle=\\{i:\boldsymbol{v}_{i}^{t}\boldsymbol{x}-d_{i}=0\\},\hskip
36.135pt{\cal
Z}_{\text{I}}=\\{j:\boldsymbol{w}_{j}^{t}\boldsymbol{x}-e_{j}=0\\}$
$\displaystyle{\cal P}_{\text{E}}$
$\displaystyle=\\{i:\boldsymbol{v}_{i}^{t}\boldsymbol{x}-d_{i}>0\\},\hskip
36.135pt{\cal
P}_{\text{I}}=\\{j:\boldsymbol{w}_{j}^{t}\boldsymbol{x}-e_{j}>0\\}.$
For the sake of simplicity, assume that at the beginning of the current
segment $s_{i}$ does not equal $-1$ or $1$ when $i\in{\cal Z}_{\text{E}}$ and
$t_{j}$ does not equal $0$ or $1$ when $j\in{\cal Z}_{\text{I}}$. In other
words, the coefficients of the active constraints occur on the interior of
their subdifferentials. Let us show in this circumstance that the solution
path can be extended in a linear fashion. The general idea is to impose the
equality constraints $\boldsymbol{V}_{{\cal
Z}_{\text{E}}}\boldsymbol{x}=\boldsymbol{d}_{{\cal Z}_{\text{E}}}$ and
$\boldsymbol{W}_{{\cal Z}_{\text{I}}}\boldsymbol{x}=\boldsymbol{e}_{{\cal
Z}_{\text{I}}}$ and write the objective function ${\cal
E}_{\rho}(\boldsymbol{x})$ as
$\displaystyle\frac{1}{2}\boldsymbol{x}^{t}\boldsymbol{A}\boldsymbol{x}+\boldsymbol{b}^{t}\boldsymbol{x}+c-\rho\sum_{i\in{\cal
N}_{\text{E}}}(\boldsymbol{v}_{i}^{t}\boldsymbol{x}-d_{i})+\rho\sum_{i\in{\cal
P}_{\text{E}}}(\boldsymbol{v}_{i}^{t}\boldsymbol{x}-d_{i})+\rho\sum_{j\in{\cal
P}_{\text{I}}}(\boldsymbol{w}_{j}^{t}\boldsymbol{x}-e_{j}).$
For notational convenience define
$\displaystyle\boldsymbol{U}_{{\cal
Z}}=\left(\begin{array}[]{c}\boldsymbol{V}_{{\cal Z}_{\text{E}}}\\\
\boldsymbol{W}_{{\cal Z}_{\text{I}}}\end{array}\right),\hskip
10.84006pt\boldsymbol{c}_{{\cal
Z}}=\left(\begin{array}[]{c}\boldsymbol{d}_{{\cal Z}_{\text{E}}}\\\
\boldsymbol{e}_{{\cal Z}_{\text{I}}}\end{array}\right),\hskip
10.84006pt\boldsymbol{u}_{\bar{\cal Z}}=-\sum_{i\in{\cal
N}_{\text{E}}}\boldsymbol{v}_{i}+\sum_{i\in{\cal
P}_{\text{E}}}\boldsymbol{v}_{i}+\sum_{j\in{\cal
P}_{\text{I}}}\boldsymbol{w}_{j}.$
Minimizing ${\cal E}_{\rho}(\boldsymbol{x})$ subject to the constraints
generates the Lagrange multiplier problem
$\displaystyle\left(\begin{array}[]{ccc}\boldsymbol{A}&\boldsymbol{U}_{{\cal
Z}}^{t}\\\ \boldsymbol{U}_{{\cal Z}}&{\bf
0}\end{array}\right)\left(\begin{array}[]{c}\boldsymbol{x}\\\
\boldsymbol{\lambda}_{{\cal Z}}\end{array}\right)$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{c}-\boldsymbol{b}-\rho\boldsymbol{u}_{\bar{\cal
Z}}\\\ \boldsymbol{c}_{{\cal Z}}\end{array}\right)$ (16)
with the explicit path solution and Lagrange multipliers
$\displaystyle\boldsymbol{x}(\rho)$ $\displaystyle=$
$\displaystyle-\boldsymbol{P}(\boldsymbol{b}+\rho\boldsymbol{u}_{\bar{\cal
Z}})+\boldsymbol{Q}\boldsymbol{c}_{{\cal
Z}}\;\>\,=\;\>\,-\rho\boldsymbol{P}\boldsymbol{u}_{\bar{\cal
Z}}-\boldsymbol{P}\boldsymbol{b}+\boldsymbol{Q}\boldsymbol{c}_{{\cal Z}}$ (17)
$\displaystyle\boldsymbol{\lambda}_{\cal Z}$ $\displaystyle=$
$\displaystyle-\boldsymbol{Q}^{t}\boldsymbol{b}+\boldsymbol{R}\boldsymbol{c}_{{\cal
Z}}-\rho\boldsymbol{Q}^{t}\boldsymbol{u}_{\bar{\cal Z}}.$ (18)
Here
$\displaystyle\begin{pmatrix}\boldsymbol{P}&\boldsymbol{Q}\\\
\boldsymbol{Q}^{t}&\boldsymbol{R}\end{pmatrix}$ $\displaystyle=$
$\displaystyle\begin{pmatrix}\boldsymbol{A}&\boldsymbol{U}_{{\cal Z}}^{t}\\\
\boldsymbol{U}_{{\cal Z}}&{\bf 0}\end{pmatrix}^{-1}$
with
$\displaystyle\boldsymbol{P}$ $\displaystyle=$
$\displaystyle\boldsymbol{A}^{-1}-\boldsymbol{A}^{-1}\boldsymbol{U}_{{\cal
Z}}^{t}(\boldsymbol{U}_{{\cal Z}}\boldsymbol{A}^{-1}\boldsymbol{U}_{{\cal
Z}}^{t})^{-1}\boldsymbol{U}_{{\cal Z}}\boldsymbol{A}^{-1}$
$\displaystyle\boldsymbol{Q}$ $\displaystyle=$
$\displaystyle\boldsymbol{A}^{-1}\boldsymbol{U}_{{\cal
Z}}^{t}(\boldsymbol{U}_{{\cal Z}}\boldsymbol{A}^{-1}\boldsymbol{U}_{{\cal
Z}}^{t})^{-1}$ $\displaystyle\boldsymbol{R}$ $\displaystyle=$
$\displaystyle-(\boldsymbol{U}_{{\cal
Z}}\boldsymbol{A}^{-1}\boldsymbol{U}_{{\cal Z}}^{t})^{-1}.$
As we will see in the next section, these seemingly complicated objects arise
naturally if path following is organized around the sweep operator.
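The formulas for $\boldsymbol{P}$, $\boldsymbol{Q}$, and $\boldsymbol{R}$ constitute the standard block inverse of the KKT matrix in (16) and can be verified directly. A minimal NumPy sketch with arbitrary small dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 5, 2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # positive definite A
U = rng.standard_normal((m, n))    # active-constraint rows (full row rank)

Ainv = np.linalg.inv(A)
S = np.linalg.inv(U @ Ainv @ U.T)  # inverse of the Schur complement
P = Ainv - Ainv @ U.T @ S @ U @ Ainv
Q = Ainv @ U.T @ S
R = -S

# The block matrix (P Q; Q^t R) inverts the KKT matrix (A U^t; U 0).
KKT = np.block([[A, U.T], [U, np.zeros((m, m))]])
assert np.allclose(np.block([[P, Q], [Q.T, R]]), np.linalg.inv(KKT))
```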
It is clear that as we increase $\rho$, the solution path (17) changes in a
linear fashion until either an inactive constraint becomes active or the
coefficient of an active constraint hits the boundary of its subdifferential.
We investigate the first case first. Imagining $\rho$ to be a time parameter,
an inactive constraint $i\in{\cal N}_{\text{E}}\cup{\cal P}_{\text{E}}$
becomes active when
$\displaystyle\boldsymbol{v}_{i}^{t}\boldsymbol{x}(\rho)$ $\displaystyle=$
$\displaystyle-\boldsymbol{v}_{i}^{t}\boldsymbol{P}(\boldsymbol{b}+\rho\boldsymbol{u}_{\bar{\cal
Z}})+\boldsymbol{v}_{i}^{t}\boldsymbol{Q}\boldsymbol{c}_{{\cal
Z}}\;\>\,=\;\>\,d_{i}.$
If this event occurs, it occurs at the hitting time
$\displaystyle\rho^{(i)}$ $\displaystyle=$
$\displaystyle\frac{-\boldsymbol{v}_{i}^{t}\boldsymbol{P}\boldsymbol{b}+\boldsymbol{v}_{i}^{t}\boldsymbol{Q}\boldsymbol{c}_{{\cal
Z}}-d_{i}}{\boldsymbol{v}_{i}^{t}\boldsymbol{P}\boldsymbol{u}_{\bar{\cal
Z}}}.$ (19)
Similarly, an inactive constraint $j\in{\cal N}_{\text{I}}\cup{\cal
P}_{\text{I}}$ becomes active at the hitting time
$\displaystyle\rho^{(j)}$ $\displaystyle=$
$\displaystyle\frac{-\boldsymbol{w}_{j}^{t}\boldsymbol{P}\boldsymbol{b}+\boldsymbol{w}_{j}^{t}\boldsymbol{Q}\boldsymbol{c}_{{\cal
Z}}-e_{j}}{\boldsymbol{w}_{j}^{t}\boldsymbol{P}\boldsymbol{u}_{\bar{\cal
Z}}}.$ (20)
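As a concrete check of formula (20), take $\boldsymbol{A}=\boldsymbol{I}$ and $\boldsymbol{b}=-\boldsymbol{y}$, so that $f(\boldsymbol{x})=\frac{1}{2}\|\boldsymbol{x}-\boldsymbol{y}\|^{2}$ up to a constant, with a single inequality constraint $\boldsymbol{w}^{t}\boldsymbol{x}\leq e$ violated at the unconstrained solution. On the first segment $\boldsymbol{P}=\boldsymbol{I}$ and $\boldsymbol{u}_{\bar{\cal Z}}=\boldsymbol{w}$, so the hitting time reduces to $\rho^{(j)}=(\boldsymbol{w}^{t}\boldsymbol{y}-e)/\boldsymbol{w}^{t}\boldsymbol{w}$. The numerical values below are arbitrary:

```python
import numpy as np

y = np.array([2.0, 1.0])    # unconstrained solution x(0) = y
w = np.array([1.0, 1.0])    # single inequality w^t x <= e, violated at y
e = 1.0

# First segment of the path: x(rho) = y - rho * w (stationarity with t_j = 1).
rho_hit = (w @ y - e) / (w @ w)
x_hit = y - rho_hit * w

assert np.isclose(rho_hit, 1.0)
assert np.isclose(w @ x_hit, e)   # the constraint becomes active exactly here
```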
To determine the escape time for an active constraint, consider once again the
stationarity condition (7). The Lagrange multiplier corresponding to an active
constraint coincides with a product $\rho s_{i}(\rho)$ or $\rho t_{j}(\rho)$.
Therefore, if we collect the coefficients for the active constraints into the
vector $\boldsymbol{r}_{{\cal Z}}(\rho)$, then equation (18) implies
$\displaystyle\boldsymbol{r}_{{\cal Z}}(\rho)$ $\displaystyle=$
$\displaystyle\frac{1}{\rho}\boldsymbol{\lambda}_{\cal
Z}(\rho)\;\>\,=\;\>\,\frac{1}{\rho}(-\boldsymbol{Q}^{t}\boldsymbol{b}+\boldsymbol{R}\boldsymbol{c}_{{\cal
Z}})-\boldsymbol{Q}^{t}\boldsymbol{u}_{\bar{\cal Z}}.$ (21)
Formula (21) for $\boldsymbol{r}_{{\cal Z}}(\rho)$ can be rewritten in terms
of the value $\boldsymbol{r}_{\cal Z}(\rho_{0})$ at the start $\rho_{0}$ of
the current segment as
$\displaystyle\boldsymbol{r}_{{\cal Z}}(\rho)$ $\displaystyle=$
$\displaystyle\frac{\rho_{0}}{\rho}\boldsymbol{r}_{{\cal
Z}}(\rho_{0})-\left(1-\frac{\rho_{0}}{\rho}\right)\boldsymbol{Q}^{t}\boldsymbol{u}_{\bar{\cal
Z}}.$ (22)
It is clear that $\boldsymbol{r}_{{\cal Z}}(\rho)_{i}$ is increasing in $\rho$
when $[\boldsymbol{r}_{{\cal
Z}}(\rho_{0})+\boldsymbol{Q}^{t}\boldsymbol{u}_{\bar{\cal Z}}]_{i}<0$ and
decreasing in $\rho$ when the reverse is true. The coefficient of an active
constraint $i\in{\cal Z}_{\text{E}}$ escapes at either of the times
$\displaystyle\rho^{(i)}$ $\displaystyle=$
$\displaystyle\frac{[-\boldsymbol{Q}^{t}\boldsymbol{b}+\boldsymbol{R}\boldsymbol{c}_{\cal
Z}]_{i}}{[\boldsymbol{Q}^{t}\boldsymbol{u}_{\bar{\cal Z}}]_{i}-1}\;\;\text{ or
}\;\;\frac{[-\boldsymbol{Q}^{t}\boldsymbol{b}+\boldsymbol{R}\boldsymbol{c}_{\cal
Z}]_{i}}{[\boldsymbol{Q}^{t}\boldsymbol{u}_{\bar{\cal Z}}]_{i}+1},$
whichever is pertinent. Similarly, the coefficient of an active constraint
$j\in{\cal Z}_{\text{I}}$ escapes at either of the times
$\displaystyle\rho^{(j)}$ $\displaystyle=$
$\displaystyle\frac{[-\boldsymbol{Q}^{t}\boldsymbol{b}+\boldsymbol{R}\boldsymbol{c}_{\cal
Z}]_{j}}{[\boldsymbol{Q}^{t}\boldsymbol{u}_{\bar{\cal Z}}]_{j}}\;\;\text{ or
}\;\;\frac{[-\boldsymbol{Q}^{t}\boldsymbol{b}+\boldsymbol{R}\boldsymbol{c}_{\cal
Z}]_{j}}{[\boldsymbol{Q}^{t}\boldsymbol{u}_{\bar{\cal Z}}]_{j}+1},$
whichever is pertinent. The earliest hitting time or escape time over all
constraints determines the duration of the current linear segment.
At the end of the current segment, our assumption that all active coefficients
occur on the interior of their subdifferentials is actually violated. When the
hitting time for an inactive constraint occurs first, we move the constraint
to the appropriate active set ${\cal Z}_{\text{E}}$ or ${\cal Z}_{\text{I}}$
and keep the other constraints in place. Similarly, when the escape time for
an active constraint occurs first, we move the constraint to the appropriate
inactive set and keep the other constraints in place. In the second scenario,
if $s_{i}$ hits the value $-1$, then we move $i$ to ${\cal N}_{\text{E}}$. If
$s_{i}$ hits the value $1$, then we move $i$ to ${\cal P}_{\text{E}}$. Similar
comments apply when a coefficient $t_{j}$ hits 0 or 1. Once this move is
executed, we commence a new linear segment as just described. The path
following algorithm continues segment by segment until for sufficiently large
$\rho$ the sets ${\cal N}_{\text{E}}$, ${\cal P}_{\text{E}}$, and ${\cal
P}_{\text{I}}$ are exhausted, $\boldsymbol{u}_{\bar{\cal Z}}={\bf 0}$, and the
solution vector (17) stabilizes.
This description omits two details. First, to get the process started, we set
$\rho=0$ and $\boldsymbol{x}(0)=-\boldsymbol{A}^{-1}\boldsymbol{b}$. In other
words, we start at the unconstrained minimum. For inactive constraints, the
coefficients $s_{i}(0)$ and $t_{j}(0)$ are fixed. However, for active
constraints, it is unclear how to assign the coefficients and whether to
release the constraints from active status as $\rho$ increases. Second, very
rarely some of the hitting times and escape times will coincide. We are then
faced again with the problem of which of the active constraints with
coefficients on their subdifferential boundaries to keep active and which to
encourage to go inactive in the next segment. In practice, the first problem
can easily occur. Roundoff error typically keeps the second problem at bay.
In both anomalous cases, the status of each active constraint can be
resolved by trying all possibilities. Consider the second case first. If there
are $a$ currently active constraints parked at their subdifferential
boundaries, then there are $2^{a}$ possible configurations for their active-
inactive states in the next segment. For a given configuration, we can exploit
formula (21) to check whether the coefficient for an active constraint occurs
in its subdifferential. If the coefficient occurs on the boundary of its
subdifferential, then we can use representation (22) to check whether it is
headed into the interior of the subdifferential as $\rho$ increases. Since the
path and its coefficients are unique, one and only one configuration should
determine the next linear segment. At the start of the path algorithm, the
correct configuration also determines the initial values of the active
coefficients. If we take limits in equation (21) as $\rho$ tends to 0, then
the coefficients will escape their subdifferentials unless
$-\boldsymbol{Q}^{t}\boldsymbol{b}+\boldsymbol{R}\boldsymbol{c}_{\cal Z}={\bf
0}$ and all components of $-\boldsymbol{Q}^{t}\boldsymbol{u}_{\bar{\cal Z}}$
lie in their appropriate subdifferentials. Hence, again it is easy to decide
on the active set ${\cal Z}$ going forward from $\rho=0$. One could object
that the number of configurations $2^{a}$ is potentially very large, but in
practice this combinatorial bottleneck never occurs. Visiting the various
configurations can be viewed as a systematic walk through the subsets of
$\\{1,\ldots,a\\}$ and organized using a classical Gray code (Savage, 1997)
that deletes at most one element and adjoins at most one element as one passes
from one active subset to the next. As we will see in the next section,
adjoining an element corresponds to sweeping a diagonal entry of a tableau and
deleting an element corresponds to inverse sweeping a diagonal entry of the
same tableau.
## 4 The Path Algorithm and Sweeping
Implementation of the path algorithm can be conveniently organized around the
sweep and inverse sweep operators of regression analysis (Dempster, 1969;
Goodnight, 1979; Jennrich, 1977; Little and Rubin, 2002; Lange, 2010). We
first recall the definition and basic properties of the sweep operator.
Suppose $\boldsymbol{A}$ is an $m\times m$ symmetric matrix. Sweeping on the
$k$th diagonal entry $a_{kk}\neq 0$ of $\boldsymbol{A}$ yields a new symmetric
matrix $\widehat{\boldsymbol{A}}$ with entries
$\displaystyle\hat{a}_{kk}$ $\displaystyle=$ $\displaystyle-\frac{1}{a_{kk}},$
$\displaystyle\hat{a}_{ik}$ $\displaystyle=$
$\displaystyle\frac{a_{ik}}{a_{kk}},\quad i\neq k$ $\displaystyle\hat{a}_{kj}$
$\displaystyle=$ $\displaystyle\frac{a_{kj}}{a_{kk}},\quad j\neq k$
$\displaystyle\hat{a}_{ij}$ $\displaystyle=$ $\displaystyle
a_{ij}-\frac{a_{ik}a_{kj}}{a_{kk}},\quad i,j\neq k.$
These arithmetic operations can be undone by inverse sweeping on the same
diagonal entry. Inverse sweeping sends the symmetric matrix $\boldsymbol{A}$
into the symmetric matrix $\check{\boldsymbol{A}}$ with entries
$\displaystyle\check{a}_{kk}$ $\displaystyle=$
$\displaystyle-\frac{1}{a_{kk}},$ $\displaystyle\check{a}_{ik}$
$\displaystyle=$ $\displaystyle-\frac{a_{ik}}{a_{kk}},\quad i\neq k$
$\displaystyle\check{a}_{kj}$ $\displaystyle=$
$\displaystyle-\frac{a_{kj}}{a_{kk}},\quad j\neq k$
$\displaystyle\check{a}_{ij}$ $\displaystyle=$ $\displaystyle
a_{ij}-\frac{a_{ik}a_{kj}}{a_{kk}},\quad i,j\neq k.$
Both sweeping and inverse sweeping preserve symmetry. Thus, all operations can
be carried out on either the lower or upper triangle of $\boldsymbol{A}$
alone, saving both computational time and storage. When several sweeps or
inverse sweeps are performed, their order is irrelevant. Finally, a symmetric
matrix $\boldsymbol{A}$ is positive definite if and only if $\boldsymbol{A}$
can be completely swept, and all of its diagonal entries remain positive until
swept. Complete sweeping produces $-\boldsymbol{A}^{-1}$. Each sweep of a
positive definite matrix reduces the magnitude of the unswept diagonal
entries. Positive definite matrices with poor condition numbers can be
detected by monitoring the relative magnitude of each diagonal entry just
prior to sweeping.
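The sweep and inverse sweep maps above differ only in the sign of the pivot row and column, so both fit in one routine. A minimal sketch, which also checks that completely sweeping a positive definite matrix produces $-\boldsymbol{A}^{-1}$ and that inverse sweeping undoes the sweeps:

```python
import numpy as np

def sweep(A, k, sign=1.0):
    """Sweep (sign=+1) or inverse sweep (sign=-1) the symmetric matrix A
    on diagonal entry k; returns a new matrix."""
    A = np.asarray(A, dtype=float)
    akk = A[k, k]
    Ahat = A - np.outer(A[:, k], A[k, :]) / akk   # a_ij - a_ik * a_kj / a_kk
    Ahat[k, :] = sign * A[k, :] / akk             # pivot row
    Ahat[:, k] = sign * A[:, k] / akk             # pivot column
    Ahat[k, k] = -1.0 / akk                       # pivot entry
    return Ahat

rng = np.random.default_rng(4)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)       # positive definite

S = A.copy()
for k in range(4):
    S = sweep(S, k)
assert np.allclose(S, -np.linalg.inv(A))   # complete sweep gives -A^{-1}

for k in range(4):
    S = sweep(S, k, sign=-1.0)
assert np.allclose(S, A)                   # inverse sweeps restore A
```

For clarity this version operates on the full matrix; a production implementation would exploit symmetry and work on one triangle only, as noted above.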
At the start of path following, we initialize a path tableau with block
entries
$\displaystyle\left(\begin{array}[]{c|cc}-\boldsymbol{A}&-\boldsymbol{U}^{t}&\boldsymbol{b}\\\
\hline\cr*&{\bf 0}&-\boldsymbol{c}\\\ &*&0\end{array}\right).$ (26)
The starred blocks here are determined by symmetry. Sweeping the diagonal
entries of the upper-left block $-\boldsymbol{A}$ of the tableau yields
$\displaystyle\left(\begin{array}[]{c|cc}\boldsymbol{A}^{-1}&\boldsymbol{A}^{-1}\boldsymbol{U}^{t}&-\boldsymbol{A}^{-1}\boldsymbol{b}\\\
\hline\cr*&\boldsymbol{U}\boldsymbol{A}^{-1}\boldsymbol{U}^{t}&-\boldsymbol{U}\boldsymbol{A}^{-1}\boldsymbol{b}-\boldsymbol{c}\\\
&*&\boldsymbol{b}^{t}\boldsymbol{A}^{-1}\boldsymbol{b}\end{array}\right).$
The new tableau contains the unconstrained solution
$\boldsymbol{x}(0)=-\boldsymbol{A}^{-1}\boldsymbol{b}$ and the corresponding
constraint residuals
$-\boldsymbol{U}\boldsymbol{A}^{-1}\boldsymbol{b}-\boldsymbol{c}$. In path
following, we adopt our previous notation and divide the original tableau into
sub-blocks. The result
$\displaystyle\left(\begin{array}[]{cc|cc}-\boldsymbol{A}&-\boldsymbol{U}_{{\cal
Z}}^{t}&-\boldsymbol{U}_{\bar{\cal Z}}^{t}&\boldsymbol{b}\\\ &{\bf 0}&{\bf
0}&-\boldsymbol{c}_{{\cal Z}}\\\ \hline\cr*&*&{\bf
0}&-\boldsymbol{c}_{\bar{\cal Z}}\\\ &*&*&0\end{array}\right)$ (32)
highlights the active and inactive constraints. If we continue sweeping until
all diagonal entries of the upper-left quadrant of this version of the tableau
are swept, then the tableau becomes
$\displaystyle\left(\begin{array}[]{cc|cc}\boldsymbol{P}&\boldsymbol{Q}&\boldsymbol{P}\boldsymbol{U}_{\bar{\cal
Z}}^{t}&-\boldsymbol{P}\boldsymbol{b}+\boldsymbol{Q}\boldsymbol{c}_{{\cal
Z}}\\\ &\boldsymbol{R}&\boldsymbol{Q}^{t}\boldsymbol{U}_{\bar{\cal
Z}}^{t}&-\boldsymbol{Q}^{t}\boldsymbol{b}+\boldsymbol{R}\boldsymbol{c}_{{\cal
Z}}\\\ \hline\cr*&*&\boldsymbol{U}_{\bar{\cal
Z}}\boldsymbol{P}\boldsymbol{U}_{\bar{\cal Z}}^{t}&\boldsymbol{U}_{\bar{\cal
Z}}(-\boldsymbol{P}\boldsymbol{b}+\boldsymbol{Q}\boldsymbol{c}_{{\cal
Z}})-\boldsymbol{c}_{\bar{\cal Z}}\\\
&*&*&\boldsymbol{b}^{t}\boldsymbol{P}\boldsymbol{b}-2\boldsymbol{b}^{t}\boldsymbol{Q}\boldsymbol{c}_{{\cal
Z}}+\boldsymbol{c}_{{\cal Z}}^{t}\boldsymbol{R}\boldsymbol{c}_{{\cal
Z}}\end{array}\right).$
All of the required elements for the path algorithm now magically appear.
Given the next $\rho$, the solution vector $\boldsymbol{x}(\rho)$ appearing in
equation (17) requires the sum
$-\boldsymbol{P}\boldsymbol{b}+\boldsymbol{Q}\boldsymbol{c}_{{\cal Z}}$, which
occurs in the revised tableau, and the vector
$\boldsymbol{P}\boldsymbol{u}_{\bar{\cal Z}}$. If $\boldsymbol{r}_{\bar{\cal
Z}}$ denotes the coefficient vector for the inactive constraints, with entries
of $-1$ for constraints in ${\cal N}_{\text{E}}$, 0 for constraints in ${\cal
N}_{\text{I}}$, and 1 for constraints in ${\cal P}_{\text{E}}\cup{\cal
P}_{\text{I}}$, then $\boldsymbol{P}\boldsymbol{u}_{\bar{\cal
Z}}=\boldsymbol{P}\boldsymbol{U}_{\bar{\cal Z}}^{t}\boldsymbol{r}_{\bar{\cal
Z}}$. Fortunately, $\boldsymbol{P}\boldsymbol{U}_{\bar{\cal Z}}^{t}$ appears
in the revised tableau. The update of $\rho$ depends on the hitting times (19)
and (20). These in turn depend on the numerators
$-\boldsymbol{v}_{i}^{t}\boldsymbol{P}\boldsymbol{b}+\boldsymbol{v}_{i}^{t}\boldsymbol{Q}\boldsymbol{c}_{{\cal
Z}}-d_{i}$ and
$-\boldsymbol{w}_{j}^{t}\boldsymbol{P}\boldsymbol{b}+\boldsymbol{w}_{j}^{t}\boldsymbol{Q}\boldsymbol{c}_{{\cal
Z}}-e_{j}$, which occur as components of the vector $\boldsymbol{U}_{\bar{\cal
Z}}(-\boldsymbol{P}\boldsymbol{b}+\boldsymbol{Q}\boldsymbol{c}_{{\cal
Z}})-\boldsymbol{c}_{\bar{\cal Z}}$, and the denominators
$\boldsymbol{v}_{i}^{t}\boldsymbol{P}\boldsymbol{u}_{\bar{\cal Z}}$ and
$\boldsymbol{w}_{j}^{t}\boldsymbol{P}\boldsymbol{u}_{\bar{\cal Z}}$, which
occur as components of the matrix $\boldsymbol{U}_{\bar{\cal
Z}}\boldsymbol{P}\boldsymbol{U}_{\bar{\cal Z}}^{t}\boldsymbol{r}_{\bar{\cal
Z}}$ computable from the block $\boldsymbol{U}_{\bar{\cal
Z}}\boldsymbol{P}\boldsymbol{U}_{\bar{\cal Z}}^{t}$ of the tableau. The escape
times for the active constraints also determine the update of $\rho$.
According to equation (22), the escape times depend on the current coefficient
vector, the current value $\rho_{0}$ of $\rho$, and the vector
$\boldsymbol{Q}^{t}\boldsymbol{u}_{\bar{\cal
Z}}=\boldsymbol{Q}^{t}\boldsymbol{U}_{\bar{\cal
Z}}^{t}\boldsymbol{r}_{\bar{\cal Z}}$, which can be computed from the block
$\boldsymbol{Q}^{t}\boldsymbol{U}_{\bar{\cal Z}}^{t}$ of the tableau. Thus,
the revised tableau supplies all of the ingredients for path following.
Algorithm 1 outlines the steps for path following, ignoring the anomalous situations.
The ingredients for handling the anomalous situations can also be read from
the path tableau. The initial coefficients $\boldsymbol{r}_{{\cal
Z}}(0)=-\boldsymbol{Q}^{t}\boldsymbol{u}_{\bar{\cal
Z}}=\boldsymbol{Q}^{t}\boldsymbol{U}_{\bar{\cal
Z}}^{t}\boldsymbol{r}_{\bar{\cal Z}}$ are available once we sweep the tableau
(26) on the diagonal entries corresponding to the constraints in ${\cal Z}$ at
the point $\boldsymbol{x}(0)=-\boldsymbol{A}^{-1}\boldsymbol{b}$. As noted
earlier, if the coefficients of several active constraints are simultaneously
poised to exit their subdifferentials, then one must consider all possible
swept and unswept combinations of these constraints. The operative criteria
for choosing the right combination involve the available quantities
$\boldsymbol{Q}^{t}\boldsymbol{u}_{\bar{\cal Z}}$ and
$-\boldsymbol{Q}^{t}\boldsymbol{b}+\boldsymbol{R}\boldsymbol{c}_{{\cal Z}}$.
One of the sweeping combinations is bound to give a correct direction for the
next extension of the path.
The computational complexity of path following depends on the number of
parameters $m$ and the number of constraints $n=r+s$. Computation of the
initial solution $-\boldsymbol{A}^{-1}\boldsymbol{b}$ takes about $3m^{3}$
floating point operations (flops). There is no need to store or update the
$\boldsymbol{P}$ block during path following. The remaining sweeps and inverse
sweeps take on the order of $n(m+n)$ flops each. This count must be multiplied
by the number of segments along the path, which empirically is $O(n)$. The
sweep tableau requires storing $(m+n)^{2}$ real numbers. We
recommend all computations be done in double precision. Both flop counts and
storage can be halved by exploiting symmetry. Finally, it is worth mentioning
some computational shortcuts for the multi-task learning model. Among these
are the formulas
$\displaystyle(\boldsymbol{I}\otimes\boldsymbol{X})^{t}(\boldsymbol{I}\otimes\boldsymbol{X})=\boldsymbol{I}\otimes\boldsymbol{X}^{t}\boldsymbol{X}$
$\displaystyle(\boldsymbol{I}\otimes\boldsymbol{X}^{t}\boldsymbol{X})^{-1}=\boldsymbol{I}\otimes(\boldsymbol{X}^{t}\boldsymbol{X})^{-1}$
$\displaystyle(\boldsymbol{I}\otimes\boldsymbol{X}^{t}\boldsymbol{X})^{-1}(\boldsymbol{I}\otimes\boldsymbol{V})=\boldsymbol{I}\otimes[(\boldsymbol{X}^{t}\boldsymbol{X})^{-1}\boldsymbol{V}]$
$\displaystyle(\boldsymbol{I}\otimes\boldsymbol{X}^{t}\boldsymbol{X})^{-1}(\boldsymbol{I}\otimes\boldsymbol{W})=\boldsymbol{I}\otimes[(\boldsymbol{X}^{t}\boldsymbol{X})^{-1}\boldsymbol{W}].$
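These shortcuts are instances of the Kronecker mixed-product property $(\boldsymbol{A}\otimes\boldsymbol{B})(\boldsymbol{C}\otimes\boldsymbol{D})=\boldsymbol{A}\boldsymbol{C}\otimes\boldsymbol{B}\boldsymbol{D}$ and are easy to confirm numerically. The sketch below uses NumPy with hypothetical dimensions; it is purely an illustration of the identities.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, m = 3, 7, 4                     # hypothetical: K responses, n cases, m parameters
X = rng.standard_normal((n, m))
V = rng.standard_normal((m, m))
I = np.eye(K)
XtX = X.T @ X

lhs1 = np.kron(I, X).T @ np.kron(I, X)    # (I (x) X)^t (I (x) X)
lhs2 = np.linalg.inv(np.kron(I, XtX))     # (I (x) X^t X)^{-1}
lhs3 = lhs2 @ np.kron(I, V)               # (I (x) X^t X)^{-1} (I (x) V)
```

Each left-hand side matches the corresponding small Kronecker factorization, so the large $Km\times Km$ systems never need to be formed in practice.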
Algorithm 1: Solution path of the primal problem (6) when $\boldsymbol{A}$ is
positive definite.

Initialize $k=0$, $\rho_{0}=0$, and the path tableau (26). Sweep the diagonal
entries of $-\boldsymbol{A}$. Enter the main loop.
repeat
  Increment $k$ by 1.
  Compute the hitting time or exit time $\rho^{(i)}$ for each constraint $i$.
  Set $\rho_{k}=\min\\{\rho^{(i)}:\rho^{(i)}>\rho_{k-1}\\}$.
  Update the coefficient vector by equation (22).
  Sweep the diagonal entry of the inactive constraint that becomes active, or
  inverse sweep the diagonal entry of the active constraint that becomes
  inactive.
  Update the solution vector $\boldsymbol{x}_{k}=\boldsymbol{x}(\rho_{k})$ by
  equation (17).
until ${\cal N}_{\text{E}}={\cal P}_{\text{E}}={\cal P}_{\text{I}}=\emptyset$
## 5 Extensions of the Path Algorithm
As just presented, the path algorithm starts from the unconstrained solution
and moves forward along the path to the constrained solution. With minor
modifications, the same algorithm can start in the middle of the path or move
in the reverse direction along it. The latter tactic might prove useful in
lasso and fused-lasso problems, where the fully constrained solution is
trivial. In general, consider starting from $\boldsymbol{x}(\rho_{0})$ at a
point $\rho_{0}$ on the path. Let ${\cal Z}={\cal Z}_{\text{E}}\cup{\cal
Z}_{\text{I}}$ continue to denote the zero set for the segment containing
$\rho_{0}$. Path following begins by sweeping the upper left block of the
tableau (32) and then proceeds as indicated in Algorithm 1. Traveling in the
reverse direction entails calculation of hitting and exit times for decreasing
$\rho$ rather than increasing $\rho$.
Our assumption that $\boldsymbol{A}$ is positive definite automatically
excludes underdetermined statistical problems with more parameters than cases.
Here we briefly indicate how to carry out the exact penalty method when this
assumption fails and the sweep operator cannot be brought into play. In the
absence of constraints, $f(\boldsymbol{x})$ lacks a minimum if and only if
either $\boldsymbol{A}$ has a negative eigenvalue or the equation
$\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}$ has no solution. In either
circumstance a unique global minimum may exist if enough constraints are
enforced. Suppose $\boldsymbol{x}(\rho_{0})$ supplies the minimum of the exact
penalty function ${\cal E}_{\rho}(\boldsymbol{x})$ at $\rho=\rho_{0}>0$. Let
the matrix $\boldsymbol{U}_{\cal Z}$ hold the active constraint vectors. As we
slide along the active constraints, the minimum point can be represented as
$\boldsymbol{x}(\rho)=\boldsymbol{x}(\rho_{0})+\boldsymbol{Y}\boldsymbol{y}(\rho)$,
where the columns of $\boldsymbol{Y}$ are orthogonal to the rows of
$\boldsymbol{U}_{\cal Z}$. One can construct $\boldsymbol{Y}$ by the Gram-Schmidt process; $\boldsymbol{Y}$ is then the orthogonal complement of
$\boldsymbol{U}_{\cal Z}$ in the QR decomposition. The active constraints hold
because $\boldsymbol{U}_{\cal Z}\boldsymbol{x}(\rho)=\boldsymbol{U}_{\cal
Z}\boldsymbol{x}(\rho_{0})=\boldsymbol{c}_{{\cal Z}}$.
The analogue of the stationarity condition (7) under reparameterization is
$\displaystyle{\bf 0}$ $\displaystyle=$
$\displaystyle\boldsymbol{Y}^{t}\boldsymbol{A}\boldsymbol{Y}\boldsymbol{y}(\rho)+\boldsymbol{Y}^{t}\boldsymbol{b}+\rho\boldsymbol{Y}^{t}\boldsymbol{u}_{\bar{\cal
Z}}.$ (34)
The inactive constraints do not appear in this equation because
$\boldsymbol{v}_{i}^{t}\boldsymbol{Y}={\bf 0}$ and
$\boldsymbol{w}_{j}^{t}\boldsymbol{Y}={\bf 0}$ for $i$ or $j$ active. Solving
for $\boldsymbol{y}(\rho)$ and $\boldsymbol{x}(\rho)$ gives
$\displaystyle\boldsymbol{y}(\rho)$ $\displaystyle=$
$\displaystyle-(\boldsymbol{Y}^{t}\boldsymbol{A}\boldsymbol{Y})^{-1}(\boldsymbol{Y}^{t}\boldsymbol{b}+\rho\boldsymbol{Y}^{t}\boldsymbol{u}_{\bar{\cal
Z}})$ $\displaystyle\boldsymbol{x}(\rho)$ $\displaystyle=$
$\displaystyle\boldsymbol{x}(\rho_{0})-\boldsymbol{Y}(\boldsymbol{Y}^{t}\boldsymbol{A}\boldsymbol{Y})^{-1}(\boldsymbol{Y}^{t}\boldsymbol{b}+\rho\boldsymbol{Y}^{t}\boldsymbol{u}_{\bar{\cal
Z}})$ (35)
and does not require inverting $\boldsymbol{A}$. Because the solution
$\boldsymbol{x}(\rho)$ is affine in $\rho$, it is straightforward to calculate
the hitting times for the inactive constraints.
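A minimal numerical sketch of this reparameterization follows, with randomly generated stand-ins for $\boldsymbol{A}$, $\boldsymbol{b}$, $\boldsymbol{u}_{\bar{\cal Z}}$, and $\boldsymbol{x}(\rho_{0})$ (none of these come from a real data set). The trailing columns of a full QR decomposition of $\boldsymbol{U}_{\cal Z}^{t}$ supply $\boldsymbol{Y}$, and the reduced system (34) is solved without ever inverting $\boldsymbol{A}$.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
UZ = rng.standard_normal((2, m))       # stand-in active constraint matrix
# The trailing columns of a full QR of UZ^t span the null space of UZ.
Qfull, _ = np.linalg.qr(UZ.T, mode="complete")
Y = Qfull[:, UZ.shape[0]:]

A = rng.standard_normal((m, m))
A = A @ A.T + np.eye(m)                # stand-in for A (positive definite here)
b = rng.standard_normal(m)
u = rng.standard_normal(m)             # stand-in for u over the inactive constraints
rho = 0.7
x0 = rng.standard_normal(m)            # stand-in for x(rho_0)

# Solve 0 = Y^t A Y y + Y^t b + rho Y^t u, as in (34)-(35).
y = -np.linalg.solve(Y.T @ A @ Y, Y.T @ (b + rho * u))
x = x0 + Y @ y
```

The update moves only within the active constraints ($\boldsymbol{U}_{\cal Z}\boldsymbol{x}$ is unchanged), and the reduced stationarity condition holds at the new point.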
Under the original parametrization, the Lagrange multipliers and corresponding
active coefficients appearing in the stationarity condition (7) can still be
recovered by invoking equation (9). Again it is a simple matter to calculate
exit times. The formulas are not quite as elegant as those based on the sweep
operator, but all essential elements for traversing the path are available.
Adding or deleting a row of the matrix $\boldsymbol{U}_{\cal Z}$ can be
accomplished by updating the QR decomposition. The fast algorithms for this
purpose simultaneously update $\boldsymbol{Y}$ (Lawson and Hanson, 1987;
Nocedal and Wright, 2006). More generally for equality constrained problems
generated by the lasso and generalized lasso, the constraint matrix
$\boldsymbol{U}_{\cal Z}$ as one approaches the penalized solution is often
very sparse. The required QR updates are then numerically cheap. For the sake
of brevity, we omit further details.
## 6 Degrees of Freedom Under Affine Constraints
We now specialize to the least squares problem with the choices
$\boldsymbol{A}=\boldsymbol{X}^{t}\boldsymbol{X}$,
$\boldsymbol{b}=-\boldsymbol{X}^{t}\boldsymbol{y}$, and
$\boldsymbol{x}(\rho)=\hat{\boldsymbol{\beta}}(\rho)$ and consider how to
define degrees of freedom in the presence of both equality and inequality
constraints. As previous authors (Efron et al., 2004; Tibshirani and Taylor,
2011; Zou et al., 2007) have shown, the most productive approach relies on
Stein’s characterization (Efron, 2004; Stein, 1981)
$\displaystyle\text{df}(\hat{\boldsymbol{y}})=\mathbf{E}\left(\sum_{i=1}^{n}\frac{\partial}{\partial
y_{i}}\hat{y}_{i}\right)=\mathbf{E}\left[\text{tr}(d_{\boldsymbol{y}}\hat{\boldsymbol{y}})\right]$
of the degrees of freedom. Here
$\hat{\boldsymbol{y}}=\boldsymbol{X}\hat{\boldsymbol{\beta}}$ is the fitted
value of $\boldsymbol{y}$, and $d_{\boldsymbol{y}}\hat{\boldsymbol{y}}$
denotes its differential with respect to the entries of $\boldsymbol{y}$.
Equation (17) implies that
$\displaystyle\hat{\boldsymbol{y}}=\boldsymbol{X}\hat{\boldsymbol{\beta}}=\boldsymbol{X}\boldsymbol{P}\boldsymbol{X}^{t}\boldsymbol{y}+\boldsymbol{X}\boldsymbol{Q}\boldsymbol{c}_{\cal Z}-\rho\boldsymbol{X}\boldsymbol{P}\boldsymbol{u}_{\bar{\cal Z}}.$
Because $\rho$ is fixed, it follows that
$d_{\boldsymbol{y}}\hat{\boldsymbol{y}}=\boldsymbol{X}\boldsymbol{P}\boldsymbol{X}^{t}$.
The representation
$\displaystyle\boldsymbol{X}\boldsymbol{P}\boldsymbol{X}^{t}$ $\displaystyle=$
$\displaystyle\boldsymbol{X}(\boldsymbol{X}^{t}\boldsymbol{X})^{-1}\boldsymbol{X}^{t}-\boldsymbol{X}(\boldsymbol{X}^{t}\boldsymbol{X})^{-1}\boldsymbol{U}_{\cal
Z}^{t}[\boldsymbol{U}_{\cal
Z}(\boldsymbol{X}^{t}\boldsymbol{X})^{-1}\boldsymbol{U}_{\cal
Z}^{t}]^{-1}\boldsymbol{U}_{\cal
Z}(\boldsymbol{X}^{t}\boldsymbol{X})^{-1}\boldsymbol{X}^{t}$ $\displaystyle=$
$\displaystyle\boldsymbol{P}_{1}-\boldsymbol{P}_{2}$
and the cyclic permutation property of the trace function applied to the
matrices $\boldsymbol{P}_{1}$ and $\boldsymbol{P}_{2}$ yield the formula
$\displaystyle\mathbf{E}\left[\text{tr}(d_{\boldsymbol{y}}\hat{\boldsymbol{y}})\right]$
$\displaystyle=$ $\displaystyle m-\mathbf{E}(|{\cal Z}|),$
where $m$ equals the number of parameters. In other words, $m-|{\cal Z}|$ is
an unbiased estimator of the degrees of freedom. This result depends on our
assumptions that $\boldsymbol{X}$ has full column rank $m$ and that the
constraints $\boldsymbol{v}_{i}$ and $\boldsymbol{w}_{j}$ are linearly
independent. The latter condition clearly holds for lasso and fused-lasso
problems. The validity of Stein’s formula requires the fitted value
$\hat{\boldsymbol{y}}$ to be a continuous and almost surely differentiable
function of $\boldsymbol{y}$ (Stein, 1981). Fortunately, this is the case for
lasso (Zou et al., 2007) and generalized lasso problems (Tibshirani and
Taylor, 2011) and for at least one case of shape-restricted regression (Meyer
and Woodroofe, 2000). Our derivation does not depend directly on whether the
constraints are equality or inequality constraints. Hence, the degrees of
freedom estimator can be applied in shape-restricted regression using model
selection criteria such as $C_{p}$, AIC, and BIC along the whole path. The
concave regression example in the introduction illustrates the general idea.
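The trace identity behind the estimator can be checked numerically. Assuming $\boldsymbol{P}$ takes the representation displayed above, $\text{tr}(\boldsymbol{X}\boldsymbol{P}\boldsymbol{X}^{t})=m-|{\cal Z}|$ for any full-column-rank $\boldsymbol{X}$ and any linearly independent set of active constraints; the sketch below uses random stand-ins for both.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, nz = 20, 6, 3                   # hypothetical sizes; nz = |Z|
X = rng.standard_normal((n, m))       # full column rank almost surely
UZ = rng.standard_normal((nz, m))     # linearly independent active constraints
Ainv = np.linalg.inv(X.T @ X)
# P as in the representation above: A^{-1} minus its projection onto the
# active constraint directions, with A = X^t X.
P = Ainv - Ainv @ UZ.T @ np.linalg.inv(UZ @ Ainv @ UZ.T) @ UZ @ Ainv
df = np.trace(X @ P @ X.T)            # tr(P_1) - tr(P_2) = m - |Z|
```

The first projection has trace $m$ and the second has trace $|{\cal Z}|$ by cyclic permutation, so `df` evaluates to $6-3=3$ here regardless of the random draws.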
## 7 Examples
Our examples illustrate both the mechanics and the potential of path
following. The path algorithm’s ability to handle inequality constraints
allows us to obtain path solutions to a variety of shape-restricted
regressions. Problems of this sort may well dominate the future agenda of
nonparametric estimation.
### 7.1 Two Toy Examples
Our first example (Lawson and Hanson, 1987) fits a straight line
$y=\beta_{0}+x\beta_{1}$ to the data points (0.25,0.5), (0.5,0.6), (0.5,0.7),
and (0.8,1.2) by minimizing the least squares criterion
$\|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\|_{2}^{2}$ subject to the
constraints
$\displaystyle\beta_{1}\geq 0,\hskip 14.45377pt\beta_{0}\geq 0,\hskip
14.45377pt\beta_{0}+\beta_{1}\leq 1.$
In our notation
$\displaystyle\boldsymbol{A}$ $\displaystyle=$
$\displaystyle\boldsymbol{X}^{t}\boldsymbol{X}\;\;=\;\;\left(\begin{array}[]{cc}4.0000&2.0500\\\
2.0500&1.2025\end{array}\right),\hskip
7.22743pt\boldsymbol{b}\;\;=\;\;-\boldsymbol{X}^{t}\boldsymbol{y}\;\;=\;\;\left(\begin{array}[]{c}-3.0000\\\
-1.7350\end{array}\right),$ $\displaystyle\boldsymbol{W}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{rr}-1&0\\\ -1&0\\\
1&1\end{array}\right),\hskip
7.22743pt\boldsymbol{e}\;\;=\;\;\left(\begin{array}[]{c}0\\\ 0\\\
1\end{array}\right).$
The initial tableau is
$\displaystyle\left(\begin{tabular}[]{rr|rrr|r}-4.0000&-2.0500&1&1&-1&-3.0000\\\
-2.0500&-1.2025&0&0&-1&-1.7350\\\ \hline\cr 1&0&0&0&0&0\\\ 1&0&0&0&0&0\\\
-1&-1&0&0&0&-1\\\ \hline\cr-3.0000&-1.7350&0&0&-1&0\end{tabular}\right).$
Sweeping the first two diagonal entries produces
$\displaystyle\left(\begin{tabular}[]{rr|rrr|r}1.9794&-3.3745&-1.9794&3.3745&-1.3951&0.0835\\\
-3.3745&6.5844&3.3745&-6.5844&3.2099&1.3004\\\
\hline\cr-1.9794&3.3745&1.9794&-3.3745&1.3951&-0.0835\\\
3.3745&-6.5844&-3.3745&6.5844&-3.2099&-1.3004\\\
-1.3951&3.2099&1.3951&-3.2099&1.8148&0.3840\\\ \hline\cr
0.0835&1.3004&-0.0835&-1.3004&0.3840&2.5068\\\ \end{tabular}\right),$
from which we read off the unconstrained solution
$\boldsymbol{\beta}(0)=(0.0835,1.3004)^{t}$ and the constraint residuals
$(-0.0835,-1.3004,0.3840)^{t}$. The latter indicates that ${\cal
N}_{\text{I}}=\\{1,2\\}$, ${\cal Z}_{\text{I}}=\emptyset$, and ${\cal
P}_{\text{I}}=\\{3\\}$. Multiplying the middle block matrix by the coefficient
vector $\boldsymbol{r}=(0,0,1)^{t}$ and dividing the residual vector entrywise
give the hitting times $\rho=(-0.0599,0.4051,0.2116)$. Thus $\rho_{1}=0.2116$
and
$\displaystyle\boldsymbol{\beta}(0.2116)$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{c}0.0835\\\
1.3004\end{array}\right)-0.2116\times\left(\begin{array}[]{c}-1.3951\\\
3.2099\end{array}\right)\;\>\,=\;\>\,\left(\begin{array}[]{c}0.3787\\\
0.6213\end{array}\right).$
Now ${\cal N}=\\{1,2\\}$, ${\cal Z}=\\{3\\}$, ${\cal P}=\emptyset$, and we
have found the solution. Figure 2 displays the data points and the
unconstrained and constrained fitted lines.
Figure 2: The data points and the fitted lines for the first toy example of
constrained curve fitting (Lawson and Hanson, 1987).
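The numbers above are easy to reproduce. The NumPy sketch below rebuilds $\boldsymbol{A}$ and $\boldsymbol{b}$ from the four data points, recovers the unconstrained fit, and then solves the terminal segment, where only $\beta_{0}+\beta_{1}\leq 1$ is active. The matrices $\boldsymbol{P}$ and $\boldsymbol{Q}$ are written out under the assumption that they take their usual swept-tableau form for an equality-constrained quadratic program.

```python
import numpy as np

x = np.array([0.25, 0.5, 0.5, 0.8])
y = np.array([0.5, 0.6, 0.7, 1.2])
X = np.column_stack([np.ones_like(x), x])
A = X.T @ X                            # matches the 2 x 2 matrix above
b = -X.T @ y

beta_unc = -np.linalg.solve(A, b)      # unconstrained solution beta(0)

# Terminal segment: only beta_0 + beta_1 <= 1 is active, so Z = {3}.
UZ = np.array([[1.0, 1.0]])
cZ = np.array([1.0])
Ainv = np.linalg.inv(A)
M = np.linalg.inv(UZ @ Ainv @ UZ.T)
P = Ainv - Ainv @ UZ.T @ M @ UZ @ Ainv   # assumed swept-tableau form of P
Q = Ainv @ UZ.T @ M                      # assumed swept-tableau form of Q
beta_con = -P @ b + Q @ cZ               # solution -Pb + Q c_Z
```

The unconstrained fit comes out to $(0.0835,1.3004)^{t}$ and the constrained fit to $(0.3787,0.6213)^{t}$, matching the tableau computation in the text.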
Our second toy example concerns the toxin response problem (Schoenfeld, 1986)
with $m$ toxin levels $x_{1}\leq x_{2}\leq\cdots\leq x_{m}$ and a mortality
rate $y_{i}=f(x_{i})$ at each level. It is reasonable to assume that the
mortality function $f(x)$ is nonnegative and increasing. Suppose $\bar{y}_{i}$
are the observed death frequencies averaged across $n_{i}$ trials at level
$x_{i}$. In a finite sample, the $\bar{y}_{i}$ may fail to be nondecreasing.
For example, in an EPA study of the effects of chromium on fish (Schoenfeld,
1986), the observed binomial frequencies and chromium levels are
$\displaystyle\bar{\boldsymbol{y}}$ $\displaystyle=$
$\displaystyle(0.3752,0.3202,0.2775,0.3043,0.5327)^{t}$
$\displaystyle\boldsymbol{x}$ $\displaystyle=$
$\displaystyle(51,105,194,384,822)^{t}\;\;\mbox{in $\mu$g/l}.$
Isotonic regression minimizes $\sum_{k=1}^{m}(\bar{y}_{k}-\theta_{k})^{2}$
subject to the constraints $0\leq\theta_{1}\leq\cdots\leq\theta_{m}$ on the
binomial parameters $\theta_{k}=f(x_{k})$. The solution path depicted in
Figure 3 is continuous and piecewise linear as advertised, but the coefficient
paths are nonlinear. The first four binomial parameters coalesce in the
constrained estimate.
$\begin{array}[]{cc}\includegraphics[width=180.67499pt]{toy_fish_solpath}&\includegraphics[width=162.6075pt]{toy_fish_coefpath}\end{array}$
Figure 3: Toxin response example. Left: Solution path. Right: Coefficient
paths for the constraints.
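For this small isotonic regression, the constrained estimate can be cross-checked with the classical pool-adjacent-violators algorithm (PAVA) mentioned in the conclusions. It is not the path algorithm, but it solves the same problem. A minimal unweighted sketch (the study's trial counts $n_{i}$ would enter as weights in a full treatment):

```python
def pava(y):
    """Pool-adjacent-violators for isotonic least squares (unweighted sketch)."""
    blocks = []                        # each block holds [mean, count]
    for v in map(float, y):
        blocks.append([v, 1])
        # merge adjacent blocks while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            blocks.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    return [m for m, c in blocks for _ in range(c)]

ybar = [0.3752, 0.3202, 0.2775, 0.3043, 0.5327]   # chromium data above
theta = pava(ybar)
```

As the text observes, the first four parameters coalesce: `theta` equals the pooled mean of the first four frequencies followed by the untouched fifth.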
### 7.2 Generalized Lasso Problems
The generalized lasso problems studied in (Tibshirani and Taylor, 2011) all
reduce to minimization of some form of the objective function (6). To avoid
repetition, we omit detailed discussion of this class of problems and simply
refer readers interested in applications to lasso or fused-lasso penalized
regression, outlier detection, trend filtering, and image restoration to the
original article (Tibshirani and Taylor, 2011). Here we would like to point
out the relevance of the generalized lasso problems to graph-guided penalized
regression (Chen et al., 2010). Suppose each node $i$ of a graph is assigned a
regression coefficient $\beta_{i}$ and a weight $w_{i}$. In graph penalized
regression, the objective function takes the form
$\displaystyle\frac{1}{2}\|\boldsymbol{W}(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta})\|_{2}^{2}+\lambda_{\text{G}}\sum_{i\sim
j}\left|\frac{\beta_{i}}{\sqrt{d_{i}}}-\text{sgn}(r_{ij})\frac{\beta_{j}}{\sqrt{d_{j}}}\right|+\lambda_{\text{L}}\sum_{j}|\beta_{j}|,$
(41)
where the set of neighboring pairs $i\sim j$ define the graph, $d_{i}$ is the
degree of node $i$, and $r_{ij}$ is the correlation coefficient between $i$
and $j$. Under a line graph, the objective function (41) reduces to the fused
lasso. In 2-dimensional imaging applications, the graph consists of
neighboring pixels in the plane, and minimization of the function (41) is
accomplished by total variation algorithms. In MRI images, the graph is
defined by neighboring pixels in 3 dimensions. Penalties are introduced in
image reconstruction and restoration to enforce smoothness. In microarray
analysis, the graph reflects gene networks. Smoothing the $\beta_{i}$ over the
network is motivated by the assumption that the expression levels of related
genes should rise and fall in a coordinated fashion. Ridge regularization in
graph penalized regression (Li and Li, 2008) is achieved by changing the
objective function to
$\displaystyle\frac{1}{2}\|\boldsymbol{W}(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta})\|_{2}^{2}+\lambda_{\text{G}}\sum_{i\sim
j}\left(\frac{\beta_{i}}{\sqrt{d_{i}}}-\text{sgn}(r_{ij})\frac{\beta_{j}}{\sqrt{d_{j}}}\right)^{2}+\lambda_{\text{L}}\sum_{j}|\beta_{j}|.$
If one fixes either of the tuning constants in these models, our path
algorithm delivers the solution path as a function of the other tuning
constant. Alternatively, one can fix the ratio of the two tuning constants.
Finally, the extension
$\displaystyle\frac{1}{2}\|\boldsymbol{Y}-\boldsymbol{X}\boldsymbol{B}\|_{\text{F}}^{2}+\lambda_{\text{G}}\sum_{i\sim
j}\sum_{k=1}^{K}\left|\frac{\beta_{ki}}{\sqrt{d_{i}}}-\text{sgn}(r_{ij})\frac{\beta_{kj}}{\sqrt{d_{j}}}\right|+\lambda_{\text{L}}\sum_{k,i}|\beta_{k,i}|$
of the objective function to multivariate response models is obvious.
In principle, the path algorithm applies to all of these problems provided the
design matrix $\boldsymbol{X}$ has full column rank. If $\boldsymbol{X}$ has
reduced rank, then it is advisable to add a small amount of ridge
regularization $\epsilon\sum_{i}\beta_{i}^{2}$ to the objective function
(Tibshirani and Taylor, 2011). Even so, computation of the unpenalized
solution may be problematic in high dimensions. Alternatively, path following
can be conducted starting from the fully constrained problem as suggested in
Section 5.
### 7.3 Shape Restricted Regressions
Order-constrained regression is now widely accepted as an important modeling
tool (Robertson et al., 1988; Silvapulle and Sen, 2005). If
$\boldsymbol{\beta}$ is the parameter vector, monotone regression includes
isotone constraints $\beta_{1}\leq\beta_{2}\leq\cdots\leq\beta_{m}$ or
antitone constraints $\beta_{1}\geq\beta_{2}\geq\cdots\geq\beta_{m}$. In
partially ordered regression, subsets of the parameters are subject to isotone
or antitone constraints. In other problems it is sensible to impose convex or
concave constraints. If observations are collected at irregularly spaced time
points $t_{1}\leq t_{2}\leq\cdots\leq t_{m}$, then convexity translates into
the constraints
$\displaystyle\frac{\beta_{i+2}-\beta_{i+1}}{t_{i+2}-t_{i+1}}$
$\displaystyle\geq$ $\displaystyle\frac{\beta_{i+1}-\beta_{i}}{t_{i+1}-t_{i}}$
for $1\leq i\leq m-2$. When the time intervals are uniform, these convex
constraints become $\beta_{i+2}-\beta_{i+1}\geq\beta_{i+1}-\beta_{i}$.
Concavity translates into the opposite set of inequalities. All of these
shape-restricted regression problems can be solved by path following.
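In the notation of problem (6), each convexity inequality contributes one constraint row. The sketch below assembles these rows for arbitrary time points (the helper name is ours); a convex function sampled at the points satisfies every inequality, while a concave one violates some of them.

```python
import numpy as np

def convexity_constraints(t):
    """Rows v_i of the convexity inequalities v_i^t beta >= 0 built from
    (beta_{i+2}-beta_{i+1})/(t_{i+2}-t_{i+1}) >= (beta_{i+1}-beta_i)/(t_{i+1}-t_i)."""
    t = np.asarray(t, dtype=float)
    m = len(t)
    V = np.zeros((m - 2, m))
    for i in range(m - 2):
        d1, d2 = t[i + 1] - t[i], t[i + 2] - t[i + 1]
        V[i, i] = 1.0 / d1
        V[i, i + 1] = -1.0 / d1 - 1.0 / d2
        V[i, i + 2] = 1.0 / d2
    return V

t = np.array([0.0, 1.0, 3.0, 4.0, 7.0])   # irregularly spaced example points
V = convexity_constraints(t)
```

With uniform spacing the rows reduce to the second-difference pattern $(1,-2,1)$, recovering the constraints $\beta_{i+2}-2\beta_{i+1}+\beta_{i}\geq 0$ stated in the text.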
As an example of partial isotone regression, we fit the data from Table 1.3.1
of the reference (Robertson et al., 1988) on the first-year grade point
averages (GPA) of 2397 University of Iowa freshmen. These data can be
downloaded as part of the R package ic.infer. The ordinal predictors high
school rank (as a percentile) and ACT (a standard aptitude test) score are
discretized into nine ordered categories each. A rational admission policy
based on these two predictor sets should be isotone separately within each
set. Figure 4 shows the unconstrained and constrained solutions for the
intercept and the two predictor sets and the solution path of the regression
coefficients for the high school rank predictor.
$\begin{array}[]{cc}\includegraphics[width=162.6075pt]{iowagpa_result}&\includegraphics[width=162.6075pt]{iowagpa_solpath}\end{array}$
Figure 4: Left: Unconstrained and constrained estimates for the Iowa GPA data.
Right: Solution paths for the regression coefficients corresponding to high
school rank.
The same authors (Robertson et al., 1988) predict the probability of obtaining
a B or better college GPA based on high school GPA and ACT score. In their
data covering 1490 college students, $\bar{y}_{ij}$ is the proportion of
students who obtain a B or better college GPA among the $n_{ij}$ students who
are within the $i$th ACT category and the $j$th high school GPA category.
Prediction is achieved by minimizing the criterion
$\sum_{i}\sum_{j}n_{ij}(\bar{y}_{ij}-\theta_{ij})^{2}$ subject to the matrix
partial-order constraints $\theta_{11}\geq 0$,
$\theta_{ij}\leq\theta_{i+1,j}$, and $\theta_{ij}\leq\theta_{i,j+1}$. Figure 5
shows the solution path and the residual sum of squares and effective degrees
of freedom along the path. The latter vividly illustrates the tradeoff between
goodness of fit and degrees of freedom. Readers can consult page 33 of the
reference (Robertson et al., 1988) for the original data and the constrained
parameter estimates.
$\begin{array}[]{cc}\includegraphics[width=162.6075pt]{gpapred_solpath}&\includegraphics[width=162.6075pt]{gpapred_ssdf_path}\end{array}$
Figure 5: GPA prediction example. Left: Solution path for the predicted
probabilities. Right: Residual sum of squares and effective degrees of freedom
along the path.
### 7.4 Nonparametric Shape-Restricted Regression
In this section we visit a few problems amenable to the path algorithm arising
in nonparametric statistics. Given data $(x_{i},y_{i})$, $i=1,\ldots,n$, and a
weight function $w(x)$, nonparametric least squares seeks a regression
function $\theta(x)$ minimizing the criterion
$\displaystyle\sum_{i=1}^{n}w(x_{i})[y_{i}-\theta(x_{i})]^{2}$ (42)
over a space ${\cal C}$ of functions with shape restrictions. In concave
regression for instance, ${\cal C}$ is the space of concave functions. This
seemingly intractable infinite dimensional problem can be simplified by
minimizing the least squares criterion (3) subject to inequality constraints.
For a univariate predictor and concave regression, the constraints (4) are
pertinent. The piecewise linear function extrapolated from the estimated
$\theta_{i}$ is clearly concave. The consistency of concavity constrained
least squares is proved by Hanson and Pledger (1976); the asymptotic
distribution of the corresponding estimator and its rate of convergence are
investigated in later papers (Groeneboom et al., 2001; Mammen, 1991). Other
relevant shape restrictions for univariate predictors include monotonicity
(Brunk, 1955; Grenander, 1956), convexity (Groeneboom et al., 2001),
supermodularity (Beresteanu, 2004), and combinations of these.
Multidimensional nonparametric estimation is much harder because there is no
natural order on $\mathbb{R}^{d}$ when $d>1$. One fruitful approach to shape-
restricted regression relies on sieve estimators (Beresteanu, 2004; Shen and
Wong, 1994). The general idea is to introduce a basis of local functions (for
example, normalized B-splines) centered on the points of a grid
$\boldsymbol{G}$ spanning the support of the covariate vectors
$\boldsymbol{x}_{i}$. Admissible estimators are then limited to linear
combinations of the basis functions subject to restrictions on the estimates
at the grid points. Estimation can be formalized as minimization of the
criterion
$\|\boldsymbol{y}-\boldsymbol{\Psi}(\boldsymbol{X})\boldsymbol{\theta}\|_{2}^{2}$
subject to the constraints
$\boldsymbol{C}\boldsymbol{\Psi}(\boldsymbol{G})\boldsymbol{\theta}\leq{\bf
0}$, where $\boldsymbol{\Psi}(\boldsymbol{X})$ is the matrix of basis
functions evaluated at the covariate vectors $\boldsymbol{x}_{i}$,
$\boldsymbol{\Psi}(\boldsymbol{G})$ is the matrix of basis functions evaluated
at the grid points, and $\boldsymbol{\theta}$ is a vector of regression
coefficients. The linear inequality constraints incorporated in the matrix
$\boldsymbol{C}$ reflect the required shape restrictions. Estimation
is performed on a sequence of grids (a sieve). Controlling the rate at which
the sieve sequence converges yields a consistent estimator (Beresteanu, 2004;
Shen and Wong, 1994). Prediction reduces to interpolation, and the path
algorithm provides a computational engine for sieve estimation.
A related but different approach for multivariate convex regression minimizes
the least squares criterion (3) subject to the constraints
$\boldsymbol{\xi}_{i}^{t}(\boldsymbol{x}_{j}-\boldsymbol{x}_{i})\leq\theta_{j}-\theta_{i}$
for every ordered pair $(i,j)$. In effect, $\theta_{i}$ is viewed as the value
of the regression function $\theta(\boldsymbol{x})$ at the point
$\boldsymbol{x}_{i}$. The unknown vector $\boldsymbol{\xi}_{i}$ serves as a
subgradient of $\theta(\boldsymbol{x})$ at $\boldsymbol{x}_{i}$. Because
convexity is preserved by maxima, the formula
$\displaystyle\theta(\boldsymbol{x})$ $\displaystyle=$
$\displaystyle\max_{j}\Big{[}\theta_{j}+\boldsymbol{\xi}_{j}^{t}(\boldsymbol{x}-\boldsymbol{x}_{j})\Big{]}$
defines a convex function with value $\theta_{i}$ at
$\boldsymbol{x}=\boldsymbol{x}_{i}$. In concave regression the opposite
constraint inequalities are imposed. Interpolation of predicted values in this
model is accomplished by simply taking minima or maxima. Estimation reduces to
a positive semidefinite quadratic program involving $n(d+1)$ variables and
$n(n-1)$ inequality constraints. Note that the feasible region is nontrivial
because setting all $\theta_{i}=0$ and all $\boldsymbol{\xi}_{i}={\bf 0}$
works. In implementing the extension of the path algorithm mentioned in
Section 5, the large number of constraints may prove to be a hindrance and
lead to very short path segments. To improve estimation of the subgradients,
it might be worth adding a small multiple of the ridge penalty
$\sum_{i}\|\boldsymbol{\xi}_{i}\|_{2}^{2}$ to the objective function (3). This
would have the beneficial effect of turning a semidefinite quadratic program
into a positive definite quadratic program.
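The max-of-supporting-hyperplanes interpolant is immediate to code. In the sketch below, the estimated pairs $(\theta_{j},\boldsymbol{\xi}_{j})$ are stood in for by the values and gradients of a known convex function, which makes the two defining properties easy to verify: the interpolant reproduces $\theta_{i}$ at each $\boldsymbol{x}_{i}$, and it never exceeds the true function elsewhere.

```python
import numpy as np

def convex_interpolant(theta, xi, Xpts):
    """The function x -> max_j [theta_j + xi_j^t (x - x_j)] from the text."""
    def f(x):
        x = np.asarray(x, dtype=float)
        return max(theta[j] + xi[j] @ (x - Xpts[j]) for j in range(len(theta)))
    return f

rng = np.random.default_rng(3)
Xpts = rng.standard_normal((6, 2))
theta = np.sum(Xpts ** 2, axis=1)     # values of the known convex function |x|^2
xi = 2.0 * Xpts                       # its gradients play the role of subgradients
f = convex_interpolant(theta, xi, Xpts)
```

Because each term is a supporting hyperplane, the maximum interpolates exactly at the data points and minorizes the convex function everywhere else; for concave regression one would take minima instead.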
## 8 Conclusions
Our new path algorithm for convex quadratic programming under affine
constraints generalizes previous path algorithms for lasso penalized
regression and generalized lasso penalized regression. By directly attacking
the primal problem, the new algorithm avoids the circuitous tactic of solving
the dual problem and translating the solution back to the primal problem. Our
various examples confirm the path algorithm’s versatility. Its potential
disadvantages involve computing the initial point
$-\boldsymbol{A}^{-1}\boldsymbol{b}$ and storing the tableau. In problems with
large numbers of parameters, neither of these steps is trivial. However, if
$\boldsymbol{A}$ has enough structure, then an explicit inverse may exist. As
we noted, once $\boldsymbol{A}^{-1}$ is computed, there is no need to store
the entire tableau. The multi-task regression problem with a large number of
responses per case is a typical example where computation of $\boldsymbol{A}$
simplifies. In settings where the matrix $\boldsymbol{A}$ is singular,
parameter constraints may compensate. We have briefly indicated how to conduct
path following in this circumstance.
Our path algorithm qualifies as a general convex quadratic program solver.
Custom algorithms have been developed for many special cases of quadratic
programming. For example, the pool-adjacent-violators algorithm (PAVA) is now
the standard approach to isotone regression (de Leeuw et al., 2009). Other
generic methods of quadratic programming include active set and interior point
methods. A comparison with our path algorithm would be illuminating, but in
the interests of brevity we refrain from tackling the issue here. The path
algorithm bears a closer resemblance to the active set method. Indeed, both
operate by deleting and adding constraints to a working active set. However,
the active set method must start with a feasible point, and interior point
methods must start with points in the relative interior of the feasible
region. The path algorithm’s ability to deliver the whole regularized path
at little additional computational cost beyond constrained estimation is bound
to be appealing to statisticians.
## Acknowledgements
Kenneth Lange acknowledges support from United States Public Health Service
grants GM53275, MH59490, CA87949, and CA16042.
## References
* Beresteanu (2004) Beresteanu, A. (2004). Nonparametric estimation of regression functions under restrictions on partial derivatives. Technical report.
* Brunk (1955) Brunk, H. D. (1955). Maximum likelihood estimates of monotone parameters. Ann. Math. Statist. 26, 607–616.
* Chen et al. (2010) Chen, X., Q. Lin, S. Kim, J. Carbonell, and E. Xing (2010). An efficient proximal gradient method for general structured sparse learning. arXiv:1005.4717.
* de Leeuw et al. (2009) de Leeuw, J., K. Hornik, and P. Mair (2009). Isotone optimization in R: Pool-adjacent-violators algorithm (PAVA) and active set methods. Journal of Statistical Software 32(5), 1–24.
* Dempster (1969) Dempster, A. P. (1969). Elements of Continuous Multivariate Analysis. Addison-Wesley series in behavioral sciences. Reading, MA: Addison-Wesley.
* Efron (2004) Efron, B. (2004). The estimation of prediction error: covariance penalties and cross-validation. J. Amer. Statist. Assoc. 99(467), 619–642. With comments and a rejoinder by the author.
* Efron et al. (2004) Efron, B., T. Hastie, I. Johnstone, and R. Tibshirani (2004). Least angle regression. Ann. Statist. 32(2), 407–499. With discussion, and a rejoinder by the authors.
* Friedman (2008) Friedman, J. (2008). Fast sparse regression and classification. http://www-stat.stanford.edu/~jhf/ftp/GPSpaper.pdf.
* Goodnight (1979) Goodnight, J. H. (1979). A tutorial on the sweep operator. The American Statistician 33(3), 149–158.
* Grenander (1956) Grenander, U. (1956). On the theory of mortality measurement. II. Skand. Aktuarietidskr. 39, 125–153 (1957).
* Groeneboom et al. (2001) Groeneboom, P., G. Jongbloed, and J. A. Wellner (2001). Estimation of a convex function: characterizations and asymptotic theory. Ann. Statist. 29(6), 1653–1698.
* Hanson and Pledger (1976) Hanson, D. L. and G. Pledger (1976). Consistency in concave regression. Ann. Statist. 4(6), 1038–1050.
* Hildreth (1954) Hildreth, C. (1954). Point estimates of ordinates of concave functions. J. Amer. Statist. Assoc. 49, 598–619.
* Jennrich (1977) Jennrich, R. (1977). Stepwise regression. In Statistical Methods for Digital Computers, pp. 58–75. New York: Wiley-Interscience.
* Lange (2010) Lange, K. (2010). Numerical Analysis for Statisticians (Second ed.). Statistics and Computing. New York: Springer.
* Lawson and Hanson (1987) Lawson, C. L. and R. J. Hanson (1987). Solving Least Squares Problems (Classics in Applied Mathematics) (New edition ed.). Society for Industrial Mathematics.
* Li and Li (2008) Li, C. and H. Li (2008). Network-constrained regularization and variable selection for analysis of genomic data. Bioinformatics 24(9), 1175–1182.
* Little and Rubin (2002) Little, R. J. A. and D. B. Rubin (2002). Statistical Analysis with Missing Data (Second ed.). Wiley Series in Probability and Statistics. Hoboken, NJ: Wiley-Interscience [John Wiley & Sons].
* Mammen (1991) Mammen, E. (1991). Nonparametric regression under qualitative smoothness assumptions. Ann. Statist. 19(2), 741–759.
* Meyer and Woodroofe (2000) Meyer, M. and M. Woodroofe (2000). On the degrees of freedom in shape-restricted regression. Ann. Statist. 28(4), 1083–1104.
* Nocedal and Wright (2006) Nocedal, J. and S. J. Wright (2006). Numerical Optimization (Second ed.). Springer Series in Operations Research and Financial Engineering. New York: Springer.
* Robertson et al. (1988) Robertson, T., F. T. Wright, and R. L. Dykstra (1988). Order restricted statistical inference. Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics. Chichester: John Wiley & Sons Ltd.
* Rosset and Zhu (2007) Rosset, S. and J. Zhu (2007). Piecewise linear regularized solution paths. Ann. Statist. 35(3), 1012–1030.
* Ruszczyński (2006) Ruszczyński, A. (2006). Nonlinear Optimization. Princeton, NJ: Princeton University Press.
* Savage (1997) Savage, C. (1997). A survey of combinatorial Gray codes. SIAM Rev. 39(4), 605–629.
* Schoenfeld (1986) Schoenfeld, D. A. (1986). Confidence bounds for normal means under order restrictions, with application to dose-response curves, toxicology experiments, and low-dose extrapolation. Journal of the American Statistical Association 81(393), 186–195.
* Shen and Wong (1994) Shen, X. and W. H. Wong (1994). Convergence rate of sieve estimates. Ann. Statist. 22(2), 580–615.
* Silvapulle and Sen (2005) Silvapulle, M. J. and P. K. Sen (2005). Constrained Statistical Inference. Wiley Series in Probability and Statistics. Hoboken, NJ: Wiley-Interscience [John Wiley & Sons]. Inequality, order, and shape restrictions.
* Stein (1981) Stein, C. M. (1981). Estimation of the mean of a multivariate normal distribution. Ann. Statist. 9(6), 1135–1151.
* Tibshirani and Taylor (2011) Tibshirani, R. and J. Taylor (2011). The solution path of the generalized lasso. Ann. Statist. to appear.
* Zou et al. (2007) Zou, H., T. Hastie, and R. Tibshirani (2007). On the “degrees of freedom” of the lasso. Ann. Statist. 35(5), 2173–2192.
arXiv:1103.3738 — Hua Zhou and Kenneth Lange (License: Creative Commons Attribution 3.0)

arXiv:1103.3787
# Pattern-recalling processes in quantum Hopfield networks far from saturation
Jun-ichi Inoue Graduate School of Information Science and Technology,
Hokkaido University, N14-W9, Kita-ku, Sapporo 060-0814, Japan
j_inoue@complex.ist.hokudai.ac.jp
###### Abstract
As a mathematical model of associative memory, the Hopfield model is now
well established, and many studies aiming to reveal its pattern-recalling
process have been carried out from various approaches. As is well known, a
single neuron is itself an uncertain, noisy unit with a finite, non-negligible
error in its input-output relation. To model this situation artificially, a
kind of ‘heat bath’ surrounding the neurons is introduced. The heat bath,
which is a source of noise, is specified by a ‘temperature’. Several studies
concerning the pattern-recalling processes of the Hopfield model governed by
Glauber dynamics at finite temperature have already been reported. However,
one may extend the ‘thermal noise’ to a quantum-mechanical variant. In this
paper, in terms of the stochastic process of the quantum-mechanical Markov
chain Monte Carlo method (the quantum MCMC), we analytically derive
macroscopically deterministic equations for order parameters such as the
‘overlap’ in a quantum-mechanical variant of the Hopfield neural network
(which we call the quantum Hopfield model or quantum Hopfield network). For
the case in which a non-extensive number $p$ of patterns is embedded via
asymmetric Hebbian connections, namely, $p/N\to 0$ as the number of neurons
$N\to\infty$ (‘far from saturation’), we evaluate the recalling process for
one of the built-in patterns under the influence of quantum-mechanical noise.
## 1 Introduction
The basic concept of associative memory in artificial neural networks was
proposed in the early 1970s by the Japanese engineer Kaoru Nakano [1].
Unfortunately, at that time, few researchers took an interest in his model.
In the 1980s, however, J.J. Hopfield [2, 3] pointed out that there exists an
energy function (a Lyapunov function) in the so-called associatron (the
Nakano model) and that the system can be treated as a kind of spin glass.
After his study, many researchers working in condensed matter physics took up
the so-called Hopfield model as a brand-new ‘target material’. Among these
studies, remarkable progress was made by three theoretical physicists, Amit,
Gutfreund and Sompolinsky [4], who clearly (mathematically) defined the
concept of storage capacity in the Hopfield model as a critical point at
which the system undergoes a phase transition from the ferromagnetic
retrieval phase to the spin glass phase, by utilizing the replica method.
They also introduced noise that prevents the network from retrieving one of
the built-in patterns, modelled as a ‘heat bath’ surrounding the neurons. They
drew the phase diagram, which is composed of three distinct phases, namely,
the ferromagnetic-retrieval, paramagnetic and spin glass phases. The phase
boundaries are specified by two control parameters, namely, the storage
capacity and the temperature of the heat bath.
To evaluate the storage capacity of Hopfield-type models lacking an energy
function (for instance, the Hopfield model with a non-monotonic input-output
function [5]), Shiino and Fukai [6] proposed the so-called Self-Consistent
Signal-to-Noise Analysis (SCSNA), which enables us to derive a set of
self-consistent macroscopic equations of state by making use of the concept
of the TAP equations [3].
As mentioned above, these theoretical arguments are constructed for the case
in which each neuron is surrounded by a heat bath at finite temperature. In
this sense, the above studies revealed the robustness of associative memory
against thermal noise in the artificial brain.
However, we may also consider a different kind of noise, namely quantum-
mechanical noise. In such successful attempts, Ma and Gong [7] and Nishimori
and Nonomura [8] independently introduced quantum-mechanical noise into the
conventional Hopfield model by adding a transverse field to the classical
Hamiltonian (from now on, we refer to this model as the quantum Hopfield
model). In particular, Nishimori and Nonomura [8] investigated the structure
of the retrieval phase diagram by using the standard replica method with the
assistance of the static approximation. Therefore, we may say that the
equilibrium properties of the Hopfield model are now well understood and the
methodology to investigate the model is well established.
On the other hand, a theory of the dynamics for evaluating the pattern-
recalling processes is not yet well established. Nevertheless, several
powerful approaches have been proposed. For instance, Amari and Maginu [9]
pointed out that the relevant macroscopic quantities in synchronous
neuro-dynamics are the overlap (the direction cosine) and the noise variance,
and they derived update equations for these quantities. The so-called
Amari-Maginu theory was later improved by Okada [10], who took into account
the correlations in the noise variances.
For asynchronous dynamics, Coolen and his co-authors established a general
approach, the so-called dynamical replica theory [11, 12]. They utilized two
assumptions, namely, equipartitioning in the sub-shells and self-averaging of
the intrinsic noise distribution, to derive deterministic flow equations for
the relevant macroscopic quantities. However, there is so far no such
theoretical framework for investigating the pattern-recalling process of the
quantum Hopfield model systematically.
In this paper, we propose a candidate dynamical theory to deal with the
pattern-recalling processes in the quantum Hopfield model. We consider the
stochastic process of the quantum Monte Carlo method applied to the quantum
Hopfield model and investigate the quantum neuro-dynamics through the
differential equations for macroscopic quantities such as the overlap.
This paper is organized as follows. In section 2, we explain the basics of
the conventional Hopfield model and its generic properties. We then
categorize the model into two distinct classes according to the origin of the
noise in the artificial brain, and clearly define the quantum Hopfield model.
In section 3, we explain the quantum Monte Carlo method based on the Suzuki-
Trotter decomposition [13] and consider the stochastic process used to
investigate the pattern-recalling dynamics of the quantum Hopfield model. In
section 4, we derive the macroscopic deterministic flow of the overlap between
the neuronal state and one of the built-in patterns from the microscopic
master equation that describes the stochastic processes of the quantum Monte
Carlo method for the quantum Hopfield model [14]. The general solution of the
dynamics is obtained under the so-called static approximation. In section 5,
we apply our general solution to a special case, namely, the sequential
recall of two built-in patterns via asymmetric Hebb connections. The effect
of quantum-mechanical noise is compared with that of conventional thermal
noise. The last section is a summary.
## 2 The Hopfield model
In this section, we briefly explain the basics of the conventional Hopfield
model. We then divide the model into two classes, namely, the Hopfield model
subject to thermal noise (referred to as the classical system) and the same
model subject to quantum-mechanical noise (referred to as the quantum system).
In the following, we define each of the models explicitly.
### 2.1 The classical system
Let us consider a network having $N$ neurons. Each neuron $S_{i}$ takes two
states, namely, $S_{i}=+1$ (firing) and $S_{i}=-1$ (stationary). The neuronal
state is given by the set of variables $S_{i}$, that is,
$\bm{S}=(S_{1},\cdots,S_{N}),\,S_{i}\in\\{+1,-1\\}$. Each neuron is located on
a complete graph, namely, the topology of the network is ‘fully connected’.
The synaptic connection between arbitrary two neurons, say, $S_{i}$ and
$S_{j}$ is defined by the following Hebb rule:
$J_{ij}=\frac{1}{N}\sum_{\mu,\nu}\xi_{i}^{\mu}A_{\mu\nu}\xi_{j}^{\nu}$ (1)
where $\bm{\xi}^{\mu}=(\xi_{1}^{\mu},\cdots,\xi_{N}^{\mu}),\,\xi_{i}^{\mu}\in\\{+1,-1\\}$
denote the embedded patterns, each specified by a label
$\mu=1,\cdots,P$. $A_{\mu\nu}$ denotes a $(P\times P)$ matrix and $P$
stands for the number of built-in patterns. We should keep in mind that there
exists an energy function (a Lyapunov function) in the system if the matrix
$A_{\mu\nu}$ is symmetric.
Then, the output of neuron $i$, that is, $S_{i}$, is determined by the sign
of the local field $h_{i}$ as
$h_{i}=\sum_{\mu,\nu=1}^{p}\xi_{i}^{\mu}A_{\mu\nu}m^{\nu}+\frac{1}{N}\sum_{\mu^{{}^{\prime}},\nu^{{}^{\prime}}=p+1}^{P}\xi_{i}^{\mu^{{}^{\prime}}}A_{\mu^{{}^{\prime}}\nu^{{}^{\prime}}}\sum_{j}\xi_{j}^{\nu^{{}^{\prime}}}S_{j}$
(2)
where $A_{\mu\nu}$ and $A_{\mu^{{}^{\prime}}\nu^{{}^{\prime}}}$ are elements
of the $p\times p$ and $(P-p)\times(P-p)$ matrices, respectively. We also
define the overlap (the direction cosine) between the state of the neurons
$\bm{S}$ and one of the built-in patterns $\bm{\xi}^{\nu}$ by
$\displaystyle m^{\nu}$ $\displaystyle\equiv$
$\displaystyle\frac{1}{N}\,(\bm{S}\cdot\bm{\xi}^{\nu})=\frac{1}{N}\sum_{i}\xi_{i}^{\nu}S_{i}.$
(3)
Here we should note that the Hamiltonian of the system is given by
$-\sum_{i}h_{i}S_{i}$. The first term appearing on the right-hand side of
equation (2) is the contribution from the $p\sim{\cal O}(1)$ so-called
‘condensed patterns’, whereas the second term stands for the so-called
‘cross-talk noise’. In this paper, we restrict ourselves to the case in which
the second term is negligibly small in comparison with the first, namely, the
case of $P=p\sim\mathcal{O}(1)$. In this sense, we can say that the network is
‘far from its saturation’.
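As a small numerical illustration of the Hebb rule (1) and the overlap (3), the following sketch builds the couplings for a few random patterns; the symmetric choice $A=I$ and the specific sizes are our own assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 3
xi = rng.choice([-1, 1], size=(P, N)).astype(float)  # P built-in patterns
A = np.eye(P)                     # symmetric A: the standard Hebb rule

# Hebb connections J_ij = (1/N) * sum_{mu,nu} xi_i^mu A_{mu nu} xi_j^nu
J = xi.T @ A @ xi / N

S = xi[0].copy()                  # neuronal state set equal to pattern 0
m = xi @ S / N                    # overlaps m^nu = (1/N) sum_i xi_i^nu S_i
```

With the state pinned to pattern 0, the overlap $m^{0}$ equals $1$ exactly, while the overlaps with the other random patterns are of order $1/\sqrt{N}$, which is the origin of the cross-talk noise discussed above.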
### 2.2 The quantum system
To extend the classical system to the quantum-mechanical variant, we rewrite
the local field $h_{i}$ as follows.
$\bm{\phi}_{i}=\sum_{\mu,\nu=1}^{p}\xi_{i}^{\mu}A_{\mu\nu}\left(\frac{1}{N}\sum_{j}\xi_{j}^{\nu}\bm{\sigma}_{j}^{z}\right)$
(4)
where $\bm{\sigma}^{z}_{i}$ ($i=1,\cdots,N$) stands for the $z$-component of
the Pauli matrix. Thus, the Hamiltonian
$\bm{H}_{0}\equiv-\sum_{i}\bm{\phi}_{i}\bm{\sigma}_{i}^{z}$ is a diagonal
$(2^{N}\times 2^{N})$ matrix, and its lowest eigenvalue is identical to the
ground-state energy of the classical Hamiltonian $-\sum_{i}\phi_{i}S_{i}$
($S_{i}$ being an eigenvalue of the matrix $\bm{\sigma}_{i}^{z}$).
Then, we introduce quantum-mechanical noise into the Hopfield neural network
by adding the transverse field to the Hamiltonian as follows.
$\bm{H}=\bm{H}_{0}-\Gamma\sum_{i=1}^{N}\bm{\sigma}_{i}^{x}$ (5)
where $\bm{\sigma}_{i}^{x}$ is the $x$-component of the Pauli matrix and
transitions between eigenvectors of the classical Hamiltonian $\bm{H}_{0}$ are
induced due to the off-diagonal elements of the matrix $\bm{H}$ for
$\Gamma\neq 0$. In this paper, we mainly consider the system described by (5).
## 3 Quantum Monte Carlo method
The dynamics of the quantum model (5) follows the Schrödinger equation.
Thus, we should solve it, or investigate the time dependence of the
state $|\psi(t)\rangle$ by using the time-evolution operator $\mbox{\rm
e}^{-i\bm{H}\Delta t/\hbar}$ defined for an infinitesimal time $\Delta t$ as
$\displaystyle|\psi(t+\Delta t)\rangle$ $\displaystyle=$
$\displaystyle\mbox{\rm e}^{-i\bm{H}\Delta t/\hbar}|\psi(t)\rangle.$ (6)
However, even if we carry this out numerically, it is very hard to do so
with reliable precision, because the $(2^{N}\times 2^{N})$ Hamiltonian matrix
becomes huge for a number of neurons $N\gg 1$, as in a realistic brain.
Hence, we here use the quantum Monte Carlo method to simulate the quantum
system on a personal computer and consider stochastic processes of Glauber
type to discuss the pattern-recalling dynamics of the quantum Hopfield
model.
### 3.1 The Suzuki-Trotter decomposition
The difficulty in carrying out algebraic calculations in this model is due
to the non-commuting operators appearing in the Hamiltonian (5), namely,
$\bm{H}_{0}$ and $\bm{H}_{1}\equiv-\Gamma\sum_{i}\bm{\sigma}_{i}^{x}$. Thus,
we use the following Suzuki-Trotter decomposition [13] in order to deal with
the system as a classical spin system.
$\mbox{\rm tr}\,\mbox{\rm
e}^{\beta(\bm{H}_{0}+\bm{H}_{1})}=\lim_{M\to\infty}\mbox{\rm
tr}\left({\exp}\left(\frac{\beta\bm{H}_{0}}{M}\right){\exp}\left(\frac{\beta\bm{H}_{1}}{M}\right)\right)^{M}$
(7)
where $\beta$ denotes the ‘inverse temperature’ and $M$ is the number of
Trotter slices, for which the limit $M\to\infty$ should be taken. In this way,
one can deal with a $d$-dimensional quantum system as the corresponding
($d+1$)-dimensional classical system.
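The Trotter formula (7) is easy to verify numerically for a single spin, where both sides are traces of $2\times 2$ matrices; the parameter values below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

# Check the Suzuki-Trotter formula (7) for a single spin with
# H0 = h*sigma_z and H1 = Gamma*sigma_x.
beta, h, Gamma = 1.0, 0.7, 0.5
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
H0, H1 = h * sz, Gamma * sx

exact = np.trace(expm(beta * (H0 + H1)))   # = 2*cosh(beta*sqrt(h^2 + Gamma^2))

M = 1000                                    # number of Trotter slices
step = expm(beta * H0 / M) @ expm(beta * H1 / M)
trotter = np.trace(np.linalg.matrix_power(step, M))
```

At finite $M$ the two traces differ by a correction that vanishes as $M\to\infty$, which is the content of (7).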
## 4 Derivation of the deterministic flows
In the previous section, we mentioned that we should simulate the quantum
Hopfield model by means of the quantum Monte Carlo method to reveal the
quantum neuro-dynamics through the time-dependence of the macroscopic
quantities such as the overlap. However, in general, it is also very difficult
to simulate the quantum-mechanical properties at the ground state by a
personal computer even for finite size systems ($N,M<\infty$).
With this fact in mind, in this section we attempt to derive the macroscopic
flow equations from the microscopic master equation for the classical system
that represents the quantum system via the Suzuki-Trotter decomposition. This
approach is feasible because the Hopfield model is a fully-connected
mean-field model, like the Sherrington-Kirkpatrick model [15] of spin glasses,
and its equilibrium properties are completely determined by several order
parameters.
### 4.1 The master equation
After the Suzuki-Trotter decomposition (7), the local field for neuron $i$ on
the $k$-th Trotter slice reads
$\displaystyle\beta\phi_{i}(\bm{\sigma}_{k}:\sigma_{i}(k\pm 1))$
$\displaystyle=$
$\displaystyle\frac{\beta}{M}\sum_{\mu,\nu}\xi_{i}^{\mu}A_{\mu\nu}\left\\{\frac{1}{N}\sum_{j}\xi_{j}^{\nu}\sigma_{j}(k)\right\\}+\frac{B}{2}\left\\{\sigma_{i}(k-1)+\sigma_{i}(k+1)\right\\}$
(8)
where parameter $B$ is related to the amplitude of the transverse field (the
strength of the quantum-mechanical noise) $\Gamma$ by
$B=\frac{1}{2}\log\coth\left(\frac{\beta\Gamma}{M}\right).$ (9)
In the classical limit $\Gamma\to 0$, the parameter $B$ goes to infinity. For
a symmetric matrix $A_{\mu\nu}$, the Hamiltonian (scaled by $\beta$) of the
system is given by
$-\sum_{i}\beta\phi_{i}(\mbox{\boldmath$\sigma$}_{k}:\sigma_{i}(k\pm
1))\sigma_{i}(k)$.
Then, the transition probability which specifies the Glauber dynamics of the
system is given by
$w_{i}(\mbox{\boldmath$\sigma$}_{k})=(1/2)[1-\sigma_{i}(k)\tanh(\beta\phi_{i}(\mbox{\boldmath$\sigma$}_{k}:\sigma_{i}(k\pm
1)))]$. More explicitly, $w_{i}(\mbox{\boldmath$\sigma$}_{k})$ denotes the
probability that an arbitrary neuron $\sigma_{i}(k)$ flips its state as
$\sigma_{i}(k)\to-\sigma_{i}(k)$ within a unit of time. Therefore, the
probability that the neuron $\sigma_{i}(k)$ takes the value $+1$ is obtained
by setting $\sigma_{i}(k)=-1$ in the above
$w_{i}(\mbox{\boldmath$\sigma$}_{k})$, and we immediately find
$\sigma_{i}(k)=\sigma_{i}(k-1)=\sigma_{i}(k+1)$ with probability $1$ in the
limit $B\to\infty$, which corresponds to the classical limit $\Gamma\to 0$.
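This Glauber dynamics on the Trotter slices is straightforward to simulate. The sketch below does so for a single embedded pattern ($p=1$, $A=1$); the system sizes, couplings, sweep count, and the parallel update of all spins within a slice are illustrative simplifications of our own, not prescriptions from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 200, 8                      # neurons and Trotter slices (illustrative)
beta, Gamma = 4.0, 0.2             # chosen deep in the retrieval regime
xi = rng.choice([-1.0, 1.0], size=N)               # one embedded pattern
B = 0.5 * np.log(1.0 / np.tanh(beta * Gamma / M))  # eq. (9)

sigma = np.tile(xi, (M, 1))        # start every slice at the pattern
for sweep in range(200):
    for k in range(M):
        m_k = sigma[k] @ xi / N
        nb = sigma[(k - 1) % M] + sigma[(k + 1) % M]
        bphi = (beta / M) * xi * m_k + (B / 2.0) * nb   # eq. (8), times beta
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * bphi))      # heat-bath rule
        sigma[k] = np.where(rng.random(N) < p_plus, 1.0, -1.0)

m = np.mean(sigma @ xi / N)        # overlap averaged over the slices
```

For small $\Gamma$ the inter-slice coupling $B$ is large, the slices stay aligned, and the overlap remains close to 1, in line with the classical-limit argument above.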
Hence, the probability of a microscopic state comprising the $M$ Trotter
slices,
$\\{\bm{\sigma}_{k}\\}\equiv(\bm{\sigma}_{1},\cdots,\bm{\sigma}_{M}),\,\bm{\sigma}_{k}\equiv(\sigma_{1}(k),\cdots,\sigma_{N}(k))$,
obeys the following master equation:
$\displaystyle\frac{dp_{t}(\\{\mbox{\boldmath$\sigma$}_{k}\\})}{dt}$
$\displaystyle=$
$\displaystyle\sum_{k=1}^{M}\sum_{i=1}^{N}[p_{t}(F_{i}^{(k)}(\mbox{\boldmath$\sigma$}_{k}))w_{i}(F_{i}^{(k)}(\mbox{\boldmath$\sigma$}_{k}))-p_{t}(\mbox{\boldmath$\sigma$}_{k})w_{i}(\mbox{\boldmath$\sigma$}_{k})]$
(10)
where $F_{i}^{(k)}(\cdot)$ denotes the single-spin-flip operator for neuron
$i$ on Trotter slice $k$, namely $\sigma_{i}(k)\to-\sigma_{i}(k)$. When we
pick the overlap between the neuronal state $\bm{\sigma}_{k}$ and one of the
built-in patterns $\bm{\xi}^{\nu}$, namely,
$\displaystyle m_{k}$ $\displaystyle\equiv$
$\displaystyle\frac{1}{N}\,(\bm{\sigma}_{k}\cdot\bm{\xi}^{\nu})=\frac{1}{N}\sum_{i}\xi_{i}^{\nu}\sigma_{i}(k)$
(11)
as a relevant macroscopic quantity, the joint distribution of the set of
overlaps $\\{m_{1},\cdots,m_{M}\\}$ at time $t$ is written in terms of the
probability of realizations of microscopic states
$p_{t}(\\{\mbox{\boldmath$\sigma$}_{k}\\})$ at the same time $t$ as
$P_{t}(m_{1}^{\nu},\cdots,m_{M}^{\nu})=\sum_{\\{\mbox{\boldmath$\sigma$}_{k}\\}}p_{t}(\\{\mbox{\boldmath$\sigma$}_{k}\\})\prod_{k=1}^{M}\delta(m_{k}^{\nu}-m_{k}^{\nu}(\mbox{\boldmath$\sigma$}_{k}))$
(12)
where we defined the sums by
$\displaystyle\sum_{\\{\bm{\sigma}_{k}\\}}(\cdots)$ $\displaystyle\equiv$
$\displaystyle\sum_{\bm{\sigma}_{1}}\cdots\sum_{\bm{\sigma}_{M}}(\cdots),\,\,\,\,\,\sum_{\bm{\sigma}_{k}}(\cdots)\equiv\sum_{\sigma_{1}(k)=\pm
1}\cdots\sum_{\sigma_{N}(k)=\pm 1}(\cdots).$ (13)
Taking the derivative of equation (12) with respect to $t$ and substituting
(10) into the result, we obtain the following differential equation for the
joint distribution:
$\displaystyle\frac{dP_{t}(m_{1}^{\nu},\cdots,m_{M}^{\nu})}{dt}$
$\displaystyle=$ $\displaystyle\sum_{k}\frac{\partial}{\partial
m_{k}^{\nu}}\\{m_{k}^{\nu}P_{t}(m_{1}^{\nu},\cdots,m_{k}^{\nu},\cdots,m_{M}^{\nu})\\}$
(14) $\displaystyle-$ $\displaystyle\sum_{k}\frac{\partial}{\partial
m_{k}^{\nu}}{\Biggr{\\{}}P_{t}(m_{1}^{\nu},\cdots,m_{k}^{\nu},\cdots,m_{M}^{\nu})\int_{-\infty}^{\infty}D[\xi^{\nu}]d\xi^{\nu}$
$\displaystyle\times$
$\displaystyle\frac{\sum_{\\{\mbox{\boldmath$\sigma$}_{k}\\}}p_{t}(\\{\mbox{\boldmath$\sigma$}_{k}\\})\xi^{\nu}\tanh[\beta\phi(k)]\prod_{k,i}\delta(m_{k}^{\nu}-m_{k}^{\nu}(\mbox{\boldmath$\sigma$}_{k}))}{\sum_{\\{\mbox{\boldmath$\sigma$}_{k}\\}}p_{t}(\\{\mbox{\boldmath$\sigma$}_{k}\\})\prod_{k}\delta(m_{k}^{\nu}-m_{k}^{\nu}(\mbox{\boldmath$\sigma$}_{k}))}{\Biggr{\\}}}$
$\displaystyle\times$
$\displaystyle\delta(\sigma(k+1)-\sigma_{i}(k+1))\delta(\sigma(k-1)-\sigma_{i}(k-1))$
where we introduced several notations
$D[\xi^{\nu}]\equiv\frac{1}{N}\sum_{i}\delta(\xi^{\nu}-\xi_{i}^{\nu})$ (15)
$\beta\phi(k)\equiv\frac{\beta\sum_{\mu\nu}\xi^{\mu}A_{\mu\nu}}{M}m_{k}^{\nu}+\frac{B}{2}\sigma(k-1)+\frac{B}{2}\sigma(k+1)$
(16)
for simplicity.
Here we should note that if the local field $\beta\phi(k)$ were independent of
the microscopic variables $\\{\bm{\sigma}_{k}\\}$, one could avoid the
complicated expectation of the quantity $\tanh[\beta\phi(k)]$ over the time-
dependent Gibbs measure defined in the sub-shell:
$\prod_{k}\delta(m_{k}^{\nu}-m_{k}^{\nu}(\mbox{\boldmath$\sigma$}_{k}))$. In
that case, the only remaining step to obtain the deterministic flow would be
the data average (the average over the built-in patterns).
However, we clearly see from equation (16) that the local field does depend
on $\\{\bm{\sigma}_{k}\\}$. To overcome this difficulty and to carry out the
calculation, we assume that the probability $p_{t}(\\{\bm{\sigma}_{k}\\})$ of
realizations of microscopic states during the dynamics is independent of $t$,
namely,
$\displaystyle p_{t}(\\{\bm{\sigma}_{k}\\})$ $\displaystyle=$ $\displaystyle
p(\\{\bm{\sigma}_{k}\\}).$ (17)
Then, our average over the time-dependent Gibbs measurement in the sub-shell
is rewritten as
$\displaystyle\frac{\sum_{\\{\mbox{\boldmath$\sigma$}_{k}\\}}p_{t}(\\{\mbox{\boldmath$\sigma$}_{k}\\})\xi^{\nu}\tanh[\beta\phi(k)]\prod_{k,i}\delta(m_{k}^{\nu}-m_{k}^{\nu}(\mbox{\boldmath$\sigma$}_{k}))}{\sum_{\\{\mbox{\boldmath$\sigma$}_{k}\\}}p_{t}(\\{\mbox{\boldmath$\sigma$}_{k}\\})\prod_{k}\delta(m_{k}^{\nu}-m_{k}^{\nu}(\mbox{\boldmath$\sigma$}_{k}))}$
(18) $\displaystyle\times$
$\displaystyle\delta(\sigma(k+1)-\sigma_{i}(k+1))\delta(\sigma(k-1)-\sigma_{i}(k-1))$
$\displaystyle\equiv$
$\displaystyle\langle\xi^{\nu}\tanh[\beta\phi(k)]\prod_{i}\delta(\sigma(k+1)-\sigma_{i}(k+1))\delta(\sigma(k-1)-\sigma_{i}(k-1))\rangle_{*}$
where $\langle\cdots\rangle_{*}$ stands for the average in the sub-shell
defined by $m_{k}^{\nu}=m_{k}^{\nu}(\bm{\sigma}_{k})\,(\forall_{k})$:
$\langle\cdots\rangle_{*}\equiv\frac{\sum_{\\{\mbox{\boldmath$\sigma$}_{k}\\}}p(\\{\mbox{\boldmath$\sigma$}_{k}\\})(\cdots)\prod_{k}\delta(m_{k}^{\nu}-m_{k}^{\nu}(\mbox{\boldmath$\sigma$}_{k}))}{\sum_{\\{\mbox{\boldmath$\sigma$}_{k}\\}}p(\\{\mbox{\boldmath$\sigma$}_{k}\\})\prod_{k}\delta(m_{k}^{\nu}-m_{k}^{\nu}(\mbox{\boldmath$\sigma$}_{k}))}$
(19)
If we notice that the Gibbs measurement in the sub-shell is rewritten as
$\displaystyle\sum_{\\{\mbox{\boldmath$\sigma$}_{k}\\}}p(\\{\mbox{\boldmath$\sigma$}_{k}\\})\prod_{k}\delta(m_{k}^{\nu}-m_{k}^{\nu}(\mbox{\boldmath$\sigma$}_{k}))$
$\displaystyle=$ $\displaystyle\mbox{\rm
tr}_{\\{\sigma\\}}{\exp}\left[\beta\sum_{l=1}^{M}\phi(l)\sigma(l)\right]$ (20)
($\mbox{\rm
tr}_{\\{\sigma\\}}(\cdots)\equiv\prod_{k}\sum_{\mbox{\boldmath$\sigma$}_{k}}(\cdots)$),
and the quantity
$\tanh\left[\beta\phi(k)\right]=\frac{\sum_{\sigma(k)=\pm
1}\sigma(k)\,{\exp}[\beta\phi(k)\sigma(k)]}{\sum_{\sigma(k)=\pm
1}{\exp}[\beta\phi(k)\sigma(k)]}$ (21)
is independent of $\sigma(k)$, the average appearing in (18) leads to
$\displaystyle\langle\xi^{\nu}\tanh[\beta\phi(k)]\prod_{i}\delta(\sigma(k\pm
1)-\sigma_{i}(k\pm 1))\rangle_{*}$ $\displaystyle=$
$\displaystyle\frac{\mbox{\rm
tr}_{\\{\sigma\\}}\xi^{\nu}\\{\frac{1}{M}\sum_{l=1}^{M}\sigma(l)\\}\exp[\beta\phi(k)\sigma(k)]}{{\rm
tr}_{\\{\sigma\\}}\exp[\beta\phi(k)\sigma(k)]}$ (22) $\displaystyle\equiv$
$\displaystyle\xi^{\nu}\langle\sigma\rangle_{path}^{(\xi^{\nu})}$
in the limit of $M\to\infty$. This is nothing but a path integral for the
effective single neuron problem in which the neuron updates its state along
the imaginary time axis: $\mbox{\rm
tr}_{\\{\sigma\\}}(\cdots)\equiv\sum_{\sigma(1)=\pm
1}\cdots\sum_{\sigma(M)=\pm 1}(\cdots)$ with weights
${\exp}[\beta\phi(k)\sigma(k)],\,(k=1,\cdots,M)$.
Then, the differential equation (14) leads to
$\displaystyle\frac{dP_{t}(m_{1}^{\nu},\cdots,m_{M}^{\nu})}{dt}$
$\displaystyle=$ $\displaystyle\sum_{k}\frac{\partial}{\partial
m_{k}^{\nu}}\\{m_{k}^{\nu}P_{t}(m_{1}^{\nu},\cdots,m_{k}^{\nu},\cdots,m_{M}^{\nu})\\}$
(23) $\displaystyle-$ $\displaystyle\sum_{k}\frac{\partial}{\partial
m_{k}^{\nu}}{\Biggr{\\{}}P_{t}(m_{1}^{\nu},\cdots,m_{k}^{\nu},\cdots,m_{M}^{\nu})\int_{-\infty}^{\infty}D[\xi^{\nu}]d\xi^{\nu}\xi^{\nu}\langle\sigma\rangle_{path}^{(\xi^{\nu})}{\Biggr{\\}}}.$
In order to derive a compact form of the differential equations for the
overlaps, we substitute
$P_{t}(m_{1}^{\nu},\cdots,m_{M}^{\nu})=\prod_{k=1}^{M}\delta(m_{k}^{\nu}-m_{k}^{\nu}(t))$
into (23), multiply both sides of the equation by $m_{l}^{\nu}$, and carry
out the integration with respect to $dm_{1}^{\nu}\cdots dm_{M}^{\nu}$ by
parts; we then have, for $l=1,\cdots,M$,
$\displaystyle\frac{dm_{l}^{\nu}}{dt}$ $\displaystyle=$ $\displaystyle-
m_{l}^{\nu}+\int_{-\infty}^{\infty}D[\xi^{\nu}]d\xi^{\nu}\xi^{\nu}\langle\sigma\rangle_{path}^{(\xi^{\nu})}.$
(24)
Here we should note that the path integral
$\xi^{\nu}\langle\sigma\rangle_{path}^{(\xi^{\nu})}$ depends on the embedded
patterns $\bm{\xi}^{\nu}$. In the next subsection, we carry out the quenched
average explicitly under the so-called static approximation.
### 4.2 Static approximation
In order to obtain the final form of the deterministic flow, we assume that
macroscopic quantities such as the overlap are independent of the Trotter
slice $k$ during the dynamics. Namely, we use the so-called static
approximation:
$\displaystyle m_{k}^{\nu}$ $\displaystyle=$ $\displaystyle
m^{\nu}\,(\forall_{k}).$ (25)
Under the static approximation, let us use the following inverse process of
the Suzuki-Trotter decomposition (7):
$\lim_{M\to\infty}Z_{M}=\mbox{\rm
tr}\,{\exp}\left[\beta\sum_{\mu\nu}\xi^{\mu}A_{\mu\nu}m^{\nu}\sigma_{z}+\beta\Gamma\sigma_{x}\right]$
(26) $\displaystyle Z_{M}$ $\displaystyle\equiv$ $\displaystyle\mbox{\rm
tr}_{\\{\sigma\\}}{\exp}{\Biggr{[}}\frac{\beta\sum_{\mu\nu}\xi^{\mu}A_{\mu\nu}m^{\nu}}{M}\sum_{k}\sigma(k)+B\sum_{k}\sigma(k)\sigma(k+1){\Biggr{]}}$
(27)
In our previous study [14], we numerically checked the validity of the static
approximation by computer simulations and found that the approximation is
valid for the pure ferromagnetic system but deviates from a good
approximation for disordered systems. The validity of the static
approximation was recently argued by Takahashi and Matsuda [16] from a
different perspective.
Then, one can calculate the path integral immediately as
$\displaystyle\langle\sigma\rangle_{path}^{(\xi^{\nu})}$ $\displaystyle=$
$\displaystyle\frac{\sum_{\mu\nu}\xi^{\mu}A_{\mu\nu}m^{\nu}}{\sqrt{(\sum_{\mu\nu}\xi^{\mu}A_{\mu\nu}m^{\nu})^{2}+\Gamma^{2}}}\tanh\beta\sqrt{\left(\sum_{\mu\nu}\xi^{\mu}A_{\mu\nu}m^{\nu}\right)^{2}+\Gamma^{2}}.$
(28)
Inserting this result into (24), we obtain
$\displaystyle\frac{dm^{\nu}}{dt}$ $\displaystyle=$
$\displaystyle-m^{\nu}+\mathbb{E}_{\bm{\xi}}{\Biggr{[}}\frac{\xi^{\nu}\sum_{\mu\nu}\xi^{\mu}A_{\mu\nu}m^{\nu}}{\sqrt{(\sum_{\mu\nu}\xi^{\mu}A_{\mu\nu}m^{\nu})^{2}+\Gamma^{2}}}\tanh\beta\sqrt{\left(\sum_{\mu\nu}\xi^{\mu}A_{\mu\nu}m^{\nu}\right)^{2}+\Gamma^{2}}{\Biggr{]}}$
(29)
where we should bear in mind that the empirical distribution $D[\xi^{\nu}]$ in (24) has been replaced by the built-in pattern distribution $\mathcal{P}(\xi^{\nu})$ via
$\lim_{N\to\infty}\frac{1}{N}\sum_{i}\delta(\xi_{i}^{\nu}-\xi^{\nu})=\mathcal{P}(\xi^{\nu})$
(30)
in the limit $N\to\infty$, and the average is now carried out explicitly as
$\displaystyle\int
D[\xi^{\nu}]d\xi^{\nu}(\cdots)=\int\mathcal{P}(\xi^{\nu})d\xi^{\nu}(\cdots)\equiv\mathbb{E}_{\bm{\xi}}[\cdots].$
(31)
Equation (29) is the general solution to the problem addressed in this paper.
### 4.3 The classical and zero-temperature limits
It is straightforward to take the classical limit $\Gamma\to 0$ in the result (29); we immediately obtain
$\frac{dm^{\nu}}{dt}=-m^{\nu}+\mathbb{E}_{\bm{\xi}}\left[\xi^{\nu}\tanh\left(\beta\sum_{\mu\nu}\xi^{\mu}A_{\mu\nu}m^{\nu}\right)\right].$
(32)
The above equation is identical to the result of Coolen and Ruijgrok [17], who considered the retrieval process of the conventional Hopfield model under thermal noise.
We can also take the zero-temperature limit $\beta\to\infty$ in (29), which gives
$\frac{dm^{\nu}}{dt}=-m^{\nu}+\mathbb{E}_{\bm{\xi}}\left[\frac{\xi^{\nu}\sum_{\mu\nu}\xi^{\mu}A_{\mu\nu}m^{\nu}}{\sqrt{(\sum_{\mu\nu}\xi^{\mu}A_{\mu\nu}m^{\nu})^{2}+\Gamma^{2}}}\right].$
(33)
Thus, equation (29), which includes the above two limiting cases, is our general solution for the neuro-dynamics of the quantum Hopfield model in which $\mathcal{O}(1)$ patterns are embedded. Any such pattern-recalling scenario can therefore be discussed, with the solution always following explicitly from (29).
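As a quick numerical sanity check (our own illustration, not part of the paper; the function name is ours), one can verify that the single-site factor appearing in (29), $h\,\tanh(\beta\sqrt{h^{2}+\Gamma^{2}})/\sqrt{h^{2}+\Gamma^{2}}$ with local field $h\equiv\sum_{\mu\nu}\xi^{\mu}A_{\mu\nu}m^{\nu}$, indeed reduces to the two limiting forms (32) and (33):

```python
import numpy as np

def integrand(h, beta, gamma):
    # local-field factor appearing inside the quenched average of Eq. (29)
    s = np.sqrt(h**2 + gamma**2)
    return (h / s) * np.tanh(beta * s)

h = np.linspace(-3.0, 3.0, 601)

# classical limit Gamma -> 0 reproduces the tanh form of Eq. (32)
assert np.allclose(integrand(h, 2.0, 1e-9), np.tanh(2.0 * h), atol=1e-6)

# zero-temperature limit beta -> infinity reproduces Eq. (33)
assert np.allclose(integrand(h, 1e9, 0.7), h / np.sqrt(h**2 + 0.49), atol=1e-6)
```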
## 5 Limit cycle solution for asymmetric connections
In this section, we discuss a special case of the general solution (29).
Namely, we investigate the pattern-recalling processes of the quantum Hopfield
model with asymmetric connections $\bm{A}\equiv\\{A_{\mu\nu}\\}$.
### 5.1 Result for two-patterns
Let us consider the case in which only two patterns are embedded via the following matrix:
$A=\left(\begin{array}[]{cc}1&-1\\\ 1&1\end{array}\right)$ (34)
Then, from the general solution (29), the differential equations with respect
to the two overlaps $m_{1}$ and $m_{2}$ are written as
$\displaystyle\frac{dm_{1}}{dt}$ $\displaystyle=$ $\displaystyle-
m_{1}+\frac{m_{1}}{\sqrt{(2m_{1})^{2}+\Gamma^{2}}}-\frac{m_{2}}{\sqrt{(2m_{2})^{2}+\Gamma^{2}}}$
$\displaystyle\frac{dm_{2}}{dt}$ $\displaystyle=$ $\displaystyle-
m_{2}+\frac{m_{1}}{\sqrt{(2m_{1})^{2}+\Gamma^{2}}}+\frac{m_{2}}{\sqrt{(2m_{2})^{2}+\Gamma^{2}}}.$
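These coupled equations are straightforward to integrate numerically. The sketch below is our own illustration (step size, integration length and initial condition are arbitrary choices); it uses a fourth-order Runge-Kutta scheme and confirms that the overlaps keep oscillating instead of settling into a fixed point:

```python
import numpy as np

GAMMA = 0.01  # transverse-field amplitude, the value used in Fig. 1

def flow(m1, m2, gamma=GAMMA):
    # right-hand sides of the overlap equations for the matrix A of Eq. (34)
    u1 = m1 / np.sqrt((2.0 * m1) ** 2 + gamma ** 2)
    u2 = m2 / np.sqrt((2.0 * m2) ** 2 + gamma ** 2)
    return -m1 + u1 - u2, -m2 + u1 + u2

def integrate(m1, m2, dt=0.01, steps=12000):
    # fourth-order Runge-Kutta integration of the deterministic flow
    traj = np.empty((steps, 2))
    for n in range(steps):
        k1 = flow(m1, m2)
        k2 = flow(m1 + 0.5 * dt * k1[0], m2 + 0.5 * dt * k1[1])
        k3 = flow(m1 + 0.5 * dt * k2[0], m2 + 0.5 * dt * k2[1])
        k4 = flow(m1 + dt * k3[0], m2 + dt * k3[1])
        m1 += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        m2 += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        traj[n] = m1, m2
    return traj

traj = integrate(0.8, 0.1)
# m1 keeps changing sign: the state cycles through the patterns and their mirrors
sign_flips = np.count_nonzero(np.diff(np.sign(traj[:, 0])))
```

The repeated sign changes of $m_1$ reflect the cyclic motion through the patterns and their mirror images described below.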
In Figure 1, we show the time evolutions of the overlaps $m_{1}$ and $m_{2}$
for the case of the amplitude $\Gamma=0.01$.
Figure 1: Time evolutions of $m_{1}$ and $m_{2}$ for the case of
$\Gamma=0.01$.
From this figure, we clearly find that the neuronal state evolves as $A\to B\to\overline{A}\to\overline{B}\to A\to B\to\cdots$ ($\overline{A},\overline{B}$ denote the ‘mirror images’ of $A$ and $B$, respectively); that is, the network exhibits a limit cycle.
To compare the effects of thermal and quantum noises on the pattern-recalling
processes, we plot the trajectories $m_{1}$-$m_{2}$ for
$(T\equiv\beta^{-1},\Gamma)=(0,0.01),(0.01,0)$ (left panel),
$(T,\Gamma)=(0,0.8),(0.8,0)$ (right panel) in Figure 2.
Figure 2: Trajectories $m_{1}$-$m_{2}$ for $(T,\Gamma)=(0,0.01),(0.01,0)$
(left panel), $(T,\Gamma)=(0,0.8),(0.8,0)$ (right panel).
From these panels, we find that the limit cycles collapse as the noise level increases, for both thermal and quantum-mechanical noise, and eventually the trajectories shrink to the origin $(m_{1},m_{2})=(0,0)$ in the limit $T,\Gamma\to\infty$.
## 6 Summary
In this paper, we considered the stochastic process generated by the quantum Monte Carlo method applied to the quantum Hopfield model and investigated the quantum neuro-dynamics through differential equations for macroscopic quantities such as the overlap. The present approach also makes it possible to evaluate the ‘inhomogeneous’ Markovian stochastic process of the quantum Monte Carlo method (in which the amplitude $\Gamma$ is time-dependent [18, 19]), such as quantum annealing [20, 21, 22, 23, 24, 25]. As a next step, we plan to extend this formulation to probabilistic information processing described by spin glasses, including a peculiar type of antiferromagnet [26].

We thank B.K. Chakrabarti, A.K. Chandra, P. Sen and S. Dasgupta for fruitful discussions. We also thank the local organizers of Statphys-Kolkata VII for their warm hospitality. This work was financially supported by a Grant-in-Aid for Scientific Research (C) of the Japan Society for the Promotion of Science, No. 22500195.
## References
* [1] Nakano K 1972 IEEE Trans. on Systems, Man, and Cybernetics SMC-2 380
* [2] Hopfield J J 1982 PNAS 79 2554
* [3] Mézard M, Parisi G and Virasoro M A 1987 Spin Glass Theory and Beyond (Singapore: World Scientific)
* [4] Amit D, Gutfreund H and Somplolinsky H 1985 Phys. Rev. Lett. 55 1530
* [5] Inoue J 1996 J. Phys. A 29 4815
* [6] Shiino M and Fukai T 1992 J. Phys. A 25 L375
* [7] Ma Y Q and Gong C D 1992 Phys. Rev. B 45 793
* [8] Nishimori H and Nonomura Y 1996 J. Phys. Soc. Japan 65 3780
* [9] Amari S and Maginu K 1988 Neural Networks 1 63
* [10] Okada M 1995 Neural Networks 8 833
* [11] Coolen A C C and Sherrington D 1994 Phys. Rev. Lett. 49 1921
* [12] Coolen A C C, Laughton S N and Sherrington D 1996 Phys. Rev. B 53 8184
* [13] Suzuki M 1976 Prog. Theor. Phys. 56 1454
* [14] Inoue J 2010 Journal of Physics: Conference Series 233 012010
* [15] Sherrington D and Kirkpatrick S 1975 Phys. Rev. Lett. 35 1792
* [16] Takahashi K and Matsuda Y 2010 J. Phys. Soc. Japan 79 043712
* [17] Coolen A C C and Ruijgrok Th W 1988 Phys. Rev. A 38 4253
* [18] Das A, Sengupta K, Sen D and Chakrabarti B K 2006 Phys. Rev. B 74 144423
* [19] Das A 2010 Phys. Rev. B 82 172402
* [20] Kadowaki T and Nishimori H 1998 Physical Review E 58 5355
* [21] Farhi E, Goldstone J, Gutmann S, Lapan J, Lundgren A and Preda P 2001 Science 292 472
* [22] Morita S and Nishimori H 2006 J. Phys. A 39 13903
* [23] Suzuki S and Okada M 2005 J. Phys. Soc. Jpn. 74 1649
* [24] Santoro G E and Tosatti E 2006 J. Phys. A 41 209801
* [25] Das A and Chakrabarti B K 2008 Rev. Mod. Phys. 80 1061
* [26] Chandra A K, Inoue J and Chakrabarti B K 2010 Phys. Rev. E 81 021101
# Fidelity matters: the birth of entanglement in the mixing of Gaussian states
Stefano Olivares stefano.olivares@ts.infn.it Dipartimento di Fisica,
Università degli Studi di Trieste, I-34151 Trieste, Italy CNISM UdR Milano
Statale, I-20133 Milano, Italy Matteo G. A. Paris
matteo.paris@fisica.unimi.it Dipartimento di Fisica, Università degli Studi di
Milano, I-20133 Milano, Italy CNISM UdR Milano Statale, I-20133 Milano, Italy
###### Abstract
We address the interaction of two Gaussian states through bilinear exchange
Hamiltonians and analyze the correlations exhibited by the resulting bipartite
systems. We demonstrate that entanglement arises if and only if the fidelity
between the two input Gaussian states falls under a threshold value depending
only on their purities, first moments and on the strength of the coupling. Our
result clarifies the role of quantum fluctuations (squeezing) as a
prerequisite for entanglement generation and provides a tool to optimize the
generation of entanglement in linear systems of interest for quantum
technology.
###### pacs:
03.67.Mn
Gaussian states (GS), that is, quantum states with Gaussian Wigner functions, play a leading role in continuous-variable quantum technology revs owing to their extremal properties wlf06 and because they may be generated with current technology, in particular in the quantum optics context gra03 ; dau:09 ; dae:10 . As a consequence, much attention has been devoted to the characterization of Gaussian entanglement sim:00 ; dua:00 ; gio03 ; pvl03 ; ser04 ; ade:04 ; hyl06 ; sch06 ; mar:08 . Among the possible mechanisms to generate Gaussian entanglement, the mixing of squeezed states par97 ; wan02 ; kim02 ; zhu04 ; ade:06 ; nha09 ; spe:11 is of special interest in view of its feasibility, which was indeed crucial to achieve continuous-variable teleportation fur:98 . The entangling power of bilinear interactions has been widely analyzed, either to optimize the generation of entanglement par:99 ; wol:03 or to find relations between entanglement and purities ade:04b or teleportation fidelity pir:03 ; ade:05 .
In this Letter we address bilinear, energy-conserving, i.e., exchange, interactions described by Hamiltonians of the form $H_{I}=g(a^{\dagger}b+ab^{\dagger})$, where $a$ and $b$ are bosonic annihilation operators, $[a,a^{\dagger}]=1$ and $[b,b^{\dagger}]=1$, and $g$ is the coupling constant. Hamiltonians of this kind are suitable to describe very different quantum systems, such as, e.g., two light modes in a beam splitter or a frequency converter, collective modes in gases of cold atoms meystre , atom-light nondemolition measurements Nat11 , optomechanical oscillators pir:03 ; xia:10 , nanomechanical oscillators cav08 , and superconducting resonators chi:10 , all of which are of interest for quantum technology. Our analysis can be applied to all these systems and leads to very general results about the resources needed for Gaussian entanglement generation.
The bilinear Hamiltonians $H_{I}$ generally describe the action of simple
passive interactions and, in view of this simplicity, their fundamental
quantum properties are often overlooked. Actually, the exchange amplitudes for
the quanta of one of the systems strongly depend on the statistics of the
quanta of the other one and on the particle indistinguishability. This
mechanism gives rise to interference and, thus, to the birth of correlations
in the output bipartite system. A question arises about the nature of these
correlations, depending on the parameters of the input signals and coupling
constant. In this Letter, motivated by recent results on the dynamics of
bipartite GS through bilinear interactions kim:09 ; oli:09 and by their
experimental demonstration blo:10 , we investigate the relation between the
properties of two input GS and the correlations exhibited by the output state.
Our main result is that entanglement arises if and only if the fidelity
between the two input states falls under a threshold value depending only on
their purities, first-moment values and on the strength of the coupling. Our
analysis provides a direct link between the mismatch in the quantum properties
of the input signals and the creation of entanglement, thus providing a better
understanding of the process leading to the generation of nonclassical
correlations. In fact, while, on the one hand, it is well known that squeezing is a necessary resource to create entanglement wan02 ; kim02 ; wol:03 , on the other hand in this Letter we show the actual role played by the squeezing: making the two input GS different enough to entangle the
The most general single-mode Gaussian state can be written as $\varrho=\varrho(\alpha,\xi,N)=D(\alpha)S(\xi)\nu_{\rm th}(N)S^{{\dagger}}(\xi)D^{{\dagger}}(\alpha)$, where $S(\xi)=\exp[\frac{1}{2}(\xi{a^{{\dagger}}}^{2}-\xi^{*}a^{2})]$ and $D(\alpha)=\exp[\alpha a^{{\dagger}}-\alpha^{*}a]$ are the squeezing operator and the displacement operator, respectively, and $\nu_{\rm th}(N)=N^{a^{\dagger}a}/(1+N)^{a^{\dagger}a+1}$ is a thermal equilibrium state with $N$ average number of quanta, $a$ being the annihilation operator.
Upon introducing the vector of operators $\boldsymbol{R}^{T}=(R_{1},R_{2})\equiv(q,p)$, where $q=(a+a^{\dagger})/{\sqrt{2}}$ and $p=(a-a^{\dagger})/(i\sqrt{2})$ are the so-called quadrature operators, we can fully characterize $\varrho$ by means of the first-moment vector $\overline{\boldsymbol{X}}^{T}=\langle\boldsymbol{R}^{T}\rangle=\sqrt{2}({\rm Re}[\alpha],{\rm Im}[\alpha])$, with $\langle A\rangle=\hbox{Tr}[A\,\varrho]$, and of the $2\times 2$ covariance matrix (CM) $\boldsymbol{\sigma}$, with $[\boldsymbol{\sigma}]_{hk}=\frac{1}{2}\langle R_{h}R_{k}+R_{k}R_{h}\rangle-\langle R_{h}\rangle\langle R_{k}\rangle$, $h,k=1,2$, which explicitly reads:
$[\boldsymbol{\sigma}]_{kk}=(2\mu)^{-1}\left[\cosh(2r)-(-1)^{k}\cos(\psi)\sinh(2r)\right]$
for $k=1,2$ and
$[\boldsymbol{\sigma}]_{12}=[\boldsymbol{\sigma}]_{21}=-(2\mu)^{-1}\sin(\psi)\sinh(2r)$,
where we put $\xi=re^{i\psi}$, $r,\psi\in{\mathbbm{R}}$ and introduced the
purity of the state $\mu=\hbox{Tr}[\varrho^{2}]=(1+2N)^{-1}$. Since we are
interested in the dynamics of the correlations, which are not affected by the
first moment, we start addressing GS with zero first moments ($\alpha=0$). The
general case will be considered later on in this Letter.
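The parametrization above can be checked directly in code. The following sketch is our own (the function name is hypothetical); it builds the CM and verifies the identity $\det\boldsymbol{\sigma}=(2\mu)^{-2}$, so that the purity can be read off the CM as $\mu=(2\sqrt{\det\boldsymbol{\sigma}})^{-1}$:

```python
import numpy as np

def covariance_matrix(mu, r, psi):
    # CM of a single-mode squeezed thermal state, vacuum convention sigma = I/2
    ch, sh = np.cosh(2.0 * r), np.sinh(2.0 * r)
    return (1.0 / (2.0 * mu)) * np.array(
        [[ch + np.cos(psi) * sh, -np.sin(psi) * sh],
         [-np.sin(psi) * sh,     ch - np.cos(psi) * sh]])

for mu, r, psi in [(1.0, 0.0, 0.0), (0.6, 0.5, 1.3), (1.0 / 1.6, 0.7, np.pi)]:
    sigma = covariance_matrix(mu, r, psi)
    # det(sigma) = 1/(4 mu^2) for any squeezing parameters
    assert np.isclose(np.linalg.det(sigma), 1.0 / (4.0 * mu**2))
```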
When two uncorrelated single-mode GS $\varrho_{k}$ with CMs $\boldsymbol{\sigma}_{k}$, $k=1,2$, interact through the bilinear Hamiltonian $H_{I}$, the $4\times 4$ CM $\boldsymbol{\Sigma}$ of the evolved bipartite state $\varrho_{12}=U_{g}(t)\,\varrho_{1}\otimes\varrho_{2}\,U^{{\dagger}}_{g}(t)$, $U_{g}(t)=\exp\{-iH_{I}t\}$ being the evolution operator, can be written in the following block-matrix form revs :
$\boldsymbol{\Sigma}=\left(\begin{array}[]{cc}\boldsymbol{\Sigma}_{1}&\boldsymbol{\Sigma}_{12}\\\\[4.30554pt]
\boldsymbol{\Sigma}_{12}&\boldsymbol{\Sigma}_{2}\end{array}\right),\quad\begin{array}[]{l}\boldsymbol{\Sigma}_{1}=\tau\boldsymbol{\sigma}_{1}+(1-\tau)\boldsymbol{\sigma}_{2},\\\
\boldsymbol{\Sigma}_{2}=\tau\boldsymbol{\sigma}_{2}+(1-\tau)\boldsymbol{\sigma}_{1},\\\
\boldsymbol{\Sigma}_{12}=\sqrt{\tau(1-\tau)}\,(\boldsymbol{\sigma}_{2}-\boldsymbol{\sigma}_{1}),\end{array}$
(1)
$\tau=\cos^{2}(gt)$ being an effective coupling parameter, and where the
presence of a nonzero covariance term $\boldsymbol{\Sigma}_{12}$ suggests the
emergence of quantum or classical correlations between the two systems. Since $\boldsymbol{\Sigma}_{12}$ depends on the difference between the input states’ CMs, a question naturally arises about the relation between the “similarity”
of the inputs and the birth of (nonlocal) correlations. In this Letter we
answer this question and demonstrate that entanglement arises if and only if
the fidelity between the two input GS falls under a threshold value, which
depends only on their purities, the value of the first moments, and the
coupling $\tau$.
Let us now consider a pair of uncorrelated, single-mode GS $\varrho_{k}=\varrho(\xi_{k},N_{k})$, $k=1,2$, and assume, without loss of generality, $\xi_{1}=r_{1}$ and $\xi_{2}=r_{2}e^{i\psi}$, with $r_{k},\psi\in{\mathbbm{R}}$. After the interaction, we find that the presence of entanglement at the output is governed solely by the fidelity $F(\varrho_{1},\varrho_{2})=[\hbox{Tr}(\sqrt{\sqrt{\varrho_{1}}\,\varrho_{2}\sqrt{\varrho_{1}}})]^{2}$ between the inputs. Our results may be summarized by the following:
###### Theorem 1
The state
$\varrho_{12}=U_{g}(t)\,\varrho_{1}\otimes\varrho_{2}\,U^{{\dagger}}_{g}(t)$,
resulting from the mixing of two GS with zero first moments,
$\varrho_{1}(r_{1},N_{1})$ and $\varrho_{2}(r_{2}e^{i\psi},N_{2})$, is
entangled if and only if the fidelity $F(\varrho_{1},\varrho_{2})$ between the
inputs falls below a threshold value $F_{\rm e}(\mu_{1},\mu_{2};\tau)$, which
depends only on their purities
$\mu_{k}=\hbox{Tr}[\varrho_{k}^{2}]=(1+2N_{k})^{-1}$, $k=1,2$, and on the
effective coupling parameter $\tau=\cos^{2}(gt)$.
Proof: In order to prove the theorem, we recall that a bipartite Gaussian
state $\varrho_{12}$ is entangled if and only if the minimum symplectic
eigenvalue $\tilde{\lambda}$ of the CM associated with the partially transposed state satisfies $\tilde{\lambda}<1/2$ sim:00 . Moreover, without loss of generality,
we can address the scenario in which $r_{k}$ and $N_{k}$, $k=1,2$, are fixed
and we let $\psi$ vary in the interval $[0,2\pi]$.
Figure 1: (Color online) Plot of the fidelity $F(\varrho_{1},\varrho_{2})$
(red, solid line) between the two input states and of the minimum symplectic
eigenvalue $\tilde{\lambda}$ as a function of $\psi$ for $\tau=0.5$ (blue,
dashed line) and $\tau=0.8$ (purple, dotted line). The other involved
parameters are $\xi_{1}=0.5$, $N_{1}=0.2$, $\xi_{2}=0.7e^{i\psi}$ and
$N_{2}=0.3$. The colored regions denote the ranges of $\psi$ leading to an
entangled state for the given $\tau$, while the horizontal dot-dashed lines
refer to the corresponding thresholds $F_{\rm e}$.
First of all, we prove that $\tilde{\lambda}<1/2\Rightarrow
F(\varrho_{1},\varrho_{2})<F_{\rm e}(\mu_{1},\mu_{2};\tau)$ (necessary
condition). As we will see, this will allow us to find the analytic expression
of the threshold $F_{\rm e}(\mu_{1},\mu_{2};\tau)$, which will be used to
prove the sufficient condition, i.e., $F(\varrho_{1},\varrho_{2})<F_{\rm
e}(\mu_{1},\mu_{2};\tau)\Rightarrow\tilde{\lambda}<1/2$. Fig. 1 shows the
typical behavior of $\tilde{\lambda}$ and of the fidelity $F$ as a function of
the squeezing phase $\psi$ for fixed $r_{k}$ and $N_{k}$, $k=1,2$ (here we do
not report their analytic expressions since they are quite cumbersome). As one
can see, both $\tilde{\lambda}$ and $F$ are monotone, decreasing (increasing)
functions of $\psi$ in the interval $[0,\pi)$ ($[\pi,2\pi]$, respectively) and
have a minimum at $\psi=\pi$, whose actual value depends on both $r_{k}$ and
$N_{k}$ but not on $\tau$. In our case, one finds that, for fixed $r_{k}$ and
$N_{k}$, $k=1,2$, if $\min_{\psi}\tilde{\lambda}<1/2$, then there exists a
threshold value $\psi_{\rm e}\equiv\psi_{\rm
e}(r_{1},\mu_{1},r_{2},\mu_{2},\tau)$:
$\displaystyle\psi_{\rm
e}=\arccos\left\\{\frac{\cosh(2r_{1})\cosh(2r_{2})-f(\mu_{1},\mu_{2},\tau)}{\sinh(2r_{1})\sinh(2r_{2})}\right\\},$
where we introduced:
$\displaystyle
f(\mu_{1},\mu_{2},\tau)=\frac{1+\mu_{1}^{2}\mu_{2}^{2}-(\mu_{1}^{2}+\mu_{2}^{2})(1-2\tau)^{2}}{8\,\mu_{1}\mu_{2}\tau(1-\tau)},$
and $\mu_{k}=\hbox{Tr}[\varrho_{k}^{2}]=(1+2N_{k})^{-1}$, $k=1,2$, are the
purities of the inputs, such that if $\psi\in(\psi_{\rm e},2\pi-\psi_{\rm e})$
then $\tilde{\lambda}<1/2$, i.e., $\varrho_{12}$ is entangled. Since the
fidelity between the two GS $\varrho_{k}$, characterized by the CMs
$\boldsymbol{\sigma}_{k}$, $k=1,2$ (and zero first moments), is given by
scu:98
$F(\varrho_{1},\varrho_{2})=\left(\sqrt{\Delta+\delta}-\sqrt{\delta}\right)^{-1}$,
where $\Delta=\det[\boldsymbol{\sigma}_{1}+\boldsymbol{\sigma}_{2}]$ and
$\delta=4\prod_{k=1}^{2}(\det[\boldsymbol{\sigma}_{k}]-\frac{1}{4})$, the
threshold value $F_{\rm e}\equiv F_{\rm e}(\mu_{1},\mu_{2};\tau)$ of the
fidelity is thus obtained by setting $\psi=\psi_{\rm e}$ and explicitly reads:
$F_{\rm
e}=\frac{4\mu_{1}\mu_{2}\sqrt{\tau(1-\tau)}}{\sqrt{g_{-}+4\tau(1-\tau)g_{+}}-\sqrt{4\tau(1-\tau)g_{-}}},$
(2)
where $g_{\pm}\equiv g_{\pm}(\mu_{1},\mu_{2})=\prod_{k=1,2}(1\pm\mu_{k}^{2})$.
The threshold depends only on $\tau$ and on the purities $\mu_{k}$ of the input GS and is independent of the squeezing parameters $r_{k}$, even though $\psi_{\rm e}$ does depend on them. Finally, if $\tilde{\lambda}<1/2$, i.e., $\varrho_{12}$ is entangled, then $F(\varrho_{1},\varrho_{2})<F_{\rm e}(\mu_{1},\mu_{2};\tau)$. This concludes the first part of the proof.
Now we focus on the sufficient condition, i.e.,
$F(\varrho_{1},\varrho_{2})<F_{\rm
e}(\mu_{1},\mu_{2};\tau)\Rightarrow\tilde{\lambda}<1/2$. Thanks to the first
part of the theorem and since both $F$ and $\tilde{\lambda}$ are continuous
functions of $\psi$, for fixed $r_{k}$ and $N_{k}$, $k=1,2$, which have a
minimum in $\psi=\pi$, it is enough to show that $F_{\rm
min}\equiv\min_{\psi}F(\varrho_{1},\varrho_{2})<F_{\rm
e}(\mu_{1},\mu_{2};\tau)\Rightarrow\lambda_{\rm
min}\equiv\min_{\psi}\tilde{\lambda}<1/2$. We have:
$\displaystyle
F_{\min}=\frac{2\mu_{1}\mu_{2}}{\sqrt{1+\mu_{1}^{2}\mu_{2}^{2}+2\mu_{1}\mu_{2}\cosh[2(r_{1}+r_{2})]}-\sqrt{g_{-}}},$
where $g_{-}$ is the same as in Eq. (2), and:
$\displaystyle\tilde{\lambda}_{\rm
min}=\frac{1}{2}\frac{\left[\gamma-\sqrt{\gamma^{2}-(2\mu_{1}\mu_{2})^{2}}\right]^{\frac{1}{2}}}{\sqrt{2}\mu_{1}\mu_{2}},$
with
$\gamma=(\mu_{1}^{2}+\mu_{2}^{2})(1-2\tau)^{2}+8\mu_{1}\mu_{2}\tau(1-\tau)\cosh[2(r_{1}+r_{2})]$,
respectively. The inequality $F_{\rm min}<F_{\rm e}(\mu_{1},\mu_{2};\tau)$,
where $F_{\rm e}(\mu_{1},\mu_{2};\tau)$ is given in Eq. (2), is satisfied if
$\gamma>1+\mu_{1}^{2}\mu_{2}^{2}$, which leads to $\tilde{\lambda}_{\rm
min}<1/2$, as one may verify after a straightforward calculation. Now, since
$\tilde{\lambda}$ is a continuous function of $\psi$, there exists a range of
values centered at $\psi=\pi$, where the minimum occurs, in which
$\tilde{\lambda}<1/2$ and, thus, $F(\varrho_{1},\varrho_{2})<F_{\rm
e}(\mu_{1},\mu_{2};\tau)$, because of the first part of the theorem (necessary
condition). This concludes the proof of the Theorem. $\Box$
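Theorem 1 lends itself to a direct numerical check. The sketch below is our own illustration (helper names are ours), using the parameters of Fig. 1 with $\tau=0.8$; the cross block of the evolved CM is taken as $\sqrt{\tau(1-\tau)}\,(\boldsymbol{\sigma}_{2}-\boldsymbol{\sigma}_{1})$, i.e., $\cos(gt)\sin(gt)$ times the CM difference, which is the normalization consistent with the purity-preserving unitary $U_{g}(t)$ and with the expression of $\tilde{\lambda}_{\rm min}$ given above:

```python
import numpy as np

def cm(mu, r, psi):
    # CM of a squeezed thermal state (vacuum CM = I/2)
    ch, sh = np.cosh(2.0 * r), np.sinh(2.0 * r)
    return (1.0 / (2.0 * mu)) * np.array(
        [[ch + np.cos(psi) * sh, -np.sin(psi) * sh],
         [-np.sin(psi) * sh,     ch - np.cos(psi) * sh]])

def min_symplectic_pt(s1, s2, tau):
    # evolve through Eq. (1) and return the minimum symplectic eigenvalue
    # of the partially transposed two-mode CM (entangled iff < 1/2)
    S1 = tau * s1 + (1.0 - tau) * s2
    S2 = tau * s2 + (1.0 - tau) * s1
    C = np.sqrt(tau * (1.0 - tau)) * (s2 - s1)
    Sigma = np.block([[S1, C], [C.T, S2]])
    dtil = np.linalg.det(S1) + np.linalg.det(S2) - 2.0 * np.linalg.det(C)  # PT flips det C
    disc = max(dtil**2 - 4.0 * np.linalg.det(Sigma), 0.0)
    return np.sqrt(max((dtil - np.sqrt(disc)) / 2.0, 0.0))

def fidelity(s1, s2):
    # Scutaru's fidelity for zero-mean single-mode Gaussian states [scu:98]
    Delta = np.linalg.det(s1 + s2)
    delta = 4.0 * (np.linalg.det(s1) - 0.25) * (np.linalg.det(s2) - 0.25)
    return 1.0 / (np.sqrt(Delta + delta) - np.sqrt(delta))

def f_threshold(mu1, mu2, tau):
    # threshold fidelity of Eq. (2)
    gm = (1.0 - mu1**2) * (1.0 - mu2**2)
    gp = (1.0 + mu1**2) * (1.0 + mu2**2)
    t = tau * (1.0 - tau)
    return 4.0 * mu1 * mu2 * np.sqrt(t) / (np.sqrt(gm + 4.0 * t * gp) - np.sqrt(4.0 * t * gm))

mu1, mu2, r1, r2, tau = 1.0 / 1.4, 1.0 / 1.6, 0.5, 0.7, 0.8  # Fig. 1 parameters
Fe = f_threshold(mu1, mu2, tau)
for psi in np.linspace(0.0, 2.0 * np.pi, 181):
    F = fidelity(cm(mu1, r1, 0.0), cm(mu2, r2, psi))
    lam = min_symplectic_pt(cm(mu1, r1, 0.0), cm(mu2, r2, psi), tau)
    if abs(F - Fe) > 1e-6:                 # skip points exactly at threshold
        assert (lam < 0.5) == (F < Fe)     # Theorem 1
```

Sweeping $\psi$ over a grid, the entanglement condition $\tilde{\lambda}<1/2$ and the fidelity condition $F<F_{\rm e}$ flip at the same phases, as the theorem asserts.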
As a matter of fact, the presence of nonzero first moments does not affect the nonclassical correlations exhibited by a bipartite Gaussian state, which depend only on the CM revs . Thus, we can state the following straightforward corollary:
###### Corollary 1
If $\overline{\boldsymbol{X}}_{k}^{T}=\hbox{Tr}[(q_{k},p_{k})\,\varrho_{k}]\neq 0$, where $q_{k}=(a_{k}+a_{k}^{\dagger})/{\sqrt{2}}$ and $p_{k}=(a_{k}-a_{k}^{\dagger})/(i\sqrt{2})$ are the quadrature operators of system $k=1,2$, then the state
$\varrho_{12}=U_{g}(t)\,\varrho_{1}\otimes\varrho_{2}\,U^{{\dagger}}_{g}(t)$
is entangled if and only if:
$F(\varrho_{1},\varrho_{2})<\Gamma(\overline{\boldsymbol{X}}_{1},\overline{\boldsymbol{X}}_{2})\,F_{\rm
e}(\mu_{1},\mu_{2};\tau),$ (3)
where $F_{\rm e}(\mu_{1},\mu_{2};\tau)$ is still given in Eq. (2) and:
$\Gamma(\overline{\boldsymbol{X}}_{1},\overline{\boldsymbol{X}}_{2})=\exp\left[-\mbox{$\frac{1}{2}$}\,\overline{\boldsymbol{X}}_{12}^{T}(\boldsymbol{\sigma}_{1}+\boldsymbol{\sigma}_{2})^{-1}\overline{\boldsymbol{X}}_{12}\right],$
(4)
where
$\overline{\boldsymbol{X}}_{12}=(\overline{\boldsymbol{X}}_{1}-\overline{\boldsymbol{X}}_{2})$.
Proof: The proof follows from Theorem 1 by noting that the presence of
nonzero first moments does not modify the evolution of the CM, whereas the
expression of the fidelity becomes scu:98
$F(\varrho_{1},\varrho_{2})=\Gamma(\overline{\boldsymbol{X}}_{1},\overline{\boldsymbol{X}}_{2})\left(\sqrt{\Delta+\delta}-\sqrt{\delta}\right)^{-1}\\!\\!$,
where $\Delta$ and $\delta$ have been defined above. $\Box$
Theorem 1 states that if the two Gaussian inputs are “too similar” the correlations induced by the interaction are local, i.e., they may be mimicked by local operations performed on each of the systems. The extreme case corresponds to mixing a pair of identical GS: in this case the interaction produces no effect, since the output state is identical to the input one kim:09 ; oli:09 , i.e., a factorized state made of two copies of the same input state, and we have no correlations at all at the output. Notice that for pure (zero-mean) states the threshold on fidelity reduces to $F_{\rm e}(1,1;\tau)=1$ $\forall\tau$; namely, any pair of non-identical (zero-mean) pure GS gives rise to entanglement at the output. On the contrary, two thermal states $\nu_{k}\equiv\nu_{\rm th}(N_{k})$, $k=1,2$, as inputs, i.e., the most classical GS, lead to $F(\nu_{1},\nu_{2})>F_{\rm e}(\mu_{1},\mu_{2};\tau)$: this fact, thanks to Theorem 1, shows that we need to squeeze one or both of the classical inputs in order to make the states different enough to give rise to entanglement. Notice, finally, that the thresholds in Eqs. (2) and (3) involve strict inequalities: when the fidelity between the inputs is exactly equal to the threshold, the output state is separable.
The threshold $F_{\rm e}(\mu_{1},\mu_{2};\tau)$ is symmetric under the exchange $\mu_{1}\leftrightarrow\mu_{2}$, and if one of the two states is pure, i.e., if $\mu_{k}=1$, then $F_{\rm e}(\tau)=\sqrt{2}\,\mu_{h}/\sqrt{1+\mu_{h}^{2}}$, with $h\neq k$; i.e., the threshold no longer depends on $\tau$.
As far as Gaussian entanglement is concerned, i.e., the resource characterized by the violation of Simon’s condition on the CM sim:00 , our results also apply to the case of non-Gaussian input signals, upon evaluating the fidelity between the GS with the same CMs as the non-Gaussian ones. In fact, violation of Simon’s condition is governed only by the behavior of the CM, independently of the Gaussian character of the input states. On the other hand, identical non-Gaussian states may give rise to an entangled output, the mixing of two single-photon states in quantum optical systems being the paradigmatic example HOM:87 . In other words, the entanglement arising from the mixing of two identical non-Gaussian states cannot be detected by Simon’s condition on the CM.
Up to now we have considered the correlation properties of the output states
with respect to the fidelity between the input ones. However, similar
relations may be found for the fidelities
$F(\varrho_{h},\tilde{\varrho}_{k})$, $k,h=1,2$, between the input and output
states, respectively, where $\tilde{\varrho}_{h}=\hbox{Tr}_{k}[\varrho_{12}]$,
with $h\neq k$, are the reduced density matrices of the output states taken
separately. In this case we found that the output is entangled if and only if
$F(\varrho_{h},\tilde{\varrho}_{k})<F_{\rm e}(\varrho_{h},\tilde{\varrho}_{k})$, where all the thresholds $F_{\rm e}(\varrho_{h},\tilde{\varrho}_{k})$ still depend only on $\mu_{1}$, $\mu_{2}$ and $\tau$ (here we put the density matrices as arguments in order to avoid confusion with the previous thresholds). The analytic expressions of
$F_{\rm e}(\varrho_{h},\tilde{\varrho}_{k})$ are cumbersome and are not
reported explicitly, but we plot in Fig. 2 the input-output fidelities and the
corresponding thresholds for a particular choice of the involved parameters.
If we look at the interaction between the two systems as a quantum noisy channel for one of the two, namely, $\varrho_{k}\to{\cal E}(\varrho_{k})\equiv\hbox{Tr}_{h}[\varrho_{12}]$, $h\neq k$, then the birth
of the correlations between the outgoing systems corresponds to a reduction of
the input-output fidelity: the correlations arise at the expense of the
information contained in the input signals. In turn, this result may be
exploited for decoherence control and preservation of entanglement using bath
engineering blo:10 .
Figure 2: (Color online) Plot of the fidelities
$F(\varrho_{h},\tilde{\varrho}_{k})$ for $\tau=0.8$ and the same choice of the
other involved parameters as in Fig. 1. The yellow region shows the interval
of values of $\psi$ leading to an entangled state. The right panel is a
magnification of the green, boxed region of the left panel: the horizontal
lines refer to the corresponding thresholds $F_{\rm
e}(\varrho_{h},\tilde{\varrho}_{k})$. See the text for details.
In conclusion, we have analyzed the correlations exhibited by two initially
uncorrelated GS which interact through a bilinear exchange Hamiltonian. We
found that entanglement arises if and only if the fidelity between the two
inputs falls under a threshold value depending only on their purities, the
first moments, and on the coupling constant. Similar relations have been
obtained for the input-output fidelities. Our theorems clarify the role of
squeezing as a prerequisite to obtain entanglement out of bilinear
interactions, and provide a tool to optimize the generation of entanglement by
passive (energy-conserving) devices. Our results represent progress in the fundamental understanding of nonclassical correlations in continuous-variable systems and may find practical applications in quantum technology. Owing to recent advances in the generation and manipulation of GS, we foresee experimental implementations in optomechanical and quantum optical systems.
SO acknowledges support from the University of Trieste through the “FRA 2009”.
## References
* (1) J. Eisert and M. B. Plenio, Int. J. Quant. Inf. 1, 479 (2003); S. L. Braunstein and P. van Loock, Rev. Mod. Phys. 77, 513 (2005); A. Ferraro, S. Olivares and M. G. A. Paris, Gaussian States in Quantum Information (Bibliopolis, Napoli, 2005).
* (2) M. M. Wolf, G. Giedke and J. I. Cirac, Phys. Rev. Lett. 96, 080502 (2006).
* (3) F. Grosshans, G. Van Assche, J. Wenger, R. Brouri, N. J. Cerf and P. Grangier, Nature 421, 238 (2003).
* (4) V. D’Auria, S. Fornaro, A. Porzio, S. Solimeno, S. Olivares and M. G. A. Paris, Phys. Rev. Lett. 102, 020502 (2009).
* (5) D. Daems, F. Bernard, N. J. Cerf and M. I. Kolobov, J. Opt. Soc. Am. B 27, 447 (2010).
* (6) R. Simon, Phys. Rev. Lett. 84, 2726 (2000).
* (7) L.-M. Duan, G. Giedke, J. I. Cirac and P. Zoller, Phys. Rev. Lett. 84, 2722 (2000); G. Giedke, B. Kraus, M. Lewenstein and J. I. Cirac, Phys. Rev. A 64, 052303 (2001).
* (8) V. Giovannetti, S. Mancini, D. Vitali and P. Tombesi, Phys. Rev. A 67 022320 (2003).
* (9) P. van Loock and A. Furusawa, Phys. Rev. A 67, 052315 (2003).
* (10) A. Serafini, F. Illuminati, M. G. A. Paris and S. De Siena, Phys. Rev A 69, 022318 (2004).
* (11) G. Adesso, A. Serafini and F. Illuminati, Phys. Rev. Lett. 93, 220504 (2004).
* (12) E. Shchukin and W. Vogel, Phys. Rev. A 74, 030302(R) (2006).
* (13) P. Hyllus and J. Eisert, New J. Phys. 8, 51 (2006).
* (14) P. Marian and T. A. Marian, Phys. Rev. Lett. 101, 220403 (2008).
* (15) M. G. A. Paris, Phys. Lett. A 225, 28 (1997).
* (16) W. Xiang-bin, Phys. Rev. A 66, 024303 (2002).
* (17) M. S. Kim, W. Son, V. Buzek and P. L. Knight, Phys. Rev. A 65, 032323 (2002).
* (18) Q. P. Zhou, M. F. Fang, X. J. Liu, X. M. Chen, Q. Wu and H. Z. Xu, Chin. Phys. 13, 1881 (2004).
* (19) G. Adesso, Phys. Rev. Lett. 97, 130502 (2006).
* (20) R. Tahira, M. Ikram, H. Nha and M. S. Zubairy, Phys. Rev. A 79, 023816 (2009).
* (21) J. Sperling and W. Vogel, Phys. Rev. A 83, 042315 (2011).
* (22) A. Furusawa, J. L. Sørensen, S. L. Braunstein, C. A. Fuchs, H. J. Kimble and E. S. Polzik, Science 282, 706 (1998).
* (23) M. G. A. Paris, Phys. Rev. A 59, 1615 (1999).
* (24) M. M. Wolf, J. Eisert and M. B. Plenio, Phys. Rev. Lett. 90, 047904 (2003).
* (25) G. Adesso, A. Serafini and F. Illuminati, Phys. Rev. Lett. 92, 087901 (2004).
* (26) G. Adesso and F. Illuminati, Phys. Rev. Lett. 95, 150503 (2005).
* (27) S. Pirandola, S. Mancini, D. Vitali and P. Tombesi, Phys. Rev. A 68, 062317 (2003).
* (28) P. Meystre, Atom Optics (Springer Verlag, New York, 2001).
* (29) R. Tatham, N. Korolkova, J. Phys. B: At. Mol. Opt. Phys. 44, 175506 (2011).
* (30) S.-H. Xiang, W. Wen, Z.-G. Shi and K.-H. Song, Phys. Rev. A 81, 054301 (2010).
* (31) M. J. Woolley, G. J. Milburn, C.M. Caves, New J. Phys. 10, 125018 (2008).
* (32) L. Chirolli, G. Burkard, S. Kumar and D. P. DiVincenzo, Phys. Rev. Lett. 104, 230502 (2010).
* (33) S. C. Springer, J. Lee, M. Bellini and M. S. Kim, Phys. Rev. A 79, 062303 (2009).
* (34) S. Olivares and M. G. A. Paris, Phys. Rev. A 80, 032329 (2009).
* (35) R. Bloomer, M. Pysher and O. Pfister, New J. Phys. 13, 063014 (2011).
* (36) H. Scutaru, J. Phys. A: Math. Gen. 31, 3659 (1998).
* (37) C. K. Hong, Z. Y. Ou and L. Mandel, Phys. Rev. Lett. 59, 2044 (1987).
# Is There Scale Invariance in $\mathcal{N}=1$ Supersymmetric Field Theories?
Sibo Zheng
Department of Physics, Chongqing University, Chongqing 401331, P.R. China
Abstract
In two dimensions, it is well known that scale invariance can be promoted to
conformal invariance. In four or higher dimensions, however, there is no solid
proof of this equivalence. We address this issue in the context of $4d$
$\mathcal{N}=1$ SUSY theories. The SUSY version of the dilatation current for
theories without a conserved $R$ symmetry is constructed through the FZ-
multiplet. We find that a scale-invariant SUSY theory is also conformal when
the real superfield in the dilatation current multiplet is conserved.
Otherwise, it is only scale invariant, regardless of improvement
transformations.
## 1 Introduction
In two dimensions, the equivalence between scale and conformal invariance has
been proved via the “c-theorem” [1, 2]. At the classical level this statement
is argued to remain true in four-dimensional ($4d$) QFT [3]. At the quantum
level, however, it is generally unclear, although some examples (e.g., see
[2, 4, 5, 6, 7]) with or without supersymmetry (SUSY) have been proposed in
the literature.
The difference between scale and conformal invariance in SUSY can be
understood from the structure of the corresponding groups [8]. The generators
of the scale-invariance group are the dilatation operator $\Delta$ together
with those of the super-Poincaré group. The super-conformal group is larger;
in particular, it contains an $R$-symmetry generator. One might therefore
guess that the role played by the $R$ symmetry in $4d$ SUSY theories is
crucial for discriminating scale invariance from conformal invariance.
Following this intuition, we address the connection between the two symmetries
in the context of $4d$ $\mathcal{N}=1$ SUSY. The first task is to construct
the SUSY version of the dilatation current. This involves two ingredients.
First, the SUSY version of the dilatation current requires the SUSY
generalization of the energy-momentum tensor $T_{\mu\nu}$ [10, 11]. There are
a few such multiplets, known as supercurrent multiplets, which contain
$T_{\mu\nu}$ as a component. These supercurrent multiplets are classified into
the Ferrara-Zumino (FZ) multiplet [12], the $\mathcal{R}$\- multiplet [13] and
the $\mathcal{S}$\- multiplet [14] (see also [15, 16]). The connection between
scale and conformal invariance has been discussed in [9] in terms of the
$\mathcal{R}$\- multiplet, which admits a conserved $R$ symmetry. In this
paper, we explore the FZ multiplet, which conversely does not admit such a
conserved $R$ symmetry.
The other ingredient crucial for our discussion is the ambiguity in the
definition of $T_{\mu\nu}$, which is the main source of freedom for
interpolating between scale and conformal invariance. In what follows, we
briefly review the improvement transformations arising from this ambiguity in
QFT, and then treat the improvement in its SUSY version.
The paper is organized as follows. In section 2, inspired by the construction
of the dilatation current multiplet for the $\mathcal{R}$-multiplet [9], we
construct the SUSY version of the dilatation current and the virial current
multiplet in the case of the FZ multiplet. In section 3, we use the
consistency constraints of unitarity for scale-invariant SUSY theories and the
closure of the SUSY algebra as the main tools to explore the structure of the
virial current multiplet. We find that a scale-invariant SUSY theory is also
conformal when the real superfield in the dilatation current multiplet is
conserved. Otherwise, it is only scale invariant, regardless of improvement
transformations. Together with the claim on the conditions for the equivalence
between these two symmetries in the $R$-symmetric case [9], this completes the
understanding of their $4d$ SUSY version.
## 2 Supercurrent and Dilatation Current Multiplet
### 2.1 $4D$ version in QFT
Before we discuss the SUSY version of the dilatation current, let us recall
its definition in $4d$ quantum field theory (QFT). Given a QFT with scale
invariance, there exists a conserved current, the dilatation current
$\Delta_{\mu}$, which takes the form
$\displaystyle{}\Delta_{\mu}=x^{\nu}T_{\mu\nu}+\mathcal{O}_{\mu}$ (2.1)
Here $\mathcal{O}_{\mu}$ refers to the virial current, which does not
explicitly depend on the spacetime coordinates. Conservation of the dilatation
current relates the trace of the energy-momentum tensor to the divergence of
the virial current,
$\displaystyle{}T=-\partial^{\mu}\mathcal{O}_{\mu}$ (2.2)
which shows that the virial current must itself be conserved if a QFT with
scale invariance is to be promoted to a conformal one. In other words, the
divergence of the virial current in (2.2) is what distinguishes scale
invariance from conformal invariance. As shown by Polchinski in Ref. [2], a
scale-invariant QFT can be promoted to a conformal one if and only if the
virial current admits the structure
$\displaystyle{}\mathcal{O}_{\mu}=j_{\mu}+\partial^{\nu}L_{\mu\nu},$ (2.3)
where $L_{\mu\nu}$ is an anti-symmetric tensor and $j_{\mu}$ a conserved
current, $\partial^{\mu}j_{\mu}=0$.
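Both of these statements can be checked in a line apiece (a verification
sketch, not in the original text):

```latex
% Divergence of the dilatation current (2.1), using \partial^{\mu}T_{\mu\nu}=0
% and \partial^{\mu}x^{\nu}=\eta^{\mu\nu}:
\partial^{\mu}\Delta_{\mu}
  = \eta^{\mu\nu}T_{\mu\nu} + x^{\nu}\partial^{\mu}T_{\mu\nu}
    + \partial^{\mu}\mathcal{O}_{\mu}
  = T + \partial^{\mu}\mathcal{O}_{\mu}\,,
% so \partial^{\mu}\Delta_{\mu}=0 reproduces (2.2).  For the form (2.3),
\partial^{\mu}\mathcal{O}_{\mu}
  = \partial^{\mu}j_{\mu} + \partial^{\mu}\partial^{\nu}L_{\mu\nu}
  = 0\,,
% the first term vanishing by conservation of j_{\mu}, the second by the
% antisymmetry of L_{\mu\nu}; hence the trace T in (2.2) vanishes.
```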
There are two important issues that determine whether the virial current can
take the form (2.3). The first is the ambiguity in
the definition of the 4d energy-momentum tensor,
$\displaystyle{}T_{\mu\nu}$ $\displaystyle\rightarrow$
$\displaystyle~{}T_{\mu\nu}+(\partial_{\mu}\partial_{\nu}-\eta_{\mu\nu}\partial^{2})\varphi$
(2.4)
which transfers this ambiguity into the definition of the virial current via
(2.2).
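As a sanity check on (2.4) (an aside, not part of the original text), one can
verify symbolically that the improvement term is identically conserved and
that its trace is $-3\partial^{2}\varphi$, which is the origin of the shift
$T\rightarrow T-3\partial^{2}(Re~{}Y\mid)$ quoted below:

```python
import sympy as sp

# Coordinates and a generic smooth improvement scalar phi(x).
x = sp.symbols('x0:4', real=True)
phi = sp.Function('phi')(*x)

# Minkowski metric; for a diagonal metric of +/-1 entries, eta inverse = eta.
eta = sp.diag(-1, 1, 1, 1)

d = lambda mu, f: sp.diff(f, x[mu])
box = sum(eta[mu, mu] * d(mu, d(mu, phi)) for mu in range(4))

# Improvement term I_{mu nu} = (d_mu d_nu - eta_{mu nu} box) phi, cf. (2.4).
I = sp.Matrix(4, 4, lambda mu, nu: d(mu, d(nu, phi)) - eta[mu, nu] * box)

# 1) It is identically conserved: d^mu I_{mu nu} = 0 for every nu.
div = [sp.simplify(sum(eta[mu, mu] * d(mu, I[mu, nu]) for mu in range(4)))
       for nu in range(4)]
print(div)  # [0, 0, 0, 0]

# 2) Its trace is eta^{mu nu} I_{mu nu} = (1 - 4) box phi = -3 box phi.
trace = sp.simplify(sum(eta[mu, mu] * I[mu, mu] for mu in range(4)) + 3 * box)
print(trace)  # 0
```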
The second issue involves the SUSY version of (2.1). Since the SUSY algebra
relates the supersymmetry current $S_{\mu\alpha}$ to $T_{\mu\nu}$, they can be
embedded into super-multiplets known as super-current multiplets. In what
follows, we choose the super-current multiplet which does not admit the $R$
symmetry, i.e., the FZ-multiplet $\mathcal{J}_{\mu}$ 111 Supercurrent
multiplets without a conserved $R$ symmetry include the $\mathcal{S}$\- and
FZ-multiplets. Considering the fact that under certain limits [14] the former
reduces to either the latter or the $\mathcal{R}$\- multiplet, we will study
the FZ-multiplet. We follow the conventions of Wess and Bagger [17] 222The
bi-spinor representation for a vector field is taken as,
$\displaystyle
J_{\alpha\dot{\alpha}}=-2\sigma^{\mu}_{\alpha\dot{\alpha}}J_{\mu},~{}~{}~{}~{}~{}~{}and~{}~{}~{}~{}J_{\mu}=\frac{1}{4}\bar{\sigma}_{\mu}^{\dot{\alpha}\alpha}J_{\alpha\dot{\alpha}}$
, and present the explicit component expression for the FZ-multiplet in
appendix A. From the appendix, it is easy to see that the divergence of the
bottom component follows from
$\partial^{\alpha\dot{\alpha}}\mathcal{J}_{\alpha\dot{\alpha}}=i\left(\bar{D}^{2}\bar{X}-D^{2}X\right)$,
so this global current $j_{\mu}$ is not conserved except in the case of the
$\mathcal{R}$-multiplet.
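As an aside (not in the original), the relative factors $-2$ and
$\frac{1}{4}$ in this footnote can be checked numerically: in the Wess-Bagger
mostly-plus conventions one has
$\mathrm{tr}(\sigma^{\mu}\bar{\sigma}^{\nu})=-2\eta^{\mu\nu}$, which makes the
two maps mutually inverse.

```python
import numpy as np

# Wess-Bagger sigma matrices for the mostly-plus metric eta = diag(-1,1,1,1):
# sigma^mu = (-1, tau_i), sigmabar^mu = (-1, -tau_i), with tau_i the Pauli matrices.
tau = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], complex)]
sigma = [-np.eye(2)] + tau
sigmabar = [-np.eye(2)] + [-t for t in tau]
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# The basic trace identity tr(sigma^mu sigmabar^nu) = -2 eta^{mu nu}:
tr = np.array([[np.trace(sigma[m] @ sigmabar[n]) for n in range(4)]
               for m in range(4)])
assert np.allclose(tr, -2 * eta)

# Start from a random vector J_mu and form the bi-spinor
# J_{alpha alphadot} = -2 sigma^mu_{alpha alphadot} J_mu ...
J = np.random.randn(4)
J_bispinor = -2 * sum(J[m] * sigma[m] for m in range(4))
# ... then recover J_mu = (1/4) sigmabar_mu^{alphadot alpha} J_{alpha alphadot},
# lowering the index with sigmabar_mu = eta_{mu nu} sigmabar^nu.
sigmabar_low = [sum(eta[m, n] * sigmabar[n] for n in range(4)) for m in range(4)]
J_back = np.array([0.25 * np.trace(sigmabar_low[m] @ J_bispinor)
                   for m in range(4)])
assert np.allclose(J_back, J)
```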
### 2.2 Dilatation Current Multiplet
The SUSY version of (2.1) can be realized by a multiplet
$\mathbf{\Delta}_{\mu}$ defined as,
$\displaystyle{}\mathbf{\Delta}_{\mu}=x^{\nu}\left\\{-\frac{1}{8}\bar{\sigma}_{\mu}^{\dot{\alpha}\alpha}[D_{\alpha},\bar{D}_{\dot{\alpha}}]\mathcal{J}_{\nu}+\frac{1}{16}\epsilon_{\nu\mu\rho\sigma}\left(\bar{\sigma}^{\dot{\alpha}\alpha}\right)^{\rho}\\{D_{\alpha},\bar{D}_{\dot{\alpha}}\\}\mathcal{J}^{\sigma}+\frac{1}{4}\eta_{\mu\nu}(D^{2}X+\bar{D}^{2}\bar{X})\right\\}+\mathcal{O}_{\mu}$ (2.5)
with $\Delta_{\mu}$ in (2.1) being the bottom component of the real superfield
$\mathbf{\Delta}_{\mu}$; one can verify (2.1) by using the component
expressions in appendix A. The SUSY version of (2.2) can then be read off
directly,
$\displaystyle{}\partial^{\mu}\mathcal{O}_{\mu}=\frac{3}{16}(\bar{D}^{2}\bar{X}+D^{2}X)$
(2.6)
In parallel with the previous discussion of the $4d$ QFT version, it is
crucial to work out the ambiguity of the superfield $\mathcal{O}_{\mu}$
defined above before we proceed to discuss its structure. The constraint on
the supercurrent $\mathcal{J}_{\mu}$,
$\bar{D}^{\dot{\alpha}}\mathcal{J}_{\alpha\dot{\alpha}}=D_{\alpha}X$, is not
affected by an improvement of the form [14],
$\displaystyle{}\mathcal{J}_{\alpha\dot{\alpha}}$ $\displaystyle\rightarrow$
$\displaystyle\mathcal{J}_{\alpha\dot{\alpha}}-2i\sigma^{\mu}_{\alpha\dot{\alpha}}\partial_{\mu}(Y-\bar{Y})$
$\displaystyle X$ $\displaystyle\rightarrow$
$\displaystyle~{}X-\frac{1}{2}\bar{D}^{2}\bar{Y}$ (2.7)
where $Y$ is a chiral superfield. The improvement (2.7) shifts both the
energy-momentum tensor and the supersymmetry current simultaneously as,
$\displaystyle{}S_{\mu\alpha}$ $\displaystyle\rightarrow$ $\displaystyle
S_{\mu\alpha}+2i\left(\sigma_{\mu\nu}\right)^{\beta}_{\alpha}\partial^{\nu}Y\mid_{\theta^{\beta}}$
$\displaystyle T_{\mu\nu}$ $\displaystyle\rightarrow$
$\displaystyle~{}T_{\mu\nu}-(\partial_{\mu}\partial_{\nu}-\eta_{\mu\nu}\partial^{2})Re~{}Y\mid$
(2.8)
Substituting the shift of $X$ in (2.7) into (2.6) leads to the SUSY version of
the improvement transformation,
$\displaystyle{}\partial^{\mu}\mathcal{O}_{\mu}\rightarrow\partial^{\mu}\mathcal{O}_{\mu}-\frac{3}{32}\left(D^{2}\bar{D}^{2}\bar{Y}+\bar{D}^{2}D^{2}Y\right)$
(2.9)
A few comments are in order. It is obvious that the transformations in (2.8)
do not violate the conservation of the supersymmetry current,
$\partial^{\mu}S_{\mu\alpha}=0$, or of the energy-momentum tensor,
$\partial^{\mu}T_{\mu\nu}=0$. Nevertheless, they modify the trace of the
energy-momentum tensor as $T\rightarrow T-3\partial^{2}(Re~{}Y\mid)$. As we
will discuss later, this feature potentially interpolates between scale-
invariant and conformal theories. That is to say, in a scale-invariant SUSY
theory, if there exists a well-defined $Y$ from the UV to the deep IR energy
scale such that the trace of the improved energy-momentum tensor vanishes,
$T^{\prime}=0$, then the SUSY theory is actually super-conformal. Otherwise, a
scale-invariant but non-conformal SUSY theory is allowed to exist, provided it
passes the unitarity constraints (see below for more discussion).
## 3 Constraints on the Virial Current Multiplet
We now proceed to uncover the structure of the virial current superfield
through consistency checks from the SUSY algebra and from unitarity of scale-
invariant theories. The fact that the supercharge $Q_{\alpha}$ has scaling
dimension $1/2$ implies that,
$\displaystyle{}[Q_{\alpha},\Delta]=-\frac{i}{2}Q_{\alpha}=\int
d^{3}x[Q_{\alpha},\Delta_{0}]$ (3.1)
By using the component expression for the FZ-multiplet in appendix A, one
obtains,
$\displaystyle{}\int
d^{3}x\left(\mathcal{O}_{0\alpha}-\frac{i}{2\sqrt{2}}(\sigma_{0}\bar{\psi})_{\alpha}-i(\sigma^{\nu}_{0})_{\alpha}^{\beta}S_{\mu\beta}-\frac{i}{2}S_{0\alpha}\right)=0$
(3.2)
where $\psi=\frac{\sqrt{2}}{3}(\sigma^{\mu}\bar{S}_{\mu})$ in the case of the
FZ-multiplet, and
$\mathcal{O}_{\mu\alpha}\equiv[Q_{\alpha},\mathcal{O}_{\mu}]$ as in Ref. [9].
Then (3.2) gives rise to,
$\displaystyle{}\mathcal{O}_{\mu\alpha}=\frac{i}{3}\sigma_{\mu\alpha\dot{\alpha}}\bar{\sigma}^{\nu\dot{\alpha}\beta}S_{\nu\beta}+(\sigma_{\mu}^{\nu})^{\beta\delta}\partial_{\nu}\gamma_{\beta\delta\alpha}+(\bar{\sigma}_{\mu}^{\nu})^{\dot{\beta}\dot{\delta}}\partial_{\nu}\gamma_{\dot{\beta}\dot{\delta}\alpha}$
(3.3)
where $\gamma_{\beta\delta\alpha}$ and
$\gamma_{\dot{\beta}\dot{\delta}\alpha}$ are local and gauge invariant
operators of dimension $5/2$.
In the case of conformal field theory, there exist well-known bounds on the
dimensions of local gauge-invariant operators [8]. A similar situation occurs
in the case of non-conformal fixed points [18], in terms of which the
operators $\gamma_{\beta\delta\alpha}$ and
$\gamma_{\dot{\beta}\dot{\delta}\alpha}$ (and higher-spin operators) are found
to satisfy [9],
$\displaystyle{}(\sigma_{\mu}^{\nu})^{\beta\delta}\partial_{\nu}\gamma_{\beta\delta\alpha}=0,~{}~{}~{}(\bar{\sigma}_{\mu}^{\nu})^{\dot{\beta}\dot{\delta}}\partial_{\nu}\gamma_{\dot{\beta}\dot{\delta}\alpha}=0$
(3.4)
Thus, one finds,
$\displaystyle{}\mathcal{O}_{\mu\alpha}=\frac{i}{3}\sigma_{\mu\alpha\dot{\alpha}}\bar{\sigma}^{\nu\dot{\alpha}\beta}S_{\nu\beta}+(\sigma_{\mu}^{\nu})^{\alpha}_{\beta}\partial_{\nu}\gamma_{\beta}$
(3.5)
To proceed, we impose the closure of the SUSY transformations to extract
possible information on $\gamma_{\beta}$ in (3.5) and its descendants. It is
straightforward to impose the constraints,
$\displaystyle{}(\eta^{\beta}\xi^{\alpha}-\xi^{\beta}\eta^{\alpha})\delta_{\beta}\delta_{\alpha}\mathcal{O}_{\mu}$
$\displaystyle=$ $\displaystyle 0$
$\displaystyle(\xi^{\alpha}\bar{\eta}_{\dot{\alpha}}\delta^{\dot{\alpha}}\delta_{\alpha}-\bar{\eta}_{\dot{\alpha}}\xi^{\alpha}\delta_{\alpha}\delta^{\dot{\alpha}})\mathcal{O}_{\mu}$
$\displaystyle=$ $\displaystyle
2i(\xi\sigma^{\nu}\bar{\eta})\partial_{\nu}\mathcal{O}_{\mu}$ (3.6)
$\displaystyle(\xi^{\alpha}\bar{\eta}_{\dot{\alpha}}\delta^{\dot{\alpha}}\delta_{\alpha}-\bar{\eta}_{\dot{\alpha}}\xi^{\alpha}\delta_{\alpha}\delta^{\dot{\alpha}})\gamma_{\beta}$
$\displaystyle=$ $\displaystyle
2i(\xi\sigma^{\nu}\bar{\eta})\partial_{\nu}\gamma_{\beta}$
which will give us some insight into the structure of $\gamma_{\beta}$. In
what follows, we adopt a set of definitions from Ref. [9],
$\displaystyle{}\delta_{\alpha}\gamma_{\beta}$ $\displaystyle=$ $\displaystyle
i\epsilon_{\alpha\beta}\gamma-(\sigma^{\mu\nu})_{\alpha\beta}\gamma_{\mu\nu},$
$\displaystyle\delta_{\dot{\alpha}}\gamma_{\beta}$ $\displaystyle=$
$\displaystyle(\sigma^{\mu})_{\beta\dot{\alpha}}\gamma_{\mu}$ (3.7)
where $\gamma$, $\gamma_{\mu}$ and $\gamma_{\mu\nu}$ are gauge-invariant
scalar, vector and anti-symmetric tensor operators, respectively.
In terms of the SUSY transformations in appendix A, we obtain from the first
constraint in (3.6),
$\displaystyle{}i\partial^{\nu}\gamma_{\nu\mu}+\frac{1}{3}\partial_{\mu}\gamma=0$
(3.8)
Note that (3.8) coincides with what was found in the case of the
$\mathcal{R}$-multiplet. Thus, as discussed in [9], we arrive at the
conclusion that the scalar $\gamma$ and the tensor field $\gamma_{\mu\nu}$
both vanish. In other words, $\gamma_{\alpha}$ is an anti-chiral superfield,
$\displaystyle{}D^{2}\mathcal{O}_{\mu}=\bar{D}^{2}\mathcal{O}_{\mu}=0$ (3.9)
Evaluating the second constraint in (3.6), one derives that,
$\displaystyle{}\partial_{\nu}\mathcal{O}_{\mu}=-\frac{2}{3}\eta_{\nu\mu}T-\frac{2}{3}\epsilon_{\mu\nu\rho\sigma}\partial^{\rho}j^{\sigma}-\frac{1}{4}\partial_{\nu}(\gamma_{\mu}+\bar{\gamma}_{\mu})+\frac{1}{4}\eta_{\nu\mu}\partial^{\rho}(\gamma_{\rho}+\bar{\gamma}_{\rho})-\frac{i}{4}\varepsilon_{\sigma\rho\mu\nu}\partial^{\sigma}(\gamma^{\rho}-\bar{\gamma}^{\rho})$ (3.10)
where we have used the anti-commutators of the supercharges and the
supercurrent. From (3.10) one finds the divergence of the virial current,
$\displaystyle{}\partial^{\mu}\mathcal{O}_{\mu}=-\frac{1}{20}\partial^{\mu}(\gamma_{\mu}+\bar{\gamma}_{\mu})$
(3.11)
in terms of the relation $T=-\partial^{\mu}\mathcal{O}_{\mu}$. Introducing a
superfield $\Gamma_{\alpha}$ which accommodates $\gamma_{\alpha}$ as its
bottom component, $\Gamma_{\alpha}=\gamma_{\alpha}+\cdots$, we can write
(3.11) in the superfield form
$\displaystyle{}\mathcal{O}_{\mu}$ $\displaystyle=$
$\displaystyle\frac{1}{40}\bar{\sigma}_{\mu}^{\dot{\alpha}\alpha}\left(\bar{D}_{\dot{\alpha}}\Gamma_{\alpha}-D_{\alpha}\bar{\Gamma}_{\dot{\alpha}}\right)+J_{\mu}$
(3.12)
Since the anti-symmetric part involving $\gamma_{\mu}$ in (3.10) does not
contribute to $\mathcal{O}_{\mu}$ in (3.12), one can accommodate this part as,
$\displaystyle{}U_{\mu}=-i(\gamma_{\mu}-\bar{\gamma}_{\mu})+\mathcal{\hat{O}}_{\mu}$
(3.13)
where $U_{\mu}$ and $\mathcal{\hat{O}}_{\mu}$ are the vector components of a
real superfield $U$ 333For illustration, a superfield $U$ that decomposes into
a chiral plus its anti-chiral superfield gives a null contribution, since then
$U_{\mu}\sim\partial_{\mu}(A-A^{*})$. and of a primary operator
$\mathcal{\hat{O}}$. Using the last constraint in (3.6), one finds
$\displaystyle{}\Gamma_{\alpha}=\frac{i}{2}D_{\alpha}U+\frac{1}{2}\mathcal{\hat{O}}_{\alpha}$
(3.14)
from which (3.12) can be rewritten as
$\displaystyle{}\mathcal{O}_{\mu}$ $\displaystyle=$
$\displaystyle-\frac{1}{2}\partial_{\mu}U+\frac{1}{8}\bar{\sigma}_{\mu}^{\dot{\alpha}\alpha}\left[D_{\alpha},\bar{D}_{\dot{\alpha}}\right]\mathcal{\hat{O}}$
(3.15)
where we have rescaled $\mathcal{O}_{\mu}$.
In summary, the virial current multiplet in a scale-invariant SUSY theory
satisfies
$\displaystyle{}0$ $\displaystyle=$ $\displaystyle
D^{2}\mathcal{O}_{\mu}=\bar{D}^{2}\mathcal{O}_{\mu}$
$\displaystyle\mathcal{O}_{\mu}$ $\displaystyle=$
$\displaystyle-\frac{1}{2}~{}\partial_{\mu}U+\frac{1}{8}\bar{\sigma}_{\mu}^{\dot{\alpha}\alpha}\left[D_{\alpha},\bar{D}_{\dot{\alpha}}\right]\mathcal{\hat{O}}$
(3.16)
## 4 Scale Invariance vs Conformal Invariance
According to (2.9), a scale-invariant SUSY theory can be improved to a
conformal one if and only if,
$\displaystyle{}\partial^{\mu}\mathcal{O}_{\mu}=\\{D^{2},\bar{D}^{2}\\}\hat{Y}$
(4.1)
with $\hat{Y}=Y+\bar{Y}$ a real superfield. Imposing the first constraint of
(3.16) on the second one, one immediately finds that
$\displaystyle{}\partial^{\mu}\mathcal{O}_{\mu}=-\frac{1}{4}~{}\Box~{}U$ (4.2)
If $U$ is conserved, the scale-invariant SUSY theory is actually conformal.
Otherwise, it is only scale invariant, and this is not affected by the
improvement (4.1), as we explain below.
Comparing (4.2) with (4.1), one gets the intuition that a scale-invariant SUSY
theory with non-conserved $U$ can be improved to be conformal if $U$
satisfies,
$\displaystyle{}\left(\Box~{}-\tilde{c}~{}\\{D^{2},\bar{D}^{2}\\}\right)U=0$
(4.3)
with an adjustable real coefficient $\tilde{c}$. The improvement also suggests
that $U$ is proportional to $\hat{Y}$. This means $\hat{Y}$ should satisfy
both (4.3) and $D^{2}\hat{Y}=0$ simultaneously. Substituting the latter into
(4.3) leads to,
$\displaystyle{}\Box~{}\hat{Y}=0~{}~{}~{}~{}~{}\Leftrightarrow~{}~{}~{}~{}~{}\Box~{}U=0$
(4.4)
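The step just taken from (4.3) to (4.4) can be made explicit (a short check
under the stated constraints): since $\hat{Y}$ is real, $D^{2}\hat{Y}=0$
implies the conjugate condition $\bar{D}^{2}\hat{Y}=0$ as well, so both terms
of the anti-commutator annihilate $\hat{Y}$,

```latex
\{D^{2},\bar{D}^{2}\}\,\hat{Y}
  = D^{2}\big(\bar{D}^{2}\hat{Y}\big) + \bar{D}^{2}\big(D^{2}\hat{Y}\big)
  = 0\,,
% so (4.3) collapses to \Box\,\hat{Y}=0; with U \propto \hat{Y} this is
% equivalent to \Box\,U=0, i.e. (4.4).
```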
In conclusion, when the virial current multiplet $\mathcal{O}_{\mu}$, defined
by the scale-invariant SUSY theory, does not contain a conserved $U$, the
theory cannot be improved to be conformal. Conversely, when such a $U$ is
conserved, the scale-invariant SUSY theory must also be conformal, regardless
of the improvement transformation. Together with the claim on the equivalence
between these two symmetries in the $R$-symmetric case [9], this completes the
understanding of their $4d$ SUSY version.
To illustrate the role played by the $R$ symmetry, let us take the SUSY Wess-
Zumino model as an example. Given the Kahler potential
$K(\Phi^{i},\bar{\Phi}^{i})$ and superpotential $W(\Phi_{i})$ for chiral
superfields $\Phi_{i}$, the FZ-multiplet and the superfield $X$ are given by,
$\displaystyle{}\mathcal{J}_{\alpha\dot{\alpha}}$ $\displaystyle=$
$\displaystyle
2g_{i\bar{i}}\left(D_{\alpha}\Phi^{i}\right)\left(\bar{D}_{\dot{\alpha}}\bar{\Phi}^{\bar{i}}\right)-\frac{2}{3}[D_{\alpha},\bar{D}_{\dot{\alpha}}]K,$
$\displaystyle X$ $\displaystyle=$ $\displaystyle 4W-\frac{1}{3}\bar{D}^{2}K$
(4.5)
If there is an $R$ symmetry in the SUSY Wess-Zumino model, $X$ can be written
in the specific form [14],
$\displaystyle
X=\bar{D}^{2}\left(\frac{1}{2}\sum_{i}R_{i}\Phi^{i}\partial_{i}K-\frac{1}{3}K\right)=-\frac{1}{2}\bar{D}^{2}\tilde{U},$
(4.6)
It is crucial to note that $U$ is identified with $\tilde{U}$, which indeed
decomposes into a chiral plus its anti-chiral part. Such a $U$, which is
constrained by $D^{2}U=\bar{D}^{2}U=0$, trivially satisfies (4.4).
Superficially, the super-conformality in this type of $R$-symmetric Wess-
Zumino model is restored by the improvement transformation. From the viewpoint
of the virial current multiplet, this improvement is actually irrelevant.
$\bf{Acknowledgement}$
This work is supported in part by the Doctoral Fund of Ministry of Education
of China (No. 20110191120045).
## Appendix A Commutators
The component expression for $\mathcal{S}_{\mu}$ that satisfies the constraint
$\bar{D}^{\dot{\alpha}}\mathcal{J}_{\alpha\dot{\alpha}}=D_{\alpha}X+\chi_{\alpha}$
is given by [14],
$\displaystyle{}\mathcal{J}_{\mu}$ $\displaystyle=$ $\displaystyle
j^{(S)}_{\mu}+\theta^{\alpha}(S_{\mu\alpha}-\frac{1}{\sqrt{2}}\sigma_{\mu}\bar{\psi})+\bar{\theta}(\bar{S}_{\mu}+\frac{1}{\sqrt{2}}\bar{\sigma}_{\mu}\psi)+\frac{i}{2}\theta^{2}\partial_{\mu}\phi^{{\dagger}}-\frac{i}{2}\bar{\theta}^{2}\partial_{\mu}\phi$
$\displaystyle+$
$\displaystyle(\theta\sigma^{\nu}\bar{\theta})\left(2T_{\nu\mu}-\eta_{\nu\mu}Z+\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}\left(F^{(S)\rho\sigma}+\partial^{\rho}j^{(S)\sigma}\right)\right)$
$\displaystyle+$
$\displaystyle\theta^{2}\left(\frac{i}{2}\partial_{\rho}S_{\mu}\sigma^{\rho}-\frac{i}{2\sqrt{2}}\partial_{\rho}\bar{\psi}\bar{\sigma}^{\rho}\sigma_{\mu}\right)\bar{\theta}+\bar{\theta}^{2}\theta\left(-\frac{i}{2}\sigma^{\rho}\partial_{\rho}S_{\mu}+\frac{i}{2\sqrt{2}}\sigma_{\mu}\bar{\sigma}^{\rho}\partial_{\rho}\psi\right)$
$\displaystyle+$
$\displaystyle\theta^{2}\bar{\theta}^{2}\left(\frac{1}{2}\partial_{\mu}\partial^{\nu}j_{\nu}^{(S)}-\frac{1}{4}\partial^{2}j^{(S)}_{\mu}\right)$ (A.1)
with
$\displaystyle{}X$ $\displaystyle=$
$\displaystyle\phi+\sqrt{2}\theta\psi+\theta^{2}\left(Z+i\partial^{\rho}j^{(R)}_{\rho}\right)$
$\displaystyle\chi_{\alpha}$ $\displaystyle=$
$\displaystyle-i\lambda^{(S)}_{\alpha}+\left(D\delta_{\alpha}^{\beta}-2i(\sigma^{\rho}\bar{\sigma}^{\sigma})_{\alpha}^{~{}\beta}~{}F^{(S)}_{\rho\sigma}\right)\theta_{\beta}+\theta^{2}\sigma_{\nu\alpha\dot{\alpha}}\partial_{\nu}\bar{\lambda}^{(S)\dot{\alpha}}$
(A.2)
The component fields also satisfy two extra constraints,
$\displaystyle{}D=-4T^{\mu}_{\mu}+6Z,~{}~{}~{}~{}~{}~{}\lambda_{\alpha}^{(S)}=-2i\sigma^{\mu}\bar{S}_{\mu}+3i\sqrt{2}\psi.$
(A.3)
From the component expansion above, one can derive the SUSY transformation of
the supercurrent $S_{\mu\alpha}$,
$\displaystyle{}\delta_{\dot{\beta}}S_{\mu\alpha}$ $\displaystyle=$
$\displaystyle\sigma^{\nu}_{\alpha\dot{\beta}}\left(2T_{\nu\mu}-i\eta_{\nu\mu}\partial^{\rho}j^{(S)}_{\rho}+i\partial_{\nu}j^{(S)}_{\mu}-\frac{1}{2}\epsilon_{\nu\mu\rho\sigma}F^{(S)\rho\sigma}-\frac{1}{2}\epsilon_{\nu\mu\rho\sigma}\partial^{\rho}j^{(S)\sigma}\right)$
$\displaystyle\delta_{\beta}S_{\mu\alpha}$ $\displaystyle=$
$\displaystyle-2\varepsilon_{\lambda\beta}(\sigma_{\mu\rho})^{\lambda}_{\alpha}\partial^{\rho}\phi^{*}$
(A.4)
as well as their conjugates,
$\displaystyle{}\delta_{\beta}\bar{S}_{\mu\dot{\alpha}}$ $\displaystyle=$
$\displaystyle\sigma^{\nu}_{\beta\dot{\alpha}}\left(2T_{\nu\mu}+i\eta_{\nu\mu}\partial^{\rho}j^{(S)}_{\rho}-i\partial_{\nu}j^{(S)}_{\mu}-\frac{1}{2}\epsilon_{\nu\mu\rho\sigma}F^{(S)\rho\sigma}-\frac{1}{2}\epsilon_{\nu\mu\rho\sigma}\partial^{\rho}j^{(S)\sigma}\right)$
$\displaystyle\delta_{\dot{\beta}}\bar{S}_{\mu\dot{\alpha}}$ $\displaystyle=$
$\displaystyle-2\varepsilon_{\dot{\lambda}\dot{\beta}}(\bar{\sigma}_{\mu\rho})^{\dot{\lambda}}_{\dot{\alpha}}\partial^{\rho}\phi.$
(A.5)
We can obtain the corresponding relations for the case of the FZ multiplet by
taking the limit $\chi_{\alpha}=0$. The anti-commutator relations (A.4) and
(A.5) are modified to,
$\displaystyle{}\delta_{\dot{\beta}}S_{\mu\alpha}$ $\displaystyle=$
$\displaystyle\sigma^{\nu}_{\alpha\dot{\beta}}\left(2T_{\nu\mu}-i\eta_{\nu\mu}\partial^{\rho}j_{\rho}+i\partial_{\nu}j_{\mu}-\frac{1}{2}\epsilon_{\nu\mu\rho\sigma}\partial^{\rho}j^{\sigma}\right)$
$\displaystyle\delta_{\beta}S_{\mu\alpha}$ $\displaystyle=$
$\displaystyle-2\varepsilon_{\lambda\beta}(\sigma_{\mu\rho})^{\lambda}_{\alpha}\partial^{\rho}\phi^{*}$
(A.6)
and
$\displaystyle{}\delta_{\beta}\bar{S}_{\mu\dot{\alpha}}$ $\displaystyle=$
$\displaystyle\sigma^{\nu}_{\beta\dot{\alpha}}\left(2T_{\nu\mu}+i\eta_{\nu\mu}\partial^{\rho}j_{\rho}-i\partial_{\nu}j_{\mu}-\frac{1}{2}\epsilon_{\nu\mu\rho\sigma}\partial^{\rho}j^{\sigma}\right)$
$\displaystyle\delta_{\dot{\beta}}\bar{S}_{\mu\dot{\alpha}}$ $\displaystyle=$
$\displaystyle-2\varepsilon_{\dot{\lambda}\dot{\beta}}(\bar{\sigma}_{\mu\rho})^{\dot{\lambda}}_{\dot{\alpha}}\partial^{\rho}\phi.$
(A.7)
Here we have used $D=0$ and $\lambda^{(S)}_{\alpha}=0$ in (A.3) to eliminate
the dependence on the $\psi$ and $Z$ fields, from which the conservation law
$\partial^{\mu}S_{\mu\alpha}=0$ is consistent with these anti-commutators.
## References
* [1] A. B. Zamolodchikov, “Irreversibility of the Flux of the Renormalization Group in a 2D Field Theory,” JETP Lett. 43, 730 (1986) [Pisma Zh. Eksp. Teor. Fiz. 43, 565 (1986)].
* [2] J. Polchinski, “Scale and Conformal Invariance in Quantum Field Theory,” Nucl. Phys. B 303, 226 (1988).
* [3] C. G. Callan, S. R. Coleman and R. Jackiw, “A New Improved Energy-Momentum Tensor,” Annals Phys. 59, 42 (1970).
* [4] D. Dorigoni and V. S. Rychkov, “Scale Invariance + Unitarity $\Rightarrow$ Conformal Invariance?” [arXiv:0910.1087].
* [5] D. Gaiotto, “N=2 dualities,” [arXiv:0904.2715].
* [6] P. C. Argyres, M. Ronen Plesser, N. Seiberg and E. Witten, “New N=2 Superconformal Field Theories in Four Dimensions,” Nucl. Phys. B 461, 71 (1996) [arXiv:hep-th/9511154].
* [7] N. Seiberg, “Electric-magnetic duality in supersymmetric nonAbelian gauge theories,” Nucl. Phys. B 435, 129 (1995) [arXiv:hep-th/9411149].
* [8] G. Mack, “All Unitary Ray Representations of the Conformal Group SU(2,2) with Positive Energy,” Commun. Math. Phys. 55, 1 (1977).
* [9] I. Antoniadis and M. Buican, “On R-symmetric Fixed Points and Superconformality,” [arXiv:1102.2294].
* [10] V. Ogievetsky and E. Sokatchev, “Supercurrent,” Sov. J. Nucl. Phys. 28, 423 (1978) [Yad. Fiz. 28, 825 (1978)].
* [11] V. Ogievetsky and E. Sokatchev, “On vector superfield generated by supercurrent,” Nucl. Phys. B 124, 309 (1977).
* [12] S. Ferrara and B. Zumino, “Transformation Properties of the Supercurrent,” Nucl. Phys. B 87, 207 (1975).
* [13] K. S. Stelle and P. C. West, “Minimal Auxiliary Fields for Supergravity,” Phys. Lett. B 74, 330 (1978).
* [14] Z. Komargodski and N. Seiberg, “Comments on Supercurrent Multiplets, Supersymmetric Field Theories and Supergravity,” JHEP 07, 017 (2010) [arXiv:1002.2228].
* [15] T. E. Clark, O. Piguet and K. Sibold, “Supercurrents, Renormalization and Anomalies,” Nucl. Phys. B 143, 445 (1978).
* [16] S. Zheng and J. Huang, “Variant supercurrent and Linearized Supergravity,” Class. Quant. Grav. 28, 075012 (2011) [arXiv:1007.3092].
* [17] J. Wess and J. Bagger, “Supersymmetry and Supergravity,” Princeton Univ. Press (1992).
* [18] B. Grinstein, K. A. Intriligator and I. Z. Rothstein, “Comments on Unparticles,” Phys. Lett. B 662, 367 (2008) [arXiv:0801.1140].
|
arxiv-papers
| 2011-03-21T09:26:52 |
2024-09-04T02:49:17.809761
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Sibo Zheng",
"submitter": "Sibo Zheng",
"url": "https://arxiv.org/abs/1103.3948"
}
|
1103.3977
|
# GW Invariants Relative Normal Crossings Divisors
Eleny-Nicoleta Ionel
Stanford University
Stanford, CA 94305
Research supported in part by the NSF grants DMS-0605003 and DMS-0905738.
###### Abstract
In this paper we introduce a notion of symplectic normal crossings divisor $V$
and define the GW invariant of a symplectic manifold $X$ relative such a
divisor. Our definition includes normal crossings divisors from algebraic
geometry. The invariants we define in this paper are key ingredients in
symplectic sum type formulas for GW invariants, and extend those defined in
our previous joint work with T.H. Parker [IP1], which covered the case when
$V$ is smooth. The main step is the construction of a compact moduli space of
relatively stable maps into the pair $(X,V)$ in the case $V$ is a symplectic
normal crossings divisor in $X$.
## 0 Introduction
In previous work with Thomas H. Parker [IP1] we constructed the relative
Gromov-Witten invariant $GW(X,V)$ of a closed symplectic manifold $X$ relative
a smooth “divisor” $V$, that is, a (real) codimension 2 symplectic
submanifold. These relative invariants are defined by choosing an almost
complex structure $J$ on $X$ that is compatible with both $V$ and the
symplectic form, and counting $J$-holomorphic maps that intersect $V$ with
specified multiplicities. An important application is the symplectic sum
formula that relates the GW invariant of a symplectic sum $X\\#_{V}Y$ to the
relative GW invariants of $(X,V)$ and $(Y,V)$ (see [IP2] and the independent
approaches [LR], [Li] and [EGH]).
In this paper we introduce a notion of symplectic normal crossings divisor $V$
and define the GW invariant of a symplectic manifold $X$ relative such a
divisor. Roughly speaking, a set $V\subset X$ is a symplectic normal crossings
divisor if it is locally the transverse intersection of codimension 2
symplectic submanifolds compatible with $J$ (the precise definition is given
in Section 1).
There are many reasons why one would want to extend the definition of relative
GW invariants to include normal crossings divisors, and we already have
several interesting applications in mind. One is a Mayer-Vietoris type formula
for the GW invariants: a formula describing how the GW invariants behave when
$X$ degenerates into several components and that allows one to recover the
invariants of $X$ from those of the components of the limit. The simplest such
degenerations come from the symplectic sum along a smooth divisor. But if one
wants to iterate this degeneration, one is immediately confronted with several
pieces whose intersections are no longer smooth, but are instead normal
crossings divisors. Normal crossings divisors appear frequently in algebraic
geometry, not only as the central fiber of a stable degeneration but also for
example as the toric divisor in a toric manifold which then appears in the
context of mirror symmetry. We also have some purely symplectic applications
in mind in which normal crossings divisors arise from Donaldson’s theorem;
these will appear in a subsequent paper.
The general approach in this paper is to appropriately adapt the ideas in
[IP1] but now allow the divisor to have a simple type of singularity, which we
call symplectic normal crossings. This is defined in Section 1, where we also
present many of the motivating examples. The notion of simple singularity is
of course relative: the main issue here is to be able to control the analysis
of the problem; the topology of the problem, though perhaps much more
complicated, is essentially of a combinatorial nature, so it is much more
easily controlled.
There are several new features and problems that appear when the divisor $V$
has such a singular locus. First, one must include in the moduli space
holomorphic curves that intersect the singular locus, and one must properly
record the contact information about such intersections. In Section 2 we
describe how to do this and construct the corresponding moduli space ${\cal
M}_{s}(X,V)$ of stable maps into $X$ whose contact intersection with $V$ is
described by the sequence $s$. There is a lot of combinatorics lurking in the
background that keeps track of the necessary topological information along the
singular locus, which could make the paper unnecessarily long. We have
decided to keep the notation throughout the paper to a minimum, and expand its
layers only as needed for accuracy in each section. We give simple examples of
why certain situations have to be considered, explain in that simple example
what needs to be done, and only after that proceed to describe how such
situations can be handled in general. In the Appendix we describe various
needed stratifications associated to a normal crossings divisor, and
topological data associated to it.
The other more serious problem concerns the construction of a compactification
$\overline{{\cal M}}_{s}(X,V)$ of the relative moduli space. In the usual
Gromov compactification of stable maps into $X$, a sequence of holomorphic
maps that have a prescribed contact to $V$ may limit to a map that has
components in $V$, or even worse in the singular locus of $V$; then not only
is the contact information lost in the limit, but the formal dimension of the
corresponding boundary stratum of the stable map compactification is greater
than the dimension of the moduli space. This problem already appeared for the
moduli space relative to a smooth divisor, where the solution was to rescale the
target normal to $V$ to prevent components from sinking into $V$; but now the
problem is further compounded by the presence of the singular locus of $V$. So
the main issue is how to precisely refine the Gromov compactness and
construct an appropriate relatively stable map compactification
$\overline{{\cal M}}_{s}(X,V)$ in such a way that its boundary strata are not
of larger dimension than the interior.
In his unpublished Ph.D. thesis, Joshua Davis [Da] described how one can
construct a relatively stable map compactification for the space of genus zero
maps relative a normal crossings divisor, by recursively blowing up the
singular locus of the divisor. As components sank into this singular locus, he
recursively blew it up to prevent this from happening. This works for genus
zero, but unfortunately not in higher genus. The main reason for this is that
in genus zero a dimension count shows that components sinking into $V$ cause
no problem, only those sinking into the singular locus of $V$ do. However,
that is not the case in higher genus, so then one would also need to rescale
around $V$ to prevent this type of behavior. But then the process never
terminates: Davis had a simple example in higher genus where a component would
sink into the singular locus. Blowing up the singular locus forced the
component to fall into the exceptional divisor. Rescaling around the
exceptional divisor then forced the component to fall back into the next
singular locus, etc.
In this paper we present a different way to construct a relatively stable
map compactification $\overline{{\cal M}}_{s}(X,V)$, by instead rescaling $X$
simultaneously normal to all the branches of $V$, a procedure we describe in
Section 3. When done carefully, this is essentially a souped-up version of the
rescaling procedure described in [IP1] in the case when $V$ is smooth.
Unfortunately, the naive compactification that one would get by simply
importing the description from [IP1] does not work when the
singular locus of $V$ is nonempty! There are two main reasons for its failure:
first, the ``boundary stratum'' containing curves with
components over the singular locus is again of larger dimension than the
``interior,'' so it is in some sense too big; second, it still
does not capture all the limits of curves sinking into the singular locus, so
it is too small! This seems to lead to a dead end, but upon further analysis
of the limiting process near the singular locus, carried out in Sections 5 and
6, two new features appear that allow us to proceed.
The first new feature is the enhanced matching condition that the limit curves
must satisfy along the singular locus of $V$. It turns out that not all the
curves which satisfy the naive matching conditions can appear as limits of
maps in ${\cal M}_{s}(X,V)$. The naive matching conditions require that the
curves intersect $V$ in the same points with matching order of contact, as was
the case in [IP1], while the enhanced ones along the singular locus require in
some sense that their slopes in the normal directions to $V$ also match. So
the enhanced matching conditions also involve the leading coefficients of the
maps in these normal directions, and so they give conditions in a certain
weighted projectivization of the normal bundle to the singular locus, a simple
form of which is described in Section 4. Luckily, this is enough to cut back
down the dimensions of the boundary to what should be expected. In retrospect,
these enhanced matching conditions already appeared in one of the key lemmas
in our second joint paper [IP2] with Thomas H. Parker on the symplectic sum
formula, but they do not play any role in the first paper [IP1] because they
are automatically satisfied when $V$ is smooth.
The second new feature that appears when $V$ is singular is that unfortunately
one cannot avoid trivial components stuck in the neck (over the singular locus
of $V$), as we show in simple examples at the end of Section 4. This
makes the enhanced matching conditions much more tricky to state, essentially
because these trivial components do not have the right type of leading
coefficients. The solution to this problem is to realize that the trivial
components are there only to make the maps converge in Hausdorff distance to
their limit, and in fact they do not play any essential role in the
compactification, so one can simply collapse them in the domain, at the
expense of allowing a node of the collapsed domain to be between not
necessarily consecutive levels. The enhanced matching condition then occurs
only at nodes between two nontrivial components, but needs to take into
account this possible jump across levels. It is described more precisely in
Section 5.
This finally allows us to define in Section 6 the compactified moduli space
$\overline{{\cal M}}_{s}(X,V)$ of relatively $V$-stable maps into $X$, which
comes together with a continuous map
$\displaystyle\mbox{\rm st}\times\mbox{\rm Ev}:\overline{{\cal
M}}_{s}(X,V)\rightarrow\overline{{\cal
M}}_{\chi_{s},\ell(s)}\times\prod_{x}{\mathbb{P}}_{s(x)}(NV_{I(x)})$ (0.1)
The first factor is the usual stabilization map recording the domain of $f$,
which may be disconnected, but the new feature is the second factor Ev. It is
a refinement of the usual (naive) evaluation map ev at the points $x$ that are
mapped into the singular locus of $V$, and it also records the weighted
projectivization of the leading coefficients of $f$ at $x$ in all the normal
directions to $V$ at $f(x)$. This is precisely the map that appears in the
enhanced matching conditions.
In Section 7 we then show that for generic $V$-compatible $(J,\nu)$ the image
of $\overline{{\cal M}}_{s}(X,V)$ under the map (0.1) indeed defines a
homology class $GW_{s}(X,V)$ in dimension
$\displaystyle{\rm dim\;}\overline{{\cal M}}_{s}(X,V)=2c_{1}(TX)A_{s}+({\rm
dim\;}X-6)\frac{\chi_{s}}{2}+2\ell(s)-2A_{s}\cdot V$
called the GW invariant of $X$ relative to the normal crossings divisor $V$. The
class $GW_{s}(X,V)$ is independent of the perturbation $\nu$ and is in fact an
invariant under smooth deformations of the pair $(X,V)$ and of $(\omega,J)$
through $V$-compatible structures. When $V$ is smooth these invariants agree
with the usual relative GW invariants as constructed in [IP1].
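As a quick consistency check, the dimension formula can be evaluated in a simple instance. The following computation is our own illustration, assuming ${\rm dim}\,X$ denotes the real dimension and $\chi_{s}$ the Euler characteristic of the domain:

```latex
% X = CP^2, V a smooth cubic (so A_s . V = 3d), A_s = d[line],
% connected genus-zero domain (chi_s = 2), with ell(s) contact points:
\dim\overline{\mathcal M}_s(X,V)
  = 2c_1(TX)A_s + (\dim X - 6)\tfrac{\chi_s}{2} + 2\ell(s) - 2A_s\cdot V
  = 6d + (4-6)\cdot 1 + 2\ell(s) - 6d
  = 2\ell(s) - 2.
% For d = 1 and a single point of maximal contact order s_1 = 3, this gives a
% 0-dimensional moduli space, consistent with a finite enumerative count
% (lines tangent to the cubic at a flex).
```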
There is a string of very recent preprints that have some overlap with the
situation considered in our paper, in that they all generalize in some way the
normal crossings situation from algebraic geometry. First of all, there is
certainly an overlap between what we call a symplectic normal crossings
divisor in this paper and what fits into the exploded manifold setup
considered by Brett Parker [P]. There is also some overlap with the
logarithmic Gromov-Witten invariants [GrS] considered by Gross and Siebert in
the context of algebraic geometry (see also the Abramovich-Chen paper [AC] on a
related topic). However, the precise local structure near the divisor is very
different: log geometry vs symplectic normal crossings vs exploded structures.
Furthermore, the moduli spaces constructed in these papers and in particular
their compactifications are completely different, even when applied to the
common case when $V$ is a smooth divisor in a smooth projective variety, see
Remarks 1.15 and 1.16 for more details. This means that a priori, even in this
common case, each one of these other approaches may lead to different
invariants, some even different from the usual relative GW invariants.
This paper is based on notes from a talk the author gave in September 2006 in
the Moduli Space Program at the Mittag-Leffler Institute, during a month-long stay
there. The notes were expanded in the fall of 2009 during the Symplectic and
Contact Geometry and Topology program at MSRI. We thank both research
institutes for their hospitality.
## 1 Symplectic normal crossings divisors
In this section we define a notion of symplectic normal crossings divisors,
which encodes the geometrical information required for the analysis of [IP1]
and [IP2] to extend after appropriate modifications. In particular this notion
generalizes the notion of normal crossings divisor in algebraic geometry.
Clearly the local model of such a divisor $V$ should be the union of $k$
coordinate planes in ${\mathbb{C}}^{n}$, where the number of planes may vary
from point to point. But we also need a local model for the symplectic form
$\omega$ and the tamed almost complex structure $J$ near such a divisor. We will
therefore require that each branch of $V$ be both $\omega$-symplectic and
$J$-holomorphic. This will allow us to define the order of contact of
$J$-holomorphic curves to $V$. We also need a good description of the normal
directions to the divisor, because these are going to be the directions in
which the manifold $X$ will be rescaled when components of the holomorphic
curves fall into $V$. In particular, we need to keep track of both the normal
bundle to each branch of $V$ and its inclusion into $X$ which describes the
neighborhood of that branch.
###### Definition 1.1
(local model). In ${\mathbb{C}}^{n}$ consider the union $V$ of $k\geq 0$
(distinct) coordinate hyperplanes $H_{i}=\\{x|x_{i}=0\\}$ in
${\mathbb{C}}^{n}$, together with their normal directions $N_{i}$ given by the
usual projection $\pi_{i}:{\mathbb{C}}^{n}\longrightarrow H_{i}$ onto the
coordinate plane and the usual inclusion
$\iota_{i}:N_{i}\rightarrow{\mathbb{C}}^{n}$. We say that they form a model for a
normal crossings divisor in ${\mathbb{C}}^{n}$ with respect to a pair
$(\omega,J)$ if all the divisors $H_{i}$ are both $\omega$-symplectic and
$J$-holomorphic.
###### Remark 1.2
There is a natural action of ${\mathbb{C}}^{*}$ on the model induced by
scaling by a factor of $t^{-1}$ in the normal direction to each $H_{i}$, for
$i=1,\dots,k$. This defines a rescaling map
$R_{t}:{\mathbb{C}}^{n}\rightarrow{\mathbb{C}}^{n}$ for
$t\in{\mathbb{C}}^{*}$. By construction, $R_{t}$ leaves the divisors
$H_{i}$ invariant, though not pointwise, and may not preserve $J$. However, as
$t\rightarrow 0$, $R_{t}^{*}J$ converges uniformly on compacts to a
${\mathbb{C}}^{*}$ invariant limit $J_{0}$ which depends on the 1-jet of $J$
along the divisor.
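In the coordinates of Definition 1.1, with the convention (our assumption) that the first $k$ coordinates are the normal directions to the branches, the rescaling map can be written explicitly:

```latex
% V = H_1 \cup \dots \cup H_k with H_i = {x_i = 0}; the normal coordinate to
% H_i is x_i, and R_t scales each of them by t^{-1}:
R_t(x_1,\dots,x_k,\,x_{k+1},\dots,x_n)
  = (t^{-1}x_1,\dots,t^{-1}x_k,\;x_{k+1},\dots,x_n),
  \qquad t\in\mathbb{C}^*.
% Each H_i is preserved as a set, but a point of H_i with some other normal
% coordinate x_j \ne 0 (j \le k) is moved, so the action on H_i is not
% pointwise trivial.
```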
###### Definition 1.3
Assume $(X,\omega,J)$ is a symplectic manifold with a tamed almost complex
structure. $V$ is called a normal crossings divisor in $(X,\omega,J)$ with
normal bundle $N$ if there exists a smooth manifold $\widetilde{V}$ with a
complex line bundle $\pi:N\rightarrow\widetilde{V}$ and an immersion
$\iota:U_{V}\rightarrow X$ of some disk bundle $U_{V}$ of $N$ into $X$
satisfying the following properties:
* •
$V$ is the image of the zero section $\widetilde{V}$ of $N$
* •
the restriction of $\iota^{*}J$ to the fiber of $N$ along the zero section
induces the complex multiplication in the bundle $N$.
* •
at each point $p\in X$ we can find local coordinates on $X$ in which the
configuration $(X,\pi,\iota,V)$ becomes identified with one of the local
models in Definition 1.1.
Such a pair $(J,\omega)$ is called adapted to the divisor $V$.
Note that $\iota$ induces by pullback from $X$ both a symplectic structure
$\omega$ and an almost complex structure $J$ on the total space of the disk
bundle in $N$ over $\widetilde{V}$, which will serve as a global model of
$X$ near $V$. Its zero section $\widetilde{V}$ is both symplectic and
$J$-holomorphic and serves as a smooth model of the divisor $V$ (called the
normalization of $V$). $N$ is also a complex line bundle whose complex
structure comes from the restriction of $J$ along the zero section. Thus $N$
also comes with a ${\mathbb{C}}^{*}$ action which will be used to rescale $X$
normal to $V$.
###### Remark 1.4
We are not requiring $J$ to be locally invariant under this ${\mathbb{C}}^{*}$
action. We also are not imposing the condition that the branches are
perpendicular with respect to $\omega$ or that the projections $\pi_{i}$ are
$J$-holomorphic. We also allow for self intersections of various components of
$V$. When each component of $V$ is a submanifold of $X$ the divisor is said to
have simple normal crossings singularities. Any of these assumptions would
simplify some of the arguments, but none of them is needed.
In this paper we will work only with $J$'s which are compatible with $V$ in the
sense of Definition 3.2 of [IP1]. This is a condition on the normal 1-jet of
$J$ along $V$:
1. (b)
$[(\nabla_{\xi}J+J\nabla_{J\xi}J)(v)]^{N}=[(\nabla_{v}J)\xi+J(\nabla_{Jv}J)\xi]^{N}$
for all $v\in TV$, $\xi\in NV$;
discussed in more detail in the Appendix. This extra condition is needed to
ensure that the stable map compactification has codimension 2 boundary strata,
so it gives an invariant, independent of parameters. A priori, even when $V$
is smooth the relatively stable map compactification may have real codimension
1 boundary without this extra assumption.
###### Example 1.5
A large class of examples of normal crossings is provided by algebraic
geometry. Assume $X$ is a smooth projective variety and $V$ a complex
codimension 1 subvariety which is locally a transverse union of smooth
divisors. In particular $V$ could be the transverse intersection of smooth
divisors, in which case $V$ is said to have simple normal crossings, but in
general the divisors may also self intersect. Then $V$ is a symplectic normal
crossings divisor for $(X,J_{0},\omega_{0})$, where $J_{0}$ is the integrable
complex structure and $\omega_{0}$ the Kähler form. For example, (a) $X$ could
be a Hirzebruch surface and $V$ the union of the zero section, the infinity
section and several fibers; or (b) $V$ could be the union of a section and a
nodal fiber in an elliptic surface $X$.
An important example of this type is when $X$ is a toric manifold and $V$ is
its toric divisor, which is a case considered in mirror symmetry, see for
example [Au2].
###### Example 1.6
Another particular example to keep in mind is $X={\mathbb{C}}{\mathbb{P}}^{2}$
with a degree 3 normal crossings divisor $V$. For example $V$ could be a
smooth elliptic curve, or $V$ could be a nodal sphere, or finally $V$ could be
a union of 3 distinct lines. In the second case the normalization
$\widetilde{V}$ is ${\mathbb{C}}{\mathbb{P}}^{1}$ with normal bundle $O(7)$
while in the last case it is
${\mathbb{C}}{\mathbb{P}}^{1}\sqcup{\mathbb{C}}{\mathbb{P}}^{1}\sqcup{\mathbb{C}}{\mathbb{P}}^{1}$,
each component with normal bundle $O(1)$. Of course, in a complex 1-parameter
family, a smooth degree three curve can degenerate into either one of the
other two cases.
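The normal bundle degrees quoted above can be recovered from a standard intersection count, included here for the reader's convenience (the formula for an immersed curve with nodes is classical):

```latex
% For an immersed symplectic surface C in a 4-manifold with delta transverse
% double points, the normal bundle N of the normalization satisfies
\deg N = C\cdot C - 2\delta.
% Nodal cubic in CP^2:  deg N = 9 - 2\cdot 1 = 7,  so N = O(7).
% Each line in the union of three lines:  deg N = 1 - 0 = 1,  so N = O(1)
% on each CP^1 component of the normalization.
```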
Another motivating example of this type comes from a smooth quintic 3-fold
degenerating to a union of 5 hyperplanes in ${\mathbb{C}}{\mathbb{P}}^{4}$.
###### Remark 1.7
Yet another special case is $X=\overline{{\cal M}}_{0,n}$ the Deligne-Mumford
moduli space of stable genus $0$ curves and $V$ the union of all its boundary
strata (i.e. the locus of nodal curves). The usual description of each
boundary stratum and of its normal bundle provides the required local models
for a symplectic normal crossings divisor. This discussion can also be
extended to the orbifold setting to cover the higher genus case
$\overline{{\cal M}}_{g,n}$ and certainly covers its smooth finite cover, the
moduli space of Prym curves, constructed by Looijenga [Lo].
Of course, there are many more symplectic examples besides those coming from
algebraic geometry.
###### Example 1.8
Assume $V$ is a symplectic codimension two submanifold of $(X,\omega)$. The
symplectic neighborhood theorem then allows us to find a $J$ and a model for
the normal direction to $V$, so $V$ is a normal crossings divisor in
$(X,\omega,J)$. Of course in this case $V$ is a smooth divisor, so it has
empty singular locus.
One may have hoped that the union of several transversely intersecting
codimension two symplectic submanifolds would also similarly be a normal
crossings divisor. Unfortunately, if the singular locus is not empty, that may
not be the case, as illustrated by the example below.
###### Example 1.9
Let $V_{1}$ be an exceptional divisor in a symplectic 4-manifold and $V_{2}$ a
sufficiently small generic perturbation of it, thus still a symplectic
submanifold, intersecting $V_{1}$ transversely. This configuration cannot be
given the structure of a normal crossings divisor, simply because one cannot
find a $J$ which preserves both. If such a $J$ existed, then all the local
intersections between the $J$-holomorphic submanifolds $V_{1}$ and $V_{2}$
would be positive; but since $V_{2}$ is homologous to $V_{1}$, their total
intersection number equals $V_{1}\cdot V_{1}$, contradicting the fact that
exceptional divisors have negative self-intersection.
This example illustrates the fact that a normal crossings divisor is not a
purely symplectic notion, but rather one also needs the existence of a tamed
almost complex structure $J$ adapted to the crossings. The positivity of
intersections of all branches is a necessary condition for such a $J$ to exist
in general.
###### Remark 1.10
One could ask for necessary and sufficient conditions for $V$ inside
a symplectic manifold $(X,\omega)$ to be a normal crossings divisor with
respect to some $J$ on $X$. Clearly $V$ should be locally the transverse
intersection of symplectic submanifolds, and furthermore this intersection
should be positive. If we assume moreover that the branches of $V$ are
orthogonal with respect to $\omega$, then the existence of a compatible $J$ is
straightforward (see the end of the Appendix).
###### Example 1.11
Symplectic Lefschetz pencils or fibrations provide another source of examples
of symplectic normal crossings divisors. Assume $X$ is a symplectic manifold
which has a symplectic Lefschetz fibration with a symplectic section, for
example one coming from Donaldson's theorem [Do2], where the section comes from
blowing up the base locus. Gompf [GoS] showed that in this case there is an
almost complex structure $J$ adapted to this fibration. We could then take $V$
to be the union of the section with a collection of fibers, possibly including
some singular fibers.
###### Example 1.12
(Donaldson divisors) Assume $V$ is a normal crossings divisor in
$(X,\omega,J)$ and that $[\omega]$ has rational coefficients. We can use
Donaldson's theorem [Do] to obtain a smooth divisor $D$ representing the
Poincaré dual of $k[\omega]$ for sufficiently large $k$, such that $D$ is
$\varepsilon$-$J$-holomorphic and $\eta$-transverse to $V$ (see also [Au]).
Choosing carefully the parameters $\eta$ and $\varepsilon$, one can then find
a sufficiently small deformation of $J$ such that $V\cup D$ is also a normal
crossings divisor.
###### Remark 1.13
The definition of a normal crossings divisor works well under taking products
of symplectic manifolds with divisors in them. If $V_{i}$ is a normal
crossings divisor in $X_{i}$ for $i=1,2$ then
$\pi_{1}^{-1}(V_{1})\cup\pi_{2}^{-1}(V_{2})=V_{1}\times X_{2}\cup X_{1}\times
V_{2}$ is a normal crossings divisor in $X_{1}\times X_{2}$, with normal model
$\pi_{1}^{*}N_{1}\sqcup\pi_{2}^{*}N_{2}$. Note that even if the $V_{i}$ are
smooth divisors, the divisor in the product $X_{1}\times X_{2}$ is still
singular along $V_{1}\times V_{2}$.
###### Remark 1.14
The definition of a normal crossings divisor also behaves well under
symplectic sums. Assume $U_{i}\cup V$ is a symplectic divisor in $X_{i}$ for
$i=1,2$, such that the normal bundles of $V$ in $X_{i}$ are dual. If the
$U_{i}$ intersect $V$ in the same divisor $W$, then Gompf's argument [Go]
shows that the divisors $U_{i}$ glue to give a normal crossings divisor
$U_{1}\\#_{W}U_{2}$ in the symplectic sum $X_{1}\\#_{V}X_{2}$.
###### Remark 1.15
A special case of symplectic normal crossings divisor $V$ (with simple
crossings) is the union of codimension 2 symplectic submanifolds which
intersect orthogonally with respect to $\omega$, and whose local model matches that of
toric divisors in a toric manifold. This is a case that fits in the exploded
manifold set-up of Brett Parker (see Example 5.3 in the recent preprint [P]),
so in principle one should be able to compare the relative invariants we
construct in this paper with the exploded ones of [P]. It is unclear to us
exactly what information the exploded structure records in this
case, and what the precise relation between the two moduli spaces is. But
certainly the relatively stable map compactification we define in this paper
seems to be quite different from the exploded one, so it is unclear whether
they give the same invariants, even in the case when $V$ is smooth.
###### Remark 1.16
In a related recent preprint, Gross and Siebert define log GW invariants in
the algebraic geometry setting [GrS]. If $V$ is a normal crossings divisor in
a smooth projective variety $X$, then it induces a log structure on $X$.
However, even in the case when $V$ is a smooth divisor, Gross and Siebert
explain that the stable log compactification they construct is quite different
from the relatively stable map compactification constructed earlier by
J. Li [Li] (which agrees with that of [IP1] in this context). So a
priori, even when $V$ is smooth, the usual relative GW invariants may be
different from the log GW invariants of [GrS]. The authors mention however
that in that case at least there is a map from the moduli space of stable
relative maps to that of stable log maps, which they claim could be used to
prove that the invariants are the same, though no proof of this claim is
available yet. Presumably there is also a map from the relatively stable map
compactification that we construct in this paper to the appropriate stable log
compactification in the more general case when $V$ is a normal crossings
divisor in a smooth projective variety.
In a related paper [AC] Abramovich and Chen also explain how, in the context
of algebraic geometry, the construction of a log moduli space when $V$ is a
normal crossings divisor (with simple crossings) follows from the case when
$V$ is smooth by essentially functorial reasons. Again, it is unclear to us
how exactly the two notions of log stable maps of [GrS] and [AC] are related
in this case.
###### Remark 1.17
One note about simple normal crossings vs. general normal crossings: general
crossings do complicate the topology/combinatorics of the situation, but if
set up carefully the analysis is unaffected. If the local model of $X$ is
holomorphic near $V$ (as is the case in the last two examples above), even if
$V$ did not have
simple crossings, one could always blow up the singular locus $W$ of $V$ to
get a total divisor $\pi^{-1}(V)=Bl(V)\cup E$ with simple normal crossings in
$Bl(X)$, where $E$ is the exceptional divisor. Blowing up in the symplectic
category is a more delicate issue, but when using the appropriate local model,
one can always express (a symplectic deformation) of the original manifold
$(X,V)$ as a symplectic sum of its blow up $(Bl(X),Bl(V))$ along the
exceptional divisor $E$ with a standard piece $({\mathbb{P}},V_{0})$ involving
the normal bundle of the blowup locus. Since we are blowing up the singular
locus of $V$, the proper transform $Bl(V)$ intersects nontrivially the
exceptional divisor $E$; the symplectic sum $Bl(X)\\#_{E}{\mathbb{P}}=X$ then
also glues $Bl(V)$ to the standard piece on the other side to produce $V$, as
in Remark 1.14. Therefore a posteriori, after proving a symplectic sum formula
for the relative GW of normal crossings divisors passing through the neck of a
symplectic sum, one could also express the relative GW invariants of the
original pair $(X,V)$ as universal expressions in the relative GW invariants
of its blow-up and those of the piece obtained from the normal bundle of
the blow-up locus.
We study some of the properties of (symplectic) normal crossings divisors in
more detail in the Appendix. This is also where we include a more detailed
description of the stratifications of the divisor $V$ which record how the
various local branches of $V$ intersect. In particular, each stratum is itself
a normal crossings divisor in the resolution of the next one, and we describe
its normalization. These provide a global way to encode the local information
about the way $J$-holomorphic curves meet various branches of $V$.
## 2 The Relative Moduli Space ${\cal M}_{s}(X,V)$
Assume now $(X,\omega,J)$ is a smooth symplectic manifold with a normal
crossings divisor $V$. We want to define the moduli space of stable
(perturbed) $J$-holomorphic maps into $X$ relative to $V$, which will enter
into the definition of the relative GW invariant of the pair $(X,V)$ in the
usual way. We will follow closely the approach of [IP1] and [IP2] and explain
how many of the arguments there extend to this context.
The notion of stability always refers to finitely many automorphisms, but the
notion of automorphism is relative, that is, it depends on the particular
setup. We will explain later on exactly what we mean by a stable map
$f:C\rightarrow X$ relative to $V$; but any automorphism of the map will in
particular be an automorphism of its domain $C$.
###### Remark 2.1
(Stability and automorphisms) For the purpose of simplifying the discussion in
this paper, in all the local analysis arguments below we will implicitly
assume that all the domains $C$ are already stable, and have furthermore been
decorated so as to have no nontrivial automorphisms, as discussed in Section 1
of [IP1]. First of all, any unstable domain components of $C$ are spheres, and
collapsing them gives an element $\mbox{\rm st}(C)$ of the Deligne-Mumford
moduli space $\overline{{\cal M}}_{g,n}$ of stable curves. If we assume to
begin with that we are in a situation where all domains are stable, then after
possibly decorating the domains by adding a finite amount of extra topological
information (like a Prym structure) we can also assume they now have no
nontrivial automorphisms; in particular, their moduli space $\overline{{\cal
M}}$ is smooth. In genus zero it is already the case that any stable domain
has no nontrivial automorphisms, and the Deligne-Mumford moduli space
$\overline{{\cal M}}_{0,n}$ is smooth; in higher genus we can replace
$\overline{{\cal M}}_{g,n}$ by its smooth finite cover, the moduli space of
Prym curves constructed by Looijenga in [Lo]. This will have the effect of
globally killing all nontrivial automorphisms of stable domains, thus making
analysis arguments like transversality much easier to setup, at the expense of
passing to a finite branched cover of the original moduli space.
Fix also a particular embedding
$\displaystyle\overline{{\cal U}}\rightarrow{\mathbb{P}}^{N}$ (2.1)
of the universal curve $\overline{{\cal U}}$ over the Deligne-Mumford moduli
space in genus zero or over the Prym moduli space in higher genus. The
embedding (2.1) gives a canonical choice of a complex structure $j$ on each
domain $C$ obtained by restricting the complex structure on ${\mathbb{P}}^{N}$
to the fiber $st(C)$ of the universal curve; the unstable domain components,
if any, already have a canonical $j_{0}$ on them. So the embedding (2.1)
provides a global slice to the action of the reparametrization group on the
moduli space of maps, as long as their domains are stable; otherwise there is
still a residual action coming from automorphisms of the unstable components.
This embedding also simultaneously gives us a simple type of global
perturbation $\nu$ of the holomorphic map equation
$\displaystyle\overline{\partial}_{jJ}f(z)=\nu(z,f(z))$ (2.2)
coming from $\overline{{\cal U}}\times X$ as it sits as a smooth submanifold
inside ${\mathbb{P}}^{N}\times X$. The perturbation $\nu$ then vanishes on all
the unstable components of the domain, so these are $J$-holomorphic. In the
case all the domains are stable to begin with, or more generally when the
restriction of $f$ to the unstable part of the domain is a simple
$J$-holomorphic map, this type of perturbation $\nu$ is enough to achieve all
the required transversality; in general there is still a problem achieving
transversality with this type of perturbation when the restriction of $f$ to
the unstable part of the domain is multiply covered.
There are many ways to locally stabilize the domains to get oneself in
the situation described above; understanding how these local choices globally
patch together is at the heart of the construction of the virtual fundamental
cycle in GW theories, see Remark 1.9 of [IP1] and the references therein. For
the situation we discuss in this paper, we could for example add a Donaldson
divisor to $V$ as in Example 1.12; this will have the effect that all the maps
in the relatively stable map compactification will now have stable domains. It
is not a priori clear why one would get an invariant that way (independent of
the Donaldson divisor added), but that will be the topic of another upcoming
paper.
Furthermore, in this paper we will work only with $V$-compatible parameters
$(J,\nu)$ for the equation (2.2) as in [IP1]. So we will assume that $(J,\nu)$
satisfies the conditions of Definition 3.2 in [IP1] in the normal direction to
each branch of $V$. These are conditions only on the 1-jet of $(J,\nu)$ along
$V$,
and are used to show that the contact order to $V$ is well defined and that
generically the stable map compactification does not have real codimension one
boundary strata. See Remark A.3. in the Appendix for a discussion of the space
of $V$-compatible parameters $(J,\nu)$.
### 2.1 The relative moduli space and the contact information
The construction of the relative moduli space takes several stages; in this
section we describe the main piece
$\displaystyle{\cal M}_{s}(X,V)$
containing stable $(J,\nu)$-holomorphic maps $f:C\rightarrow X$
without any components or nodes in $V$, such that all the points in
$f^{-1}(V)$ are marked, and come decorated by the sequence $s$ of
multiplicities recording the order of contact of $f$ to each local branch of
$V$.
There is a fair amount of discrete topological/combinatorial data that needs
to be kept track of in the definition of ${\cal M}_{s}$. We also discuss the
construction of the leading order section which keeps track of more
information than the ordinary evaluation map, and construct an enhanced
evaluation map using it. This information will then be crucial in the later
sections when we construct the relatively stable map compactification.
We start by explaining how the discussion in Section 4 of [IP1] extends to the
case when $V$ is a normal crossings divisor. Assume $f:C\rightarrow X$ is a
stable $(J,\nu)$-holomorphic map into $X$ that has no components or nodes in
$V$, such that all the points in $f^{-1}(V)$ are marked. Assume $x$ is one of
the marked points such that $f(x)$ belongs to the (open) stratum of $V$ where
$k$ local branches meet.
Choose local coordinates $z$ about the point $x$ in the domain, and locally
index the $k$ branches of $V$ meeting at $f(x)$ by some set $I$. See the
discussion in the Appendix on how to regard the indexing set $I$ in an
intrinsic manner. For each $i\in I$, choose also a local coordinate at $f(x)$
in the normal bundle to the branch labeled by $i$; see also (A.4). Lemma 3.4
of [IP1] then implies that the normal component $f_{i}$ of the map $f$ around
$z=0$ has an expansion:
$\displaystyle f_{i}(z)=a_{i}z^{s_{i}}+O(|z|^{s_{i}+1})$ (2.3)
where $s_{i}$ is a positive integer and the leading coefficient $a_{i}\neq 0$.
The multiplicity $s_{i}$ is independent of the local coordinates used, and
records the order of contact of $f$ at $x$ to the $i$-th local branch of $V$.
Thus each point $x\in f^{-1}(V)$ comes with the following information:
* (a)
a depth $k(x)\geq 1$ that records the codimension in $X$ of the open stratum
of $V$ containing $f(x)$;
* (b)
an indexing set $I(x)$ of length $k(x)$ that keeps track of the local branches
of $V$ meeting at $f(x)$;
* (c)
a sequence of positive multiplicities $s(x)=(s_{i}(x))$ indexed by $i\in
I(x)$, recording the order of contact of $f$ at $x$ to each of the $k(x)$
local branches of $V$.
We will think of the indexing set $I(x)$ as part of the information contained
in the sequence $s(x)$. There are several ways to encode this: one way is to
regard $s(x)$ as a map $s(x):I(x)\rightarrow{\mathbb{N}}_{+}$ defined
by $i\mapsto s_{i}(x)$, the multiplicity of contact at $x$ to the branch
indexed by $i\in I(x)$. Either way, the sequence $s(x)$ keeps track of how the
local intersection number of $f$ at $x$ with $V$ is partitioned into
intersection numbers to each local branch of $V$. The sequence $s$ is obtained
by putting together all the $s(x)$ for all $x\in f^{-1}(V)$ and keeps track of
all the contact information of $f$ to $V$.
###### Remark 2.2
To keep notation manageable we will also include in $s$ the information about
ordinary marked points (i.e. for which $f(x)\notin V$). For such points, we
make the convention that the depth $k=0$, $I=\emptyset$ and $s(x)=\emptyset$.
Then each marked point $x$ now has a depth $k\geq 0$: a depth 0 point is an
ordinary marked point, mapped into $X\setminus V$, a depth 1 point is mapped
into the smooth locus of $V$ and has just one positive multiplicity attached, etc.
This concludes our description of the moduli space
$\displaystyle{\cal M}_{s}(X,V)$
consisting of stable $(J,\nu)$-holomorphic maps $f:C\rightarrow X$ that have
no components or nodes in $V$ and such that all the points in the inverse
image of $V$ are marked points of $C$ and furthermore decorated according to
the sequence $s$. If $P$ denotes the collection of marked points of $C$, each
marked point $x\in P$ has an associated sequence $s(x)$ which records the
order of contact of the map $f$ at $x$ to each local branch of $V$, including
the information about the indexing set $I(x)$ of the branches. The cardinality
$k(x)=|I(x)|$ of $I(x)$ records the depth of $x$ while the degree of $s(x)$
$\displaystyle|s(x)|\;\mathop{=}\limits^{\rm def}\;\sum_{i\in I(x)}s_{i}(x)$
is the local intersection number of $f$ at $x$ with $V$. In particular, the
total degree of $s$ is purely topological:
$\displaystyle|s|=\sum_{x\in P}|s(x)|=f^{*}(V)=A\cdot V$ (2.4)
where $A\in H_{2}(X)$ is the homology class of the image of $f$. We also
denote by $\ell(s)$ the length of $s$, i.e. the total number of marked points
$x\in P$ of $C$.
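As an illustration of this bookkeeping (with hypothetical numbers, not taken from the text), suppose $f$ has three marked points: an ordinary point $x_{1}$, a depth 1 point $x_{2}$ with tangency of order 3, and a depth 2 point $x_{3}$ with contact orders $(1,2)$ to the two local branches. Assuming $A\cdot V=6$, the definitions above give:

```latex
\ell(s)=3,\qquad
|s| \;=\; |s(x_{1})|+|s(x_{2})|+|s(x_{3})|
    \;=\; 0 + 3 + (1+2) \;=\; 6 \;=\; A\cdot V .
```

Note that $\ell(s)$ counts all marked points, including the ordinary one, while only the contact points contribute to $|s|$.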
###### Remark 2.3
To keep the notation simple, we will think of $s$ as recording ALL the
topological information about the stable maps $f:C\rightarrow X$; in
particular this includes the homology class $A_{s}$ of the image of $f$ and
the topological type $\Sigma_{s}$ of the domain of $f$, and so its Euler
characteristic $\chi_{s}=\chi(\Sigma_{s})$. In the discussion above, the
domain $\Sigma_{s}$ of $f$ could be disconnected, in which case its components
are unordered. We can also include in $s$ not just the homology class of the
image of $f$, but also its appropriate relative homology class, i.e. the
information about the rim tori, see Section 5 of [IP1]. The construction there
is purely topological, so it easily extends to the case when $V$ is a normal
crossings divisor. We will not explicitly describe it in this paper.
With this notation, the arguments of Lemma 4.2 in [IP1] immediately extend to
this context to show that for generic $V$-compatible $(J,\nu)$ the moduli
space ${\cal M}_{s}(X,V)$ is a smooth orbifold of dimension
$\displaystyle{\rm dim\;}{\cal M}_{s}(X,V)=2c_{1}(TX)A_{s}+({\rm dim}\,X-6)\frac{\chi(\Sigma_{s})}{2}+2\ell(s)-2A_{s}\cdot V$ (2.5)
Note that this dimension does not depend on the particular partition of the
intersection number of $V$ and $A$, only on the total number $\ell(s)$ of
marked points (which include ordinary marked points in our convention).
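To make (2.5) concrete, here are two direct specializations (not stated in the text, but immediate from the formula). When ${\rm dim}\,X=6$ the Euler characteristic term drops out entirely, while for ${\rm dim}\,X=4$ and a connected domain of genus $g$ we have $\chi(\Sigma_{s})=2-2g$:

```latex
{\rm dim\;}{\cal M}_{s}(X,V)=
\begin{cases}
2c_{1}(TX)A_{s}+2\ell(s)-2A_{s}\cdot V, & {\rm dim}\,X=6,\\[4pt]
2c_{1}(TX)A_{s}+(2g-2)+2\ell(s)-2A_{s}\cdot V, & {\rm dim}\,X=4,\ \Sigma_{s}\ \text{connected of genus }g.
\end{cases}
```

In both cases the dependence on $s$ enters only through $A_{s}$, $\chi(\Sigma_{s})$ and $\ell(s)$, confirming the remark above.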
The ordinary evaluation map at one of the decorated marked points $x\in P$ is
defined by
$\displaystyle\mbox{\rm ev}_{x}:{\cal M}_{s}(X,V)\rightarrow V_{I(x)}$ (2.6)
$\displaystyle\mbox{\rm ev}_{x}(f)=f(x)$
where $V_{I(x)}$ is the depth $k(x)$ stratum of $V$ labeled by the indexing
set $I(x)$ of the branches of $V$. As before, this evaluation map includes the
ordinary marked points when we make the convention that $V_{\emptyset}=X$.
Putting together all the marked points $x\in P(s)$ labeled by the sequence $s$
we get the corresponding ordinary evaluation map
$\displaystyle\mbox{\rm ev}:{\cal M}_{s}(X,V)$ $\displaystyle\rightarrow$
$\displaystyle V_{s}=\mathop{\prod}\limits_{x\in P(s)}V_{I(x)}$ (2.7)
whose image is the product of all the strata corresponding to the marked
points. It is important to note the choice of target of the evaluation map
above: there are several other choices that may seem possible (like simply
$X^{\ell(s)}$), but this is the only choice for which the evaluation map can
be transverse without losing important information.
###### Remark 2.4
In the above discussion for simplicity we identified the marked point $x$
which is an actual point in the domain of $f$ with its marking (index) in $P$;
usually one talks about the $p$’th marked point $x_{p}$ and about $\mbox{\rm
ev}_{p}$, the evaluation at the $p$’th marked point.
When the depth $k(x)\geq 2$, the evaluation map (2.7) still does not record
enough information to state the full matching conditions that appear in the
relatively stable map compactification. We will also need to record the
leading coefficient of the expansion (2.3). For each $f:C\rightarrow X$ in
${\cal M}_{s}(X,V)$ and each marked point $x\in f^{-1}(V)$, let $a_{i}(x)$ be
the leading coefficient (2.3) of $f$ at $x$ in the normal direction $N_{i}$ to the
branch labeled by $i\in I(x)$. As explained in Section 5 of [IP2], $a_{i}(x)$
is naturally an element of $(N_{i})_{f(x)}\otimes(T_{x}^{*}C)^{s_{i}(x)}$ so
it defines a section of the bundle
$\displaystyle\mbox{\rm ev}_{x}^{*}N_{i}\otimes L_{x}^{s_{i}(x)}$ (2.8)
where $L_{x}$ is the relative cotangent bundle to the domain at the marked
point $x$. If we denote by $E_{s,x}$ the bundle over the moduli space whose
fiber at $f$ is
$\displaystyle\mathop{\bigoplus}\limits_{i\in I(x)}\mbox{\rm
ev}_{x}^{*}N_{i}\otimes L_{x}^{s_{i}(x)}$ (2.9)
then the leading order section at $x$ is defined by
$\displaystyle\sigma_{x}:{\cal M}_{s}(X,V)$ $\displaystyle\rightarrow$
$\displaystyle E_{s,x}$ (2.10) $\displaystyle\sigma_{x}(f)$ $\displaystyle=$
$\displaystyle(a_{i}(x))_{i\in I(x)}$
and records the leading term coefficient in each one of the $k(x)$ normal
directions at $x$ labeled by $I(x)$. This section will turn out to record a
lot of crucial information, and will be studied in more detail later. It was
already used in [I] to essentially get an isomorphism between the relative
cotangent bundle of the domain and that of the target for the moduli space of
branched covers of ${\mathbb{P}}^{1}$.
###### Remark 2.5
The target of (2.10) may not be globally a direct sum of line bundles, as its
fibers may intertwine as we move along $V_{I(x)}$, the same way the $k(x)$
branches of $V$ do when the indexing set $I(x)$ has nontrivial global
monodromy, see discussion in the Appendix.
The enhanced evaluation map is defined by
$\displaystyle\mbox{\rm Ev}_{x}:{\cal M}_{s}(X,V)$ $\displaystyle\rightarrow$
$\displaystyle{\mathbb{P}}_{s(x)}(N{V_{I(x)}})$ (2.11) $\displaystyle\mbox{\rm
Ev}_{x}(f)$ $\displaystyle=$ $\displaystyle[\sigma_{x}(f)]$
where ${\mathbb{P}}_{s(x)}(N{V_{I(x)}})$ denotes the weighted
projectivization with weight $s(x)$ of the normal bundle $N{V_{I(x)}}$ of the
depth $k(x)$ stratum $V_{I(x)}$. More precisely,
${\mathbb{P}}_{s(x)}(N{V_{I(x)}})$ is a bundle over $V_{I(x)}$ whose fiber is
the weighted projective space obtained as the quotient by the
${\mathbb{C}}^{*}$ action with weight $s_{i}(x)$ in the normal direction
$N_{i}$ to the branch labeled by $i\in I(x)$. Globally these branches may
intertwine as discussed above in Remark 2.5.
Of course, if $\pi:{\mathbb{P}}_{s(x)}(N{V_{I(x)}})\rightarrow V_{I(x)}$ is
the projection then
$\displaystyle\pi\circ\mbox{\rm Ev}=\mbox{\rm ev}$
which explains the name; $\mbox{\rm Ev}_{x}$ is a refinement of $\mbox{\rm
ev}_{x}$ only when the depth $k(x)\geq 2$.
###### Remark 2.6
Note that by construction the leading order terms are all nonzero, so the
image of $\sigma_{x}$ is away from the zero sections of each term in (2.10).
The image of the enhanced evaluation map similarly lands in the complement of
all the coordinate hyperplanes in the target of (2.11).
So far we defined the moduli space ${\cal M}_{s}(X,V)$ of stable maps into $X$
without any components or nodes in $V$, and with fixed topological information
described by $s$. This moduli space ${\cal M}_{s}(X,V)$ may not be compact,
simply because we could have a sequence of maps $f_{n}$ in ${\cal M}_{s}$ whose
limit in the stable maps compactification is a map $f$ with some components in
$V$. Then the contact information of $f$ to $V$ becomes undefined along its
components that lie in $V$, and so in particular $f$ does not belong to ${\cal
M}_{s}(X,V)$. Note that this is the only reason why ${\cal M}_{s}(X,V)$ fails
to be compact:
###### Lemma 2.7
Consider a sequence $\\{f_{n}\\}$ of maps in ${\cal M}_{s}(X,V)$ and assume
its limit $f$ in the usual stable map compactification has no components in
$V$. Then $f\in{\cal M}_{s}(X,V)$.
Proof. A priori, there are two reasons why $f$ would fail to be in ${\cal
M}_{s}$:
1. (a)
$f$ has a node in $V$ or
2. (b)
the contact information of $f$ is not given by $s$.
Note that case (b) includes the cases when, in the limit, the multiplicity of
intersection jumps up or when a depth $k$ marked point falls into a higher
depth stratum of $V$.
Since ALL points in $f_{n}^{-1}(V)$ are already marked, indexed by the same
set $P$, they persist as marked points for the limit $f$, in particular they
are distinct from each other and from the nodes of $f$. On the other hand, let
$\widetilde{f}$ denote the lift of $f$ to the normalization of its domain.
Since $f$ has no components in $V$, then each point in $\widetilde{f}^{-1}(V)$
has a well defined sequence $s_{0}$ that records the local multiplicity of
intersection of $\widetilde{f}$ at that point with each local branch of $V$.
At those points of $\widetilde{f}^{-1}(V)$ which were limits of the marked
points in $f_{n}^{-1}(V)$, the multiplicity $s_{0}(x)\geq s(x)$, as the
multiplicity could go up when the leading coefficients converge to 0. But then
$\displaystyle[f]\cdot
V=\sum_{x\in\widetilde{f}^{-1}(V)}|s_{0}(x)|\geq\sum_{x\in
P}|s_{0}(x)|\geq\sum_{x\in P}|s(x)|=[f_{n}]\cdot V$
Since $[f_{n}]=[f]$ then $\widetilde{f}^{-1}(V)=P$, which means that $f$ has
no nodes in $V$, ruling out (a), and that $s_{0}(x)=s(x)$ for all $x\in P$,
which rules out (b). $\Box$
## 3 Rescaling the target
Assume now $\\{f_{n}\\}$ is a sequence of maps in ${\cal M}_{s}(X,V)$ whose
limit $f$ in the stable maps compactification has some components in $V$. We
will use the methods of [IP1], to rescale the target normal to $V$ to prevent
this from happening. The analysis there is mostly done semi-locally (in a
neighborhood of $V$) and so those parts easily extend to this situation. But
as we will see below, the topology of the normal crossings divisor now enters
crucially in a couple of steps. The first step is to describe the effect of
rescaling on the target $X$. It is modeled on the process of rescaling a disk
about the origin, but now performed fiberwise in the normal direction to $V$.
### 3.1 Brief review of [IP1]
We begin by briefly reviewing the situation in Section 6 of [IP1], where it
was assumed that $V$ is a smooth divisor. In local coordinates, if $x$ is a
fixed local coordinate normal to $V$, rescaling $X$ by a factor of
$\lambda\neq 0$ means we make a change of coordinates in a neighborhood of $V$
in the normal direction to $V$:
$\displaystyle x_{\lambda}=x/\lambda.$ (3.1)
Under rescaling by an appropriate amount $\lambda_{n}$, depending on the
sequence $\\{f_{n}\\}$, in the limit we will get not just a curve in $X$
(equal to the part of $f$ that did not lie in $V$), but also a curve in the
compactification of $N_{V}$, i.e. in
$\displaystyle\mathbb{F}={\mathbb{P}}(N_{V}\oplus{\mathbb{C}}).$
Here $\mathbb{F}$ is a ${\mathbb{C}}{\mathbb{P}}^{1}$ bundle over $V$, with a
zero and infinity section $V_{0}$ and $V_{\infty}$. Under the rescaling (3.1),
$x_{\lambda}$ can instead be thought of as a coordinate on $\mathbb{F}$ normal to
$V_{0}$. Let $y=1/x_{\lambda}$ be the corresponding coordinate normal to
$V_{\infty}$ inside $\mathbb{F}$, so that (3.1) becomes
$\displaystyle xy=\lambda$ (3.2)
This procedure has the infinity section $V_{\infty}$ of $\mathbb{F}$ naturally
identified with $V$ in $X$ such that furthermore their normal bundles are dual
to each other, i.e.
$\displaystyle N_{V/X}\otimes N_{V_{\infty}/\mathbb{F}}\cong{\mathbb{C}}$
(3.3)
is trivial. This identification globally encodes the local equations (3.2)
because $x,y$ are local sections of $N_{V/X}$ and $N_{V_{\infty}/\mathbb{F}}$
respectively.
###### Remark 3.1
Once an identification (3.3) is fixed, then for any (small) gluing parameter
$\lambda\in{\mathbb{C}}^{*}$, equation (3.2) is exactly the local model of the
symplectic sum $X_{\lambda}$ of $X$ and $\mathbb{F}$ along $V=V_{\infty}$ with
gluing parameter $\lambda$. Of course, topologically
$X\\#_{V=V_{\infty}}\mathbb{F}=X$. This means that an equivalent point of view
to rescaling $X$ by a factor of $\lambda$ normal to $V$ is to regard $X$ as
the symplectic sum $X_{\lambda}$ of $X$ and $\mathbb{F}$ with gluing parameter
$\lambda$, with the above choice of coordinates and identifications, including
(3.3). The advantage of this perspective is that the rescaled manifolds
$X_{\lambda}$ now fit together as part of a smooth total space $Z$ as its
fibers over $\lambda$, and converge there as $\lambda\rightarrow 0$ to the
singular space
$\displaystyle X_{0}=X\mathop{\cup}\limits_{V=V_{\infty}}\mathbb{F},$ (3.4)
obtained by joining $X$ to $\mathbb{F}$ along $V=V_{\infty}$.
Denote by $U_{\lambda}$ the tubular neighborhood of $V$ in $X$ described in
coordinates by $|x|\leq|\lambda|^{1/2}$ and by $V_{\lambda}$ the complement of
the tubular neighborhood of $V_{\infty}$ in $\mathbb{F}$ described in
coordinates by $|y|\geq|\lambda|^{1/2}$. From this perspective, rescaling $X$
around $V$ by $\lambda\neq 0$ gives rise to a manifold $X_{\lambda}$ together
with a diffeomorphism
$\displaystyle R_{\lambda}:X\rightarrow X_{\lambda}$ (3.5)
which is the identity outside $U_{\lambda}$ and which identifies $U_{\lambda}$
with $V_{\lambda}$ by rescaling it by a factor of $\lambda$, or equivalently
via the equation (3.2). As $\lambda\rightarrow 0$, $U_{\lambda}$ shrinks to
$V$ inside $X$, but it expands in the rescaled version $X_{\lambda}$ to
$\mathbb{F}\setminus V_{\infty}$. So in the limit as $\lambda\rightarrow 0$,
the rescaled manifolds $X_{\lambda}$, with the induced almost complex
structures $J_{\lambda}=(R_{\lambda}^{-1})^{*}J$ converge to the singular
space $X_{0}$ defined by (3.4) with an almost complex structure $J_{0}$ which
agrees with $J$ on $X$ and is ${\mathbb{C}}^{*}$ invariant on the $\mathbb{F}$
piece.
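As a quick sanity check that $R_{\lambda}$ really matches the two regions above (this computation is ours, but follows directly from the gluing relation (3.2)):

```latex
xy=\lambda
\quad\Longrightarrow\quad
|x|\le|\lambda|^{1/2}
\;\Longleftrightarrow\;
|y|=\frac{|\lambda|}{|x|}\;\ge\;\frac{|\lambda|}{|\lambda|^{1/2}}=|\lambda|^{1/2},
```

so $R_{\lambda}$ carries the tubular neighborhood $U_{\lambda}\subset X$ exactly onto the region $V_{\lambda}\subset\mathbb{F}$, with the common boundary $|x|=|y|=|\lambda|^{1/2}$.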
After rescaling the sequence $\\{f_{n}\\}$ by appropriate amounts
$\lambda_{n}$, the new sequence $R_{\lambda_{n}}(f_{n})$ has a limit inside
$Z$ which is now a map into $X_{0}$ satisfying a matching condition along
$V_{\infty}=V$, described in more detail later on. Of course, in general,
different components may fall in at different rates, so we need to rescale
several (but finitely many) times to catch all of them, and in the limit we
get a map into a building with several levels.
### 3.2 Rescaling the manifold $X$ normal to $V$
Assume now $V$ is a normal crossings divisor. We next describe the effect on
the manifold $X$ of rescaling around $V$ (in a tubular neighborhood of $V$).
Using our local models, we could extend the discussion above independently in
each normal direction to $V$, so normal to each open stratum of ${V^{k}}$, we
could rescale in $k$ independent directions. However, globally these
directions may intertwine, and not be independent, so one has to be careful
how to globally patch these local pictures. Here is where we use the fact that
the normal bundle $N$ was defined over the normalization $\widetilde{V}$ of
$V$. We will rescale normal to $V$ using the ${\mathbb{C}}^{*}$ action in this
normal bundle.
###### Remark 3.2
The ${\mathbb{C}}^{*}$ action in the complex line bundle $N$ induces in fact
several different actions. The one we will use in this paper is the diagonal
${\mathbb{C}}^{*}$ action in the normal bundle to each stratum $V^{k}$ of $V$.
When the normalization of $V$ has several components we have a
${\mathbb{C}}^{*}$ action for each component. In particular, when the divisor
$V$ has simple crossings, then we also have a local $({\mathbb{C}}^{*})^{k}$
action on $X$ normal to each $V^{k}$; essentially this happens only when the
crossings are simple.
When we rescale $X$ once normal to the normal crossings divisor $V$, in level
one we get several pieces, one for each piece $V^{k}$ of the stratification of
$V$ according to how many local branches meet there. The level zero unrescaled
piece is still $(X,V)$ as before. But now level one
$\displaystyle\mathbb{F}_{V}=\mathop{\sqcup}\limits_{k\geq 1}\mathbb{F}_{k}$
(3.6)
consists of several pieces $\mathbb{F}_{k}$, one for each depth $k\geq 1$. The
first piece is
$\displaystyle\mathbb{F}_{1}={\mathbb{P}}(N_{V}\oplus{\mathbb{C}}),$
a ${\mathbb{P}}^{1}$ bundle over $\widetilde{V}$, the normalization of
$V$, obtained by compactifying the normal bundle
$N_{V}\rightarrow\widetilde{V}$ by adding an infinity section. Similarly, the
$k$’th piece
$\displaystyle\mathbb{F}_{k}\longrightarrow\widetilde{V^{k}}$
is a $({\mathbb{P}}^{1})^{k}$ bundle over the normalization
$\widetilde{V^{k}}$ of the closed stratum $V^{k}$, described in more detail
in the Appendix. What is important here is that $\mathbb{F}_{k}$ is a bundle
over a smooth manifold $\widetilde{V^{k}}$ which is obtained by separately
compactifying each of the $k$ normal directions to $V$ along $V^{k}$, see
(A.4.). This means that its fiber at a point $p\in\widetilde{V^{k}}$ is
$\displaystyle(\mathbb{F}_{k})_{p}=\mathop{\times}\limits_{i\in
I}{\mathbb{P}}(N_{i}\oplus{\mathbb{C}})$
where $N_{i}$ is the normal direction to the $i$’th branch, and $I$ is an
indexing set of the $k$ local branches of $V$ meeting at $p$. Globally, these
${\mathbb{P}}^{1}$ factors intertwine as dictated by the global monodromy of
the $k$ local branches of $V$.
Each piece $\mathbb{F}_{k}$ comes with a natural normal crossings divisor
$\displaystyle W^{k}=D_{k,\infty}\cup D_{k,0}\cup F_{k}$ (3.7)
obtained by considering together its zero and infinity divisors plus the fiber
$F_{k}$ over the (inverse image of the) higher depth strata of $V^{k}$. The
construction of the divisor $W^{k}$ is described in more details the Appendix,
but here let us just mention that $D_{k,0}$ is the zero divisor in
$\mathbb{F}_{k}$ where at least one of the ${\mathbb{P}}^{1}$ coordinates is
equal to $0$, while the fiber divisor $F_{k}$ is the restriction of
$\mathbb{F}_{k}$ to the stratum of $\widetilde{V^{k}}$ coming from the higher
depth stratum $V^{k+1}$.
###### Remark 3.3
The ${\mathbb{C}}^{*}$ action in the normal bundle to $V$ induces a fiberwise,
diagonal ${\mathbb{C}}^{*}$ action on each piece $\mathbb{F}_{k}$; the action
preserves the divisor $W^{k}$, but not pointwise.
To keep notation manageable, we make the convention
$\displaystyle(\mathbb{F}_{0},W^{0})\mathop{=}\limits^{def}(X,V)$ (3.8)
This is consistent with our previous conventions that $V^{0}=X$, in which case
$\mathbb{F}_{0}={\mathbb{P}}(N_{V^{0}/X}\oplus{\mathbb{C}})=X$ and
$W^{0}=F_{0}=V$ is the fiber divisor over the lower stratum, as the zero and
infinity divisors are empty in this case. However, one difference that this
notation obscures is the fact that while $X=\mathbb{F}_{0}$ is on level 0
(unrescaled), the rest of the pieces $\mathbb{F}_{k}$ for $k\geq 1$ are all on
level 1 (all appeared as the result of rescaling once normal to $V$).
###### Definition 3.4
A level one building $X_{1}$ with zero divisor $V_{1}$ is obtained by
identifying the fiber divisor $F_{k}$ of $\mathbb{F}_{k}$ with the infinity
divisor $D_{k+1,\infty}$ of $\mathbb{F}_{k+1}$:
$\displaystyle
X_{1}=\mathop{\bigcup}\limits_{F_{k}=D_{k+1,\infty}}\mathbb{F}_{k}$
Denote by $W_{1}$ the singular locus of $X_{1}$ where all the pieces are
attached to each other, by $V_{1}$ the (singular) divisor in $X_{1}$ obtained
from the zero divisors, and let $D_{1}=W_{1}\cup V_{1}$ be the total divisor:
$\displaystyle(X_{1},D_{1})=\mathop{\bigcup}\limits_{F_{k}=D_{k+1,\infty}}(\mathbb{F}_{k},\;F_{k}\cup
D_{k,\infty}\cup D_{k,0})$ (3.9)
So the level one building comes with a resolution
$(\widetilde{X}_{1},\widetilde{D}_{1})=\mathop{\bigsqcup}\limits_{k\geq
0}(\mathbb{F}_{k},\;F_{k}\cup D_{k,\infty}\cup D_{k,0})$ and an attaching map
$\displaystyle\xi:(\widetilde{X}_{1},\widetilde{D}_{1})\rightarrow(X_{1},D_{1}).$
(3.10)
It also comes with a collapsing map to level zero $(X_{0},V_{0})=(X,V)$:
$\displaystyle p:(X_{1},V_{1})\rightarrow(X,V)$ (3.11)
which is the identity on level 0, but which collapses the fiber of each level
one piece $\mathbb{F}_{k}$, $k\geq 1$.
###### Remark 3.5
The precise identifications required to construct this building are also
described in the Appendix. It is easy to see that both $F_{k}$ and
$D_{k+1,\infty}$ are normal crossings divisors, so in fact what we identify is
their normalizations, via a canonical map (A.6.). Furthermore, their normal
bundles are canonically dual to each other, see (A.7.), and the
${\mathbb{C}}^{*}$ action on $N$ induces an anti-diagonal ${\mathbb{C}}^{*}$
action in the normal bundle
$\displaystyle N_{F_{k}}\oplus N_{D_{k+1,\infty}}$
of each component $F_{k}=D_{k+1,\infty}$ of the singular divisor $W_{1}$ of
$X_{1}$, where $k\geq 0$.
###### Example 3.6
Assume $X$ has 4 real dimensions, and that the normal crossings divisor
$\displaystyle V=V_{1}\mathop{\cup}\limits_{p_{1}=p_{2}}V_{2}$
is the union of two submanifolds $V_{1}$ and $V_{2}$ intersecting only in a
point $p=p_{1}=p_{2}$. After rescaling once, we get 3 main pieces $X$,
$\mathbb{F}_{1}$ and $\mathbb{F}_{2}$, together with an attaching map, see the
left hand side of Figure 1. Here $\mathbb{F}_{1}$ is a ${\mathbb{P}}^{1}$
bundle over $\widetilde{V^{1}}=V_{1}\sqcup V_{2}$, while $\mathbb{F}_{2}$ is
just ${\mathbb{P}}^{1}\times{\mathbb{P}}^{1}$ (over the point
$\widetilde{V}^{2}=p$). The divisor $D_{1,\infty}\subset\mathbb{F}_{1}$ is a
copy of $\widetilde{V}=V_{1}\sqcup V_{2}$ and it is attached to
$F_{0}=V=V_{1}\mathop{\cup}\limits_{p_{1}=p_{2}}V_{2}\subset X$. Similarly,
$D_{2,\infty}\subset\mathbb{F}_{2}$ is
${\mathbb{P}}^{1}\times\infty\cup\infty\times{\mathbb{P}}^{1}$, and it is
attached to $F_{1}$, which is the disjoint union of two fibers of
$\mathbb{F}_{1}$ over the points $p_{1}$ and $p_{2}$ in $V_{1}\sqcup V_{2}$.
Note that $\mathbb{F}_{1}$ does not descend as a bundle over $V$: the two
fibers of $\mathbb{F}_{1}$ over the singular points $p_{1}$ and $p_{2}$ are not
identified with each other, but rather each gets identified with a different
${\mathbb{P}}^{1}$ component of $D_{2,\infty}\subset\mathbb{F}_{2}$.
Figure 1: The pieces of a level 1 building
###### Example 3.7
Assume $X$ has 4 real dimensions, but the normal crossings divisor $V$ has
only one component, which self-intersects in a single point $p$. Locally, the
situation looks just like the one in Example 3.6, with $V$ having two local
branches meeting at $p$. The only difference is that globally now
$\widetilde{V}$ has only one connected component containing both points
$p_{1}$ and $p_{2}$, see right hand side of Figure 1.
###### Remark 3.8
In the discussion above, we had a rescaling parameter $\lambda$ normal to $V$,
which means that we considered the action of $\lambda\in{\mathbb{C}}^{*}$ on
the normal bundle $N$ over $\widetilde{V}$. If $\widetilde{V}$ has several
connected components
$\displaystyle\widetilde{V}=\mathop{\sqcup}\limits_{c\in C}\widetilde{V}_{c}$
(3.12)
then we could independently rescale normal to each one of them; this gives a
$\lambda\in({\mathbb{C}}^{*})^{C}$ action, rather than just the diagonal one
we considered before. Rescaling in all these independent directions now gives
a multi-building, where each floor has a level associated to each connected
component of $\widetilde{V}$. We could talk about a floor which is on level
one normal to some of the components, but level zero normal to other
components.
By iterating the rescaling process, we obtain level $m$ buildings where we
rescale $m$ times normal to $V$, or more generally multi-buildings with
$m_{c}$ levels in the normal direction to each connected component
$\widetilde{V}_{c}$ of $\widetilde{V}$.
###### Definition 3.9
A level $m$ building is a singular space $X_{m}$ with a singular divisor
$V_{m}$, called the zero divisor, that is obtained recursively from
$(X_{m-1},V_{m-1})$ by iterating the level one building procedure. In
particular, a level $m$ building comes with a resolution
$(\widetilde{X}_{m},\widetilde{D}_{m})$ and an attaching map
$\xi:(\widetilde{X}_{m},\widetilde{D}_{m})\rightarrow(X_{m},D_{m})$ that
attaches all the floors together producing the singular locus $W_{m}$ of
$X_{m}$, where $D_{m}=W_{m}\cup V_{m}$ is the total divisor. It also comes
with a collapsing map
$\displaystyle p_{m}:(X_{m},V_{m})\rightarrow(X_{m-1},V_{m-1})$
that collapses the level $m$ floor or more generally a collapsing map that
collapses any subset $J$ of the levels $\\{1,\dots,m\\}$.
For example,
$\displaystyle p:(X_{m},V_{m})\rightarrow(X_{0},V_{0})$ (3.13)
collapses all positive levels (but leaves the level zero unaffected).
Note that as we add floors, the building grows bigger in several (local)
directions. Starting with $(X_{m-1},V_{m-1})$, we then construct depth $k$
pieces
$\displaystyle\mathbb{F}_{k,m}=\mathbb{F}_{k}(V_{m-1}).$
on level $m$. Since the divisor $V_{m-1}$ has several pieces (it is itself a
rescaled version of $V_{0}$), the number of depth $k\geq 2$ pieces increases
(at least locally) as the building grows new levels.
###### Remark 3.10
Note that we used depth to measure how many local branches of $V$ meet at a
point. On the other hand, as a result of rescaling, we now also get levels,
which measure how many times the target has been rescaled.
###### Example 3.11
Consider the simplest model of a level $m$ building, the $m$ times rescaled
disk
$\displaystyle(D^{2})_{m}=D^{2}\mathop{\cup}\limits_{0=\infty}{\mathbb{P}}^{1}\mathop{\cup}\limits_{0=\infty}\dots\mathop{\cup}\limits_{0=\infty}{\mathbb{P}}^{1}$
(3.14)
on which, in some sense, all the other level $m$ buildings are modeled. Its
resolution has $m+1$ components, each indexed by a level $l=0,\dots,m$ with
$D^{2}$ on level zero (unrescaled). The total divisor also has $m+1$
components, each similarly labeled by a level $l=0,\dots,m$ with the $0\in
D^{2}$ on level one. This defines a lower-semicontinuous level map $l$ on
$(D^{2})_{m}$. It is discontinuous precisely at points $y$ in the singular
divisor, where it has two limits, indexing the two lifts $y_{\pm}$ of $y$ to
consecutive levels, where $y_{+}=0$ is on the same level as $y$ while
$y_{-}=\infty$ is on the next level. Each point on the resolution comes with a
sign $\varepsilon=\pm$ or 0, keeping track of whether its coordinate is
$\infty$, 0 or neither. Then $\varepsilon=0$ corresponds to a smooth point of
$(D^{2})_{m}$, while $\varepsilon=\pm$ corresponds to a point in the
resolution of the total divisor. Intrinsically, $\varepsilon$ keeps track of
the weight of the ${\mathbb{C}}^{*}$ action on each piece.
###### Example 3.12
Suppose next we are in the situation of Example 3.6. The first level had three
components: two are ${\mathbb{P}}^{1}$ bundles over $V_{1}$ and respectively
$V_{2}$ and another is a ${\mathbb{P}}^{1}\times{\mathbb{P}}^{1}$ over a
point. A second level would have five components. Two of them are still
${\mathbb{P}}^{1}$ bundles over $V_{1}$ and $V_{2}$ respectively, but there
are now three copies of ${\mathbb{P}}^{1}\times{\mathbb{P}}^{1}$. The way they
come about is as follows. The zero divisor $V_{(1)}$ of the first floor
consists now of 4 pieces: $V_{1}$, $V_{2}$ but also two ${\mathbb{P}}^{1}$’s
intersecting in a point $p^{1}$ (coming from the zero section of the depth two
piece $\mathbb{F}_{2,1}$ on the first floor). When we rescale again to get the
second floor, $\mathbb{F}_{2,2}$ is still a
${\mathbb{P}}^{1}\times{\mathbb{P}}^{1}$ over the point $p^{1}$, but
$\mathbb{F}_{1,2}$ is now larger: it is a ${\mathbb{P}}^{1}$ bundle over
$(V_{1}\sqcup{\mathbb{P}}^{1})\sqcup(V_{2}\sqcup{\mathbb{P}}^{1})$, so it
really has four pieces: a ${\mathbb{P}}^{1}$ bundle over $V_{1}$ and
respectively $V_{2}$, as was the case in level one, but also two other
${\mathbb{P}}^{1}\times{\mathbb{P}}^{1}$ pieces coming from rescaling over the
two ${\mathbb{P}}^{1}$ fibers in $V_{(1)}$. Altogether this level 2 building
has $4$ copies of ${\mathbb{P}}^{1}\times{\mathbb{P}}^{1}$; in general, such a
level $m$ building will have $m^{2}$ copies of
${\mathbb{P}}^{1}\times{\mathbb{P}}^{1}$, and $m$ copies of the
${\mathbb{P}}^{1}$ bundle over $V_{1}\sqcup V_{2}$.
Figure 2: A level 2 building
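As a consistency check on the $m^{2}$ count, using only the numbers already given in this example: write $N_{m}$ for the number of copies of ${\mathbb{P}}^{1}\times{\mathbb{P}}^{1}$ in the level $m$ building. Then

```latex
N_{1}=1,\qquad N_{2}=N_{1}+3=4,\qquad
N_{m}=m^{2}\ \Longrightarrow\ N_{m}-N_{m-1}=2m-1,
```

so each new level must contribute $2m-1$ new copies; for $m=2$ these are exactly the three new copies described above, and summing, $\sum_{l=1}^{m}(2l-1)=m^{2}$, consistent with the stated count.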
###### Example 3.13
Consider again the situation in Example 3.6. Because
$\widetilde{V}=V_{1}\sqcup V_{2}$ has two connected components, we can also
independently rescale normal to each of them, getting instead a multi-
building. For example, a level (2,2) multibuilding in this case looks just
like the level $2$ building described above, except that the pieces might be
on different levels when regarded in different directions. For example, the 4
pieces ${\mathbb{P}}^{1}\times{\mathbb{P}}^{1}$ now land one on each level
$(i,\;j)$ for $i,\;j=1$ or 2, whereas before one of them (the one on level
(1,1)) landed on level 1 and the other three landed on level 2. More
generally, the level $m$ building from the example above can be regarded as a
level $(m,m)$ multibuilding when we rescale independently in the two
directions, with exactly one copy of ${\mathbb{P}}^{1}\times{\mathbb{P}}^{1}$
on each level $(i,\;j)$ for $i,\;j=1,\dots,m$. But in this case we can also
have a multi-building with a different number of levels in each direction; for
example we could have just one level normal to $V_{1}$ but three normal to
$V_{2}$.
###### Example 3.14
Consider now the situation of Example 3.7. Locally everything looks the same
as in Example 3.12 and even near $p$ we have two independent local directions
in which we could rescale as in Example 3.13. However, because $\widetilde{V}$
is now connected, globally there is only one scaling parameter
$\lambda\in{\mathbb{C}}^{*}$ normal to $V$, so the two local scaling
parameters at $p$ have to be related to each other (they are essentially
equal).
###### Remark 3.15
In general, for any level $m$ building $X_{m}$, even if globally we may not be
able to have separate directions of rescaling, at least semi-locally the
picture always looks like part of a multilevel building. More precisely, fix a
depth $k$ point $p$ in $V$, and locally index by $i\in I$ the $k$ branches of
$V$ coming together at $p$. A neighborhood $U_{p}$ of the fiber $F_{p}$ over $p$
of the collapsing map $X_{m}\rightarrow X$ of (3.13) is a product of a small
neighborhood $O_{p}$ of $p$ in the depth $k$ stratum and $k$ copies of an
$m$-times rescaled disk $(D^{2})_{m}$ of (3.14), one factor for each one of
the $k$ branches of $V$ at $p$. This describes not only the tower of
$(m+1)^{k}$ pieces of the resolution $\widetilde{X}_{m}$ with its total
divisor $\widetilde{D}_{m}$, but also their attaching map, just as in Example
3.11, except that now we have $k$ directions to keep track of instead of just
one.
In particular, each point $y\in X_{m}$ which projects to $p$ under the
collapsing map $X_{m}\rightarrow X$ comes with a multi-level map
$\displaystyle l:I\rightarrow\\{0,\dots,m\\}$ (3.15)
as if it were part of a multilevel building (with $k$ independent directions),
where we would separately keep track of the level $l_{i}$ for each one of the
directions $i\in I$. Of course, as part of $X_{m}$ this piece is on level
$l=\mathop{\max}\limits_{i}l_{i}$, but it appears as part of each one of the
local levels $l_{1},\dots,l_{k}$. Furthermore, a point $\widetilde{y}$ in the
resolution $\widetilde{X}_{m}$ comes not only with the multi-level map (3.15)
of its image in $X_{m}$, but also a multi-sign map
$\displaystyle\varepsilon:I\rightarrow\\{0,\pm 1\\}$ (3.16)
that keeps track of whether it is equal to zero, infinity or neither in the
$i$’th direction. This allows us for example to keep track of the various
branches of the total divisor, and the fact that they come in dual pairs
indexed by opposite multi-sign maps $\varepsilon$; note that in general we can
have a point which is in the zero divisor in some of the directions, on
infinity divisor in other directions, and then in some other directions on
neither, and this is precisely what $\varepsilon$ records.
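The bookkeeping in this remark is purely combinatorial and can be sketched in a few lines of code (a sketch in our own notation, not part of the paper's construction): the level maps $l$ count the $(m+1)^{k}$ local pieces, and negating a multi-sign map $\varepsilon$ gives the dual branch of the total divisor.

```python
from itertools import product

def level_maps(k, m):
    """All level maps l : {1,...,k} -> {0,...,m}; one per local piece
    of the resolution over a depth-k point of V."""
    return list(product(range(m + 1), repeat=k))

def sign_maps(k):
    """All multi-sign maps eps : {1,...,k} -> {0,+1,-1}, recording per
    direction: zero divisor (+1), infinity divisor (-1), or neither (0)."""
    return list(product((0, 1, -1), repeat=k))

def dual(eps):
    # Dual branches of the total divisor carry opposite multi-signs.
    return tuple(-e for e in eps)

# Over a depth-2 point of a level-3 building there are (3+1)^2 pieces.
assert len(level_maps(2, 3)) == (3 + 1) ** 2
# Duality is an involution; its only fixed point is eps = 0,
# which corresponds to a point on no branch of the divisor at all.
assert [e for e in sign_maps(2) if dual(e) == e] == [(0, 0)]
```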
Equivalently, over a depth $k$ point $p$ of $V$, we can instead index each
local piece $P$ of the resolution $\widetilde{X}_{m}$ by a
partition of $I$ into $I_{0},\dots,I_{m}$. (3.17)
The local piece $P(I_{0},\dots,I_{m})$ is on level $l$ with respect to the
branches indexed by $I_{l}$, and thus can be thought of as obtained by rescaling
$X$ near $p$ $l$ times in the $I_{l}$ directions; for example, when $I_{0}=I$,
that piece is completely unrescaled and corresponds to a piece of the level
zero $X$. More generally, for any local piece indexed by such a partition, the
directions in $I_{0}$ are unrescaled, while the remaining $j=|I\setminus I_{0}|$
directions are rescaled (at least once). Then
$I_{0}$ determines a unique lift $p_{0}$ of the point $p$ to the resolution
$\widetilde{V^{j}}$ (3.18)
of the depth $j$ stratum of $V$, while $I_{1},\dots,I_{m}$ index the directions
of the fibers
$({\mathbb{P}}^{1})^{j_{1}}\times\dots\times({\mathbb{P}}^{1})^{j_{m}}=({\mathbb{P}}^{1})^{j}$
of $\mathbb{F}_{j}$ over $\widetilde{V^{j}}$. (3.19)
This identifies the local piece $P(I_{0},\dots,I_{m})$ with a neighborhood in
$\mathbb{F}_{j}$ of the fiber over $p_{0}$. Keeping track of which coordinates
in the fiber of $\mathbb{F}_{j}$ are infinity, zero or neither (which induces
a further partition of $I$ into $I^{\pm}$ and $I^{0}$) then allows us to index
any open stratum of the total divisor $\widetilde{D}_{m}$ in
$\widetilde{X}_{m}$ over the point $p$ by a
partition of $I$ into $I_{j}^{\pm},I_{j}^{0}$ for $j=0,\dots,m$. (3.20)
Then $I^{+}$ records the branches of the zero and fiber divisor, while $I^{-}$
those of the infinity divisor.
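As a concrete instance of the indexing (3.17), take a depth $k=2$ point in a level $m=1$ building, so $I=\\{1,2\\}$ (a worked example; the descriptions are ours):

```latex
% The four partitions of I = {1,2} into (I_0, I_1) give the four local
% pieces P(I_0, I_1) of the resolution over p:
\[
\begin{array}{ll}
  P(\{1,2\},\varnothing) & \text{level zero in both directions: a piece of } X,\\
  P(\{1\},\{2\})         & \text{rescaled once normal to branch } 2 \text{ only},\\
  P(\{2\},\{1\})         & \text{rescaled once normal to branch } 1 \text{ only},\\
  P(\varnothing,\{1,2\}) & \text{rescaled in both directions: a fiber piece of } \mathbb{F}_{2}.
\end{array}
\]
% This matches the count (m+1)^k = 2^2 = 4 of pieces from Remark 3.15.
```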
###### Remark 3.16
The ${\mathbb{C}}^{*}$ action of $N_{V}$ induces a $({\mathbb{C}}^{*})^{m}$
action on a level $m$ building such that each factor
$\alpha_{l}\in{\mathbb{C}}^{*}$ rescales the level $l\geq 1$ piece of $X_{m}$
normal to its zero divisor, fixes the level zero pointwise, and preserves the
total divisor and its stratification, but not pointwise. As before, this is
modeled by the $({\mathbb{C}}^{*})^{m}$ action on the rescaled disk
$(D^{2})_{m}$ in which each factor $\alpha_{l}\in{\mathbb{C}}^{*}$ acts on the
${\mathbb{P}}^{1}$ component of $(D^{2})_{m}$ in level $l\geq 1$, but now
extended as the diagonal action to the product of the disks in each normal
direction to $V$. More precisely, each piece of the resolution
$\widetilde{X}_{m}$ whose (local) multi-levels are $l_{1},\dots,l_{k}$ is acted
on by $\alpha_{l_{i}}$ on its ${\mathbb{P}}^{1}$ factor in direction $i$ for
each $l_{i}\geq 1$.
### 3.3 Local model near the divisor
Rescaling $X$ by a factor of $\lambda\neq 0$ along a normal crossings divisor
$V$ similarly gives rise to a manifold we denote $X_{\lambda}$ and an
identification
$\displaystyle R_{\lambda}:X\rightarrow X_{\lambda}$ (3.21)
exactly as in (3.5). Just as described in §3.1, the manifold $X_{\lambda}$ has
two regions, one is the complement of the $|\sqrt{\lambda}|$-tubular
neighborhood $U_{\lambda}$ of $V$ in $X$, on which $R_{\lambda}$ is the
identity, and the other one is identified with $O_{\lambda}$, the complement
of the $|\sqrt{\lambda}|$-tubular neighborhood of the singular divisor $W_{1}$
in $\mathbb{F}_{V}$. The only difference now is that we have several
overlapping local models, coming from the stratification of $V$.
Still, this perspective allows us to think of the rescaled $X$ as a sequence
of manifolds $X_{\lambda}$ with varying $J_{\lambda}=R_{\lambda}^{*}J$, which
as $\lambda\rightarrow 0$ converge to a level one building $X_{1}$. In fact,
$X_{\lambda}$ can be thought of as some sort of iterated symplectic sum: fix a
level one building $X_{1}$ as in Definition 3.4, with appropriate
identifications along corresponding divisors, including fixed isomorphisms
(A.7). For each (small) gluing parameter $\lambda\in{\mathbb{C}}^{*}$ we get
the ’symplectic sum’ $X_{\lambda}$ (diffeomorphic to $X$, but with a deformed
symplectic form) and such that $X_{\lambda}$ converges to $X_{1}$ as
$\lambda\rightarrow 0$. This approach is mentioned in Remark 7.7 of [IP1] and
expanded on in [IP2]; it turns out for this paper to be more convenient than
the approach of section 6 of [IP1].
We next describe the rescaling procedure in more detail. We will work in
regions which are obtained from neighborhoods of depth $k$ strata of $V$ after
removing neighborhoods of the higher depth stratum, where we have nice local
models. Denote by $U_{\delta}$ the $\delta$-tubular neighborhood of $V$ in
$X$, that is the image under $\iota$ of $\widetilde{U}_{\delta}$, the
$\delta$-disk bundle in the normal bundle model
$N_{V}\rightarrow\widetilde{V}$ over the resolution of $V$, and by
$\displaystyle A(r,R)=\iota(\widetilde{U}_{R}\setminus\widetilde{U}_{r})$
(3.22)
the union of annular regions about $V$ in $X$, which will give rise to the
necks. Note that the region $A$ still intersects $V$ around depth $k\geq 2$
points, so it is very different from the region $U_{R}\setminus U_{r}$, an
important distinction that will become relevant later in the paper.
The neighborhood $U_{\delta}$ has subregions $U_{\delta}^{k}$ corresponding to
a product neighborhood of the depth $k$ stratum $V^{k}$, over which $\iota$ is
$k$-to-1. Each region $U_{\delta}^{k}$ can then be regarded as a subset of
$\mathbb{F}_{k}$, where it is identified, after fiberwise rescaling by
$\lambda$, with the complement of a neighborhood of the divisor $F_{k}\cup
D_{k,\infty}$ in $\mathbb{F}_{k}$ to obtain $X_{\lambda}$.
More precisely, assume $p$ is a depth $k$ point of $V$ (but away from the
higher depth strata); locally index the $k$ branches $I$ of $V$ coming
together at $p$, and choose local coordinates $u_{1},\dots,u_{k}$ normal to
each one of these branches (so in particular the $j$’th branch is given by
$u_{j}=0$) such that the $\delta$-tubular neighborhood $U_{\delta}$ of $V$ is
given in these coordinates by $|u_{i}|\leq\delta$ for some $i\in I$. The
region $U_{\delta}^{k}$ is then described in these coordinates by
$\displaystyle|u_{i}|\leq\delta\mbox{ for all }i=1,\dots,k$ (3.23)
as long as $p\in V^{k}$ but outside the $\delta$ neighborhood of the higher
depth stratum $V^{k+1}$. This means that under the change of coordinates
$v_{i}=\lambda/u_{i}$ for each $i\in I$, the region $U_{\delta}^{k}$ is
identified with a region $O_{\delta}(\lambda)$ in $\mathbb{F}_{k}$ described
by the equations
$\displaystyle|v_{i}|\geq|\lambda|\delta^{-1}\mbox{ for all }i=1,\dots,k$
(3.24)
where $p\in V^{k}$ is outside the $\delta$ neighborhood of the higher depth
stratum. This region $O_{\delta}(\lambda)$ is nothing but the complement of the
union of tubular neighborhoods of the infinity divisor $D_{k,\infty}$ and the
fiber divisor $F_{k}$ in $\mathbb{F}_{k}$, see (3.7). For
$\delta=|\sqrt{\lambda}|$ this change of coordinates therefore provides an
identification $R_{\lambda}$ of the $|\sqrt{\lambda}|$-tubular neighborhood of
$V$ in $X$ (broken into pieces as above) and the complement of the
$|\sqrt{\lambda}|$-tubular neighborhood of the singular divisor $W_{1}$ in
$\mathbb{F}_{V}$, exactly as suggested by the iterated symplectic sum
construction.
Therefore the (semi)-local model of $X_{\lambda}$ over such a point $p$ is
given by the locus of the equations
$\displaystyle u_{i}v_{i}=\lambda\quad\mbox{ where
}\quad|u_{i}|\leq\delta\quad\mbox{ for all }i\in I$ (3.25)
where $u_{i}$ is a coordinate in $N_{i}$, the normal bundle to the $i$’th
local branch of $V$ and $v_{i}$ is the dual coordinate in the dual bundle
$N_{i}^{*}$ (which is allowed to equal infinity). Note that these equations
are invariant under reordering of the branches, so they describe an intrinsic
subset of $N_{V^{k}}\times\mathbb{F}_{k}$ where our semi-local analysis will
take place.
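For orientation, here is the simplest instance $k=1$ of the gluing (3.25) written out (a sketch; the annulus notation follows (3.22)):

```latex
% Single branch (k = 1): under v = \lambda/u the overlap region is the
% symmetric annulus
\[
  \{\,|\lambda|\delta^{-1}\le|u|\le\delta\,\}
  \;=\;
  \{\,|\lambda|\delta^{-1}\le|v|\le\delta\,\},
  \qquad |u|\,|v|=|\lambda|,
\]
% and for \delta = |\lambda|^{1/2} this degenerates to the circle
% |u| = |v| = |\lambda|^{1/2}: the complement of the
% |\lambda|^{1/2}-neighborhood of V in X glues to the complement of the
% |\lambda|^{1/2}-neighborhood of the infinity divisor in \mathbb{F}_V,
% as in the symplectic sum description above.
```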
The $\delta$-neck $N_{\lambda}(\delta)$ of $X_{\lambda}$ is the region in the
above coordinates where
$\displaystyle\mbox{ $|u_{i}|<\delta$ and $|v_{i}|<\delta$ for some }i$ (3.26)
and it globally corresponds to the annular region
$A(|\lambda|\delta^{-1},\delta)$ around $V$ in $X$. The upper hemisphere
region $H_{\lambda}(\delta)$ of $X_{\lambda}$ corresponds to the region
$A(|\lambda|,\delta)$ in $X$; in coordinates it is described by:
$\displaystyle\mbox{ $|u_{i}|<\delta$ and $|v_{i}|<1$ for some }i$ (3.27)
As $\lambda\rightarrow 0$, $X_{\lambda}$ converges to a level one building
$X_{1}$; for $\lambda$ sufficiently small, the part of $X_{\lambda}$ outside
the $\delta$-neck is canonically identified with the complement of a certain
neighborhood of $W_{1}$ in $X_{1}$; as both $\lambda,\delta\rightarrow 0$,
this neighborhood expands to $X_{1}\setminus W_{1}$. The upper hemisphere
region $H_{\lambda}(\delta)$ similarly converges to the upper hemisphere of
$\mathbb{F}_{V}$. The neighborhoods $U_{\delta}^{k}\setminus U_{\delta}^{k+1}$
all fit inside $N_{V^{k}}\times\mathbb{F}_{k}$ where they are described by the
equations (3.25). In particular, over a point $p$ as above, the level one
building $X_{1}$ is described by the locus of the equations:
$\displaystyle u_{1}v_{1}=0,\quad\dots\quad,u_{k}v_{k}=0$ (3.28)
where $|u_{i}|\leq\delta$ for all $i$, regarded as an intrinsic subset of
$N_{V^{k}}\times\mathbb{F}_{k}$ (or more precisely inside the pullback of
$\mathbb{F}_{k}$ over $N_{V^{k}}$).
These describe the $2^{k}$ local pieces of $X_{1}$ coming together at $p$
along the singular locus $W_{1}$: each piece of $X_{1}$ is described by the
vanishing of exactly $k$ coordinates, but some of them may be $u_{i}\in N_{i}$
in which case the rest are the complementary indexed ones $v_{j}\in
N_{j}^{*}$. Furthermore, the divisor $W_{1}$ has several local branches, each
one described by the further vanishing of one of the remaining coordinates
$u_{i}$ or $v_{i}$, matching the description from Remark 3.15.
###### Remark 3.17
Of course, the rescaling procedure can be iterated finitely many times: start
with $X$, rescale it by $\lambda_{1}$, then rescale again the resulting
manifold by $\lambda_{2}$, etc. The limit as all $\lambda_{a}\rightarrow 0$ is
then a level $m$ building $X_{m}$, where we have similar semi-local models
described over a depth $k$ point $p$ as intrinsic subsets of
$(\mathbb{F}_{k}\times\mathbb{F}_{k})^{m}$, modeled in each direction $i$ by
the process of $m$ times rescaling a disk at the origin, see Remark 3.15.
Therefore one way to describe this iterated rescaling is to start by choosing
coordinates $u_{i,0}=u_{i}$ on $X$ normal to $V$ at $p$, together with $m$
other sets of dual normal coordinates:
$u_{i,l}\in N_{i}\cup\infty$ and $v_{i,l}\in N_{i}^{*}\cup\infty$ with
$v_{i,l}=u_{i,l}^{-1}$ (3.29)
for all $l=1,\dots,m$ and all $i\in I$. These provide semi-local coordinates
on $X_{m}$ in a neighborhood of the fiber $F_{p}$ over $p$ of the collapsing
map $X_{m}\rightarrow X$. Rescaling $X$ means we
make a change of coordinates $u_{i,l-1}v_{i,l}=\lambda_{l}$ at step $l$,
(3.30)
for each $l=1,\dots,m$ and all $i\in I$.
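Composing the coordinate changes (3.30) makes the collapsing explicit (a quick check of the bookkeeping, using $v_{i,l}=u_{i,l}^{-1}$ from (3.29)):

```latex
% From u_{i,l-1} v_{i,l} = \lambda_l and v_{i,l} = u_{i,l}^{-1} we get
% u_{i,l-1} = \lambda_l u_{i,l}, so telescoping over l = 1,...,m:
\[
  u_{i,0} \;=\; \lambda_1\, u_{i,1} \;=\; \lambda_1\lambda_2\, u_{i,2}
          \;=\;\cdots\;=\; (\lambda_1\cdots\lambda_m)\, u_{i,m}.
\]
% A point kept at fixed nonzero u_{i,m} in the level-m chart therefore
% has u_{i,0} -> 0 as all \lambda_a -> 0; that is, the level-m piece
% collapses onto V under the map X_m -> X.
```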
## 4 The refined limit of a sequence of maps
Consider now a sequence $\\{f_{n}\\}$ of maps in ${\cal M}_{s}(X,V)$ whose
limit (in the usual stable map compactification) has some components in $V$
and thus is not in ${\cal M}_{s}(X,V)$. We will describe how to rescale $X$
once around $V$ and construct a refined limit $f$ into a level one building
$X_{1}$ which has no nontrivial components in the singular locus, and fewer
nontrivial components in the zero divisor. This will be the key step in
the inductive rescaling procedure used in the next section to construct the
relatively stable map compactification.
We also want to describe in more detail the collection of such possible
limits $f$. In the case $V$ is a smooth divisor, we proved in [IP2] that the
limit satisfies a matching condition along the singular divisor. The arguments
used there are semi-local (that is, they involve a further analysis of what
happens only in a tubular neighborhood of $V$), and extend to the case when
$V$ is a normal crossings divisor by similarly working in neighborhoods of
each depth $k$ stratum of $V$, where we have $k$ different normal directions
to $V$. In the case where the limit $f$ has no trivial components in $W_{1}$,
we will also get a matching condition along $W_{1}$. First of all, we will see
that, as before, $f^{-1}(W_{1})$ consists of nodes of the domain. The naive
condition is that at each such node, $f$ has matching intersection points with
$W_{1}$, including the multiplicities, as was the case in [IP1]. But it turns
out that at points of depth $k\geq 2$, this is not enough to define relative
GW invariants, and needs to be further refined.
First of all, according to our conventions, which match those of [IP2], every
map $f:\Sigma\rightarrow X$ will be regarded as a map
$(f,\varphi):\Sigma\rightarrow X\times\overline{{\cal U}}_{g,n}$, where the
second factor is the universal curve of the domains. The energy of $f$ in a
region $N$ of the target is then defined to be
$\displaystyle E(f;N)=\frac{1}{2}\int_{f^{-1}(N)}|df|^{2}+|d\varphi|^{2}$
(4.31)
Denote by $\alpha_{V}$ the minimum quantum of energy that a stable holomorphic
map into $V$ has. Note that according to our conventions, the energy also
includes the energy of the domain $\Sigma$, which is stable when
$f_{*}[\Sigma]=0$, and therefore $\alpha_{V}>0$.
The key idea of Chapter 3 of [IP2] was that when $V$ is smooth and $\delta>0$
small, the limits of maps which have energy at most $\alpha_{V}/2$ in the
$\delta$-neck cannot have any components in the singular locus $V=V_{\infty}$
and thus have well defined leading coefficients along $V=V_{\infty}$ which are
furthermore uniformly bounded away from 0 and infinity (the particular bound
depends on $\delta$ and the choice of metrics).
In the case when $V$ is a normal crossings divisor, it will no longer be true
that the limit of maps which have small energy in the neck has no components
in the singular locus. Now we could have some components in level one whose
energy is smaller than $\alpha_{V}/2$: these are stable maps into
$\mathbb{F}_{V}$, but whose projection to $V$ does not have a stable model. In
the case $V$ was smooth, these correspond precisely to what we called trivial
components in Definition 11.1 of [IP2]. When $V$ is a normal crossings
divisor, the appropriate extension of that definition is then the following:
###### Definition 4.1
A trivial component in $\mathbb{F}_{k}$ is a nonconstant holomorphic map
$f:({\mathbb{P}}^{1},0,\infty)\rightarrow(\mathbb{F}_{k},D_{0}\cup
D_{\infty})$ whose image lands in a fiber $({\mathbb{P}}^{1})^{k}$ of
$\mathbb{F}_{k}$. In coordinates
$\displaystyle f(z)=(a_{1}z^{s_{1}},\dots,a_{k}z^{s_{k}})$
where some, but not all, of the $a_{i}$ could be zero or infinity.
This is the only kind of stable map into $(\mathbb{F}_{k},D_{0}\cup
D_{\infty}\cup F_{k})$ which does not have a stable model when projected into
$D_{0}$: its domain is an unstable sphere with just two marked points and thus
its projection has energy zero. Note that as a result of the rescaling
process, we have uniform control only on the energy of the projection, as the
area of the fiber of $\mathbb{F}_{k}$ could be arbitrarily small. We are now
ready to state the first rescaling result:
###### Proposition 4.2
Consider a sequence $\\{f_{n}:C_{n}\rightarrow X\\}$ of maps in ${\cal
M}_{s}(X,V)$ which converge, in the usual stable map compactification, to a
limit $f_{0}:C_{0}\rightarrow X$ which has at least one component in $V$.
Then there exists a sequence $\lambda_{n}\rightarrow 0$ of rescaling
parameters such that after passing to a subsequence, $R_{\lambda_{n}}f_{n}$
have a refined limit $g:C\rightarrow X_{1}$ with the following properties:
* (a)
$g$ is a map into a level one building $X_{1}$ refining $f_{0}$:
$\displaystyle\begin{array}{ccc}C&\mathop{\longrightarrow}\limits^{g}&X_{1}\\
\downarrow{\mbox{\rm st}}&&\downarrow{p}\\
C_{0}&\mathop{\longrightarrow}\limits^{f_{0}}&X\end{array}$ (4.32)
* (b)
$g$ has no nontrivial components in the singular divisor $W_{1}$;
* (c)
$g$ has at least one nontrivial component in level one which does not lie
entirely in $V_{1}$, and thus has fewer nontrivial components in the zero
divisor $V_{1}$ compared with $f_{0}$.
Any refined limit $g$ that has properties (a)-(c) is unique up to rescaling
the level one of the building by an overall factor
$\lambda\in{\mathbb{C}}^{*}$.
Proof. In the case $V$ is a smooth divisor, we proved this in Section 6 of
[IP1] (see also Section 3 of [IP2]). The arguments used there to construct the
refined limit are semi-local, in a neighborhood of $V$, and if set up right,
easily extend to the case when $V$ is a normal crossings divisor. The main
rescaling argument consists of two parts, first the construction of the
refined limit by rescaling the target near $V$, and later on a further
analysis of the properties of this refined limit. For the first part of the
argument, we work separately in neighborhoods of the depth $k$ stratum $V^{k}$ but
away from the higher depth strata; for the second part of the argument, we
work in the necks, where the transition between these local models happens. As
this is one of the crucial steps in the construction of the relatively stable
map compactification, we include below the complete details of both of these
arguments.
Step 0. Preliminary considerations. Assume for simplicity that the domains
$C_{n}$ are smooth (otherwise work separately on each of their components) and
that the original limit $C_{0}$ is a stable curve (see Remark 2.1).
Around each point $x\in C_{0}$, denote by $B(x,\varepsilon)$ the ball about
$x$ of radius $\varepsilon$ in the universal curve, and by
$B_{n}(x,\varepsilon)$ its intersection with $C_{n}$. Around each node $x$ of
$C_{0}$ we can choose local coordinates $z,w$ on the universal curve such that
the domains $C_{n}$ are described by
$\displaystyle zw=\mu_{n}(x)$ (4.33)
where $\mu_{0}(x)=0$. Denote by $D$ the collection of nodes of $C_{0}$, and by
$\widetilde{C}_{0}$ the resolution of $C_{0}$. For each point
$x\in\widetilde{C}_{0}$, denote by $\gamma_{\varepsilon}(x)$ the oriented
boundary of the $\varepsilon$-disk about $x$ in $\widetilde{C}_{0}$, and by
$\gamma_{n,\varepsilon}(x)$ the corresponding boundary component of
$B_{n}(x,\varepsilon)$.
Denote by $P_{n}\subset C_{n}$ the collection of marked points of $f_{n}$,
which include all the points in $f_{n}^{-1}(V)$, with their contact
information recorded by $s$. As $n\rightarrow\infty$, they converge to the
marked points $P_{0}$ of $C_{0}$, but the original limit $f_{0}$ has only
partially defined contact information to $V$, which we describe next. Each
component $\Sigma$ of $C_{0}$ has a depth $k(\Sigma)$, defined as the maximum
$k$ such that $f_{0}(\Sigma)\subset V^{k}$. The restriction of $f_{0}$ to
$\Sigma$ has a well defined contact multiplicity to the next stratum
$V^{k+1}$; each such contact point has an associated contact information (2.3)
along $V^{k+1}$, including its depth $k(x)>k$ (with respect to the
stratification of $V$). For each $j>k$, we will denote by
$R_{k}^{j}\subset\widetilde{C}_{0}$ the collection of contact points of depth
$j$ that belong to a component of depth $k$. Denote by $R$ the union of all
these contact points, and by $R_{k}$ those that have depth $k$ while by
$R^{j}$ those that belong to a depth $j$ component. Also denote by
$f^{k}:C^{k}\rightarrow V^{k}$ the restriction of $f_{0}$ to $C^{k}$, the part
of $C_{0}$ consisting of depth $k$ components; in fact, $f^{k}\in{\cal
M}_{s^{k}}(\widetilde{V}^{k},\widetilde{V}^{k+1})$ and
$\displaystyle f_{0}^{-1}(V^{k}\setminus V^{k+1})=(C^{k}\setminus R^{k})\sqcup
R_{k}$ (4.34)
This allows us to break up the domains $C_{0}$ and $C_{n}$ into pieces whose
image lies in controlled regions where the rescaling happens (more precisely
in a neighborhood of $V^{k}$, but away from a neighborhood of the higher depth
stratum $V^{k+1}$). Recall that $U_{\delta}$ denotes the $\delta$-tubular
neighborhood of $V$, and $U_{\delta}^{k}$ denotes the corresponding
neighborhood of $V^{k}$, see (3.23).
The complement of the ’neck’ regions $B(x,\varepsilon)$ about all points $x\in
R$ (for $\varepsilon(x)>0$ sufficiently small) decomposes $C_{n}$ into pieces
$(C_{n}^{k})^{\prime}$, which limit as $n\rightarrow\infty$ and
$\varepsilon\rightarrow 0$ to $C^{k}\setminus R^{k}$. We will denote by
$C_{n}^{k}$ the union of the $(C_{n}^{k})^{\prime}$ piece together with all
the ’neck’ pieces around points in $R_{k}$. The boundary of $C^{k}_{n}$ then
decomposes as $\partial_{+}\sqcup-\partial_{-}$ where
$\displaystyle\partial_{+}C^{k}_{n}=\mathop{\sqcup}\limits_{x\in
R_{k}}\gamma_{n,\varepsilon}(x)\quad\mbox{ while
}\quad\partial_{-}C^{k}_{n}=\mathop{\sqcup}\limits_{x\in
R^{k}}\gamma_{n,\varepsilon}(x)$ (4.35)
The curve $C_{n}$ is obtained by joining $C_{n}^{k}$ along
$\partial_{-}C_{n}^{k}$ and $\partial_{+}C_{n}^{k}$ to lower and respectively
higher depth pieces $C_{n}^{j}$.
By construction and (4.34), $f_{0}$ maps $C^{k}_{0}$ into some neighborhood
$U^{k}_{M(\varepsilon)/2}$ of the depth $k$ stratum but away from some much
smaller neighborhood $U^{k+1}_{2m(\varepsilon)}$ of the higher depth stratum,
such that the incoming part $f_{0}(\partial_{-}C_{0}^{k})$ of the boundary
ends up outside the neighborhood $U^{k}_{2m(\varepsilon)}$ of the depth $k$
stratum, while the outgoing part $f_{0}(\partial_{+}C_{0}^{k})$ of the
boundary ends up inside the $U^{k+1}_{M(\varepsilon)/2}$ neighborhood of the
depth $k+1$ stratum. The sizes of these tubular neighborhoods depend on
$f_{0}$, but can be chosen uniform in $k$ and also such that
$M(\varepsilon)\rightarrow 0$ as $\varepsilon\rightarrow 0$.
Fix $0<\delta\ll 1$ small enough so that, for all $k\geq 0$, the depth $k$ piece
$f^{k}$ of the original limit $f_{0}$ has very small energy in a
$\delta$-neighborhood of the higher depth stratum $V^{k+1}$:
the energy of $f^{k}$ in the region $U^{k+1}_{\delta}$ is less than
$\alpha_{V}/100$ (4.36)
Of course, we have assumed that $f_{0}$ has at least one component in $V$ thus
the energy of the restriction of $f_{0}$ to the complement of $C^{0}$ is at
least $\alpha_{V}$. Assume $\varepsilon>0$ sufficiently small so that
$2M(\varepsilon)<\delta$. Since $f_{n}\rightarrow f_{0}$ in the stable map
compactification as maps into $X$, for $n$ sufficiently large,
1. (i)
$f_{n}(C_{n}^{k})$ lies in $U^{k}_{M}\setminus U^{k+1}_{m}$;
2. (ii)
$f_{n}(\partial_{-}C_{n}^{k})$ lies in $U^{k}_{M}\setminus U^{k}_{m}$ while
$f_{n}(\partial_{+}C_{n}^{k})$ lies in $U^{k+1}_{M}\setminus U^{k+1}_{m}$;
3. (iii)
the restriction of $f_{n}$ to $C_{n}^{k}$ has energy at most $\alpha_{V}/50$
in the region $U_{\delta}^{k+1}$;
4. (iv)
the restriction of $f_{n}$ to the complement $C_{n}^{0}$ has energy at least
$2\alpha_{V}/3$.
Step 1. Constructing the refined limit. Next we rescale $X$ around $V$ by a
certain amount $\lambda_{n}$ to catch a refined limit of the sequence $f_{n}$.
To find $\lambda_{n}$, we consider the energy $E(t)$ of $f_{n}$ inside the
annular region $A(t,\delta)$ of (3.22) around $V$ in $X$. Because $f_{n}$ was
assumed to be smooth, the energy $E(t)$ is a continuous, decreasing function
of $t$, which is equal to 0 when $t=\delta$ and is at least $2\alpha_{V}/3$
when $t=0$ (because the restriction of $f_{n}$ to the complement of
$C_{n}^{0}$ lies in $U_{\delta}$ by (i) and has energy at least
$2\alpha_{V}/3$ by (iv)). So for $n$ sufficiently large, we can find a
$\lambda_{n}\neq 0$ such that
the energy of $f_{n}$ in the annular region $A(|\lambda_{n}|,\delta)$ is
precisely $\alpha_{V}/2$ (4.37)
where $\delta>0$ is fixed such that (4.36) holds. Then $\lambda_{n}\rightarrow 0$,
because if they were bounded below by $\mu>0$ then in the limit the energy in
the annular region $A(\mu,\delta)$ of $f_{0}$ and thus of $f^{0}$ would be
$\alpha_{V}/2$, which contradicts (4.36).
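Spelled out, the existence of $\lambda_{n}$ is an intermediate value argument (our shorthand $E_{n}(t)$ denotes the energy of $f_{n}$ in $A(t,\delta)$):

```latex
% E_n(t) := E(f_n; A(t,\delta)) is continuous and decreasing in t, with
\[
  E_n(\delta)=0, \qquad E_n(0)\;\ge\;2\alpha_V/3\;>\;\alpha_V/2,
\]
% so by the intermediate value theorem there is t_n \in (0,\delta) with
% E_n(t_n) = \alpha_V/2; set |\lambda_n| = t_n, as in (4.37).
```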
As the rescaling maps $R_{\lambda_{n}}:X\rightarrow X_{\lambda_{n}}$ of (3.21)
are the identity outside the $|\lambda_{n}|^{1/2}$ neighborhood of $V$, the
restriction of $f_{n}$ to $C_{n}^{0}$ will be unaffected by rescaling and thus
will uniformly converge (on compacts away from the nodes) to the restriction
of $f^{0}$ to $C^{0}_{0}$. Taking also $\varepsilon\rightarrow 0$, we get
usual Gromov convergence of the restrictions of $R_{\lambda_{n}}f_{n}$ to
$C_{n}^{0}$ to a limit $g_{0}$, which is equal in this case to the original
depth zero part of the limit $f^{0}:C^{0}\rightarrow X$. In particular, the
restriction of $R_{\lambda_{n}}f_{n}$ to the outgoing part of the boundary
$\partial_{+}C_{n}^{0}$ uniformly converges to the restriction of $g_{0}$ to
$\partial_{+}C_{0}^{0}$, where we have a local model (2.3) around $V$.
Now proceed by induction on $k\geq 1$. Assume that for all
$\varepsilon_{n}\rightarrow 0$, after possibly making the $\varepsilon_{n}$
even smaller, the restrictions of $R_{\lambda_{n}}f_{n}$ to
$\mathop{\cup}\limits_{j<k}C_{n}^{j}(\varepsilon_{n})$ converge to a limit
$g:\mathop{\cup}\limits_{j<k}B_{j}\rightarrow\mathop{\cup}\limits_{j<k}\mathbb{F}_{j}$
which refines the depth less than $k$ part of the original limit $f_{0}$. This
in particular means that the boundary of $\mathop{\cup}\limits_{j<k}C_{n}^{j}$
converges to finitely many marked points of $\mathop{\cup}\limits_{j<k}B_{j}$
and that the restrictions of $R_{\lambda_{n}}f_{n}$ to this boundary converge
to the image under $g$ of these points.
We want to extend this to the restrictions to $\cup_{j\leq k}C_{n}^{j}$. Note
that by construction, $\cup_{j<k}C_{n}^{j}$ has only outgoing boundary, which
matches the incoming one of $\cup_{j\geq k}C_{n}^{j}$. Therefore it suffices
to only consider the restrictions to $C_{n}^{k}$ and show that after possibly
shrinking only the outgoing part of its boundary we get a convergent
subsequence to some $g_{k}:B_{k}\rightarrow\mathbb{F}_{k}$ which refines
$f^{k}:C^{k}\rightarrow V^{k}$. Then the restriction of the sequence
$R_{\lambda_{n}}f_{n}$ to $\mathop{\cup}\limits_{j\leq k}C_{n}^{j}$ will
automatically converge to a continuous limit $g:\mathop{\cup}\limits_{j\leq
k}B_{j}\rightarrow\mathop{\cup}\limits_{j\leq k}\mathbb{F}_{j}$ with the
desired properties.
By (i), the restrictions of $f_{n}$ to $C_{n}^{k}$ lie in the
$\delta$-neighborhood $U^{k}_{\delta}$ of $V^{k}$, but outside the
$m(\varepsilon)$-tubular neighborhood of the next stratum $V^{k+1}$; after
rescaling by $\lambda_{n}$, we can identify this region with the complement of
the $|\lambda_{n}|\delta^{-1}$ neighborhood of the infinity divisor
$D_{k,\infty}$ in $\mathbb{F}_{k}$ and of the $m(\varepsilon)$ neighborhood of
the fiber divisor $F_{k}$ over the higher depth stratum, see (3.23) and
(3.24).
Next consider the restriction of $f_{n}$ to each piece
$\gamma_{n,\varepsilon}(x)$ of the boundary of $C_{n}^{k}$, which after
rescaling ends up in a corresponding annular region around $D_{k,\infty}$ and
respectively $F_{k}$. We claim that the image of $\gamma_{n,\varepsilon}(x)$
under $R_{\lambda_{n}}f_{n}$ can be capped off with a small energy disk in
$\mathbb{F}_{k}$ around this divisor. For the incoming pieces of the boundary
(i.e. for each $x\in R_{j}^{k}$ with $j<k$), this is because by induction
$R_{\lambda_{n}}(\gamma_{n,\varepsilon}(x))$ uniformly converges as
$n\rightarrow\infty$ to the restriction of $g(\gamma_{\varepsilon}(x))$ where
we already have a local model coming from $f^{j}$ for each $j<k$; in
particular, the intersection of these capping disks with the infinity section
is uniformly bounded. For the outgoing pieces of the boundary (i.e. for each
$x\in R_{k}^{j}$ with $j>k$) we have the local model of $f^{k}$ along
$V^{k+1}$, to which the unrescaled $f_{n}(\gamma_{n,\varepsilon}(x))$
converges uniformly for fixed $\varepsilon>0$. So after possibly choosing a
smaller $\varepsilon_{n}(x)>0$ (depending also on $n$ for each such outgoing
$x$), we can also cap off the image of $\gamma_{n,\varepsilon}(x)$ under
$R_{\lambda_{n}}f_{n}$ by a small energy disk about the fiber divisor over the
higher depth stratum (away from the infinity section).
The resulting homology class in $\mathbb{F}_{k}$ of the capped surface is
constant in $n$, because it is determined by (i) the homology class of its
projection onto the zero section which depends only on $f_{0}$ and (ii) the
intersection with the infinity divisor, where we already have uniform control
by induction. This means a fortiori that the restrictions of the maps
$R_{\lambda_{n}}f_{n}$ to $C_{n}^{k}$, which are $R_{\lambda_{n}}^{*}(J,\nu)$
holomorphic, have uniformly bounded energy in $\mathbb{F}_{k}$ (where the
symplectic area of the fibers is fixed, but small) and therefore have a Gromov
convergent subsequence to a $(J_{0},\nu_{0})$ limit
$g_{k}:B_{k}\rightarrow\mathbb{F}_{k}$, defined (after removing its
singularities) on a closed, possibly nodal curve $B_{k}$ which is obtained
from $C^{k}$ after possibly inserting some bubble trees. The convergence is
uniform on compacts away from the nodes of $B_{k}$ and in Hausdorff distance.
This constructs inductively a refined limit $g:C\rightarrow X_{1}$, defined
from $C=\cup B_{k}$ into a level one building (the depth $k$ is bounded by
${\rm dim\;}X$, so the process terminates in finitely many steps). By
construction, the refined limit $g$ has energy at most $\alpha_{V}/2$ around
the infinity section of $\mathbb{F}_{V}$, and therefore has no nontrivial
components there (as these would carry at least $\alpha_{V}$ energy), giving
(b). If $g$ had only trivial components in the first level but outside
$V_{1}$, then $g$ would have very small energy in the upper hemisphere of
$\mathbb{F}_{V}$, contradicting the choice (4.37). Therefore the limit
satisfies (c) as well. This concludes the proof of Proposition 4.2. $\Box$
Step 2: Further properties of the refined limit. We next want to understand in
more detail the behavior of the refined limit $g:C\rightarrow X_{1}$
constructed in Proposition 4.2 around the singular divisor $W_{1}$ where the pieces
of the building are joined together; for that we will restrict ourselves to
the neck regions (both of the domain and of the target) where we will use the
local models (4.33) on the domain and (3.25) in the target.
First of all, every component $\Sigma$ of the refined limit $g:C\rightarrow
X_{1}$ that does not land in $W_{1}$ has a lift
$\widetilde{g}:\Sigma\rightarrow\widetilde{X}_{1}$ to the resolution
$\widetilde{X}_{1}$, where it has a well defined order of contact with
$\widetilde{W}_{1}$; the components that land in $V_{1}$ have only a partial
contact information to the higher depth strata of $V_{1}$; the components that
land in $W_{1}$ must project to a point under $X_{1}\rightarrow X$, and have
only a partial contact information to the higher depth strata of
$\widetilde{W}_{1}\cup V_{1}$. The contact information of $g$ refines the
partial contact information of the depth $k$ part of the original limit
$f_{0}$ to the higher depth stratum of $V$.
In this section we use the semi-local models for the target described in
Remark 3.15, and in particular describe the strata of the resolution of the
total divisor $\widetilde{D}_{1}$ in terms of strata of $V$ together with a
multilevel and a multisign map. Fix a point $x\in\Sigma$ such that $g(x)\in
W_{1}\cup V_{1}$, and denote by $p$ the image of $g(x)$ under the collapsing
map $X_{1}\rightarrow X$. The image $\widetilde{g}(x)$, which is a point in
$\widetilde{W}_{1}\cup V_{1}$, comes with both a multi-level map (3.15) and a
sign map (3.16) that associate to each local branch $i$ of $V$ at $p$ a local
level $l_{i}(x)=0$ or $1$ and a sign $\varepsilon_{i}(x)=\pm 1$ or $0$,
recording whether $\widetilde{g}(x)$ is at infinity, at zero, or neither in
that direction. Furthermore, $\widetilde{g}$ carries associated contact
information to $\widetilde{W}_{1}$, and possibly partial information to
$V_{1}$, at $x$. This information includes
a partition of $I$ into $I^{\pm}(x),I^{0}(x)$ and $I^{\infty}(x)$. (4.38)
For each $i\in I^{\pm}(x)$, $\varepsilon_{i}(x)=\pm$ (respectively) and we
also have a well defined contact multiplicity $s_{i}(x)>0$ to the total
divisor and a leading coefficient $a_{i}(x)\neq 0$ which is now regarded as an
element of
$\displaystyle a_{i}(x)\in
N_{i}^{\varepsilon_{i}(x)}\otimes(T_{x}^{*}\Sigma)^{s_{i}(x)}\quad\mbox{ for
all }i\in I^{\pm}(x)$ (4.39)
The directions $i\in I^{0}(x)$ have $\varepsilon_{i}(x)=0$ and correspond to
contact of order zero with the total divisor in those directions, so we set
$s_{i}(x)=0$ while the leading coefficient is $a_{i}(x)\in N_{i}$, the
coordinate of $\widetilde{g}(x)$ in the $i$’th direction (which is nonzero).
Finally, the directions $i\in I^{\infty}(x)$ have $\varepsilon_{i}(x)=+$ and
$l_{i}(x)=1$ and correspond to those directions in which $g(\Sigma)$ is
entirely contained in the zero divisor $V_{1}$. These count as having an
undefined (or infinite) order of contact $s_{i}(x)=\infty$, while their
leading coefficient is $a_{i}(x)=0$. In local coordinates $z$ on $\Sigma$ at $x$, and normal
coordinates $u_{i,l}\in N_{i}$ to the zero divisor in each level $l$, $g(z)$
then has an expansion
$\displaystyle(u_{i,l})^{\varepsilon_{i}(x)}$ $\displaystyle=$ $\displaystyle
a_{i}(x)z^{s_{i}(x)}+o(|z|^{s_{i}(x)})\qquad\mbox{ for all $i\notin
I^{\infty}(x)$}$ (4.40)
###### Lemma 4.3
Consider a sequence $f_{n}\in{\cal M}_{s}(X,V)$ as in Lemma 4.2, and let
$g:C\rightarrow X_{1}$ denote its refined limit constructed there. Then $g$
has a lift to the resolution of the level one building, which comes with
(possibly partial) contact information $s$ to $\widetilde{W}_{1}\cup V_{1}$ as
described above. If we denote by $C^{\prime}$ any intermediate curve
$C\rightarrow C^{\prime}\rightarrow C_{0}$, then all the contact points of $g$
descend to special points of $C^{\prime}$, and moreover:
* •
$s(x_{-})=s(x_{+})$ and $\varepsilon(x_{-})=-\varepsilon(x_{+})$ for each node
$x_{-}=x_{+}$ of $C^{\prime}$;
* •
$s(x)=s(x_{n})$ and $\varepsilon(x)=\varepsilon(x_{n})$ for each marked point
$x\in C^{\prime}$ which is the limit of marked points $x_{n}$ of $C_{n}$
(whenever both sides are defined). Furthermore any special fiber of
$C\rightarrow C^{\prime}$ or $C^{\prime}\rightarrow C_{0}$ is a string of
trivial components with two end points (broken cylinder).
Proof. First of all, we can decompose the domain $C$ into two pieces, $C^{0}$
and $B$, where $C^{0}$ consists of the nontrivial components and $B$ consists
of the components that get collapsed under the two maps $C\rightarrow C_{0}$
and $X_{1}\rightarrow X$. Then each connected component $B_{i}$ of $B$ is an
unstable genus zero curve (bubble tree) with either one or two marked points,
corresponding to a special fiber of $C\rightarrow C_{0}$ over either:
1. (a)
a non special point in the case $B_{i}$ has only one marked point
2. (b)
a node in the case $B_{i}$ has two marked points and both belong to $C^{0}\cap
B$ or
3. (c)
a marked point in the case $B_{i}$ has two marked points, but only one belongs
to $C^{0}\cap B$ while the other is a marked point of $C$.
We have the same description for the special fibers of $C\rightarrow
C^{\prime}$ and of $C^{\prime}\rightarrow C_{0}$ for any intermediate curve
$C\rightarrow C^{\prime}\rightarrow C_{0}$.
For any point $x\in C_{0}$, choose local coordinates at $x$ on the universal
curve of the domains (containing $C_{0}$) and normal coordinates to $V$ in the
target $X$ at $p$ as above (where $p=f_{0}(x)$ is a depth $k\geq 0$ point of
$V$). Using the notations of the proof of Lemma 4.2, for
$\varepsilon,\delta>0$ sufficiently small and $n$ sufficiently large, $f_{n}$
maps $B_{n}(x,\varepsilon)$ into the $\delta$-neighborhood of $p$ in $X$ but
away from the $\delta$-neighborhood of the depth $k+1$ stratum. Furthermore,
since $f_{n}^{-1}(V)=P_{n}$, $f_{n}$ maps
$B_{n}(x,\varepsilon)\setminus P_{n}$ into the annular region $O_{\delta}$
$\displaystyle 0<|u_{i}|<\delta\quad\mbox{ for all }i\in I$ (4.41)
of $N_{V^{k}}$ around $p$ (away from the higher depth stratum). This region is
homotopy equivalent to $(S^{1})^{k}$, with one factor for each branch $i$ of
$V$ at $p$.
If we also fix, to begin with, global coordinates on all bubble components in
$B$, we have a similar story for any point $x$ in any one of the intermediate
curves $C^{\prime}$: $f_{n}$ takes a sufficiently small punctured neighborhood
$B_{n}(x,\varepsilon)\setminus P_{n}$ into the annular region (4.41), where
now $B_{n}(x,\varepsilon)$ denotes the intersection of $C_{n}$ with the ball
$B(x,\varepsilon)$ about $x$ in the local model for domains containing their
intermediate limit $C^{\prime}$. Topologically, $B_{n}(x,\varepsilon)$ is
either a disk or an annulus, depending on whether $x$ is a smooth point or a node
of $C^{\prime}$. In particular, for every lift $\widetilde{x}$ of $x$ to the
resolution of $C^{\prime}$ and thus to $\widetilde{C}$, the corresponding
boundary loop $f_{n}(\gamma_{n,\varepsilon}(\widetilde{x}))$ has a well
defined winding number $s_{i,n}(\widetilde{x})$ about the branch $i$ of $V$ at
$p$.
Furthermore, for $\varepsilon>0$ sufficiently small, $B_{n}(x,\varepsilon)$
contains no points from $P_{n}$ if $x$ is not a marked point of $C^{\prime}$
and otherwise it contains precisely one point $x_{n}\in P_{n}$ which limits to
$x$ as $n\rightarrow\infty$. This implies that, for $n$ sufficiently large,
and all $i\in I$,
1. (a)
if $x$ is not a special point of $C^{\prime}$ then the winding numbers
$s_{i,n}(\widetilde{x})=0$;
2. (b)
if $x_{\pm}\in\widetilde{C}$ correspond to a node $x$ of $C^{\prime}$ then
$s_{i,n}(x_{+})+s_{i,n}(x_{-})=0$;
3. (c)
if $x\in C^{\prime}$ is the limit of the marked points $x_{n}\in C_{n}$ then
$s_{i,n}(\widetilde{x})=s_{i}(x_{n})$.
But the winding numbers $s_{i,n}(\widetilde{x})$ of $f_{n}$ are related to
those of the refined limit $g$. This is simply because the winding numbers of
$f_{n}$ agree with those of the rescaled maps $R_{\lambda_{n}}f_{n}$, and
these converge uniformly on compacts away from the nodes of $C$ to the refined
limit $g:C\rightarrow X_{1}$, which has well defined winding numbers about the
zero section in all directions $i\notin I^{\infty}$. Since the loops
$\gamma_{n,\varepsilon}(\widetilde{x})$ stay away from all the nodes of $C$
(for $\varepsilon$ sufficiently small), we get for $n$ sufficiently large:
$s_{i,n}(\widetilde{x})=\varepsilon_{i}(\widetilde{x})s_{i}(\widetilde{x})$
for all $i\notin I^{\infty}$ (4.42)
because of the expansion (4.40) of $g$ at $\widetilde{x}$. Note that this gives
us no information about the winding numbers in the directions of $I^{\infty}$,
where the winding numbers of $g$ are undefined.
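The mechanism behind (4.42) can be illustrated numerically (a minimal sketch with hypothetical values for the leading coefficient $a$ and multiplicity $s$, not the actual maps $f_{n}$): for a local expansion $u=az^{s}$, a small loop about $z=0$ maps to a loop winding $s$ times about the zero divisor, and $-s$ times in the dual coordinate $u^{-1}$.

```python
import cmath
import math

def winding(points):
    """Total change of argument along a closed loop, in full turns."""
    total = 0.0
    for p, q in zip(points, points[1:] + points[:1]):
        total += cmath.phase(q / p)   # phase increment of each small step
    return round(total / (2 * math.pi))

# hypothetical leading coefficient and contact multiplicity
a, s = 0.3 + 0.4j, 3
N, r = 400, 1e-2
zs = [r * cmath.exp(2j * math.pi * t / N) for t in range(N)]

image = [a * z**s for z in zs]          # local model u = a z^s
print(winding(image))                   # winds s = 3 times about u = 0
print(winding([1 / u for u in image]))  # dual coordinate u^{-1}: -s = -3
```

This is the sign convention of (4.42): the winding number is $+s$ on the zero side and $-s$ on the infinity side of the divisor.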
In particular, any contact point $\widetilde{x}$ of $g$ with $W_{1}\cup V_{1}$
or any of its strata has $s_{i}(\widetilde{x})>0$ in some direction $i$ which
rules out case (a): if $\widetilde{x}\in\widetilde{C}$ descends to a non
special point $x$ of $C^{\prime}$, then $s_{i,n}(\widetilde{x})=0$ by (a)
which contradicts (4.42).
In case (b), for any node $x$ of $C^{\prime}$ we have
$s_{i,n}(x_{-})+s_{i,n}(x_{+})=0$, and so (4.42) implies that
$\displaystyle s_{i}(x_{-})=s_{i}(x_{+})\mbox{ and
}\varepsilon_{i}(x_{-})=-\varepsilon_{i}(x_{+})\quad\mbox{ for all }i\notin
I^{\infty}(x)\mathop{=}\limits^{def}I^{\infty}(x_{-})\cup I^{\infty}(x_{+})$
as both sides are well defined for such an $i$. If $x$ is the limit of contact
points $x_{n}$ of $f_{n}$ to $V$ then (4.42) implies that
$\displaystyle s_{i}(x)=s_{i}(x_{n})\mbox{ and
}\varepsilon_{i}(x)=\varepsilon_{i}(x_{n})\ \quad\mbox{ for all }i\notin
I^{\infty}(x)$
So for example $x$ is a contact point of $g$ with the zero divisor $V_{1}$ if
and only if $x_{n}$ was one for $f_{n}$ to $V$.
Finally, this discussion implies that there are no components of $B$ with just
one marked point (otherwise, contracting such a component would give a curve
$C^{\prime}$ with a non special point $x$ on it, which is impossible since case
(a) is ruled out for all intermediate curves $C^{\prime}$). Therefore all the
components of $B$ have precisely 2 special points, which must be the contact
points with the zero and the infinity divisor in their fiber, and thus are
indeed trivial components as in Definition 4.1. Furthermore, the only special
fibers of $C\rightarrow C^{\prime}$ or $C^{\prime}\rightarrow C_{0}$ are
strings of trivial components (broken cylinders). $\Box$
###### Remark 4.4
Lemma 4.3 and the discussion preceding it show that each node $x$ of $C_{0}$
comes with a partition of the original indexing set $I$ into $I^{\infty}(x)$,
$I^{0}(x)$ and then $I(x)$. $I^{\infty}(x)$ records those directions in which
at least one of the local branches of $g$ lies in the total divisor $W_{1}$
while $I^{0}(x)$ records the directions in which the $i$’th coordinates of
both $\widetilde{g}(x_{\pm})$ are nonzero, so both branches stay away from the
total divisor in those directions. The remaining directions $i\in I(x)$ come
with a multiplicity $s_{i}(x)>0$ and opposite signs
$\varepsilon_{i}(x_{+})=-\varepsilon_{i}(x_{-})\neq 0$, recording the two
opposite sides of the level one building from which the two branches of $g$
come into the singular divisor, and also two leading coefficients
$a_{i}(x_{\pm})\neq 0$ which are naturally elements of
$\displaystyle a_{i}(x_{+})\in
N_{i}^{\varepsilon_{i}(x_{+})}\otimes(T_{x_{+}}C)^{s_{i}(x)}\quad\mbox{ and
}\quad a_{i}(x_{-})\in
N_{i}^{\varepsilon_{i}(x_{-})}\otimes(T_{x_{-}}C)^{s_{i}(x)}$ (4.43)
where $N_{i}$ is the branch of the normal bundle to $V$ indexed by $i\in
I(x)$.
###### Example 4.5
In the situation of Example 3.6 we could have two nodes of the domain mapped
to $p$, one between the components $X$ and $\mathbb{F}_{2}$ while the other
one between the two $\mathbb{F}_{1}$ components, but no node at $p$ between
say $X$ and $\mathbb{F}_{1}$ (as the branches of $g$ would not be on opposite
sides of the singular divisor in all local directions). In the first case the
node is between level 0 and level 1 (really local level (0,0) and (1,1)),
while in the second case it is between two level 1 floors, or more accurately
between a local level (0,1) and (1,0) floor.
###### Remark 4.6
As a consequence of Lemma 4.3, a refined limit $g:C\rightarrow X_{1}$ which
has no components in the total divisor $D_{1}=W_{1}\cup V_{1}$ lifts, after
labeling the nodes of $C$, to a unique map $\widetilde{g}$ into
$\widetilde{X}_{1}$, the resolution of the level one building $X_{1}$, with
$\displaystyle\widetilde{g}\in{\cal M}_{s_{\pm}\sqcup
s}(\widetilde{X}_{1},\widetilde{W}_{1}\cup\widetilde{V}_{1})$
where $s_{\pm}(x)$ records the extra contact information $s(x_{\pm})$ at the
pair of points $x_{\pm}$ corresponding to a node $x\in C$. The domain
$\widetilde{C}$ of $\widetilde{g}$ is a resolution of the domain $C$ of $g$.
The combined attaching map identifies pairs of marked points of
$\widetilde{C}$ to produce the nodes of $C$, and simultaneously attaches the
targets together to produce the singular locus $W_{1}$. This describes the
limit $g$ in terms of its normalization
$\widetilde{g}:\widetilde{C}\rightarrow\widetilde{X}_{1}$.
The attaching map $\xi$ that attaches the pieces of $\widetilde{X}_{1}$, when
restricted to a level $k$ stratum is a degree $2^{k}$ cover of $W_{1}$. At
each node $x_{1}=x_{2}$ mapped into this stratum we also have a partition of
the $2k$ normal directions $N_{V^{k}}\oplus N^{*}_{V^{k}}$ into two length $k$
dual indexing sets $I_{W}(x_{1})$ and $I_{W}(x_{2})$ that record the two
opposite local pieces of $\widetilde{X}_{1}$ containing the two local branches
of $g$ at that node. According to our setup, this information is already
encoded by $s_{\pm}$, as is the topological information about the domains and
the homology of the images, see Remark 2.3. Note that $C$ may have some usual
nodes (i.e. not mapped into $W_{1}$), but then according to our conventions
these give rise to marked points with depth $k=0$, also recorded in $s_{\pm}$,
see Remark 2.2.
Furthermore, the image of the normalization
$\widetilde{g}:\widetilde{C}\rightarrow\widetilde{X}_{1}$ under the evaluation
map (2.7) at the pairs of marked points giving the nodes:
$\displaystyle\mbox{\rm ev}_{s_{+}}\times\mbox{\rm ev}_{s_{-}}:{\cal
M}_{s\sqcup
s_{\pm}}(\widetilde{X}_{1},\widetilde{W}_{1}\cup\widetilde{V}_{1})\longrightarrow(W_{1})_{s_{+}}\times(W_{1})_{s_{-}}$
(4.44)
lands in the diagonal $\Delta$. We will call these the naive matching
conditions because when $s_{\pm}$ contains depth $\geq 2$ points, the
dimension of this stratum is in general bigger than the dimension of ${\cal
M}_{s}(X,V)$!
The refined limit $g$ is only well defined up to an overall rescaling
parameter $\lambda\in{\mathbb{C}}^{*}$ that acts by rescaling on level one of
the building, and by construction the limit $g$ has at least one component
which is not fixed by this action. This ${\mathbb{C}}^{*}$ action induces an
action on the moduli space of maps into $\widetilde{X}_{1}$, and thus the
normalization of such a limit is really in the inverse image of the diagonal
$\Delta$ under the map
$\displaystyle\mbox{\rm ev}_{s_{+}}\times\mbox{\rm ev}_{s_{-}}:{\cal
M}_{s\sqcup
s_{\pm}}(\widetilde{X}_{1},\widetilde{W}_{1}\cup\widetilde{V}_{1})/{\mathbb{C}}^{*}\longrightarrow(W_{1})_{s_{+}}\times(W_{1})_{s_{-}}$
(4.45)
We will include this information later on in this section. For now, let us
notice that:
###### Lemma 4.7
The difference between the expected dimension of ${\cal M}_{s}(X,V)$ and that
of the stratum $\mbox{\rm ev}_{s_{\pm}}^{-1}(\Delta)$ of (4.44) is equal to
$\displaystyle{\rm dim\;}{\cal M}_{s}(X,V)-{\rm dim\;}\mbox{\rm
ev}_{s_{\pm}}^{-1}(\Delta)=2\sum_{x\in P(s_{+})}(1-k(x))$
where $P(s_{+})$ denotes the collection of marked points associated with the
sequence $s_{+}$.
Proof. This is a simple adaptation of the calculations of [IP2] to this
context. The expected dimension of $\mbox{\rm ev}_{s_{\pm}}^{-1}(\Delta)$ is
$\displaystyle{\rm dim\;}\mbox{\rm ev}_{s_{\pm}}^{-1}(\Delta)$
$\displaystyle=$ $\displaystyle{\rm dim\;}{\cal
M}_{\widetilde{s}}(\widetilde{X}_{1},\widetilde{W}_{1}\cup\widetilde{V}_{1})-{\rm
dim\;}(W_{1})_{s_{+}}$
where $\widetilde{s}=s\sqcup s_{\pm}$. So the difference is
$\displaystyle{\rm dim\;}{\cal M}_{s}(X,V)-{\rm dim\;}\mbox{\rm
ev}_{s_{\pm}}^{-1}(\Delta)={\rm dim\;}{\cal M}_{s}(X,V)-{\rm dim\;}{\cal
M}_{\widetilde{s}}(\widetilde{X}_{1},\widetilde{W}_{1}\cup\widetilde{V}_{1})+{\rm
dim\;}(W_{1})_{s_{+}}$
where
$\displaystyle{\rm dim\;}{\cal M}_{s}(X,V)$ $\displaystyle=$ $\displaystyle
2c_{1}(TX)A_{s}+({\rm dim\;}X-6)\frac{\chi}{2}+2\ell(s)-2A_{s}V$
$\displaystyle{\rm dim\;}{\cal
M}_{\widetilde{s}}(\widetilde{X}_{1},\widetilde{W}_{1}\cup\widetilde{V}_{1})$
$\displaystyle=$ $\displaystyle
2c_{1}(T\widetilde{X}_{1})A_{\widetilde{s}}+({\rm
dim\;}X-6)\frac{\widetilde{\chi}}{2}+2\ell(\widetilde{s})-2A_{\widetilde{s}}\widetilde{W}_{1}-2A_{\widetilde{s}}V_{1}$
But $\chi=\widetilde{\chi}-2\ell(s_{+})$,
$\ell(\widetilde{s})=\ell(s)+2\ell(s_{+})$ and
$A_{s}V=|s|=A_{\widetilde{s}}V_{1}$, while Lemma 2.4 of [IP2] adapted to this
context gives
$\displaystyle
c_{1}(TX)A_{s}=c_{1}(T\widetilde{X}_{1})A_{\widetilde{s}}-2A_{\widetilde{s}}\widetilde{W}_{1}$
Therefore
$\displaystyle{\rm dim\;}{\cal M}_{s}(X,V)-{\rm dim\;}{\cal
M}_{\widetilde{s}}(\widetilde{X}_{1},\widetilde{W}_{1}\cup\widetilde{V}_{1})=({\rm
dim\;}X-2)\ell(s_{+})$
On the other hand, since the image under the evaluation map of each depth $k$
point lands in a codimension $2k$ stratum of $X_{1}$, we have
$\displaystyle{\rm dim\;}(W_{1})_{s_{+}}=\sum_{x\in P(s_{+})}({\rm
dim\;}X-2k(x))$
and thus the difference in dimensions is exactly as stated. $\Box$
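As a sanity check (a worked case added here, assuming $s_{+}$ consists of a single point of depth $k$), Lemma 4.7 combines with the counts appearing later in this section as follows:

```latex
% s_+ a single point of depth k: Lemma 4.7 gives
\dim \mathrm{ev}_{s_{\pm}}^{-1}(\Delta) \;=\; \dim{\cal M}_{s}(X,V) + 2k-2,
% so for k >= 2 the naive boundary stratum is larger than the interior.
% Dividing by the C^* action as in (4.45) removes 2 real dimensions:
\dim \mathrm{ev}_{s_{\pm}}^{-1}(\Delta)/{\mathbb{C}}^{*}
   \;=\; \dim{\cal M}_{s}(X,V) + 2k-4,
% and the extra 2k-2 conditions imposed by the enhanced matching
% condition of Remark 4.9 leave
\dim{\cal M}_{s}(X,V) + 2k-4 - (2k-2) \;=\; \dim{\cal M}_{s}(X,V) - 2,
```

which is consistent with Lemma 4.12 below. For $k=1$ no enhanced condition is needed: the naive stratum, after dividing by ${\mathbb{C}}^{*}$, already has codimension 2.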
Even after dividing by the ${\mathbb{C}}^{*}$ action on the moduli space,
Lemma 4.7 still implies that if we want to construct a relatively stable map
compactification in the case when the divisor is singular (has depth $k\geq 2$
points), then we need some refined matching condition; otherwise the boundary
stratum has larger dimension than the interior. Luckily, the existence of such
a refined compactification follows after a careful examination of the
arguments in [IP2].
It turns out that when $k\geq 2$, not all the maps $g:C\rightarrow X_{1}$
(without components in $W_{1}\cup V_{1}$) whose resolution $\widetilde{g}$
satisfies the naive matching condition can occur as limits after rescaling of
maps $f_{n}:C_{n}\rightarrow X$ in ${\cal M}_{s}(X,V)$. To describe those that
occur as limits, we use the results of Section 5 of [IP2]. For each node $x$
of $C$, we will work in the local models (4.33) on the domain and (3.25) in
the target, using the local coordinates as described. The results of Lemmas
4.2 and 4.3 can then be strengthened as follows:
###### Lemma 4.8
Consider $f_{n}:C_{n}\rightarrow X$ a sequence of maps in ${\cal M}_{s}(X,V)$
as in Lemma 4.2, and further assume that its refined limit $f:C\rightarrow
X_{1}$ constructed there has no components in the total divisor $D_{1}$. Then
for each node $x_{-}=x_{+}$ of $C$, we have the following relation:
$\displaystyle\lim_{n\rightarrow\infty}\frac{\lambda_{n}}{\mu_{n}(x)^{s_{i}(x)}}=a_{i}(x_{-})a_{i}(x_{+})\quad\mbox{
for each }i\in I(x)$ (4.46)
where $\mu_{n}(x)$ are the gluing parameters (4.33) describing $C_{n}$ at $x$
in terms of $C$, $\lambda_{n}$ is the sequence of rescaling parameters in the
target, while $s_{i}(x)>0$ and $a_{i}(x_{\pm})\neq 0$ are the contact
multiplicities and respectively the leading coefficients of the expansion
(4.40) of the refined limit $f$ at $x_{\pm}$ .
Proof. Note that according to our conventions, the condition (4.46) is vacuous
($I(x)=\emptyset$) unless the node $x$ is mapped to the singular locus. For
each node $x$ of $C$ that is mapped to a depth $k(x)\geq 1$ stratum of $W_{1}$
with matching multiplicities $s(x)$, we work in the local models described
above and separately project our sequence into each direction $i\in I(x)$,
where the local model is that of rescaling a disk around the origin, as
explained in more detail in the next section. The projections are now maps
into a rescaled family of disks, precisely the situation to which Lemma 5.3 of
[IP2] applies to give (4.46) in each direction, after using the expansions
(4.40). $\Box$
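Rearranged (an equivalent reading of (4.46), assuming the limits exist and the leading coefficients are nonzero), the relation pins down the rate at which the domain gluing parameters shrink relative to the target rescaling:

```latex
\mu_{n}(x)^{s_{i}(x)} \;\sim\; \frac{\lambda_{n}}{a_{i}(x_{-})\,a_{i}(x_{+})},
\qquad\mbox{i.e.}\qquad
\mu_{n}(x) \;\sim\;
\Bigl(\frac{\lambda_{n}}{a_{i}(x_{-})\,a_{i}(x_{+})}\Bigr)^{1/s_{i}(x)}
\quad\mbox{for each } i\in I(x).
```

Since the left hand side is independent of $i$, consistency across the directions $i\in I(x)$ forces the products $a_{i}(x_{-})a_{i}(x_{+})\mu_{n}(x)^{s_{i}(x)}$ to share the common limit $\lambda$; this is the enhanced matching condition (4.47) below.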
###### Remark 4.9
The local model of ${\cal M}_{s}(X,V)$ near a limit point $f:C\rightarrow
X_{1}$ which has no components in $W_{1}$ is described by tuples
$(\widetilde{f},\mu,\lambda)$ satisfying an enhanced matching condition
(4.47). Here $\widetilde{f}:\widetilde{C}\rightarrow\widetilde{X}_{1}$ is an
element of $\mbox{\rm ev}_{s_{\pm}}^{-1}(\Delta)\subset{\cal M}_{s\sqcup
s_{\pm}}(\widetilde{X}_{1},\widetilde{W}_{1})$ satisfying the naive matching
condition along $W_{1}$ described in Remark 4.6, $\lambda\in{\mathbb{C}}$ is
the gluing (rescaling) parameter of the target, and
$\mu\in\mathop{\bigoplus}\limits_{x\in D}T^{*}_{x_{-}}C\otimes T^{*}_{x_{+}}C$
is a gluing parameter of the domain such that they also satisfy the condition
$\displaystyle a_{i}(x_{-})a_{i}(x_{+})\mu(x)^{s_{i}(x)}=\lambda\qquad\mbox{
for all }i\in I(x)$ (4.47)
at each node $x\in D$ of the domain, where $a_{i}(x_{\pm})$ are the two
leading coefficients (4.43) of $\widetilde{f}$ in the $i$’th normal direction
$i\in I(x)$ at the node $x\in D$.
Intrinsically, the gluing parameters $\mu$ in the domain are sections of the
bundle
$\displaystyle\mathop{\bigoplus}\limits_{x\in D}L_{x_{-}}\otimes L_{x_{+}}$
while the gluing parameter $\lambda$ in the target is naturally a section of
the bundle $N\otimes N^{*}\cong{\mathbb{C}}$. The condition (4.47) can be
expressed as $k(x)$ conditions on the leading coefficients:
$\displaystyle a_{i}(x_{-})a_{i}(x_{+})=\lambda\cdot\mu(x)^{-s_{i}(x)}$
at each node. If we fix a small $\lambda\neq 0$, the existence of a
$\mu(x)\neq 0$ satisfying these relations imposes a $2k(x)-2$ dimensional
condition on the leading coefficients, which is exactly what was missing in
Lemma 4.7.
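The count $2k(x)-2$ can be made explicit (a sketch of the elimination, assuming the multiplicities $s_{i}(x)$ in the $k(x)$ directions are all equal, as forced by Remark 4.11 below): dividing the $i$'th relation by the $j$'th eliminates both $\lambda$ and $\mu(x)$,

```latex
\frac{a_{i}(x_{-})\,a_{i}(x_{+})}{a_{j}(x_{-})\,a_{j}(x_{+})}
\;=\; \mu(x)^{\,s_{j}(x)-s_{i}(x)} \;=\; 1
\qquad\mbox{for } i,j\in I(x),
```

leaving $k(x)-1$ independent complex, i.e. $2k(x)-2$ real, conditions on the leading coefficients; the remaining relation then determines $\mu(x)$ up to an $s_{i}(x)$'th root of unity.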
###### Remark 4.10
Notice that the enhanced matching conditions become linear if we take their
log:
$\displaystyle\log a_{i}^{+}(x)+\log a_{i}^{-}(x)=\log\lambda-
s_{i}(x)\log\mu(x)$ (4.48)
which makes the transversality of this condition easier to prove, and also
hints at the connection with log geometry. Here $\log$ is the appropriate
extension of the map $\log:{\mathbb{C}}^{*}\rightarrow{\mathbb{R}}\times
S^{1}$ defined by $\log z=\log|z|+i\arg z$ to the intrinsic bundles in
question.
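A quick numerical sanity check of (4.48), with made-up values for the coefficients (hypothetical, not tied to any actual moduli problem): the identity holds modulo $2\pi i$, i.e. exactly in ${\mathbb{R}}\times S^{1}$ after exponentiating.

```python
import cmath

# hypothetical leading coefficients, multiplicity and gluing parameter
a_minus = 2.0 + 1.0j
a_plus = 0.5 - 0.3j
s = 3
mu = 0.1 * cmath.exp(0.7j)

# the enhanced matching condition (4.47) determines lambda:
lam = a_minus * a_plus * mu**s

# the two sides of the linearized relation (4.48):
lhs = cmath.log(a_minus) + cmath.log(a_plus)
rhs = cmath.log(lam) - s * cmath.log(mu)

# principal branches may differ by 2*pi*i, so compare after exponentiating:
print(abs(cmath.exp(lhs - rhs) - 1))  # ~ 0
```

The comparison after exponentiating reflects exactly the point of the remark: $\log$ is valued in ${\mathbb{R}}\times S^{1}$, not in ${\mathbb{C}}$.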
###### Remark 4.11
There is another way to read (4.46). It implies that if the limit
$f:C\rightarrow X_{1}$ does not have any components in $W_{1}\cup V_{1}$, then
of course all its leading coefficients $a_{i}(x_{\pm})\neq 0,\infty$.
Therefore for each fixed node $x$ of $C$ its contact multiplicities in the
normal directions must be equal to each other:
$\displaystyle s_{i}(x)=s_{j}(x)\mbox{ for all }i,j\in I(x).$
Furthermore, the normalization
$\widetilde{f}:\widetilde{C}\rightarrow\widetilde{X}_{1}$ of the limit $f$
must satisfy the enhanced matching condition at each node $x\in D$ i.e. the
image of $\widetilde{f}$ under the enhanced evaluation map
$\displaystyle\mbox{\rm Ev}_{x_{\pm}}:{\cal M}_{s\cup
s_{\pm}}(\widetilde{X}_{1},\widetilde{W}_{1})\rightarrow{\mathbb{P}}_{s(x)}(NW_{I(x_{+})})\times{\mathbb{P}}_{s(x)}(NW_{I(x_{-})})$
(4.49)
lands in the antidiagonal
$\displaystyle\Delta_{\pm}=\\{\;([a_{i}],[a_{i}^{-1}])\;|\;a_{i}\neq
0,\infty\mbox{ for }i\in I(x)\;\\}$
Recall that for each node, the two normal bundles in the target are
canonically dual to each other, or more precisely that the normal directions
to the singular locus $W_{1}$ come in dual pairs, and that $k$ of these
directions are indexed by $I(x_{-})$ and the other $k$ dual ones are indexed
by $I(x_{+})$.
Note that the limit $f$ is well defined only up to an overall rescaling
parameter $\lambda\in{\mathbb{C}}^{*}$ that acts on the level 1 of the
building, and in fact the enhanced evaluation map descends to a map on the
quotient
$\displaystyle\mbox{\rm Ev}_{\pm}:{\cal M}_{s\cup
s_{\pm}}(\widetilde{X}_{1},\widetilde{W}_{1})/{\mathbb{C}}^{*}\rightarrow\mathop{\prod}\limits_{x\in
D}{\mathbb{P}}_{s(x)}(NW_{I(x_{+})})\times{\mathbb{P}}_{s(x)}(NW_{I(x_{-})})$
(4.50)
that combines together the enhanced evaluation maps at all the nodes $D$ of
$C$.
###### Lemma 4.12
For generic $V$-compatible perturbations $(J,\nu)$, the dimension of the
inverse image of the antidiagonal $\Delta_{\pm}$ under the enhanced evaluation
map (4.50) is
$\displaystyle{\rm dim\;}\mbox{\rm Ev}_{x_{\pm}}^{-1}(\Delta_{\pm})={\rm
dim\;}{\cal M}_{s}(X,V)-2.$
Proof. By construction $f$ has at least one nontrivial component in level 1,
which means that the ${\mathbb{C}}^{*}$ action on level one has no fixed
points. It is then straightforward to check that for generic $V$-compatible
perturbations $(J,\nu)$ the enhanced evaluation map (4.50) is transverse to
the antidiagonal (at least when assuming the domain $C$ is stable, see Remark
2.1). The calculations in Lemma 4.7, together with the fact that the enhanced
matching conditions impose an extra $2k(x)-2$ dimensional condition for each
node, immediately imply the result. $\Box$
Unfortunately, in the presence of a depth $k\geq 2$ point, we cannot rescale
the target such that the limit $f$ has no components in the singular locus
$W_{1}$. The most we can do is to make sure it has no nontrivial components
there, but at the price of getting several trivial components stuck in the
singular divisor. Below are a couple of simple examples that illustrate this
behavior.
###### Example 4.13
Consider the situation of Example 3.7. Assume $f$ is a fixed stable map into
$X$ which has two contact points $x_{1}$ and $x_{2}$ with $V$, both mapped
into the singular locus $p$ of $X$, but such that $x_{1}$ has multiplicity
(1,1) while $x_{2}$ has multiplicity (1, 2) to the two local branches of $V$
at $p$. This means that in local coordinates $z_{i}$ around $x_{i}$ on the
domain and $u_{1},u_{2}$ on the target around $p$, the map $f$ has the
expansions
$\displaystyle f(z_{1})=(a_{11}z_{1},\;a_{21}z_{1})\qquad
f(z_{2})=(a_{12}z_{2},\;a_{22}z_{2}^{2})$
with finite, nonzero leading coefficients $a_{ij}$. Now add another marked
point $x_{0}$ to the domain. As either $x_{0}\rightarrow x_{1}$ or
$x_{0}\rightarrow x_{2}$, a constant component of $f$ falls into $p$.
Let’s look at the case $x_{0}\rightarrow x_{1}$. Assume $x_{0}$ has coordinate
$z_{1}=\varepsilon$ so $f(x_{0})=(a_{11}\varepsilon,\;a_{21}\varepsilon)$ and
$\varepsilon\rightarrow 0$. Following the prescription of [IP2], we need to
rescale the target by $\lambda=\varepsilon$ to catch the constant component
falling in. So in coordinates $u_{11}=u_{1}/\lambda$ and
$u_{21}=u_{2}/\lambda$ we get
$\displaystyle
f_{1\lambda}(z_{1})=(a_{11}z_{1}/\lambda,a_{21}z_{1}/\lambda)\qquad
f_{2\lambda}(z_{2})=(a_{12}z_{2}/\lambda,a_{22}z_{2}^{2}/\lambda)\ $
In the domain, letting $w_{1}=z_{1}/\varepsilon=z_{1}/\lambda$, the map
$f_{1\lambda}$ converges to a level one nontrivial component which in the
coordinates $u_{11}$ and $u_{21}$ has the expansion
$\displaystyle f_{1}(w_{1})=(a_{11}w_{1},\;a_{21}w_{1})$
This component lands in $\mathbb{F}_{2}$ and contains the marked points
$x_{0}$ and $x_{1}$ (with coordinates $w_{1}=1$ and $w_{1}=0$ respectively),
so it is the original component of $f$ that was falling into $p$ as
$x_{0}\rightarrow x_{1}$.
But when we rescale the target by $\lambda=\varepsilon$, the other piece of
$f$ at $x_{2}$ also gets rescaled, and limits to trivial components in
$\mathbb{F}_{2}$. If we rescale the domain by
$w_{21}=z_{2}/\sqrt{\varepsilon}$ then
$f_{2\lambda}(w_{21})=(a_{12}w_{21}/\sqrt{\varepsilon},\;a_{22}w_{21}^{2})$
converges to a trivial map in the neck
$\displaystyle f_{21}(w_{21})=(\infty,\;a_{22}w_{21}^{2})$
while if we rescale the domain by $w_{22}=z_{2}/\varepsilon$ then
$f_{2\lambda}(w_{22})=(a_{12}w_{22},\;a_{22}w_{22}^{2}\varepsilon)$ also
converges to a trivial map in the zero divisor
$\displaystyle f_{22}(w_{22})=(a_{12}w_{22},\;0)$
Putting all these together, we see that the limit of $f$ as $x_{0}\rightarrow
x_{1}$ consists of a map into a level 1 building, which has one component $f$
on level zero and 3 components $f_{1}$, $f_{21}$ and $f_{22}$ on level one
(all mapped into $\mathbb{F}_{2}$). But only $f_{1}$ is a nontrivial component
while the other two components are trivial, one of them mapped to the singular
locus between $\mathbb{F}_{2}$ and $\mathbb{F}_{1}$ while the other one is
mapped into the zero divisor of $\mathbb{F}_{2}$, see Figure 3(a).
Figure 3(a): Limit as $x_{0}\rightarrow x_{1}$ (b): Limit as $x_{0}\rightarrow
x_{2}$
One can also see what happens when $x_{0}\rightarrow x_{2}$. Then the limit is
a map into a level 2 building, which now has 5 rescaled components, only one
of them nontrivial (the one containing $x_{0}$). The piece of $f$ containing
$x_{1}$ now gives rise to two trivial components $f_{11}$ and $f_{12}$, one on
the level (1,1) piece and the other on the level (2,2) piece $\mathbb{F}_{2}$.
On the other hand, the piece of $f$ containing $x_{2}$ gives rise to three
components: the first one a trivial component in the neck between the level 1
piece $\mathbb{F}_{1}$ and $\mathbb{F}_{2}$, the next one a nontrivial
component mapped to the level (1,2) piece $\mathbb{F}_{2}$, and the last one a
trivial component mapped into the zero divisor of the level (2,2) piece
$\mathbb{F}_{2}$, see Figure 3(b).
Note that the only nontrivial component in this case lands in level one with
respect to one of the directions, but also in level two with respect to the
other direction, so we have a nontrivial component in each local level.
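The two trivial limits of the branch at $x_{2}$ in the case $x_{0}\rightarrow x_{1}$ above can be verified numerically (a minimal sketch using the local expansion $f(z_{2})=(a_{12}z_{2},\,a_{22}z_{2}^{2})$ of this example, with made-up values for the coefficients $a_{12},a_{22}$):

```python
# hypothetical leading coefficients of f at x_2
a12, a22 = 1.5, 2.0

def f2_rescaled(z2, eps):
    """f at the branch of x_2, in target coordinates rescaled by lambda = eps."""
    return (a12 * z2 / eps, a22 * z2**2 / eps)

w, eps = 1.0, 1e-10
# neck scale w21 = z2/sqrt(eps): first coordinate blows up, second is fixed
neck = f2_rescaled(w * eps**0.5, eps)   # ~ (a12*w/sqrt(eps), a22*w**2)
# zero-divisor scale w22 = z2/eps: first coordinate fixed, second vanishes
zero = f2_rescaled(w * eps, eps)        # ~ (a12*w, a22*w**2*eps)
print(neck, zero)
```

As $\varepsilon\rightarrow 0$ the first pair tends to $(\infty,\,a_{22}w^{2})$, the trivial component $f_{21}$ in the neck, and the second to $(a_{12}w,\,0)$, the trivial component in the zero divisor.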
###### Example 4.14
If we were instead in the case of Example 3.6, where the two normal
directions at $p$ are globally independent, then the limit in the case
$x_{0}\rightarrow x_{1}$ would look just the same. However, the limit when
$x_{0}\rightarrow x_{2}$ would have fewer components, as we now can rescale
independently by two factors $\lambda_{1}=\varepsilon$ and
$\lambda_{2}=\varepsilon^{2}$ getting a level (1,1) building. The limit then
has only 3 pieces on level one, all of them mapped to $\mathbb{F}_{2}$, but
again only the piece containing $x_{0}$ is nontrivial. The other two pieces
come from rescaling the piece of $f$ containing $x_{1}$ so they are both
trivial, the first one in the neck between $\mathbb{F}_{2}$ and the level (1,
0) piece $\mathbb{F}_{1}$ while the other component lands in the zero divisor
of the level (1,1) piece $\mathbb{F}_{2}$.
Figure 4(a) Example 4.14 (b) Example 4.15
###### Example 4.15
Finally, consider the case when $V$ is the union of the first two coordinate
lines in ${\mathbb{P}}^{2}$, and let
$f_{\varepsilon}:{\mathbb{P}}^{1}\rightarrow X$ be the stable maps defined in
homogeneous coordinates by $f_{\varepsilon}(z)=[\varepsilon z,\varepsilon
z^{-1},1]$, all containing a marked point $x$ with coordinate $z=1$. Then as
$\varepsilon\rightarrow 0$ the image of the marked point
$f_{\varepsilon}(x)=[\varepsilon,\varepsilon,1]$ falls into $p=[0,0,1]$. Rescaling the
target around $p$ to prevent this gives rise to a level 1 building. The limit
$f$ has now three components, all on level 1. Out of these, only one is
nontrivial and is mapped into $\mathbb{F}_{2}$ (the one containing $x$) while
the other two are trivial, each mapped into the zero section of a different
$\mathbb{F}_{1}$ piece.
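This collision is elementary enough to check directly (a sketch; rescaling the target coordinates by $\lambda=\varepsilon$ is an assumption consistent with the example, and exhibits the nontrivial level-one component):

```python
def f(eps, z):
    """f_eps in the affine chart around p = [0,0,1]: (u1, u2) = (eps*z, eps/z)."""
    return (eps * z, eps / z)

# the image of the marked point z = 1 falls into p = [0,0,1] as eps -> 0:
print([f(eps, 1.0) for eps in (1e-2, 1e-4, 1e-6)])

# rescaling the target by lambda = eps gives a limit independent of eps:
# the nontrivial level-one component (z, 1/z) containing the marked point.
z, eps = 2.0, 1e-8
u1, u2 = f(eps, z)
print((u1 / eps, u2 / eps))  # (z, 1/z) = (2.0, 0.5)
```

The rescaled map $(z,1/z)$ meets the zero divisor at $z=0$ and $z=\infty$, which is where the two trivial components in the $\mathbb{F}_{1}$ pieces attach.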
The examples above show that we cannot avoid trivial components in the neck or
in the zero divisor when $k(x)\geq 2$. The trivial components are uniquely
determined by the behaviour of the rest of the curve and the rescaling
parameter, and are there only to make the limit continuous, such that the maps
converge in Hausdorff distance to their limit. The trivial components satisfy
only some partial version of the matching conditions, because some of their
leading coefficients (but not all!) are either zero or infinity.
## 5 The general limit of a sequence of maps
Now we are ready to describe what kind of maps appear as limits of maps in
${\cal M}_{s}(X,V)$. To construct the limit we will inductively rescale the
sequence $f_{n}:C_{n}\rightarrow X$ to prevent (nontrivial) components from
sinking into $V$, as described in the previous section. The limit will
therefore be a $(J_{0},\nu_{0})$-holomorphic map $f:C\rightarrow X_{m}$ in a level
$m$ building, with no nontrivial components in the total divisor $D_{m}$, and
which will satisfy a certain enhanced matching condition at depth $k\geq 2$
points. Note however that unlike the case of [IP1], in the limit there might
be some trivial components lying in the total divisor, and this is something
that cannot be avoided when $k\geq 2$, see Examples 4.13 and 4.14 above. Also,
the matching conditions are much more involved in this case, and are trickier
to state because of the presence of these trivial curves. We start with the
following:
###### Proposition 5.1
Consider $\\{f_{n}:C_{n}\rightarrow X\\}$ a sequence of maps in ${\cal
M}_{s}(X,V)$. Then there exists an $m\geq 0$ and a sequence of rescaling
parameters $\lambda_{n}=(\lambda_{n,1},\dots,\lambda_{n,m})$ such that after
passing to a subsequence, the rescaled sequence $R_{\lambda_{n}}f_{n}$ has a
continuous limit $f:C\rightarrow X_{m}$ into a level $m$ building with the
following properties:
1. (a)
$f$ is a refinement of the stable map limit $f_{0}:C_{0}\rightarrow X$:
$$\begin{array}{ccc}
C & \mathop{\longrightarrow}\limits^{f} & X_{m}\\
\downarrow{\mbox{\rm st}} & & \downarrow{p}\\
C_{0} & \mathop{\longrightarrow}\limits^{f_{0}} & X
\end{array}\qquad(5.1)$$
2. (b)
$f$ has a lift $\widetilde{f}:\widetilde{C}\rightarrow\widetilde{X}_{m}$ to
the resolution of $X_{m}$ which comes with a full contact information to the
total divisor $D_{m}$ for each nontrivial component, and a possibly partial
one to the higher depth stratum for each trivial component;
3. (c)
for any intermediate curve $C\rightarrow C^{\prime}\rightarrow C_{0}$ all the
contact points of $\widetilde{f}$ descend to special points of $C^{\prime}$,
and moreover
* •
$s(x_{-})=s(x_{+})$ and $\varepsilon(x_{-})=-\varepsilon(x_{+})$ for each node
$x_{-}=x_{+}$ of $C^{\prime}$;
* •
$s(x)=s(x_{n})$ and $\varepsilon(x)=\varepsilon(x_{n})$ for each marked point
$x\in C^{\prime}$ which is the limit of marked points $x_{n}$ of $C_{n}$
(whenever both sides are defined). Furthermore, $C$ is obtained from $C_{0}$
by inserting strings of trivial components $B_{x}$ (broken cylinders) either
between two branches $x_{\pm}$ of a node $x$ of $C_{0}$ or else at a marked
point $x$ of $C_{0}$;
4. (d)
$f$ is relatively stable, that is, for each $l\geq 1$, $f$ has at least one
nontrivial component on level $l$ in some (local) direction.
The limit $f$ is unique up to the $({\mathbb{C}}^{*})^{m}$ action that
rescales on $X_{m}$ (described in Remark 3.16).
Proof. The existence of the refined limit follows immediately by iterating the
rescaling procedure of Proposition 4.2, applied at each step $l$ to the
previously rescaled sequence. Because both the topological type of the domain
and the homology class of the image of $f_{n}$ are fixed (being part of
$s$ according to our conventions), we have a uniform bound $E$ on the energy
of this sequence, and therefore after passing to a subsequence we get a limit
$f_{0}:C_{0}\rightarrow X$ in the usual stable map compactification, which may
have some components in $V$, but they would carry at least $\alpha_{V}>0$
energy. So we can rescale once using Proposition 4.2 to get a refined limit
$g_{1}:C_{1}\rightarrow X_{1}$ which has at least $\alpha_{V}/2$ energy in
level one and fewer nontrivial components in the zero section. After
inductively rescaling around the new zero section at most $100E/\alpha_{V}$
times, this process terminates, constructing a limit with properties (a) and (d) and
which furthermore has no more nontrivial components in the zero divisor.
Note that as a result of iterating the rescaling procedure of Proposition 4.2,
we can only arrange that for each $l\geq 1$ there exists a nontrivial
component with $l$ as one of its (many) local levels, but not necessarily one
in the global level $l$, as illustrated in Figure 3(a) of Example 3.7 (see
Remark 3.15 for the difference between local level and global level). This is
because the annular regions (3.22) where we arrange at each step $l$ to have
energy $\alpha_{V}/2$ have nontrivial overlap around depth $k\geq 2$ strata.
Furthermore, the limit constructed this way has no nontrivial components in
the singular divisor $W_{m}$ because by construction it has only energy
$\alpha_{V}/2$ in the upper hemisphere region of each level $l\geq 1$. This
means that the components of the resolution $\widetilde{C}$ of the domain $C$
come in two types: (i) nontrivial components, which are not mapped inside the
total divisor and correspond to the components of $C_{0}$ and (ii) components
which are collapsed to a point under the two maps $C\rightarrow C_{0}$ and
$X_{m}\rightarrow X$. Each special fiber of $C\rightarrow C_{0}$ is an
unstable rational curve (bubble tree) with one or two marked points.
For each nontrivial component $\Sigma$ of $\widetilde{C}$, $f$ has a unique
lift $\widetilde{f}:\Sigma\rightarrow\widetilde{X}_{m}$ to the resolution,
which has a well defined contact information along the total divisor just as
described in the discussion preceding the proof of Lemma 4.3, except that we
now have more than one level. In particular, each point $x\in\Sigma$ comes
with both a multilevel map (3.15) and a multi-sign map (3.16). For each $i\in
I^{\pm}(x)$, $\varepsilon_{i}(x)=\pm$ (respectively) and we also have a well
defined contact multiplicity $s_{i}(x)>0$ to the total divisor and a leading
coefficient $a_{i}(x)\neq 0$ just as in (4.39). We also have the same
expansion (4.40) for all $i\in I$, as now $I^{\infty}(x)=\emptyset$. The
nontrivial components of $C$ therefore combine to give a partial lift
$\widehat{f}$ of $f$ which is an element of ${\cal
M}_{\widetilde{s}}(\widetilde{X}_{m},\widetilde{D}_{m})$.
This leaves us with the special fibers of $C\rightarrow C_{0}$, each one an
unstable rational curve (bubble tree) with one or two marked points whose
image gets collapsed to a point under $X_{m}\rightarrow X$. Each component
$\Sigma$ that does not land in $D_{m}$ also has a unique lift to the
resolution $\widetilde{X}_{m}$ and a well defined contact information to
$D_{m}$, just like the nontrivial components did; the components that land in
$D_{m}$ have several lifts to the resolution and only a partial contact
information to the higher depth strata of $D_{m}$, so
$I^{\infty}(x)\neq\emptyset$ in this case. Each lift is a stable map into a
fiber $({\mathbb{P}}^{1})^{j}$ of one of the many components of
$\widetilde{X}_{m}$, where it has a well defined contact information to the
zero and infinity divisor of this fiber.
Next, the proof of Lemma 4.3 extends to the case when we have several levels
of rescaling parameters (as long as they are all nonzero), giving property (c)
of the limit. $\Box$
In the remaining part of this section, we describe the behavior of the refined
limit $f$ in a neighborhood of the fiber $B_{x}$ of $C\rightarrow C_{0}$ over
a point $x\in C_{0}$. As we have seen, $B_{x}$ is either a point $x$ or else
it is a string of trivial components with two end points $x_{\pm}$ (broken
cylinder). In the latter case, which can happen only when $x$ is a special
point of $C_{0}$, we order the components $\Sigma_{r}$ of $B_{x}$ in
increasing order as we move from one end to the other, and make the following
definition: the stretch of a point $x\in C_{0}$ is
$$r(x)=\mbox{the number of components of }B_{x}=\mbox{\rm st}^{-1}(x)\qquad(5.2)$$
where by convention $r(x)=0$ whenever $B_{x}=x$.
Properties of the limit $f$ around $B_{x}$. Consider now any node $x$ of
$C_{0}$ with its two branches $x^{\pm}$, and let
$p_{\pm}=\widetilde{f}(x^{\pm})\in\widetilde{X}_{m}$ be its two images in the
resolution of $X_{m}$, while $p=f_{0}(x)$ is the common image in $X$. By
construction $f(B_{x})$ lies in the fiber $F_{p}$ over $p$ of the collapsing
map $X_{m}\rightarrow X$, where we can work separately in one normal direction to
$V$ at a time. Fix any of the directions $i\in I$ indexing the branches of $V$
at $p$, and let $\pi_{i}$ be the projection onto that direction, defined on a
neighborhood $U_{p}$ of $F_{p}$ in the semi-local model described in Remark
3.17. The target of $\pi_{i}$ is nothing but the (global) model of the
deformation of a disk $D^{2}$ in $N_{i}$ which is being rescaled $m$ times at
0; it is described in terms of the coordinates $u_{i,l}$ and
$v_{i,l}=u_{i,l}^{-1}$ by
$\displaystyle u_{i,l-1}v_{i,l}=\lambda_{i,l}\quad\mbox{for all }l=1,\dots,m$
for any collection $\lambda=(\lambda_{i,l})$ of small rescaling parameters.
Choose also local coordinates $z,w$ at $x_{\pm}$ which then induce local
coordinates on the universal curve of the domains at $x$ (the one containing
$C_{0}$, which was assumed to be stable); the nearby curves are then described
in the ball $B(x,\varepsilon)$ by
$\displaystyle zw=\mu(x)$
where intrinsically the gluing parameters $\mu$ are local coordinates at
$C_{0}$ on the moduli space of stable curves. Similarly choose (global)
coordinates $z_{j},\;w_{j}=z_{j}^{-1}$ on the $j$’th component
${\mathbb{P}}^{1}$ of $B_{x}$, where $1\leq j\leq r(x)$, and where we set
$z_{0}=z$ and $w_{r(x)+1}=w$. These provide global coordinates in the
neighborhood $O_{x}$ of $B_{x}$ obtained as the inverse image of
$B(x,\varepsilon)$ under the collapsing map $C\rightarrow C_{0}$, in which the
nearby curves are described by
$\displaystyle z_{r-1}w_{r}=\mu(x_{r})\quad\mbox{for all }r=1,\dots,r(x)+1$
(5.3)
where $\mu(x_{r})$ is the gluing parameter at the $r$’th node $x_{r}$ of
$B_{x}^{\prime}$, the intersection of $C$ with $O_{x}$. In particular
$\displaystyle\mu(x)=\prod_{r}\mu(x_{r})$ (5.4)
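As a one-line check, the product formula (5.4) is just the telescoping of the node relations (5.3): since the coordinates on each intermediate sphere are dual, $w_{j}=z_{j}^{-1}$, multiplying the relations together gives

```latex
\prod_{r=1}^{r(x)+1}\mu(x_{r})
  =\prod_{r=1}^{r(x)+1} z_{r-1}w_{r}
  = z_{0}\,w_{r(x)+1}\prod_{j=1}^{r(x)} z_{j}w_{j}
  = zw=\mu(x).
```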
Note that $B^{\prime}_{x}$ has $r(x)+2$ components $\Sigma_{r}$, of which the
first and the last are the disks about $x^{\pm}$, while the remaining ones are
the spherical components of $B_{x}$.
For each component $\Sigma$ of $B_{x}$, $f$ may only have a partial contact
information along the singular divisor at the two points $0_{\Sigma}$ and
$\infty_{\Sigma}$. In fact, in the coordinates on both the domain and target
described above, $f|_{\Sigma}$ has an associated coefficient
$a_{i}(0_{\Sigma})=a_{i}^{-1}(\infty_{\Sigma})\neq 0$ and a contact
multiplicity $s_{i}(\Sigma)>0$ for all $i\in I^{\pm}(\Sigma)$, see Definition
4.1. Furthermore, because we already know that the contact multiplicities
match at each node in all directions in which both sides are defined,
$s_{i}(\Sigma)=s_{i}(x)$ for all $i\notin I^{\infty}(\Sigma)$. In the
remaining directions $f$ still has a coefficient $a_{i}(\Sigma)$ which is 0 or
$\infty$; the contact multiplicity $s_{i}(\Sigma)$ is technically undefined,
but we can define it to be $s_{i}(x)$ for all $i\in I$.
Consider next the restriction $f^{i}$ of $\pi_{i}\circ f$ to $B_{x}^{\prime}$;
then after collapsing all the constant components (those for which $i\notin
I^{\pm}(\Sigma)$) it has a stable map model
$f^{i}:B^{\prime}_{i}\rightarrow(D^{2})_{m}$ defined on a slightly bigger
curve $B^{\prime}_{i}$ containing $B_{i}$. We also have a similar description
of the nearby curves in terms of $B_{i}^{\prime}$, but where now at each node
$y$ of $B_{i}^{\prime}$ the gluing parameter (5.3) is
$\displaystyle\mu(y)=\prod_{z}\mu(z)$ (5.5)
where the product is over all nodes $z$ of $C$ in the fiber of the collapsing
map $B_{x}\rightarrow B_{i}$ at $y$. This formula extends (5.4) which would
correspond to the collapsing map $B_{x}\rightarrow x$.
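The chaining of the relations (5.3) into the product formulas (5.4) and (5.5) can also be checked numerically. The following sketch (hypothetical coordinate values, not from the text) chains the node relations $z_{r-1}w_{r}=\mu_{r}$ through the dual coordinates $w_{j}=z_{j}^{-1}$ and verifies that $zw=\prod_{r}\mu_{r}$:

```python
# Numerical sanity check (sketch, not from the text) of the telescoping
# gluing relation (5.4): chaining the node relations z_{r-1} w_r = mu_r
# through the dual coordinates w_j = 1/z_j forces z * w = prod_r mu_r.
import random

def chain_gluing(z0, mus):
    """Given z at one end and the gluing parameters mu_r at the r(x)+1 nodes,
    return the coordinate w at the other end of the broken cylinder."""
    z = z0
    for mu in mus[:-1]:
        w = mu / z      # node relation z_{r-1} w_r = mu_r
        z = 1 / w       # dual coordinate on the next sphere: z_r = w_r^{-1}
    return mus[-1] / z  # last node gives w = w_{r(x)+1}

random.seed(0)
z0 = complex(random.uniform(0.5, 2), random.uniform(-1, 1))
mus = [complex(random.uniform(0.1, 1), random.uniform(-0.5, 0.5)) for _ in range(4)]
prod = 1
for mu in mus:
    prod *= mu
assert abs(z0 * chain_gluing(z0, mus) - prod) < 1e-12
```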
###### Lemma 5.2
Using the notations above, assume $f:C\rightarrow X_{m}$ is the limit of the
sequence $f_{n}:C_{n}\rightarrow X$ as in Proposition 5.1. Fix a node $x$ of
$C_{0}$ and denote by $\mu_{n}(x_{r})$ the corresponding parameters (5.3)
describing the domains $C_{n}$ in a neighborhood of the fiber $B_{x}$ of
$C\rightarrow C_{0}$ over $x$. Finally, fix a direction $i\in I(x)$ of $V$ at
$f_{0}(x)$.
If $B_{i}\neq x$ then the restriction of $f^{i}$ to $B_{i}$ is a degree
$s_{i}(x)$ chain of trivial components in $(D^{2})_{m}$ connecting the points
$\pi_{i}(p_{+})$ to $\pi_{i}(p_{-})$, both of which must be on the total
divisor of $(D^{2})_{m}$. In particular for each node $y$ of $B_{i}^{\prime}$,
$f^{i}(y_{-})=f^{i}(y_{+})$ lands in the total divisor, with the two branches
of $f$ landing on opposite sides of the divisor. Furthermore, $f$ satisfies
the following enhanced matching condition at $y$:
$\displaystyle\lim_{n\rightarrow\infty}\frac{\lambda_{n,l_{i}(y)}}{\mu_{n}(y)^{s_{i}(x)}}=a_{i}(y_{+})a_{i}(y_{-})$
(5.6)
where $a_{i}(y_{\pm})$ and $s_{i}(y_{\pm})=s_{i}(x)$ are the two leading
coefficients of $f$, and respectively the contact multiplicity, $l_{i}(y)$ is
the level of $f(y_{\pm})$ (equal to the largest of the two consecutive levels
of the lifts $\widetilde{f}(y_{\pm}))$, while $\mu(y)$ is defined by (5.5).
When $B_{i}=x$, we have $f^{i}(x_{-})=f^{i}(x_{+})$; if this lands in the total
divisor of $(D^{2})_{m}$, then $f$ satisfies the corresponding enhanced
matching condition at $x$ in the direction $i$:
$\displaystyle\lim_{n\rightarrow\infty}\frac{\lambda_{n,l_{i}(x)}}{\mu_{n}(x)^{s_{i}(x)}}=a_{i}(x_{-})a_{i}(x_{+})$
(5.7)
Proof. This follows by refining the arguments in the proof of Lemma 4.3, using
also the information described above. Of course we work locally in the
neighborhoods $O_{x}$ and $U_{x}$ of $x$ and $p$ described above, where we can
separately project onto the $i$’th direction. Denote by $C_{n}^{\prime}$ the
intersection of $C_{n}$ with $O_{x}$.
Because we already know that $R_{\lambda_{n}}f_{n}$ converge to $f$, the
projections $h^{i}_{n}$ of their restrictions to $C_{n}^{\prime}$ will also
converge, and the limit will be precisely
$f^{i}:B_{i}^{\prime}\rightarrow(D^{2})_{m}$. In fact, $h^{i}_{n}$ are maps
from $C_{n}^{\prime}$ into nothing but $D^{2}_{\lambda_{n}}$, the $m$-times
rescaled disk using the rescaling parameters
$\lambda_{n}=(\lambda_{n,1},\dots,\lambda_{n,m})$, and that is precisely the
situation in which Lemma 5.3 of [IP2] applies to give enhanced matching
conditions at each node $y$ of $B_{i}^{\prime}$. $\Box$
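To see where the enhanced matching condition comes from, here is a heuristic computation (a sketch, not part of the proof) in the simplest case $m=1$ and $B_{i}=x$. Suppose that in the coordinates above the two branches of $f_{n}$ have leading terms $u\approx a_{i}(x_{+})z^{s}$ and $v\approx a_{i}(x_{-})w^{s}$ with $s=s_{i}(x)$, while the domain and target necks satisfy $zw=\mu_{n}(x)$ and $uv=\lambda_{n,1}$. Then to leading order

```latex
\lambda_{n,1}=uv\approx a_{i}(x_{+})z^{s}\cdot a_{i}(x_{-})w^{s}
  =a_{i}(x_{-})a_{i}(x_{+})\,(zw)^{s}
  =a_{i}(x_{-})a_{i}(x_{+})\,\mu_{n}(x)^{s},
```

which is precisely (5.7) with $l_{i}(x)=1$.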
###### Remark 5.3
The domain $C$ of the limit $f$ therefore is obtained from $C_{0}$ by
inserting strings of trivial spheres $B_{x}$ to stretch the image curve across
the levels $l_{i}(x^{\pm})$ in a zig-zagging fashion either (a) between
$x_{-}$ and $x_{+}$ if $x$ is a node of $C_{0}$ or (b) at a contact point
$x_{-}$ of $C_{0}$ with its respective zero section to stretch it all the way
to a contact point $x_{+}\in C$ that is mapped to the zero section $V_{m}$.
The neck region of $C_{n}$ at $x$ is roughly equal to a trivial cylinder
mapped in the fiber of the neck of the target over $f_{0}(x)$, which then gets
further stretched, possibly several times to accommodate the rescaling done to
catch all the nontrivial components of $f$.
More precisely, each component of $B_{x}$ comes with an associated multi-level
map (3.15) and its two special points come with opposite but partially defined
multi-sign maps (3.16). Similarly both points $f(x_{\pm})$ have an associated
multi-level map and a multi-sign map (now defined everywhere) pointing into
opposite sides of the singular divisor in each direction (towards each
other). As we have seen in Example 4.15, $\widetilde{f}(x_{+})$ could be on a
higher level compared to $\widetilde{f}(x_{-})$ in some directions, and lower
level in some other directions, so we denote
$\displaystyle l_{i}^{-}(x)=\min_{\pm}\\{l_{i}(f(x_{\pm}))\\}\quad\mbox{ and
}\quad l_{i}^{+}(x)=\max_{\pm}\\{l_{i}(f(x_{\pm}))\\}$ (5.8)
The chain of trivial components connects $f(x_{-})$ and $f(x_{+})$ in the
fiber $F_{p}$ of $X_{m}$ over $p=f_{0}(x)$ such that the levels change by
either zero or one in a monotone way in each fixed direction, and also at each
step we move in at least one direction. In particular, for each direction
$i\in I$, and each level $l_{i}^{-}(x)\leq l\leq l_{i}^{+}(x)$, there is
precisely one node $y_{i,l}(x)$ of $B^{\prime}_{i}$ on level $l$ in direction
$i$, which lifts to two points $y^{\pm}_{i,l}(x)\in\widetilde{C}$ at which $f$
has a well defined contact information in direction $i$, together with the
string of trivial components $B_{i,l}(x)\subset B_{x}$ of $C$ on which $f$ is
constant in direction $i$ and thus which are precisely all the level $l$
components of $B_{x}$ in direction $i$. Note that for fixed $i$, the
complement of $\sqcup_{l}B_{i,l}$ consists of precisely those components of
$f$ which are non-constant when projected into the $i$’th direction, one for
each level $l_{i}^{-}(x)\leq l\leq l_{i}^{+}(x)$. So an equivalent way to keep
track of this information is to record these instead: for each fixed $i$ and
each level $l$ between $l_{i}(x_{-})$ and $l_{i}(x_{+})$ there exists a unique
component $\Sigma_{r_{l,i}}$ of $B_{x}$ which is on level $l$ in direction
$i$, and as $l$ moves from $l_{i}(x_{-})$ to $l_{i}(x_{+})$ the index
$r_{l,i}$ is strictly increasing according to our conventions.
###### Remark 5.4
Furthermore, because the right hand side of (5.6) is finite and nonzero,
eliminating the intermediate coefficients $\mu(y)$ from the enhanced matching
conditions a fortiori gives conditions on the relative rates of convergence of
the rescaling parameters (involving the contact multiplicities), extending
those of Remark 4.11. The precise formulas of which relative rates to consider
depend on the contact multiplicities $s_{i}(x)$ plus finite combinatorial
information from $B_{x}$: for each component $\Sigma$ of $B_{x}$, we need to
know its multilevel map $l_{\Sigma}$ and the directions $I^{\infty}(\Sigma)$
in which the coefficients of $f$ are zero or infinity. This will determine in
particular which components get collapsed when we project onto direction $i$
and thus each string $B_{i,l}(x)\subset B_{x}$ on which $f$ is a constant
mapped to level $l$ in that direction. The enhanced matching condition also
imposes further restrictions on this combinatorial information. For example,
assume $x\in C_{0}$ is a node such that
$B_{i_{1},l_{1}}(x)=B_{i_{2},l_{2}}(x)$. If the levels $l_{1}=l_{2}$ then the
multiplicities $s_{i_{1}}(x)=s_{i_{2}}(x)$ must be the same, while if
$s_{i_{1}}(x)\neq s_{i_{2}}(x)$ then $l_{1}\neq l_{2}$ and the relative rates
of convergence to zero of the two rescaling parameters in these two levels
must be related; more precisely, the two rates of convergence of
$\lambda_{n,l_{j}}^{1/s_{j}}$ as $n\rightarrow\infty$ are equal (as the limit
of their ratio is a bounded, nonzero constant involving the leading coefficients of $f$). This
was the case in Example 4.13 (b).
Intrinsically, the relative rates of convergence of Lemma 5.2 can be
reinterpreted as follows. Consider the following system of linear equations in
the variables $\beta(l)$ for each level $l\geq 1$ and $\alpha(z)$ for each
node $z$ of $C$:
$\displaystyle s_{i}(x)\sum_{z\in D_{i,l}(x)}\alpha(z)=\beta(l)$ (5.9)
for all directions $i\in I$ and all levels $l_{i}^{-}(x)\leq l\leq
l_{i}^{+}(x)$, where $D_{i,l}(x)$ is the collection of nodes of $C$ which
project to $x$ under $C\rightarrow C_{0}$ and which are on level $l$ in
direction $i$. Note that the system (5.9) depends only on the topological type
of the limit $f:C\rightarrow X_{m}$, and more precisely on the contact
multiplicities and local levels of each node $z$ of $C$ in all the directions
normal to $V$.
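As a toy illustration (my example, not the paper's) of the system (5.9), consider a single node $z$ which lies on level $l_{1}$ in direction $i_{1}$ and on a different level $l_{2}$ in direction $i_{2}$, as in Example 4.13(b). The system reduces to $s_{i_{1}}\alpha(z)=\beta(l_{1})$ and $s_{i_{2}}\alpha(z)=\beta(l_{2})$, and the sketch below normalizes its positive solution to relatively prime integers, as used in Remark 5.8:

```python
# Toy instance (illustration only) of the linear system (5.9): one node z,
# two directions with contact multiplicities s1, s2 on distinct levels.
from fractions import Fraction
from functools import reduce
from math import gcd

def solve_two_direction_case(s1, s2):
    """Return relatively prime positive integers (alpha_z, beta_l1, beta_l2)
    solving s1*alpha = beta_l1, s2*alpha = beta_l2."""
    alpha = Fraction(1)
    beta1, beta2 = s1 * alpha, s2 * alpha
    # clear denominators and divide by the gcd to get a primitive solution
    nums = [alpha, beta1, beta2]
    lcm_den = reduce(lambda a, b: a * b // gcd(a, b), [f.denominator for f in nums])
    ints = [int(f * lcm_den) for f in nums]
    g = reduce(gcd, ints)
    return tuple(n // g for n in ints)

# e.g. s1 = 2, s2 = 3 forces beta(l1)/beta(l2) = 2/3
assert solve_two_direction_case(2, 3) == (1, 2, 3)
```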
###### Corollary 5.5
Consider the situation of Lemma 5.2. Then after passing to a further
subsequence of $f_{n}$, the limit $f:C\rightarrow X_{m}$ satisfies the
following enhanced matching conditions: for each level $l\geq 1$ and each node
$z$ of $C$ there exist positive rational numbers $\beta(l),\;\alpha(z)>0$
which are solutions of (5.9) and also nonzero constants $d(l),\;c(z)\neq 0$
such that
$\displaystyle a_{i}(y_{i,l}^{-}(x))a_{i}(y_{i,l}^{+}(x))\left(\prod_{z\in
D_{i,l}(x)}c(z)\right)^{s_{i}(x)}=d(l)$ (5.10)
for all directions $i\in I$ and all levels $l_{i}^{-}(x)\leq l\leq
l_{i}^{+}(x)$, where $y_{i,l}(x)$ is the unique level $l$ node of
$B_{i}^{\prime}$.
In fact, for each node $x$ of $C_{0}$ there are some parameters
$t_{n,[x]}\rightarrow 0$ as $n\rightarrow\infty$ such that
$\displaystyle\lim_{n\rightarrow\infty}\frac{\lambda_{n,l}}{t_{n,[x]}^{\beta(l)}}=d(l)\quad\mbox{
and
}\quad\lim_{n\rightarrow\infty}\frac{\mu_{n}(z)}{t_{n,[x]}^{\alpha(z)}}=c(z)$
(5.11)
for each level $l$ between the minimum and the maximum (global) levels of $x$,
and respectively for each node $z$ of $C$ that projects to $x$ under
$C\rightarrow C_{0}$.
Proof. As we have seen above, the conclusion of Lemma 5.2 imposes conditions
on the relative rates of convergence to zero of both the rescaling parameters
$\lambda_{n,l}$ in the target, and also those of the domain $\mu_{n}(y)$.
Denote by $T$ the collection of levels $l\geq 1$ together with all the nodes
$y$ of any intermediate curves $C^{\prime}$ with $C\rightarrow
C^{\prime}\rightarrow C_{0}$ (including $C^{\prime}=C$ or $C_{0}$). For each
$p\in T$ let $t_{n,p}=\lambda_{n,l}$ if $p=l$ or respectively
$t_{n,p}=\mu_{n}(y)$ if $p=y$, where $\mu(y)$ is as in (5.5). Introduce an
equivalence relation on $T$ as follows: $p_{1}\sim p_{2}$ if there exists a
positive rational number $\alpha$ such that $t_{n,p_{1}}^{\alpha}/t_{n,p_{2}}$
is uniformly bounded away from zero and infinity for $n$ large. This
partitions $T$ into equivalence classes $[p]$, each one corresponding to an
independent (over ${\mathbb{Q}}$) direction of convergence to zero of these
parameters. As $T$ is finite, after passing to a further subsequence of
$f_{n}$, we can arrange that all these quotients have a finite, nonzero limit.
Therefore there exist some $d(l)\neq 0$ and $\beta(l)\in{\mathbb{Q}}_{+}$ for
each level $l\geq 1$, and $c(y)\neq 0$ and $\alpha(y)\in{\mathbb{Q}}_{+}$ for
each $y\in T$ such that:
$\displaystyle\lim_{n\rightarrow\infty}\frac{\mu_{n}(y)}{t_{n,[y]}^{\alpha(y)}}=c(y)\quad\mbox{
and
}\quad\lim_{n\rightarrow\infty}\frac{\lambda_{n,l}}{t_{n,[l]}^{\beta(l)}}=d(l)$
But note that if $\Sigma$ is any component of $B_{x}$ with $z_{1}$, $z_{2}$
its two nodes, and $y$ the node coming from them after contracting $\Sigma$,
then (5.5) becomes $\mu(y)=\mu(z_{1})\mu(z_{2})$. The above asymptotics then
imply that for each fixed $x\in C_{0}$: (a) all the points $y\in T$ that
project to $x$ are equivalent with each other and (b) all the levels $l$
between $l^{-}(x)=\min_{i}l_{i}^{-}(x)$ and $l^{+}(x)=\max_{i}l_{i}^{+}(x)$
are also equivalent to each other. Plugging the asymptotics above into (5.6)
then implies that (a) and (b) are also equivalent, proving (5.11) and reducing
(5.6) to (5.9) and (5.10). $\Box$
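The asymptotics (5.11) can be illustrated numerically. In the toy model below (my choice of constants, not from the text) we take $t_{n}=1/n$, $\mu_{n}(z)=c\,t_{n}^{\alpha}$ and $\lambda_{n,l}=d\,t_{n}^{\beta}(1+t_{n})$ with $\beta=s\alpha$ as forced by (5.9), and check that the enhanced-matching ratio of (5.6) converges to $d/c^{s}$, consistent with (5.10):

```python
# Toy numerical model (illustration only) of the asymptotics (5.11):
# with lambda_n = d * t_n^beta * (1 + t_n) and mu_n = c * t_n^alpha,
# beta = s * alpha, the ratio lambda_n / mu_n^s from (5.6) tends to d / c^s.
s, alpha = 2, 3          # contact multiplicity s_i(x) and exponent alpha(z)
beta = s * alpha         # the relation imposed by the system (5.9)
c, d = 3.0, 2.0          # the limiting constants c(z) and d(l) of (5.11)

def matching_ratio(n):
    t = 1.0 / n
    lam = d * t**beta * (1.0 + t)   # lower-order term as in the expansion (4.40)
    mu = c * t**alpha
    return lam / mu**s

# the ratio stabilizes at d / c^s = 2/9
assert abs(matching_ratio(10**6) - d / c**s) < 1e-4
```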
###### Remark 5.6
Each trivial component of $C$ that lands in $D_{m}$ a priori comes with only a
partially defined contact information to $D_{m}$, which enters in the
equations (5.9) and (5.10). However, as we shall see below, knowing the
contact information of all the nontrivial components then allows us to
formally extend the contact information of the trivial components even in the
directions $I^{\infty}$ in which their coefficients are zero or infinity, and
thus the geometric contact information is technically undefined. For example,
we have already seen that we can associate to each node $z$ of $C$ that
contracts to $x\in C_{0}$ a multiplicity $s_{i}(z)=s_{i}(x)$ in all directions
$i$, which matches the geometric contact multiplicity in all the directions
$i\notin I^{\infty}$ where that is defined.
###### Theorem 5.7
Consider a sequence of maps $\\{f_{n}:C_{n}\rightarrow X\\}$ in ${\cal
M}_{s}(X,V)$. Then there is a sequence of rescaling parameters $\lambda_{n}$
such that after passing to a subsequence, $R_{\lambda_{n}}f_{n}$ has a
continuous limit $f:C\rightarrow X_{m}$ that has the properties (a)-(d) of
Proposition 5.1, plus the following extra properties:
1. (e)
each trivial component $(\Sigma,\,x_{-},x_{+})$ of $C$ comes with a fixed
isomorphism that identifies $T_{x_{-}}\Sigma$ with $T_{x_{+}}^{*}\Sigma$,
together with a multiplicity $s_{i}(x_{-})=s_{i}(x_{+})$, two opposite signs
$\varepsilon_{i}(x_{-})=-\varepsilon_{i}(x_{+})\neq 0$ and two dual elements
$a_{i}(x_{\pm})\in N_{i}^{\varepsilon_{i}(x_{\pm})}\otimes
T_{x_{\pm}}^{*}\Sigma$ for each direction $i\in I(\Sigma)$, which agree with
the usual contact information to $D_{m}$ in the directions in which that can
be geometrically defined.
2. (f)
at each node $y\in C$, $f$ satisfies the naive matching condition:
$\displaystyle
f(y_{-})=f(y_{+}),\quad s(y_{-})=s(y_{+}),\quad\varepsilon(y_{-})=-\varepsilon(y_{+})$
(5.12)
while each marked point $y$ of $C$, together with its full contact
information, appears as the limit of corresponding marked points $y_{n}$ of $C_{n}$.
3. (g)
there exists a solution of the linear system of equations (5.9) in the
positive orthant (i.e. $\beta(l)>0$ for each level $l\geq 1$ and $\alpha(z)>0$ for
each node $z$ of $C$);
4. (h)
there exist nonzero constants $c(z)\neq 0$ for each node $z$ of $C$ such that:
$\displaystyle a_{i}(y_{-})a_{i}(y_{+})c(y)^{s_{i}(x)}=1$ (5.13)
for each node $y_{-}=y_{+}$ of $C$ that contracts to $x$ in $C_{0}$ and for
all directions $i\in I(x)$.
The limit $f$ satisfying all these conditions is unique up to the action of a
nontrivial subtorus $T_{s}$ of $({\mathbb{C}}^{*})^{m}$ which preserves the
conditions (5.13).
Proof. We first use Proposition 5.1 to obtain some limit $f:C\rightarrow
X_{m}$, defined up to the $({\mathbb{C}}^{*})^{m}$ action on $X_{m}$, and
which has all the properties described there. Fix such a representative
$f:C\rightarrow X_{m}$ of the limit, and for each point $x$ of $C_{0}$ choose
coordinates around $B_{x}$ in the domain and respectively around the fiber
$F_{p}$ of $X_{m}\rightarrow X$ over $p=f_{0}(x)$ as described before Lemma
5.2. Note that when $B_{x}\neq x$ this involves a choice of dual coordinates
$w_{j}=z_{j}^{-1}$ at the two end points of the trivial component $\Sigma_{j}$
which intrinsically corresponds to a choice of an isomorphism between the
tangent space to $\Sigma_{j}$ at one of the points and its dual at the other
point.
Recall that each special fiber $B_{x}$ of $C\rightarrow C_{0}$ was a string of
trivial components with two end points $x_{\pm}$ (broken cylinder), which
occurred only when $x$ was a special point of $C_{0}$. We make the convention
that if $x$ is a marked point of $C_{0}$ then the end $x_{+}$ of $B_{x}$
corresponds to the marked point $x\in C$ while the other end $x_{-}$ is where
$B_{x}$ gets attached to the rest of the components of $C$.
Next, Lemma 5.2 implies that the limit $f$ satisfies the naive matching
conditions (f) at all the special points $y$ of any of the intermediate curves
$C\rightarrow C^{\prime}\rightarrow C_{0}$ in all the directions $i\notin
I^{\infty}(y)$. For each trivial component $(\Sigma,y^{-},y^{+})$ of $C$ that
is part of the trivial string $B_{x}$ with two end points $x_{\pm}$, and for
all the directions $i\in I^{\infty}$ we formally set $s_{i}(y)=s_{i}(x)$ and
$\varepsilon_{i}(y_{\pm})=\varepsilon_{i}(x_{\pm})=-\varepsilon_{i}(x_{\mp})$
respectively. With this choice the naive matching conditions (f) are satisfied
now in all directions $i\in I$.
Corollary 5.5 then implies that after possibly passing to a further
subsequence, the limit $f$ satisfies both condition (h) and (5.10).
Multiplying together several of the equations (5.10) and using the fact that
each trivial component has reciprocal coefficients at the two end points we
get the following relation at each node $x\in C_{0}$:
$\displaystyle
a_{i}(x^{-})a_{i}(x^{+})c(x)^{s_{i}(x)}=\prod_{l=l_{i}^{-}(x)}^{l_{i}^{+}(x)}d(l)$
(5.14)
for all directions $i\in I$, where $c(x)=\prod_{z}c(z)$ is the product over
all nodes $z$ of $C$ which contract to $x$ under $C\rightarrow C_{0}$, and
more generally for the node $x_{r-1}^{-}=x_{r}^{+}$ of $B_{x}^{\prime}$:
$\displaystyle
a_{i}(x^{-})a_{i}(x_{r}^{+})\cdot\left(\prod_{j=1}^{r}c(x_{j}^{+})\right)^{s_{i}(x)}=\prod_{l=l_{i}(x^{-})}^{l_{i}(x_{r}^{+})}d(l)$
(5.15)
for all $i\notin I^{\infty}(x_{r}^{+})$. But the target comes with a
$({\mathbb{C}}^{*})^{m}$ action where
$(\alpha_{1},\dots,\alpha_{m})\in({\mathbb{C}}^{*})^{m}$ acts on the level $l$
coordinates $u_{l,i}=v_{l,i}^{-1}$ described in Remark 3.17 by mapping them to
$\alpha_{l}\cdot u_{l,i}=(\alpha_{l}^{-1}\cdot v_{l,i})^{-1}$ and therefore
also acts on the rescaling parameters $\lambda_{l}=u_{l-1,i}v_{l,i}$ by
$\alpha_{l-1}\alpha_{l}^{-1}$. Because we already know that the two local
branches of $f$ point in opposite directions of the singular divisor at
$x^{\pm}$ (or equivalently the weights
$\varepsilon_{i}(x^{-})=-\varepsilon_{i}(x^{+})$ of this action are opposite
on the two branches) this means that $\alpha\in({\mathbb{C}}^{*})^{m}$ acts on
the product of the leading coefficients $a_{i}(x^{-})a_{i}(x^{+})$ or
equivalently on the relative rescaling parameter
$\prod_{l=l_{i}^{-}(x)}^{l_{i}^{+}(x)}\lambda_{n,l}$ by precisely
$\alpha_{l_{i}^{-}(x)-1}\alpha_{l_{i}^{+}(x)}^{-1}$. But then after acting on
the rescaling parameters $\lambda_{n}$ (or equivalently on the limit $f$) by
$\displaystyle\alpha_{l}=\prod_{j=1}^{l}d(j)\mbox{ for each $l\geq 1$}$
the equations (5.14) and (5.15) become respectively
$\displaystyle a_{i}(x^{-})a_{i}(x^{+})c(x)^{s_{i}(x)}$ $\displaystyle=$
$\displaystyle 1$ (5.16)
for all directions $i\in I$, and
$\displaystyle
a_{i}(x^{-})a_{i}(x_{r}^{+})\cdot\left(\prod_{j=1}^{r}c(x_{j}^{+})\right)^{s_{i}(x)}$
$\displaystyle=$ $\displaystyle 1$ (5.17)
for all directions $i\notin I^{\infty}(x_{r}^{+})$. If we define
$a_{i}(x_{r}^{+})$ by (5.17) even for those directions $i\in
I^{\infty}(x_{r}^{+})$, then this gives us a way to uniquely decorate the
trivial components of the limit as required in (e), once the nonzero constants
$c(z)\neq 0$ are chosen such that (5.16) holds. With these decorations, the full
enhanced matching conditions (5.13) are then satisfied at all nodes $y$ of $C$
in all directions $i\in I$. $\Box$
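For the record, the normalization step in the proof is a one-line telescoping check: with the convention $\alpha_{0}=1$ and the choice $\alpha_{l}=\prod_{j=1}^{l}d(j)$, the action multiplies each $\lambda_{l}$ by

```latex
\alpha_{l-1}\alpha_{l}^{-1}
  =\Big(\prod_{j=1}^{l-1}d(j)\Big)\Big(\prod_{j=1}^{l}d(j)\Big)^{-1}
  =d(l)^{-1},
```

so the relative rescaling parameter $\prod_{l=l_{i}^{-}(x)}^{l_{i}^{+}(x)}\lambda_{n,l}$, and hence the product $a_{i}(x^{-})a_{i}(x^{+})$, gets multiplied by $\prod_{l=l_{i}^{-}(x)}^{l_{i}^{+}(x)}d(l)^{-1}$, which cancels the right hand side of (5.14).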
###### Remark 5.8
In fact, we not only have a $({\mathbb{C}}^{*})^{m}$ action on the target, but
also have a similar $({\mathbb{C}}^{*})^{d}$ action on the domain, where $d$
is the number of nodes $z$ of $C$. Therefore we have a combined
$({\mathbb{C}}^{*})^{d}\times({\mathbb{C}}^{*})^{m}$ action on both the
leading coefficients of $f$ and also on the gluing parameters $\mu(z)$ of the
domain and respectively $\lambda_{l}$ of the target, which leave the enhanced
matching equations (5.10) invariant. For each collection of relatively prime
integer solutions $\beta(l)>0$, $\alpha(z)>0$ of (5.9) we also get a
${\mathbb{C}}^{*}$ subgroup that acts on $\mu(z)$ by $t^{\alpha(z)}$ and on
$\lambda_{l}$ by $t^{\beta(l)}$, see also (5.11). In fact, the subgroup of
$({\mathbb{C}}^{*})^{d+m}$ that preserves the product of the leading
coefficients of $f$ at each node $y_{i,l}(x)$ is precisely described by the
condition (5.9), and therefore its projection $T$ onto $({\mathbb{C}}^{*})^{m}$
leaves the equations (5.13) invariant.
## 6 The Relatively Stable Map Compactification
We are finally ready to define a relatively stable map into a level $m$
building, which describes precisely the types of limits of maps in ${\cal
M}_{s}(X,V)$ we could get after rescaling.
As we have seen in the previous sections, we will need to include trivial
components that are mapped into the total divisor. Because not all their
coefficients are zero or infinity, they are in fact stable maps into
$\widetilde{X}_{m}$, but a priori they have only a partial contact information
with the total divisor $D_{m}$, technically undefined in the directions
$I^{\infty}$ where their coefficients are zero or infinity; there was however
a way to formally extend this contact information in the remaining directions
as well.
So starting with this section, we make the convention that a trivial component
already comes with the extra choices in these directions to get a full (but
possibly formal) contact information to $D_{m}$. More precisely, each trivial
component $(\Sigma,x^{+},x^{-})$ comes first with a fixed complex isomorphism
that identifies $T_{x^{+}}\Sigma$ with $T_{x^{-}}^{*}\Sigma$, together with a
multiplicity $s_{i}(x^{+})=s_{i}(x^{-})$, two opposite signs
$\varepsilon_{i}(x_{-})=-\varepsilon_{i}(x_{+})\neq 0$ and two dual elements
$a_{i}(x_{\pm})\in N_{i}^{\varepsilon_{i}(x_{\pm})}\otimes
T_{x_{\pm}}^{*}\Sigma$ for each direction $i\in I$, which we require to agree
with the usual contact information to $D_{m}$ in the directions in which that
can be geometrically defined. We will denote by
$\displaystyle{\cal M}^{triv}_{s}(\widetilde{X}_{m},\widetilde{D}_{m})$ (6.1)
the space of such “decorated” trivial components and include them as part of
${\cal M}_{s}(\widetilde{X}_{m},\widetilde{D}_{m})$, as they now come with a
fully defined evaluation map and enhanced evaluation map defined using the
extra decorations. Several crucial differences still remain between the maps
that have an actual geometric contact information along $D_{m}$ and these
trivial components, as we will see below.
With this convention, we are ready to make the following:
###### Definition 6.1
A map from $C$ into a level $m$ building $(X_{m},V_{m})$ is a continuous
function $f:C\rightarrow X_{m}$ with the following properties. The map $f$
comes with a lift $\widetilde{f}:\widetilde{C}\rightarrow\widetilde{X}_{m}$
which is an element of ${\cal M}_{s}(\widetilde{X}_{m},\widetilde{D}_{m})$,
and a contraction $f_{0}:C_{0}\rightarrow X$ partitioning the domain
$\widetilde{C}=\widetilde{C}_{0}\sqcup B$ into two types of components: (a)
nontrivial components, none of which is entirely contained in $D_{m}$ and (b)
trivial (decorated) components that are contracted under $C\rightarrow C_{0}$
and $X_{m}\rightarrow X$. We require that at each node $y_{-}=y_{+}$ of $C$,
$f$ satisfies the naive matching condition:
$\displaystyle f(y_{+})=f(y_{-}),\quad s(y_{+})=s(y_{-})\quad\mbox{ and
}\quad\varepsilon(y_{+})=-\varepsilon(y_{-})$
while none of the marked points of $C$ are mapped to the infinity divisor.
The naive matching condition is equivalent to the fact that $\widetilde{f}$
belongs to the inverse image of the diagonal under the usual evaluation map at
pairs of marked points giving the nodes $D$ of $C$:
$\displaystyle\mbox{\rm ev}_{s_{-},s_{+}}:{\cal
M}_{s}(\widetilde{X}_{m},\widetilde{D}_{m})\longrightarrow W_{s_{-}}\times
W_{s_{+}}$ (6.2)
extending (4.44). Recall that the normal directions to the singular divisor $W$
come in dual pairs; here $s_{-}$ denotes the contact information
associated to the branch $x_{-}$ of $\widetilde{C}$, which includes not just
the multiplicities $s_{i}(x^{-})$ but also the levels $l_{i}(x^{-})$, the
signs $\varepsilon_{i}(x^{-})$ and the indexing set $I(x^{-})$ of the branches
of the total divisor $D_{m}$, which record the particular stratum $W_{s_{-}}$ of
the singular divisor $W$ that $f(x_{-})$ belongs to (which according to our
conventions is equal to $X_{m}$ when $x$ is an ordinary marked point, with
empty contact multiplicity). The condition (6.2) encodes both the fact that
all the contact points of $\widetilde{f}$ are special points of $C$ and
the fact that $C$ has no nodes on the zero divisor $V_{m}$ away from the
singular divisor $W$. The leftover contact points $x$ of $\widetilde{C}$ which
are not nodes must therefore be mapped to a stratum of the zero divisor
$V_{m}$ (away from the singular divisor $W$) and record the contact
information of $f$ along the zero divisor.
It follows from this definition that $C$ is obtained from $C_{0}$ by possibly
inserting strings of trivial components $B_{x}$ (broken cylinders) either
between two branches $x_{\pm}$ of a node $x$ of $C_{0}$ or else at a marked
point $x_{0}$ of $C_{0}$; the fact that the signs $\varepsilon_{i}$ are
opposite at each node implies that each chain $B_{x}$ moves in a monotone zig-
zagging fashion in the fiber of $X_{m}$ over $f_{0}(x)$, exactly as described
in Remark 5.3 (note that the level changes only in those directions in which
the contact information is geometric).
###### Definition 6.2
We say that a map $f:C\rightarrow X_{m}$ as in Definition 6.1 satisfies the
enhanced matching condition if its resolution $\widetilde{f}$ is in the
inverse image of the antidiagonal $\Delta^{\pm}$ under the enhanced evaluation
map at pairs of marked points $y^{\pm}$ giving the nodes $D$ of $C$:
$\displaystyle\mbox{\rm Ev}_{s_{-},s_{+}}:{\cal
M}_{s}(\widetilde{X}_{m},\widetilde{D}_{m})$ $\displaystyle\rightarrow$
$\displaystyle\prod_{y\in
D}{\mathbb{P}}_{s(y)}(NW_{I(y_{-})})\times{\mathbb{P}}_{s(y)}(NW_{I(y_{+})})$
(6.3)
This condition extends (4.49), and keeps track not only of the image of
$\widetilde{f}(y_{\pm})$ in the singular divisor $W$ but also of its leading
coefficients $a_{i}(y_{\pm})$ as elements of two dual normal bundles
$NW_{I(y_{-})}\cong N^{*}W_{I(y_{+})}$.
Recall that there is a $({\mathbb{C}}^{*})^{m}$ action on a level $m$ building
which rescales each level $l\geq 1$ by a factor
$\lambda_{l}\in{\mathbb{C}}^{*}$, and which induces an action both on the
space of maps into the building $X_{m}$ and on their resolutions.
###### Definition 6.3
Consider the collection of maps $f:C\rightarrow X_{m}$ into a level $m$
building $(X_{m},V_{m})$ as in Definition 6.1, whose resolution
$\widetilde{f}\in{\cal M}_{s}(\widetilde{X}_{m},\widetilde{D}_{m})$ satisfies
the enhanced matching condition and whose full topological data $s$ is such
that the set of equations (5.9) has a positive solution (in the first
quadrant).
Such a map is called a relatively stable map into a level $m$ building
$(X_{m},V_{m})$ if furthermore for any level $l\geq 1$, there is at least one
nontrivial component of $f$ which has $l$ as one of its multi-levels.
Let $\overline{{\cal M}}_{s}(X,V)$ denote the collection of all relatively
stable maps into $X$, up to the $({\mathbb{C}}^{*})^{m}$ action on the level
$m$ building.
A relatively stable map is therefore an equivalence class of maps, up to both
reparametrizations of the domain and rescalings of the target. The
space of solutions of the equations (5.9) describes a subtorus
$T_{s}\subset({\mathbb{C}}^{*})^{m}$ with the property that Ev descends to the
quotient:
$\displaystyle\mbox{\rm Ev}_{s_{-},s_{+}}:{\cal
M}_{s}(\widetilde{X}_{m},\widetilde{D}_{m})/T_{s}$ $\displaystyle\rightarrow$
$\displaystyle\prod_{y\in
D}{\mathbb{P}}_{s(y)}(NW_{I(y_{-})})\times{\mathbb{P}}_{s(y)}(NW_{I(y_{+})})$
(6.4)
which otherwise may not be automatic, and thus has combinatorial implications
for the topological type of the maps considered, like those in Remark 5.4.
###### Remark 6.4
In the case when $V$ has several components and we decide to rescale in $c$
independent directions, we have a
$({\mathbb{C}}^{*})^{m_{1}}\times\dots\times({\mathbb{C}}^{*})^{m_{c}}$ action
on a (multi)level $m=(m_{1},\dots,m_{c})$ building $(X_{m},V_{m})$ by which we
will take the quotient. A map $f:C\rightarrow X_{m}$ is then called
relatively stable if each multilevel $l=(l_{1},\dots,l_{c})$ different from
$(0,\dots,0)$ has at least one nontrivial component.
The notion of stability therefore depends on how many independent directions
we rescaled the target in, and therefore on the particular group action by
which we are taking the quotient. For example, a nontrivial component in level $(1,2)$
of a multi-directional building counts as a nontrivial component in both level
1 and level 2 if we regard it as part of a uni-directional building,
see Example 4.13 (b). If there are several independent directions, there is
always a projection (stabilization) map from the relatively stable map
compactification $\overline{{\cal M}}_{s}(X,V)$ described above to a smaller
compactification obtained by collapsing some multi-levels containing only
trivial components (when regarded as independent multi-levels).
With these definitions, the results of Section 5, and especially Theorem 5.7,
become:
###### Theorem 6.5
Consider a sequence $\{f_{n}:C_{n}\rightarrow X\}$ of maps in ${\cal
M}_{s}(X,V)$. Then there is a sequence of rescaling parameters $\lambda_{n}$
such that, after passing to a subsequence, the rescaled sequence
$R_{\lambda_{n}}f_{n}$ has a unique limit $f:C\rightarrow X_{m}$, which is a
relatively stable map into $X_{m}$.
Lemma 4.12 also extends to a level $m$ building to give:
###### Lemma 6.6
For generic $V$-compatible $(J,\nu)$, the stratum of the relatively stable map
compactification $\overline{{\cal M}}_{s}(X,V)$ that corresponds to maps into
a level $m\geq 1$ building has codimension at least $2$.
Note that in the presence of higher depth strata, the codimension of the
stratum into a level $m$ building is not necessarily equal to $2m$, as again
illustrated by Example 4.13 (b). The stratum as $x_{1}\rightarrow x_{0}$ is
clearly only of codimension 2, even though the limit is a map into a building
with 2 levels. The reason is that the enhanced matching conditions impose
further conditions on the rescaling parameters (in that case
$\lambda_{2}=\lambda_{1}$), so the rescaling parameters are no longer
independent variables. The complex codimension of the stratum is only the
number of independent rescaling parameters (or equivalently, the dimension of
the torus $T_{s}$).
###### Remark 6.7
To keep the analysis concrete, we have implicitly assumed that after
collapsing back the trivial components all the domains of the maps are stable,
see Remark 2.1. The perturbation $\nu$ used in the Lemma above then comes
from the universal curve $\overline{{\cal U}}\subset{\mathbb{P}}^{N}$, so it
vanishes on all the trivial components (which by definition have unstable
domain). In general, no matter what kind of other perturbations one may turn
on, they should always be chosen to vanish on the trivial components, as these
play only the topological role of recording certain identifications between
points on various levels of the building (and can be completely ignored up to
their combinatorial restrictions on the topological type of the maps $f$).
###### Remark 6.8
Note that if $J$ is integrable near $V$ then the weighted projective space
${\mathbb{P}}_{s}(NV)$ can be regarded as an exceptional divisor in the blow
up of the target normal to $V$, and the enhanced evaluation map is the usual
evaluation map into this exceptional divisor, relating it to the approach in
Davis’ thesis [Da], which worked very well in genus zero (but did not extend to
higher genus). The only difference here is that this blow up is now a weighted
blow up, which seems to keep better track of what happens in the limit when
the multiplicities $s_{i}$ are not equal, especially in higher genus. Of
course, being a weighted blow up, it has singular strata, which may look
problematic. Ignoring the fact that when the greatest common divisor
$\gcd(s_{i})\neq 1$ the quotient is nonreduced, all the truly singular strata
correspond to the locus where one of the coordinates is $0$ or $\infty$; but the image
of the enhanced evaluation map (6.3) avoids these strata anyway.
## 7 The relative GW invariant $GW(X,V)$
The upshot of the discussion in the previous two sections, which culminated
with Theorem 6.5, is that any sequence $f_{n}$ in ${\cal M}_{s}(X,V)$, after
rescaling and passing to a subsequence, has a unique limit which is a
relatively stable map. This, combined with Lemma 6.6 about the codimension of
the boundary strata, allows us to extend the discussion of Sections 7 and 8 of
[IP1] to the case of a normal crossings divisor. In particular, Theorem 8.1 of
[IP1] in this context becomes:
###### Theorem 7.1
Assume $V$ is a normal crossings divisor in $X$ for some $V$-compatible pair
$(J,\omega)$. The space of relatively stable maps $\overline{{\cal
M}}_{s}(X,V)$ is compact and it comes with a continuous map
$\displaystyle\mbox{\rm st}\times\mbox{\rm Ev}:\overline{{\cal
M}}_{s}(X,V)\rightarrow\overline{{\cal M}}_{\chi(s),\ell(s)}\times\prod_{x\in
P(s)}{\mathbb{P}}_{s(x)}(NV_{I(x)})$ (7.1)
For generic $V$-compatible $(J,\nu)$ the image of $\overline{{\cal
M}}_{s}(X,V)$ under $\mbox{\rm st}\times\mbox{\rm Ev}$ defines a homology
class $GW_{s}(X,V)$ in dimension
$\displaystyle{\rm dim\;}\overline{{\cal M}}_{s}(X,V)=2c_{1}(TX)A_{s}+({\rm
dim\;}X-6)\frac{\chi_{s}}{2}+2\ell(s)-2A_{s}\cdot V$ (7.2)
This class $GW_{s}(X,V)$ is independent of the perturbation $\nu$ and is
invariant under smooth deformations of the pair $(X,V)$ and of $(\omega,J)$
through $V$-compatible structures; it is called the GW invariant of $X$
relative the normal crossings divisor $V$.
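As a quick sanity check on how the terms of the dimension formula (7.2) interact, here is a direct transcription of it; the sample inputs below are hypothetical values chosen only to exercise the formula, and $\chi_{s}$ is assumed even (as it is for closed domains):

```python
def dim_moduli(c1_A, dim_X, chi_s, ell_s, A_dot_V):
    """Transcription of the dimension formula (7.2):
    2 c1(TX).A_s + (dim X - 6) chi_s / 2 + 2 ell(s) - 2 A_s.V
    Since chi_s is even, the result is an integer.
    """
    return 2 * c1_A + (dim_X - 6) * chi_s // 2 + 2 * ell_s - 2 * A_dot_V

# hypothetical sample: dim X = 4, genus 0 domain (chi = 2), one marked
# point, c1(TX).A = 3 and A.V = 1
print(dim_moduli(c1_A=3, dim_X=4, chi_s=2, ell_s=1, A_dot_V=1))  # → 4
```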
When $V$ is smooth, Ev is nothing but the usual evaluation map ev into $V_{s}$
and so combined with Example 1.8 we get the following:
###### Corollary 7.2
When $V$ is a smooth symplectic codimension 2 submanifold of $X$, the relative
GW invariant constructed in Theorem 7.1 agrees with the usual relative GW
invariant $GW(X,V)$, as defined in [IP1].
###### Remark 7.3
The gluing formula of [IP2] can also be extended to this case to prove that for
generic $V$-compatible $(J,\nu)$ the local model of $\overline{{\cal
M}}_{s}(X,V)$ normal to a boundary stratum is precisely described by the
enhanced matching conditions, i.e. all solutions of the enhanced matching
conditions glue to give actual $(J,\nu)$ holomorphic solutions in
$X_{\lambda}$. Technically speaking, the space of solutions to the enhanced
matching conditions is not an orbifold at $\lambda=0$, but rather it is a
branched manifold. This was also the case when $V$ was smooth, when the
enhanced matching conditions were automatically satisfied, i.e. given any
stable map into a building that satisfied the naive matching condition, and
any gluing parameter $\lambda\neq 0$ of the target, one could always find
gluing parameters $\mu(x)$ at each node of the domain satisfying the enhanced
matching conditions; in fact, there were $s(x)$ different choices for each
node, thus the multiplicity in the gluing formula, and the source of the
branching in the moduli space. The easy fix in that case is to include in the
relatively stable map compactification also the corresponding roots of unity
separating the different choices of the gluing parameters $\mu(x)$ at each
node $x$, as explained for example in [I].
In the case when $V$ is a normal crossings divisor, the story is similar. More
precisely, the local model of the moduli space $\overline{{\cal M}}_{s}(X,V)$
near a limit point $f$ is described by tuples $(f,\lambda,\mu)$. Here
$f:C\rightarrow X_{m}$ is a map into a level $m$ building as in Definition 6.1
which is relatively stable, $(\lambda_{1},\dots,\lambda_{m})\in(N\otimes
N^{*})^{m}\cong{\mathbb{C}}^{m}$ are the rescaling parameters of the target,
and $\mu\in\mathop{\bigoplus}\limits_{x\in D}L_{x_{+}}\otimes L_{x_{-}}$ are
all the gluing parameters of the domain $C$ (including those on trivial
components). The data $(f,\lambda,\mu)$ must also satisfy the enhanced
matching condition (5.10) from Corollary 5.5 at each node $x$ of $C_{0}$:
$\displaystyle a_{i}(y_{i,l}^{+}(x))\cdot
a_{i}(y_{i,l}^{-}(x))\cdot\left(\prod_{z\in
D_{i,l}(x)}\mu(z)\right)^{s_{i}(x)}=\lambda_{l}.$ (7.3)
for all directions $i\in I(x)$ and all levels $l_{i}^{-}(x)\leq l\leq
l_{i}^{+}(x)$, where $a_{i}(y_{i,l}^{\pm})$ are the leading coefficients of
$f$ and $y_{i,l}(x)$ is the unique level $l$ node of the projection
$\pi_{i}(B_{x}^{\prime})$ in direction $i$.
The existence of a family $\lambda_{l}\rightarrow 0,\;\mu(z)\rightarrow 0$ of
solutions to (7.3) is not automatic (it is equivalent to condition (g) of
Theorem 5.7) and therefore imposes combinatorial conditions on the topological
data $s$ of $f$, like those in Remark 5.4. The locus of the equations (7.3),
thought of as equations in the parameters
$(\lambda,\mu)\in{\mathbb{C}}^{m}\times{\mathbb{C}}^{\ell(s)}$, is smooth for
$(\lambda,\mu)\neq 0$, but may be singular at $(\lambda,\mu)=0$ (it is only a
pseudo-manifold, or a branched manifold with several branches coming together
at 0). But we can instead use a refined compactification over
$(\lambda,\mu)=0$ which is smooth. One way is to re-express this local model
in terms of its link at the origin, which is smooth, as we have essentially
done in the proof of Theorem 5.7 to get (5.13). As we have seen there, we can
also eliminate the trivial components, reducing the equations (7.3) to the
following equations:
$\displaystyle
a_{i}(x^{-})a_{i}(x^{+})\mu(x)^{s_{i}(x)}=\prod_{l=l_{i}^{-}(x)}^{l_{i}^{+}(x)}\lambda_{l}.$
(7.4)
for all nodes $x$ of $C_{0}$ and all directions $i\in I(x)$.
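To illustrate numerically the branching phenomenon behind Remark 7.3: for fixed leading coefficients and target parameter, an equation of the shape (7.4) at a single node, $a_{-}a_{+}\mu^{s}=\lambda$, has exactly $s$ solutions $\mu$, differing by $s$-th roots of unity. The sketch below is a toy computation with hypothetical sample values, not part of the paper's argument:

```python
import cmath

def gluing_params(a_minus, a_plus, s, lam):
    """All solutions mu of a_minus * a_plus * mu**s = lam.

    There are exactly s of them, differing by s-th roots of unity --
    the source of the multiplicity/branching discussed in Remark 7.3.
    """
    c = lam / (a_minus * a_plus)          # mu**s must equal c
    r = abs(c) ** (1.0 / s)               # common modulus of all s roots
    theta = cmath.phase(c)
    return [r * cmath.exp(1j * (theta + 2 * cmath.pi * k) / s)
            for k in range(s)]

# hypothetical values for one node with contact multiplicity s = 3
mus = gluing_params(a_minus=2.0, a_plus=0.5 + 0.5j, s=3, lam=1e-3)
assert len(mus) == 3
for mu in mus:  # each of the 3 choices solves the node equation
    assert abs(2.0 * (0.5 + 0.5j) * mu**3 - 1e-3) < 1e-9
```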
### 7.1 Further directions
The next question is how the GW invariants relative normal crossings divisors
behave under degenerations. The degenerations we have in mind come in several
flavors.
The first type of degeneration is one in which the target $X$ degenerates, the
simplest case of that being the degeneration of a symplectic sum into its
pieces. This comes down to the symplectic sum formula proved in [IP2], but
where now we also have a divisor going through the neck. Consider for example
the situation
$\displaystyle(X,V)=(X_{1},V_{1})\\#_{U}(X_{2},V_{2})$
described in Remark 1.14, which means that $X=X_{1}\\#_{U}X_{2}$, and
simultaneously the divisor $V$ is the symplectic sum of $V_{1}$ and $V_{2}$
along their common intersection with $U$. Then the relative GW invariant of the
sum $(X,V)$ should be expressed in terms of the relative GW invariants of the
pieces $(X_{i},V_{i}\cup U)$; this type of formula allows one, for example, to
compute the absolute GW invariants of a manifold obtained by iterating the
symplectic sum construction.
But there are other types of symplectic sums/smoothings of the target into
which these relative GW invariants should enter. The next simplest example is
either the 3-fold sum or the 4-fold sum defined by Symington in [S] (see also [MS]). Both
these constructions should have appropriate symplectic extensions to higher
dimensions involving smoothings $X_{\varepsilon}$ of a symplectic manifold $X$
self-intersecting along a symplectic normal crossings divisor $V$. The
sum formula would then express the GW invariants of $X_{\varepsilon}$ in terms
of the relative GW invariants of $(X,V)$. A special case of this is what is
called a stable degeneration in algebraic geometry, in which case one has a
smooth fibration over a disk with smooth fiber $X_{\varepsilon}$ for
$\varepsilon\neq 0$ and whose central fiber $X_{0}$ has normal crossings
singularities.
There is also a related question when the target $X$ is fixed, but the
divisor $V$ degenerates in $X$. The simplest case of that is the one in
Example 1.6, which serves as the local model of more general deformations. For
example, a slightly more general case would be a family of smooth
divisors $V_{\varepsilon}$ degenerating to a normal crossings one $V_{0}$,
which we assume has at most depth 2 points (i.e. its singular locus $W$ is
smooth). After blowing up $W$, this case can be reduced to the case of a
symplectic sum of the blow up of $X$ with a standard piece ${\mathbb{P}}_{W}$,
constructed using the normal bundle of the singular locus $W$. The divisors
now go through the neck of the symplectic sum, but their degeneration happens
only in ${\mathbb{P}}_{W}$ (therefore involves only local information around
$W$). So if one can understand the degeneration locally near $W$, one can
again use the sum formula to relate the GW invariants of $(X,V_{\varepsilon})$
to those of $(X,V_{0})$.
The discussion in this paper should also extend to the case when the target
$X$ has orbifold singularities and the normal crossings divisor $V$ itself, as
well as its normal bundle, has an orbifold structure. In this case the domains
of the maps should also be allowed to have orbifold singularities. So again we
have a very similar stratification of the domain and of the target $V$, but now
with more strata, depending also on the conjugacy classes of the isotropy
groups; the evaluation maps will now take that into account as well. The
corresponding enhanced matching condition will therefore include that
information, in the form of an additional balanced condition at each node as
in [AGV].
## Appendix A
Assume $V$ is a normal crossings divisor in $(X,\omega,J)$ and that
$\iota:\widetilde{V}\rightarrow V$ is its resolution and
$\pi:N\rightarrow\widetilde{V}$ its normal bundle. In particular this means
that we have an immersion $\iota:(U,\widetilde{V})\rightarrow(X,V)$ from some
tubular neighborhood $U$ of the zero section $\widetilde{V}$ of $N$.
The divisor $V$ is stratified depending on how many local branches meet at a
particular point. Denote by $V^{k}$ the closed stratum of $V$ where at least
$k$ local branches of $V$ meet, and let $\overset{\circ}{V}\vphantom{V}^{k}$
be the open stratum where precisely $k$ local branches meet. Then
$\overset{\circ}{V}\vphantom{V}^{k}$ is both $\omega$-symplectic and
$J$-holomorphic and its normal bundle in $X$ is modeled locally on the direct
sum of the normal bundles to each local branch of $V$:
$\displaystyle
N_{V^{k},p}=\mathop{\bigoplus}\limits_{q\in\iota^{-1}(p)}N_{q}=\mathop{\bigoplus}\limits_{i\in
I}N_{p_{i}}$ (A.1.)
where $\iota^{-1}(p)=\{p_{i}\;|\;i\in I\}$ indexes the $k$ local branches of
$V$ meeting at $p\in\overset{\circ}{V}\vphantom{V}^{k}$. When $V$ does not
have simple normal crossings, these local branches may globally intertwine. The
global monodromy of $N_{V^{k}}$ is then determined by the monodromy of the
restriction
$\displaystyle\iota:\iota^{-1}(\overset{\circ}{V}\vphantom{V}^{k})\rightarrow\overset{\circ}{V}\vphantom{V}^{k}$
(A.2.)
which describes a degree $k$ cover of $\overset{\circ}{V}\vphantom{V}^{k}$,
its fibers indexing the $k$ independent directions of $N_{V^{k}}$ at $p$. The
original map $\iota$ is not a covering over the singular locus of $V^{k}$, but
it extends as a covering over the normalization $\widetilde{V^{k}}$ of this
stratum. The following lemma follows from the local model of a normal
crossings divisor:
###### Lemma A.1.
The closed stratum $V^{k}$ of $V$ has a normalization $\widetilde{V^{k}}$
which comes with a normal crossings divisor $W^{k+1}$ corresponding to the
inverse image of the higher depth stratum $V^{k+1}$:
$\displaystyle\iota_{k}:(\widetilde{V^{k}},W^{k+1})\rightarrow(V^{k},V^{k+1}).$
(A.3.)
The normal bundle to $V^{k}$
$\displaystyle\pi:N_{V^{k}}\rightarrow\widetilde{V^{k}}$ (A.4.)
is obtained as in (A.1.) from the line bundle $N\rightarrow\widetilde{V}$ and
a degree $k$ cover $\iota$ of $\widetilde{V^{k}}$ which extends (A.2.). By
separately compactifying each normal direction we get a
$({\mathbb{P}}^{1})^{k}$ bundle
$\displaystyle\pi_{k}:\mathbb{F}_{k}\rightarrow\widetilde{V^{k}}$ (A.5.)
which comes with a normal crossings divisor $D_{k,0}\cup D_{k,\infty}\cup
F_{k}$ obtained by considering together its zero and infinity divisors plus
the fiber $F_{k}$ over the divisor $W^{k+1}$ in the base. The normalizations
of these divisors come naturally identified
$\displaystyle\widetilde{F_{k}}\mathop{\longrightarrow}\limits^{\rho_{k}}_{\cong}\widetilde{D_{k+1,\infty}}\mathop{\longrightarrow}\limits_{\cong}\widetilde{D_{k+1,0}}$
(A.6.)
and their normal bundles are canonically dual to each other
$\displaystyle N_{F_{k}}\cong(N_{D_{k+1,\infty}})^{*}\cong N_{D_{k+1,0}}.$
(A.7.)
Proof. The local model allows us to construct the normalization
$\widetilde{V^{k}}$ of the closed stratum ${V^{k}}$ as a smooth manifold,
obtained by separating the branches that come together to form the next
stratum $V^{k+1}$ inside ${V^{k}}$, and simultaneously construct the
normalization $\widetilde{W^{k+1}}$ of the corresponding divisor $W^{k+1}$ of
(A.3.). The model for their normal bundles is induced from
$N\rightarrow\widetilde{V}$.
There are several slight complications when $k\geq 2$. First, there is no
direct map from the resolution $\widetilde{V^{k}}$ of the stratum $V^{k}$ of
$V$ to the depth $k$ stratum $\widetilde{V}^{k}$ of $\widetilde{V}$ (over
which $N$ is defined). However, the normalization
$\widetilde{\widetilde{V}^{k}}$ of the depth $k$ stratum of $\widetilde{V}$ is
the degree $k$ cover of $\widetilde{V^{k}}$ whose fiber can still be thought of
as an indexing set for the $k$ local branches of $V$ meeting at $p$:
$\displaystyle\begin{array}{ccc}\widetilde{\widetilde{V}^{k}}&\mathop{\longrightarrow}\limits^{\iota}&\widetilde{V^{k}}\\
{\scriptstyle\iota_{k}}\downarrow&&\downarrow{\scriptstyle\iota_{k}}\\
\widetilde{V}\supset{\widetilde{V}}^{k}&\mathop{\longrightarrow}\limits^{\iota}&{V^{k}}\subset
V\end{array}$ (A.8.)
Here the vertical arrows are normalization maps. Therefore the pullback
$\iota_{k}^{*}N$ of normal bundle $N\rightarrow\widetilde{V}$ still induces
the same description of the normal bundle of $V^{k}$: at each point
$p\in\widetilde{V^{k}}$,
$\displaystyle
N_{V^{k},p}=\mathop{\bigoplus}\limits_{q\in\iota^{-1}(p)}(\iota_{k}^{*}N)_{q}=\mathop{\bigoplus}\limits_{i\in
I}N_{p_{i}}$ (A.9.)
where $\iota^{-1}(p)=\{p_{i}\;|\;i\in I\}$ is the indexing set for the $k$ local
branches of $V$ meeting at $p$. Again, the cover $\iota$ may have nontrivial
global monodromy which will induce a global monodromy in the normal bundle
$N_{V^{k}}$ of $V^{k}$.
Similarly, the normalization $\widetilde{W^{k}}$ of the normal crossings
divisor $W^{k}$ of $\widetilde{V^{k-1}}$ is also a degree $k$ cover of the
normalization $\widetilde{V^{k}}$ of the depth $k$ stratum $V^{k}$ of $V$:
$\displaystyle\begin{array}{ccc}\widetilde{W^{k}}&\mathop{\longrightarrow}\limits^{\iota}&\widetilde{V^{k}}\\
{\scriptstyle\iota_{k}}\downarrow&&\downarrow{\scriptstyle\iota_{k}}\\
\widetilde{V^{k-1}}\supset
W^{k}&\mathop{\longrightarrow}\limits^{\iota_{k-1}}&{V^{k}}\subset
V^{k-1}\end{array}$ (A.10.)
The vertical maps are resolution maps, while the fiber of the top map
corresponds to the indexing set of the $k$ branches of $V$; the bottom map is
the restriction of $\iota_{k-1}:\widetilde{V^{k-1}}\rightarrow V^{k-1}$ to the
corresponding divisor. This means in particular that the upper left
corners of (A.8.) and (A.10.) are the same, even though the lower left corners
give two different factorizations:
$\displaystyle\widetilde{W^{k}}=\widetilde{\widetilde{V}^{k}}\mathop{\longrightarrow}\limits^{\iota}\widetilde{V}^{k}$
(A.11.)
The fiber of this map $\iota$ indexes the $k$ local branches of $V$ coming
together at a point $p\in\widetilde{V}^{k}$.
Next, the $({\mathbb{P}}^{1})^{k}$ bundle
$\displaystyle\pi_{k}:\mathbb{F}_{k}\longrightarrow\widetilde{V^{k}}$
is obtained by separately compactifying each of the $k$ normal directions to
$V$ along $V^{k}$, see (A.9.). This means that its fiber at a point
$p\in\widetilde{V^{k}}$ is
$\displaystyle\mathop{\times}\limits_{i\in
I}{\mathbb{P}}(N_{p_{i}}\oplus{\mathbb{C}})$ (A.12.)
where $I$ is an indexing set of the $k$ local branches of $V$ meeting at $p$.
Globally, the ${\mathbb{P}}^{1}$ factors of (A.12.) may intertwine.
Finally, the fiber divisor $F_{k}$ of $\mathbb{F}_{k}$ is by definition the
inverse image of the divisor $W^{k+1}$ of $\widetilde{V^{k}}$. Therefore its
normalization $\widetilde{F_{k}}$ is precisely the $({\mathbb{P}}^{1})^{k}$
bundle over the normalization $\widetilde{W^{k+1}}$, whose fiber is (A.12.).
Moreover, the normal bundle to $F_{k}$ is the pull-back of the normal bundle
of $W^{k+1}$ inside $\widetilde{V^{k}}$, which itself is the pullback of
$N\rightarrow\widetilde{V}$ by $\iota_{k}$ of diagram (A.8.), see also
(A.11.):
$\displaystyle N_{F_{k}}=\pi_{k}^{*}N_{W^{k+1}}=\pi_{k}^{*}\iota_{k}^{*}N$
(A.13.)
On the other hand, the infinity divisor $D_{k+1,\infty}$ is by definition the
divisor in $\mathbb{F}_{k+1}$ where at least one of the $k+1$ fiber
coordinates $({\mathbb{P}}^{1})^{k+1}$ is $\infty$, so its resolution
$\widetilde{D_{k+1,\infty}}$ is a $({\mathbb{P}}^{1})^{k}$ bundle; the base of
this bundle is itself a bundle over $\widetilde{V^{k+1}}$, whose fiber
consists of $k+1$ points, one for each of the $k+1$ directions of $V$ coming
together. Therefore by (A.11.), $\widetilde{D_{k+1,\infty}}$ is also the
$({\mathbb{P}}^{1})^{k}$ bundle over the normalization $\widetilde{W^{k+1}}$
whose fiber is (A.12.). Furthermore, the normal bundle to the infinity divisor
is dual to the normal bundle to the zero divisor, and thus it is canonically
identified with the corresponding pullback of $N^{*}\rightarrow\widetilde{V}$.
This means that we have a natural identification (A.6.) as
$({\mathbb{P}}^{1})^{k}$ bundles over $\widetilde{W^{k+1}}$ and also the
corresponding duality (A.7.) of their normal bundles. $\Box$
###### Example A.2.
(Local Structure) One can see all these different stratifications and their
resolutions in the local model, when $V$ is the union of the $n$ coordinate
hyperplanes in ${\mathbb{C}}^{n}$, so its resolution $\widetilde{V}$ consists
of $n$ disjoint planes ${\mathbb{C}}^{n-1}$. The strata $V^{k}$ of $V$ are
given by the vanishing of at least $k$ coordinates in ${\mathbb{C}}^{n}$,
while the strata $\iota^{-1}(V^{k})$ of $\widetilde{V}$ are given by the
vanishing of at least $k-1$ coordinates in each one of the $n$ disjoint planes
${\mathbb{C}}^{n-1}$ of $\widetilde{V}$. Therefore the resolution
$\widetilde{V^{k}}$ of $V^{k}$ consists of $\binom{n}{k}$ planes
${\mathbb{C}}^{n-k}$, while the resolution $\widetilde{\widetilde{V}^{k}}$ of
${\widetilde{V}}^{k}$ consists of $n\binom{n-1}{k-1}=k\binom{n}{k}$ such
planes, so the map
$\iota:\widetilde{\widetilde{V}^{k}}\rightarrow\widetilde{V^{k}}$ of (A.8.) is
indeed a degree $k$ cover, whose fiber labels the $k$ planes of $V$ coming
together at a point $p\in\widetilde{V^{k}}$. Finally, the divisor $W^{k+1}$
inside $\widetilde{V^{k}}$ corresponds to the coordinate hyperplanes in
$\widetilde{V^{k}}$, thus its resolution $\widetilde{W^{k+1}}$ consists of
$n\binom{n-1}{k}=(k+1)\binom{n}{k+1}$ planes ${\mathbb{C}}^{n-k-1}$, which is
the same as the resolution of $\widetilde{V}^{k+1}$. This explains the diagram
(A.10.) and the identification (A.11.).
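The two counting identities used in this example can be checked mechanically; the short script below verifies $n\binom{n-1}{k-1}=k\binom{n}{k}$ and $n\binom{n-1}{k}=(k+1)\binom{n}{k+1}$ for a range of $n$ and $k$:

```python
from math import comb

# Verify the counting identities of Example A.2 for the union of the n
# coordinate hyperplanes: the resolution of the depth-k stratum of the
# resolved divisor has n*C(n-1, k-1) = k*C(n, k) planes (a degree k cover
# of the C(n, k) planes of V^k~), and the resolution of W^{k+1} has
# n*C(n-1, k) = (k+1)*C(n, k+1) planes.
for n in range(1, 12):
    for k in range(1, n + 1):
        assert n * comb(n - 1, k - 1) == k * comb(n, k)
        assert n * comb(n - 1, k) == (k + 1) * comb(n, k + 1)
print("both identities hold for all tested n, k")
```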
We end this Appendix with a note about the space of $V$-compatible parameters
$(J,\nu)$ used in this paper. First of all, $J\in End(TX)$ is an almost
complex structure on $X$ compatible with $\omega$ and the perturbation $\nu$
is a section in a bundle over the product of the universal curve $\cal U$ and
$X$, see Remark 2.1. Now assume $V$ is a symplectic normal crossings divisor
in $X$. We say that the pair $(J,\nu)$ is $V$-compatible if the following
three conditions on their 1-jet along $V$ are satisfied (cf Definition 3.2 of
[IP2]):
1. (a)
$J$ preserves $TV$ and $\nu^{N}|_{V}=0$;
and for all $\xi\in N_{V}$, $v\in TV$ and $w\in TC$:
2. (b)
$[(\nabla_{\xi}J+J\nabla_{J\xi}J)(v)]^{N}=[(\nabla_{v}J)\xi+J(\nabla_{Jv}J)\xi]^{N}$
3. (c)
$[(\nabla_{\xi}\nu+J\nabla_{J\xi}\nu)(w)]^{N}=[(J\nabla_{\nu(w)}J)\xi]^{N}$
Here $\xi\mapsto\xi^{N}$ is the orthogonal projection onto the normal
bundle $N_{V}$ of $V$; this uses the metric defined by $\omega$ and $J$, and
hence depends on $J$. Equivalently, assume the model for the normal bundle of
$V$ is $N_{V}\rightarrow\widetilde{V}$, where $\iota:\widetilde{V}\rightarrow
X$ is the model for $V$. Because by definition $(J,\omega)$ are adapted to the
divisor, in particular all the branches of $V$ are symplectic and
preserved by $J$; this means that the corresponding metric induces a splitting
$\iota^{*}TX=T\widetilde{V}\oplus N_{V}$, and so the conditions above can be
understood, after pullback by $\iota$, to take place over the normalization
$\widetilde{V}$ of $V$.
###### Remark A.3.
We can always replace the perturbation $\nu$ by the path $t\nu$, $t\in[0,1]$,
which gives us a retraction to the space ${\cal J}_{V}(X)$ of $V$-compatible
$J$’s, i.e. those that satisfy the first two conditions. There are also
further projections
$\displaystyle{\cal J}_{V}(X)\longrightarrow{\cal
J}_{V}^{1}(TX)\longrightarrow{\cal J}_{V}(TX)$ (A.14.)
that send a $V$-compatible $J$ on $X$ to its 1-jet normal to $V$, and then to
its restriction along $V$. In the case $V$ is a smooth symplectic submanifold
of $X$ we know that ${\cal J}_{V}(X)$ is nonempty and contractible. But this
may no longer be the case when $V$ is singular. Already Example 1.9 shows
that ${\cal J}_{V}(TX)$ being nonempty is not automatic, so the
existence of a $J$ which preserves $V$ became part of what we mean by a
symplectic normal crossings divisor $V$. As we explain below, this is enough
to guarantee the existence of a $V$-compatible $J$, i.e. one that also
satisfies the condition (b) on its 1-jet along $V$. The arguments below
also imply that the space of $V$-compatible $J$’s is contractible.
First of all, the fiber of the first map in (A.14.) is clearly contractible,
and the arguments in the Appendix of [IP1] show that the fiber of the second
map is also contractible. The target of the last map is the space of complex
structures $J$ on $TX|_{V}$ which preserve $TV$ and are compatible with
$\omega$. These correspond to the space of metrics $g$ on the bundle
$\iota^{*}TX$ over the normalization $\widetilde{V}$ of $V$ which descend to
the restriction of $TX$ to each depth $k$ stratum of $V$. But this space of
metrics is convex, and thus ${\cal J}_{V}(TX)$ is contractible (when
nonempty).
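The convexity claim can be spelled out: a convex combination of two such metrics

```latex
g_t \;=\; (1-t)\,g_0 + t\,g_1, \qquad t\in[0,1],
```

is again symmetric and positive definite, and the condition of descending to each depth $k$ stratum is linear in $g$, so it is preserved along the segment; a nonempty convex set is contractible via the linear retraction onto any fixed $g_0$.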
In particular, when the branches of $V$ are symplectic and orthogonal with
respect to $\omega$, one can first construct locally a metric $g$ for which
$N^{\omega}=N^{g}$ around each point of $V$; these local metrics can then be
patched together to give a compatible metric on $TX|_{V}$, and thus an
$\omega$-compatible $J$ on $TX|_{V}$. Finally, one can extend this $J$ to a
$V$-compatible one.
In the discussion above we have assumed that the symplectic form $\omega$ is
fixed. But we can similarly look at the deformation space of $V$-compatible
triples $(\omega,J,\nu)$, or even allow deformations of $V$ via smooth
deformations of the immersion $\iota:\widetilde{V}\rightarrow X$ and of its
normal bundle $N\rightarrow\widetilde{V}$, as long as the image stays a normal
crossings divisor.
## References
* [AGV] Abramovich, Dan; Graber, Tom; Vistoli, Angelo, Gromov-Witten theory of Deligne-Mumford stacks, Amer. J. Math. 130 (2008), 1337-1398.
* [AC] Abramovich, Dan; Chen, Qile, Stable logarithmic maps to Deligne-Faltings pairs II, preprint, arXiv:1102.4531.
* [Au] Auroux, Denis, A remark about Donaldson’s construction of symplectic submanifolds, J. Symplectic Geom. 1 (2002), 647-658.
* [Au2] Auroux, Denis, Mirror symmetry and $T$-duality in the complement of an anticanonical divisor, J. Gokova Geom. Topol. 1 (2007), 51-91.
* [Da] Davis, Joshua, Degenerate Relative Gromov-Witten Invariants and Symplectic Sums, University of Wisconsin-Madison Ph.D. thesis, 2005.
* [Do] Donaldson, S. K., Symplectic submanifolds and almost-complex geometry, J. Differential Geom. 44 (1996), 666-705.
* [Do2] Donaldson, S. K., Lefschetz pencils on symplectic manifolds, J. Differential Geom. 53 (1999), 205-236.
* [EGH] Eliashberg, Y.; Givental, A.; Hofer, H., Introduction to symplectic field theory, GAFA 2000 (Tel Aviv, 1999), Geom. Funct. Anal. 2000, Special Volume, Part II, 560-673.
* [Go] Gompf, Robert E., A new construction of symplectic manifolds, Ann. of Math. 142 (1995), 527-595.
* [GoS] Gompf, Robert E.; Stipsicz, Andras I., 4-manifolds and Kirby calculus, Graduate Studies in Mathematics, 20, AMS, Providence, RI, 1999.
* [GrS] Gross, Mark; Siebert, Bernd, Logarithmic Gromov-Witten invariants, preprint, arXiv:1102.4322.
* [I] Ionel, Eleny, Topological recursive relations in $H^{2g}({\cal M}_{g,n})$, Invent. Math. 148 (2002), 627-658.
* [IP] Ionel, Eleny-Nicoleta; Parker, Thomas H., Gromov-Witten invariants of symplectic sums, Math. Res. Lett. 5 (1998), 563-576.
* [IP1] Ionel, Eleny-Nicoleta; Parker, Thomas H., Relative Gromov-Witten invariants, Ann. of Math. 157 (2003), 45-96.
* [IP2] Ionel, Eleny-Nicoleta; Parker, Thomas H., The symplectic sum formula for Gromov-Witten invariants, Ann. of Math. 159 (2004), 935-1025.
* [LR] Li, An-Min; Ruan, Yongbin, Symplectic surgery and Gromov-Witten invariants of Calabi-Yau 3-folds, Invent. Math. 145 (2001), 151-218.
* [Li] Li, Jun, A degeneration formula of GW-invariants, J. Differential Geom. 60 (2002), 199-293.
* [Lo] Looijenga, Eduard, Smooth Deligne-Mumford compactifications by means of Prym level structures, J. Algebraic Geom. 3 (1994), 283-293.
* [MS] McDuff, Dusa; Symington, Margaret, Associativity properties of the symplectic sum, Math. Res. Lett. 3 (1996), 591-608.
* [P] Parker, Brett, Gromov-Witten invariants of exploded manifolds, preprint, arXiv:1102.0158.
* [S] Symington, Margaret, A new symplectic surgery: the $3$-fold sum, Topology Appl. 88 (1998), 27-53.
1 Department of Astronomy, Nanjing University, Nanjing 210093, China
2 Department of Physics, Yunnan University, Kunming 650091, China
3 Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University),
Ministry of Education, China
email: hyf@nju.edu.cn
# A New Three-Parameter Correlation for Gamma-ray Bursts with a Plateau Phase
in the Afterglow
M. Xu1,2 Y. F. Huang1,3
(Received 00 00, 0000; accepted 00 00, 0000)
###### Abstract
Aims. Gamma-ray bursts (GRBs), with their huge burst energies, high
luminosities and high redshifts, have great advantages for probing the
Universe. A few interesting luminosity correlations of GRBs have been used to
test cosmological models. In particular, for a subsample of long GRBs with
known redshifts and a plateau phase in the afterglow, a correlation between
the end time of the plateau phase (in the GRB rest frame) and the
corresponding X-ray luminosity has been found.
Methods. In this paper, we re-analyze the subsample and find that a
significantly tighter correlation exists when we add a third parameter, i.e.
the isotropic $\gamma$-ray energy release, into the consideration. We use
Markov chain Monte Carlo techniques to get the best-fit coefficients.
Results. A new three-parameter correlation is found for the GRBs with an
obvious plateau phase in the afterglow. The best-fit correlation is
$L_{\rm X}\propto T_{\rm a}^{-0.87}E_{\gamma,\rm iso}^{0.88}$.
Additionally, both long and intermediate-duration GRBs are consistent with
the same three-parameter correlation equation.
Conclusions. It is argued that the new three-parameter correlation is
consistent with the hypothesis that the subsample of GRBs with a plateau phase
in the afterglow is associated with the birth of rapidly rotating magnetars,
and that the plateau is due to continuous energy injection from the
magnetar. It is suggested that the newly born millisecond magnetars associated
with GRBs might provide a good standard candle in the Universe.
###### Key Words.:
gamma rays: bursts - ISM: jets and outflows
## 1 Introduction
Gamma-ray bursts (GRBs) are among the most powerful and energetic explosive
events in the Universe. The observations of GRBs up to redshifts higher than 8
(Salvaterra et al. 2009; Cucchiara et al. 2011) make them among the
farthest known astrophysical sources. Taking their considerable event rate
into consideration, GRBs may be good candidates for probing our
Universe. Several interesting correlations have been suggested for GRBs (Amati
et al. 2002; Norris et al. 2000; Ghirlanda et al. 2004a; Liang & Zhang 2005;
Dainotti et al. 2010; Qi & Lu 2010). Based on them, the cosmology parameters
have been tentatively constrained (e.g., Fenimore & Ramirez-Ruiz 2000;
Schaefer 2003, 2007; Dai et al. 2004; Ghirlanda et al. 2004b, 2006; Amati et
al. 2008; Wang & Dai 2006; Dainotti et al. 2008; Wang et al. 2009, 2011).
To derive a meaningful constraint on the cosmological parameters, the most
important thing is to find a credible standard candle relation for GRBs.
Currently, no such relation can be established when all GRBs are involved
(Butler et al. 2009; Yu et al. 2009). The reason may be that different GRBs
are produced via different mechanisms. Interestingly, for a subsample of
long GRBs with known redshifts and with a plateau phase in the afterglow, an
anti-correlation has been reported between the end time of the
plateau phase ($T_{\rm a}$, measured in the GRB rest frame) and the
corresponding X-ray luminosity ($L_{\rm X}$) at that moment (Dainotti et al.
2010, hereafter D2010). In this paper, we refer to Dainotti et al.’s
two-parameter correlation as the L-T correlation. The intrinsic scatter of
this correlation is still too large for it to be directly applied as a
redshift estimator (Dainotti et al. 2011). Additionally, normal long-duration
GRBs and intermediate-duration GRBs do not obey the same correlation equation
(D2010), and the intermediate class seems to be more scattered in the plot.
In this study, we add a third parameter, i.e. the isotropic
$\gamma$-ray energy release ($E_{\gamma,\rm iso}$), into the correlation. We
find that the new three-parameter correlation (designated the L-T-E
correlation) is much tighter than the previous L-T correlation. It is also
obeyed by both the long GRBs and the intermediate class. The L-T-E correlation
may hopefully give a better measure of our Universe. In Section 2, we
describe our GRB sample and the method of data analysis. Our results are
presented in Section 3. Section 4 contains our discussion and conclusions.
## 2 Sample & Data analysis
According to $Swift$ observations, many GRBs show a plateau phase in the early
afterglow, prior to the normal power-law decay phase (Zhang et al. 2006;
Nousek et al. 2006). In this study, we mainly concentrate on the GRBs
with such a characteristic. All our GRBs are taken from Dainotti et al.’s
sample (D2010). In D2010’s data table, 77 GRBs with known redshift and with a
plateau phase in the afterglow light curve are initially included.
After removing the intermediate class GRBs and some GRBs with relatively large
errors, they finally limited their main statistics to only 62 long GRBs. Here,
we have re-selected the events by taking into account the following three
criteria: (1) the plateau should be obvious (GRBs 050318,
050603, 060124, 060418, 061007, 070518 and 071031 are removed, since
their plateau phase is not clear enough); (2) the data in the plateau phase
should be rich enough to show the profile of the plateau and its end time
(GRBs 050820A, 060512, 060904 and 060124 are removed due to this
constraint); and (3) there should be no flares during the plateau
phase, since flares may affect the shape of the plateau light curve and lead
to errors in the quantities that we are interested in (GRBs 050904, 050908,
060223A and 060526 are removed according to this condition). As a
result, our “golden sample” consists of 55 events in total, i.e., 47 long
GRBs and 8 intermediate class GRBs (an intermediate class GRB is characterized
by a short initial burst followed by an extended low-intensity emission phase;
Norris et al. 2006). The redshifts of our sample range from 0.08 to 8.26.
For the end times of the plateau phase ($T_{\rm a}$, in the GRB rest frame)
and the X-ray afterglow luminosities at that moment ($L_{\rm X}\equiv L_{\rm
X}(T_{\rm a})$), we use the values of D2010. In D2010, $T_{\rm a}$ is derived
through a phenomenological fitting model (Willingale et al. 2007), and $L_{\rm
X}$ is derived from the following equation,
$L_{\rm X}=\frac{4\pi D_{\rm L}^{2}(z)F_{\rm X}}{(1+z)^{1-\beta_{\rm a}}},$
(1)
where $z$ is the redshift, $D_{\rm L}(z)$ is the luminosity distance, $F_{\rm
X}$ is the flux observed by $Swift$-XRT at the end time of the plateau phase,
and $\beta_{\rm a}$ is the spectral index of the X-ray afterglow (Evans et al.
2009).
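Eq. (1) is straightforward to evaluate numerically. The sketch below (Python, illustrative only; the cosmology values $H_{0}=69.7\ \rm km\,s^{-1}\,Mpc^{-1}$ and $\Omega_{\rm M}=0.291$ are those adopted in Section 3) computes the flat $\Lambda$CDM luminosity distance by quadrature and applies the k-correction $(1+z)^{1-\beta_{\rm a}}$:

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 2.99792458e5      # speed of light [km/s]
H0 = 69.7                  # Hubble constant [km/s/Mpc], as in Sect. 3
OMEGA_M = 0.291            # matter density parameter, flat Lambda-CDM
MPC_CM = 3.0857e24         # 1 Mpc in cm

def luminosity_distance_cm(z):
    """Luminosity distance in a flat Lambda-CDM cosmology, in cm."""
    integrand = lambda zp: 1.0 / np.sqrt(OMEGA_M * (1 + zp)**3 + (1 - OMEGA_M))
    comoving, _ = quad(integrand, 0.0, z)
    return (1 + z) * (C_KM_S / H0) * comoving * MPC_CM

def l_x(z, f_x, beta_a):
    """Eq. (1): X-ray luminosity at the end of the plateau.
    f_x: observed flux [erg cm^-2 s^-1]; beta_a: afterglow spectral index."""
    d_l = luminosity_distance_cm(z)
    return 4.0 * np.pi * d_l**2 * f_x / (1 + z)**(1 - beta_a)
```

Note that for $\beta_{\rm a}=1$ the k-correction factor drops out and the expression reduces to $L_{\rm X}=4\pi D_{\rm L}^{2}F_{\rm X}$.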
The isotropic $\gamma$-ray energy release in the prompt emission phase is
$E_{\gamma,\rm iso}=4\pi D_{\rm L}^{2}(z)S_{\rm bolo}/(1+z),$ (2)
where $S_{\rm bolo}$ is the bolometric fluence, and can be taken from Wang et
al. (2011). In the study of Wang et al. (2011), $S_{\rm bolo}$ is calculated
from the observed energy spectrum $\Phi(E)$ as (Schaefer 2007):
$S_{\rm
bolo}=S\times\frac{\int_{1/(1+z)}^{10^{4}/(1+z)}E\Phi(E)dE}{\int_{E_{\rm
min}}^{E_{\rm max}}E\Phi(E)dE},$ (3)
where $S$ is the observed fluence in units of $\rm erg\cdot cm^{-2}$ for each
GRB, and ($E_{\rm min}$, $E_{\rm max}$) are the detector thresholds. The
energy spectrum $\Phi(E)$ is assumed to be the Band function (Band et al.
1993),
$\Phi(E)=\left\\{\begin{array}[]{ll}AE^{\alpha}e^{-(2+\alpha)E/E_{\rm peak}},&E\leq[(\alpha-\beta)/(2+\alpha)]E_{\rm peak},\\\ BE^{\beta},&{\rm otherwise},\end{array}\right.$ (4)
where $E_{\rm peak}$ is the peak energy of the spectrum, and $\alpha$, $\beta$
are the power-law indices for photon energies below and above the break energy,
respectively. Finally, the complete data set of all our 55 GRBs is shown in
Table 1, where the error bars denote the $1\sigma$ range.
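The k-correction of Eq. (3) can be sketched numerically. In the fragment below (illustrative; the Band parameters $\alpha=-1$, $\beta=-2.3$, $E_{\rm peak}=250$ keV and the 15-150 keV detector band are assumed for the example, not taken from the paper), the normalization $B$ is fixed by continuity at the break energy:

```python
import numpy as np
from scipy.integrate import quad

def band(e, alpha=-1.0, beta=-2.3, e_peak=250.0, a_norm=1.0):
    """Band photon spectrum of Eq. (4), energies in keV;
    B is fixed by continuity at the break energy."""
    e_break = (alpha - beta) / (2.0 + alpha) * e_peak
    if e <= e_break:
        return a_norm * e**alpha * np.exp(-(2.0 + alpha) * e / e_peak)
    b_norm = a_norm * e_break**(alpha - beta) * np.exp(-(alpha - beta))
    return b_norm * e**beta

def s_bolo(s_obs, z, e_min, e_max, alpha=-1.0, beta=-2.3, e_peak=250.0):
    """Eq. (3): rescale the observed fluence s_obs (erg/cm^2) measured in
    the detector band [e_min, e_max] to the rest-frame 1-10^4 keV band."""
    f = lambda e: e * band(e, alpha, beta, e_peak)
    num, _ = quad(f, 1.0 / (1 + z), 1.0e4 / (1 + z), limit=200)
    den, _ = quad(f, e_min, e_max, limit=200)
    return s_obs * num / den
```

The overall normalization $A$ cancels in the ratio, so only the spectral shape matters.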
We investigate whether an intrinsic correlation exists between the three
parameters $L_{\rm X}$, $T_{\rm a}$ and $E_{\gamma,\rm iso}$ as follows:
${\rm log}(\frac{L_{\rm X}}{10^{47}\rm erg\cdot s^{-1}})=a+b\,{\rm log}(\frac{T_{\rm a}}{\rm 10^{3}s})+c\,{\rm log}(\frac{E_{\gamma,\rm iso}}{\rm 10^{53}erg}),$
(5)
where $a$, $b$, and $c$ are constants to be determined from
the fit to the observational data. In this equation, $a$ is the intercept,
while $b$ and $c$ are the power-law indices of time and
energy when we approximate $L_{\rm X}$ as a power-law function of $T_{\rm a}$
and $E_{\gamma,\rm iso}$. Due to the complexity of GRB sampling, an intrinsic
scattering parameter, $\sigma_{\rm int}$, is introduced in our analysis, as is
usually done by other researchers (Reichart 2001; Guidorzi et al. 2006; Amati
et al. 2008). This extra variable, which follows a normal distribution
$N(0,\sigma_{\rm int}^{2})$, represents the contribution to $L_{\rm X}$ from
unknown hidden variables.
To derive the best fit to the observational data with the above three-
parameter correlation, we use the method presented in D’Agostini (2005). For
simplicity, we first define $x_{1}={\rm log}(\frac{T_{\rm a}}{10^{3}\rm
s})$, $x_{2}={\rm log}(\frac{E_{\gamma,\rm iso}}{10^{53}\rm erg})$, and
$y={\rm log}(\frac{L_{\rm X}}{10^{47}\rm erg/s})$. The joint likelihood
function for the coefficients $a$, $b$, $c$ and $\sigma_{\rm int}$ is
(D’Agostini 2005)
$\begin{array}[]{rcl}\mathcal{L}(a,b,c,\sigma_{\rm
int})\propto\displaystyle{\prod_{i}}\frac{1}{\sqrt{\sigma_{\rm
int}^{2}+\sigma_{y_{i}}^{2}+b^{2}\sigma_{x_{1,i}}^{2}+c^{2}\sigma_{x_{2,i}}^{2}}}\\\
\times\exp[-\frac{(y_{i}-a-bx_{1,i}-cx_{2,i})^{2}}{2(\sigma_{\rm
int}^{2}+\sigma_{y_{i}}^{2}+b^{2}\sigma_{x_{1,i}}^{2}+c^{2}\sigma_{x_{2,i}}^{2})}],\end{array}$
(6)
where $i$ labels the GRBs in our sample.
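The logarithm of Eq. (6) is a one-liner to implement; the sketch below (Python, vectorized over the sample) is a direct transcription:

```python
import numpy as np

def log_likelihood(theta, x1, x2, y, sx1, sx2, sy):
    """Log of the D'Agostini (2005) likelihood, Eq. (6): a plane fit
    y = a + b*x1 + c*x2 with errors on all three variables plus an
    intrinsic scatter sigma_int added in quadrature."""
    a, b, c, sig_int = theta
    var = sig_int**2 + sy**2 + b**2 * sx1**2 + c**2 * sx2**2
    resid = y - a - b * x1 - c * x2
    return -0.5 * np.sum(np.log(var) + resid**2 / var)
```

Maximizing this over $(a,b,c,\sigma_{\rm int})$ reproduces the fit; note that the slopes $b$ and $c$ also enter the effective variance, which is what distinguishes this from ordinary least squares.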
In order to get the best-fit coefficients, the so-called Markov chain Monte
Carlo (MCMC) technique is used in our calculations. For each Markov chain, we
generate $10^{6}$ samples according to the likelihood function. Then we derive
the coefficients $a$, $b$, $c$ and $\sigma_{\rm int}$ from the statistics of
the samples.
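The paper does not specify which sampler is used, so purely as an illustration, a minimal random-walk Metropolis sampler over such a log-likelihood might look like:

```python
import numpy as np

def metropolis(log_post, theta0, step, n_samples=20000, seed=0):
    """Minimal random-walk Metropolis sampler (an illustrative sketch,
    not the paper's actual implementation)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        # accept with probability min(1, exp(lp_prop - lp))
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain
```

The best-fit values and $1\sigma$ errors then follow from the marginal histograms of the chain, as done in Figure 2.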
Our likelihood function can also be conveniently applied to the two-parameter
L-T correlation studied by D2010, by simply taking $c=0$. We have checked
our method by comparing our result for the L-T correlation with that of D2010.
The results are generally consistent, which confirms the reliability of our
code.
## 3 Results
In our study, we assume a flat $\Lambda\rm CDM$ cosmology with
$H_{0}=69.7\ \rm km\cdot s^{-1}\cdot Mpc^{-1}$ and $\Omega_{\rm
M}=0.291$ (the same values as D2010). By using the method described in Section
2, we find that the best-fit correlation between $L_{\rm X}$, $T_{\rm a}$ and
$E_{\gamma,\rm iso}$ is
$\begin{array}[]{rcl}{\rm log}(\frac{L_{\rm X}}{10^{47}{\rm
erg/s}})=1.17-0.87\leavevmode\nobreak\ {\rm log}(\frac{T_{\rm a}}{10^{3}\rm
s})+0.88{\rm\leavevmode\nobreak\ log}(\frac{E_{\gamma,\rm iso}}{10^{53}\rm
erg}).\end{array}$ (7)
Figure 1 shows the above correlation. In this figure, the solid line is
plotted from Eq. (7), and the points represent the 55 GRBs of our sample (the
filled points correspond to the 47 long GRBs and the hollow square points
correspond to the 8 intermediate class GRBs). It is clearly shown that this
three-parameter correlation is tight for all the 55 GRBs.
Figure 1: The best-fit correlation between $L_{\rm X}$, $T_{\rm a}$ and
$E_{\gamma,\rm iso}$ for our “golden sample”. The Y-axis is the X-ray
luminosity at the end time of the plateau phase, i.e. $L_{\rm X}$, in units of
$10^{47}$ erg/s. Note that the X-axis is a combined quantity of $T_{\rm a}$
(in units of $10^{3}$ s) and $E_{\gamma,\rm iso}$ (in units of $10^{53}$ erg),
i.e. $1.17-0.87\log{T_{\rm a}}+0.88\log E_{\gamma,\rm iso}$. The filled points
correspond to the observed data of the 47 long GRBs and the hollow squares
correspond to the 8 intermediate class GRBs. The solid line is plotted from
Eq. (7), which is the best fit of the 55 observational data points.
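For concreteness, Eq. (7) can be wrapped in a one-line predictor (the coefficients are the best-fit values of Eq. (7); inputs and output are in the logarithmic units of Eq. (5)):

```python
def predict_log_lx(log_ta, log_eiso, a=1.17, b=-0.87, c=0.88):
    """Eq. (7): best-fit L-T-E relation. Inputs are log10(T_a / 10^3 s)
    and log10(E_gamma,iso / 10^53 erg); returns log10(L_X / 10^47 erg/s)."""
    return a + b * log_ta + c * log_eiso
```

For example, a burst with $T_{\rm a}=10^{3}$ s and $E_{\gamma,\rm iso}=10^{53}$ erg sits at the intercept, ${\rm log}(L_{\rm X}/10^{47}\,{\rm erg/s})=1.17$.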
Comparing Eqs. (5) and (7), we find that the best values for the constants of
$a$, $b$, and $c$ in Eq. (5) are $a=1.17$, $b=-0.87$, and $c=0.88$
respectively. Figure 1 also clearly shows that there is still obvious scatter
in the L-T-E correlation. To give a quantitative description of the scatter,
we need to derive the $1\sigma$ errors of these constants.
The probability distributions of these constants, as well as of the intrinsic
scattering parameter ($\sigma_{\rm int}$), are displayed in Figure 2. From
this figure, we find that the probability distributions of these coefficients
can be well fitted by Gaussian functions, so we can easily obtain the
$1\sigma$ error bars for these parameters. The best values and the $1\sigma$
errors for the coefficients are $a=1.17\pm 0.09$, $b=-0.87\pm 0.09$,
$c=0.88\pm 0.08$, and $\sigma_{\rm int}=0.43\pm 0.05$, respectively.
Figure 2: The probability distributions of the constants of $a$ (upper left
panel), $b$ (upper right panel), $c$ (lower left panel) in Eq. (5), and the
probability distribution of the intrinsic scattering parameter $\sigma_{\rm
int}$ (lower right panel). According to these panels, the best values and the
$1\sigma$ errors for the coefficients are $a=1.17\pm 0.09$, $b=-0.87\pm 0.09$,
$c=0.88\pm 0.08$, $\sigma_{\rm int}=0.43\pm 0.05$, respectively.
We have also explored the three-parameter correlation for all the 77 GRB
events listed in D2010, using the same analytical method as for our “golden
sample” of 55 GRBs. The best fit result is shown in Figure 3. The best
parameter values and the $1\sigma$ errors for the coefficients are $a=0.81\pm
0.07$, $b=-0.91\pm 0.09$, $c=0.59\pm 0.05$, and $\sigma_{\rm int}=1.15\pm
0.12$. Compared with the result for the “golden sample”, although there is
still an obvious correlation among $L_{\rm X}$, $T_{\rm a}$ and $E_{\gamma,\rm
iso}$ for all the 77 GRBs, the intrinsic scatter of the L-T-E correlation is
now much larger. However, it is important to note that we excluded those 22
events because they most likely do not physically belong to the same group
as the “golden sample” (for example, many of them do not have an obvious
plateau phase), as judged from the three criteria in Section 2.
Figure 3: The best-fit correlation between $L_{\rm X}$, $T_{\rm a}$ and
$E_{\gamma,\rm iso}$ for all the 77 GRBs of D2010. The units of all physical
quantities are the same as in Figure 1. The X-axis is the combined quantity
$0.81-0.91\log{T_{\rm a}}+0.59\log E_{\gamma,\rm iso}$. The filled points
correspond to the observed data of the 55 “golden” GRBs with error bars. The
hollow diamonds correspond to 7 GRBs with error bars too large to be plotted
in the figure, and the hollow circles correspond to the other 15 discarded
events. The solid line is the best fit for all the 77 data points.
In order to directly compare with the L-T correlation suggested by D2010, we
have also fitted the two-parameter correlation for our sample. The best-fit
equation is
$\begin{array}[]{rcl}{\rm log}(\frac{L_{\rm X}}{10^{47}\rm erg/s})=(0.78\pm
0.14)-(1.16\pm 0.16){\rm\leavevmode\nobreak\ log}(\frac{T_{\rm a}}{10^{3}\rm
s}).\end{array}$ (8)
This equation is consistent with the L-T correlation derived in D2010.
Comparing Eq. (8) with Eq. (7) and from Figure 2, we find that the error bars
of the constants in Eq. (8) (i.e. the L-T correlation) are generally
significantly larger than those of Eq. (7) (i.e. the L-T-E correlation).
Additionally, in the two-parameter fitting of Eq. (8), the intrinsic scatter
is $0.85\pm 0.10$, which is also markedly larger than that in the three-
parameter case ($0.43\pm 0.05$). From this comparison, we see that the L-T-E
correlation is indeed significantly tighter than the L-T correlation.
For our GRB sample, we additionally find that the correlation coefficient of
our L-T-E statistics is $r=0.92$ and the chance probability is $P=1.05\times
10^{-20}$. In contrast, the correlation coefficient of the L-T statistics for
the same sample is $r=-0.73$ and the corresponding chance probability is
$P=5.55\times 10^{-8}$. This also shows that the L-T-E correlation is much
tighter than the L-T correlation.
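The quoted $r$ values are ordinary sample Pearson correlation coefficients (for the L-T-E case, between $y$ and the combined quantity $a+bx_{1}+cx_{2}$), e.g.:

```python
import numpy as np

def pearson_r(u, v):
    """Sample Pearson correlation coefficient between two arrays."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    du, dv = u - u.mean(), v - v.mean()
    return np.sum(du * dv) / np.sqrt(np.sum(du**2) * np.sum(dv**2))
```

A value near $+1$ ($-1$) indicates a tight linear correlation (anti-correlation), matching the signs reported above.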
## 4 Discussion and Conclusions
In this paper, a new three-parameter correlation is found for the GRBs with an
obvious plateau phase in the afterglow. This L-T-E correlation is tighter than
the L-T correlation reported in D2010: its intrinsic scatter is significantly
smaller, and its correlation coefficient is correspondingly larger. However,
we note that the intrinsic scatter of the L-T-E correlation is still larger
than that of some correlations derived from prompt GRB emission (Guidorzi et
al. 2006; Amati et al. 2008). In the future, larger samples and more careful
selection might help to improve the result.
The plateau phase (or the shallow decay segment) is an interesting
characteristic of many GRB afterglows (Zhang et al. 2006; Nousek et al.
2006). This phenomenon can be explained as continuous energy injection from
the central engine after the prompt burst (Rees & Mészáros 1998; Dai & Lu
1998; Zhang & Mészáros 2001; Dai 2004; Kobayashi & Zhang 2007; Yu & Dai 2007;
Xu et al. 2009; Yu et al. 2010; Dall’Osso et al. 2011), by two-component
models (Corsi & Mészáros 2009), by structured jets (Eichler & Granot 2006;
Granot et al. 2006; Panaitescu 2007; Yamazaki 2009; Xu & Huang 2010), or even
as due to dust scattering (Shao & Dai 2007; Shao et al. 2008).
According to our L-T-E correlation (Eq. (7)), the X-ray luminosity at the end
time of the plateau can be expressed as a function of the end time and the
isotropic $\gamma$-ray energy release as,
$L_{\rm X}\propto T_{\rm a}^{-0.87\pm 0.09}E_{\gamma,\rm iso}^{0.88\pm 0.08}.$
(9)
We believe that this relation can give useful constraints on the underlying
physics.
For the energy injection model, a natural mechanism is the dipole radiation
from a spinning-down magnetar at the center of the fireball. Note that the
injected energy need not be Poynting flux, but can be electron-positron
pairs (Dai 2004). These pairs interact with the fireball material, leading to
the formation of a relativistic wind bubble. When the energy injection
dominates the dynamical evolution of the external shock, the afterglow
intensity should naturally be proportional to the energy injection power. So,
$L_{\rm X}$ is actually a measure of the energy injection rate. According to
Eq. (9), $L_{\rm X}$ is roughly inversely proportional to the timescale of the
energy injection, $T_{\rm a}$. This hints that the energy reservoir should be
roughly constant, which is consistent with the energy injection model: it
usually assumes that the central engine is a rapidly rotating millisecond
magnetar. In different GRBs, the surface magnetic field intensities of the
central magnetars may be quite different, leading to various energy injection
luminosities and timescales. But the total energy available
for injection is relatively constant (roughly the rotational energy of the
magnetar). It is mainly constrained by the limiting angular velocity of the
magnetar, which in turn is determined by the equation of state of neutron
stars. Additionally, according to Dai (2004), in order to produce an obvious
plateau in the afterglow lightcurve, the total injected energy must be
comparable to the original fireball energy (which may be comparable to
$E_{\gamma,\rm iso}$). This requirement is again roughly consistent with the
term $E_{\gamma,\rm iso}^{0.88\pm 0.08}$ in Eq. (9). Based on the above
analyses, we argue that the L-T-E correlation strongly supports the magnetar
energy injection model. It also indicates that the newly born millisecond
magnetars associated with GRBs provide a good standard candle in our Universe.
Thus the L-T-E correlation may potentially be used to test cosmological
models.
Our sample contains 47 long GRBs and 8 intermediate class GRBs. From Figure 1,
we see that both of these classes are consistent with the same L-T-E
correlation. However, note that they behave very differently in the framework
of the two-parameter L-T correlation. This is another important advantage of
our three-parameter correlation. It indicates that magnetars may also form in
intermediate class GRBs, and that their limiting spin is similar to that of
the magnetars born in long GRBs. A natural question is whether short GRBs with
a plateau phase in the afterglow also obey the same correlation.
Unfortunately, the number of short GRBs meeting the requirement is currently
too small.
It is worth noting that much interesting physics may be involved in newly
born magnetars (Dall’Osso et al. 2009). The topics include the emission of
gravitational waves, the cooling process, the evolution of the magnetic axis,
etc. Some of this physics may subtly affect the energy injection process of
the newly born magnetar. We believe that further studies of the new
three-parameter correlation may give useful constraints on the physics of
newly born magnetars.
###### Acknowledgements.
We thank the anonymous referee for many useful suggestions and
comments. We also would like to thank Z. G. Dai, S. Qi, and F. Y. Wang for
helpful discussions. This work was supported by the National Natural Science
Foundation of China (Grant No. 11033002), and the National Basic Research
Program of China (973 Program, Grant No. 2009CB824800).
## References
* (1) Amati, L., et al. 2002, A&A, 390, 81
* (2) Amati, L., et al. 2008, MNRAS, 391, 577
* (3) Band, D., et al. 1993, ApJ, 413, 281
* (4) Butler, N. R., Kocevski, D., & Bloom, J. S. 2009, ApJ, 694, 76
* (5) Corsi, A., & Mészáros, P. 2009, ApJ, 702, 1171
* (6) Cucchiara, A., et al. 2011, ApJ, 736, 7
* (7) D’Agostini, G. 2005, arXiv:physics/0511182
* (8) Dai, Z. G. 2004, ApJ, 606, 1000
* (9) Dai, Z. G., Liang, E. W., & Xu, D. 2004, ApJ, 612, L101
* (10) Dai, Z. G., & Lu, T. 1998, A&A, 333, L87
* (11) Dainotti, M. G., Cardone, V.F., & Capozziello, S. 2008, MNRAS, 391, L79
* (12) Dainotti, M. G., Cardone, V. F., Capozziello, S., Ostrowski, M., & Willingale, R. 2011, ApJ, 730, 135
* (13) Dainotti, M. G., et al. 2010, ApJ, 722, L215
* (14) Dall’Osso, S., Shore, S. N., & Stella, L. 2009, MNRAS, 398, 1869
* (15) Dall’Osso, S., et al. 2011, A&A, 526, A121
* (16) Eichler, D., & Granot, J. 2006, ApJ, 641, L5
* (17) Evans, P., et al. 2009, MNRAS, 397, 1177
* (18) Fenimore, E. E., & Ramirez-Ruiz, E. 2000, arXiv:astro-ph/0004176
* (19) Ghirlanda, G., Ghisellini, G., & Lazzati, D. 2004a, ApJ, 616, 331
* (20) Ghirlanda, G., Ghisellini, G., Lazzati, D., & Firmani, C. 2004b, ApJ, 613, L13
* (21) Ghirlanda, G., Ghisellini, G., & Firmani, C. 2006, New Journal of Physics, 8, 123
* (22) Granot, J., Königl, A. & Piran, T. 2006, MNRAS, 370, 1946
* (23) Guidorzi, C., et al. 2006, MNRAS, 371, 843
* (24) Kobayashi, S., & Zhang, B. 2007, ApJ, 655, 973
* (25) Liang, E. W., & Zhang, B. 2005, ApJ, 633, 611
* (26) Norris, J. P., Marani, G. F., & Bonnell, J. T. 2000, ApJ, 534, 248
* (27) Norris, J. P., & Bonnell, J. T. 2006, ApJ, 643, 266
* (28) Nousek, J. A., et al. 2006, ApJ, 642, 389
* (29) Panaitescu, A. 2007, MNRAS, 379, 331
* (30) Qi, S., & Lu, T. 2010, ApJ, 717, 1274
* (31) Rees, M. J., & Mészáros, P. 1998, ApJ, 496, L1
* (32) Reichart, D. E. 2001, ApJ, 553, 235
* (33) Salvaterra, R., et al. 2009, Nature, 461, 1258
* (34) Schaefer, B. E. 2003, ApJ, 583, L67
* (35) Schaefer, B. E. 2007, ApJ, 660, 16
* (36) Shao, L., & Dai, Z. G. 2007, ApJ, 660, 1319
* (37) Shao, L., Dai, Z. G., & Mirabal, N. 2008, ApJ, 675, 507
* (38) Wang, F. Y. & Dai, Z. G. 2006, MNRAS, 368, 371
* (39) Wang, F. Y., Dai, Z. G., & Qi, S. 2009, A&A, 507, 53
* (40) Wang, F. Y., Qi, S., & Dai, Z. G. 2011, MNRAS, 415, 3423
* (41) Willingale, R. W., et al. 2007, ApJ, 662, 1093
* (42) Xu, M., & Huang, Y. F. 2010, A&A, 523, 5
* (43) Xu, M., Huang, Y. F., & Lu, T. 2009, RAA, 9, 1317
* (44) Yamazaki, R. 2009, ApJ, 690, L118
* (45) Yu, B., Qi, S., & Lu, T. 2009, ApJ, 705, 15
* (46) Yu, Y. W., Cheng, K. S., & Cao, X. F. 2010, ApJ, 715, 477
* (47) Yu, Y. W., & Dai, Z. G. 2007, A&A, 470, 119
* (48) Zhang, B., Fan, Y. Z., Dyks, J., et al. 2006, ApJ, 642, 354
* (49) Zhang, B., & Mészáros, P. 2001, ApJ, 552, L35
GRB | $z$ | ${\rm Log}[L_{\rm X}/(\rm erg/s)]$ | ${\rm Log}[T_{\rm a}/(\rm s)]$ | ${\rm Log}[E_{\gamma,\rm iso}/(\rm erg)]$ | Type
---|---|---|---|---|---
050315 | 1.95 | 47.05 $\pm$ 0.19 | 3.92 $\pm$ 0.17 | 52.85 $\pm$ 0.012 | Long
050319 | 3.24 | 47.52 $\pm$ 0.18 | 4.04 $\pm$ 0.17 | 52.90 $\pm$ 0.057 | Long
050401 | 2.9 | 48.45 $\pm$ 0.15 | 3.28 $\pm$ 0.14 | 52.50 $\pm$ 0.098 | Long
050416A | 0.65 | 46.29 $\pm$ 0.23 | 2.97 $\pm$ 0.21 | 51.02 $\pm$ 0.027 | Long
050505 | 4.27 | 48.03 $\pm$ 0.34 | 3.67 $\pm$ 0.33 | 53.26 $\pm$ 0.019 | Long
050724 | 0.26 | 44.53 $\pm$ 1.24 | 4.92 $\pm$ 1.22 | 50.17 $\pm$ 0.055 | IC
050730 | 3.97 | 48.68 $\pm$ 0.07 | 3.44 $\pm$ 0.04 | 53.26 $\pm$ 0.017 | Long
050801 | 1.38 | 47.86 $\pm$ 0.17 | 2.17 $\pm$ 0.16 | 51.49 $\pm$ 0.066 | Long
050802 | 1.71 | 47.43 $\pm$ 0.06 | 3.52 $\pm$ 0.06 | 52.59 $\pm$ 0.021 | Long
050803 | 0.42 | 46.55 $\pm$ 0.87 | 2.74 $\pm$ 0.81 | 51.46 $\pm$ 0.069 | Long
050814 | 5.3 | 47.88 $\pm$ 0.47 | 3.13 $\pm$ 0.45 | 53.29 $\pm$ 0.029 | Long
050824 | 0.83 | 45.30 $\pm$ 0.29 | 4.65 $\pm$ 0.27 | 51.13 $\pm$ 0.052 | Long
050922C | 2.2 | 48.92 $\pm$ 0.07 | 2.08 $\pm$ 0.07 | 52.77 $\pm$ 0.009 | Long
051016B | 0.94 | 47.59 $\pm$ 0.57 | 3.22 $\pm$ 0.55 | 51.01 $\pm$ 0.034 | Long
051109A | 2.35 | 48.01 $\pm$ 0.13 | 3.4 $\pm$ 0.11 | 52.72 $\pm$ 0.018 | Long
051109B | 0.08 | 43.51 $\pm$ 0.21 | 3.64 $\pm$ 0.19 | 48.55 $\pm$ 0.064 | Long
051221A | 0.55 | 44.74 $\pm$ 0.16 | 4.51 $\pm$ 0.16 | 51.40 $\pm$ 0.014 | IC
060108 | 2.03 | 46.50 $\pm$ 0.13 | 3.92 $\pm$ 0.13 | 51.94 $\pm$ 0.027 | Long
060115 | 3.53 | 47.80 $\pm$ 0.57 | 3.09 $\pm$ 0.55 | 52.99 $\pm$ 0.023 | Long
060116 | 6.6 | 49.37 $\pm$ 0.33 | 1.8 $\pm$ 0.3 | 53.33 $\pm$ 0.082 | Long
060202 | 0.78 | 45.64 $\pm$ 0.23 | 4.74 $\pm$ 0.23 | 52.00 $\pm$ 0.040 | Long
060206 | 4.05 | 48.65 $\pm$ 0.10 | 3.15 $\pm$ 0.1 | 52.79 $\pm$ 0.013 | Long
060502A | 1.51 | 47.27 $\pm$ 0.19 | 3.85 $\pm$ 0.21 | 52.59 $\pm$ 0.012 | IC
060510B | 4.9 | 47.39 $\pm$ 0.49 | 3.78 $\pm$ 0.48 | 53.64 $\pm$ 0.011 | Long
060522 | 5.11 | 48.51 $\pm$ 0.33 | 2.07 $\pm$ 0.31 | 53.05 $\pm$ 0.026 | Long
060604 | 2.68 | 47.24 $\pm$ 0.19 | 3.98 $\pm$ 0.18 | 52.21 $\pm$ 0.069 | Long
060605 | 3.8 | 47.76 $\pm$ 0.09 | 3.48 $\pm$ 0.08 | 52.66 $\pm$ 0.034 | Long
060607A | 3.08 | 45.68 $\pm$ 2.75 | 4.14 $\pm$ 0.02 | 53.12 $\pm$ 0.012 | Long
060614 | 0.13 | 43.93 $\pm$ 0.05 | 5.01 $\pm$ 0.05 | 51.32 $\pm$ 0.006 | IC
060707 | 3.43 | 48.01 $\pm$ 0.40 | 2.94 $\pm$ 0.36 | 52.93 $\pm$ 0.025 | Long
060714 | 2.71 | 48.22 $\pm$ 0.08 | 3.11 $\pm$ 0.07 | 53.06 $\pm$ 0.016 | Long
060729 | 0.54 | 46.17 $\pm$ 0.04 | 4.73 $\pm$ 0.04 | 51.69 $\pm$ 0.021 | Long
060814 | 0.84 | 46.69 $\pm$ 0.06 | 4.01 $\pm$ 0.06 | 52.97 $\pm$ 0.004 | Long
060906 | 3.69 | 47.73 $\pm$ 0.13 | 3.62 $\pm$ 0.12 | 53.26 $\pm$ 0.042 | Long
060908 | 2.43 | 48.24 $\pm$ 0.11 | 2.46 $\pm$ 0.09 | 53.03 $\pm$ 0.010 | Long
060912A | 0.94 | 46.37 $\pm$ 0.23 | 2.97 $\pm$ 0.18 | 51.91 $\pm$ 0.020 | IC
061121 | 1.31 | 48.35 $\pm$ 0.10 | 3 $\pm$ 0.09 | 53.47 $\pm$ 0.004 | Long
070110 | 2.35 | 48.25 $\pm$ 0.72 | 1.89 $\pm$ 0.37 | 52.90 $\pm$ 0.033 | Long
070208 | 1.17 | 46.88 $\pm$ 0.15 | 3.63 $\pm$ 0.14 | 51.58 $\pm$ 0.060 | Long
070306 | 1.49 | 47.07 $\pm$ 0.05 | 4.42 $\pm$ 0.04 | 53.18 $\pm$ 0.008 | Long
Table 1: 55 GRBs of our sample. Data in the second, third, and fourth columns
are taken from D2010; the fifth column, $E_{\gamma,\rm iso}$, is calculated
from Equation (2), with $S_{\rm bolo}$ taken from Wang et al. (2011). The
last column gives the type of the GRB, where Long means long GRB and IC means
intermediate-class GRB. All error bars denote the $1\sigma$ range.
GRB | $z$ | ${\rm Log}[L_{\rm X}/(\rm erg/s)]$ | ${\rm Log}[T_{\rm a}/(\rm s)]$ | ${\rm Log}[E_{\gamma,\rm iso}/(\rm erg)]$ | Type
---|---|---|---|---|---
070506 | 2.31 | 47.63 $\pm$ 1.42 | 2.87 $\pm$ 1.42 | 51.82 $\pm$ 0.029 | Long
070508 | 0.82 | 48.20 $\pm$ 0.02 | 2.75 $\pm$ 0.02 | 53.11 $\pm$ 0.004 | Long
070529 | 2.5 | 48.40 $\pm$ 0.15 | 2.34 $\pm$ 0.15 | 53.04 $\pm$ 0.025 | Long
070714B | 0.92 | 46.85 $\pm$ 0.20 | 3.03 $\pm$ 0.19 | 52.30 $\pm$ 0.033 | IC
070721B | 3.63 | 47.08 $\pm$ 0.51 | 3.58 $\pm$ 0.51 | 53.34 $\pm$ 0.035 | Long
070802 | 2.45 | 46.84 $\pm$ 2.72 | 3.68 $\pm$ 0.62 | 51.96 $\pm$ 0.047 | Long
070809 | 0.22 | 44.15 $\pm$ 0.76 | 4.09 $\pm$ 0.75 | 49.43 $\pm$ 0.062 | IC
070810A | 2.17 | 47.97 $\pm$ 0.13 | 2.83 $\pm$ 0.12 | 52.26 $\pm$ 0.023 | IC
071020 | 2.15 | 49.22 $\pm$ 0.05 | 1.84 $\pm$ 0.05 | 52.87 $\pm$ 0.016 | Long
080310 | 2.42 | 46.72 $\pm$ 0.11 | 4.08 $\pm$ 0.11 | 52.88 $\pm$ 0.023 | Long
080430 | 0.77 | 46.03 $\pm$ 0.08 | 4.29 $\pm$ 0.08 | 51.68 $\pm$ 0.022 | Long
080603B | 2.69 | 48.88 $\pm$ 0.26 | 2.92 $\pm$ 0.24 | 53.07 $\pm$ 0.011 | Long
080810 | 3.35 | 48.24 $\pm$ 0.08 | 3.28 $\pm$ 0.07 | 53.42 $\pm$ 0.031 | Long
081008 | 1.97 | 47.79 $\pm$ 0.24 | 2.95 $\pm$ 0.22 | 52.85 $\pm$ 0.047 | Long
090423 | 8.26 | 48.48 $\pm$ 0.11 | 2.95 $\pm$ 0.1 | 53.03 $\pm$ 0.018 | Long
arXiv:1103.3978, M. Xu and Y. F. Huang

arXiv:1103.4238
Macromolecular Dynamics
An introductory lecture
Joachim Wuttke
Jülich Centre for Neutron Science at FRM II
Forschungszentrum Jülich GmbH
This text appeared in:
Macromolecular Systems in Soft and Living Matter, Lecture notes of the 42nd
IFF Spring School 2011, edited by Jan K. G. Dhont, Gerhard Gompper, Peter R.
Lang, Dieter Richter, Marisol Ripoll, Dieter Willbold, Reiner Zorn.
Schriften des Forschungszentrums Jülich, ISBN 978-3-89336-688-0, Jülich 2011.
###### Contents
1. 1 Systems and States
2. 2 Brownian Motion
1. 2.1 The Langevin equation
2. 2.2 The Smoluchowski equation
3. 3 Segmental Relaxation and the Glass Transition
1. 3.1 The glass transition
2. 3.2 Structural relaxation
3. 3.3 Relaxation map and secondary relaxation
4. 3.4 The mode-coupling crossover
4. 4 Dynamics of a Free Chain
1. 4.1 The spring-bead model
2. 4.2 The Rouse model
3. 4.3 Rouse modes
4. 4.4 Macroscopic consequences: dielectric and shear response
5. 4.5 Microscopic verification: neutron spin echo
5. 5 Entanglement and Reptation
6. A Linear response theory: relaxation, dissipation, fluctuation
7. B Debye’s theory of dielectric relaxation
8. C Zimm’s theory of chain dynamics in solution
## 1 Systems and States
This lecture on polymer dynamics primarily addresses the molecular motion in
melts and solutions. Other states like rubber or glass will be referred to
briefly. Let us therefore start by sorting out the following states of
macromolecular matter:
A polymer melt is basically a liquid. However, in contrast to a simple liquid,
its constituent molecules are flexible, and therefore their center-of-mass
motion is usually accompanied by conformational changes. The dynamics is
further complicated by the entanglement of neighbouring chain molecules. As a
result, polymer melts are highly viscous and non-Newtonian liquids. The prime
example of a naturally occurring polymer melt is latex, a milky fluid rich in
cis-polyisoprene that is found in many plants and is particularly abundant in
the rubber tree.
A rubber (or elastomer) is obtained from a polymer melt by a chemical process
called vulcanization that creates permanent inter-chain links. These cross-
links prevent long-ranged diffusion and viscous flow. On short time and length
scales, however, the polymer segments can move as freely as in a melt, which
explains the characteristic elasticity of the rubber state. Everyday examples:
erasers, car tyres. An unvulcanised polymer melt is said to be in a rubber
state if the entanglement is so strong that the chains do not flow on the time
scale of observation.
Plastic materials are either thermosetting or thermoplastic.
A thermosetting plastic is obtained by irreversibly creating strong chemical
inter-chain links. Examples: bakelite, duroplast, epoxy resin.
Thermoplastic materials are obtained in a reversible manner by cooling a
polymer melt. They can be either amorphous or semi-crystalline.
An amorphous solid is also called a glass, provided that it can be reversibly
transformed into a liquid. This transformation is called the glass transition;
it occurs at a loosely defined temperature $T_{\rm g}$. Examples: poly(methyl
methacrylate) (PMMA, plexiglas), polystyrene, polycarbonate.
In semicrystalline polymers, chains run through ordered and disordered
domains. The ordered, crystalline domains exist up to a melting point $T_{\rm
m}$. They provide strong links between different chains. In the temperature
range between $T_{\rm g}$ and $T_{\rm m}$, the disordered domains provide
flexibility. Example: polyethylene ($T_{\rm g}\simeq-100\ldots-70^{\circ}$C,
$T_{\rm m}\simeq 110\ldots 140^{\circ}$C), used for foils and bags. Most other
thermoplastics are typically employed below $T_{\rm g}$. Example: polyethylene
terephthalate (PET, $T_{\rm g}\simeq 75^{\circ}$C, $T_{\rm m}\simeq
260^{\circ}$C), used for bottles and textile fibers.
In a polymer solution the dilute macromolecules are always flexible. Therefore
they behave as in a pure polymer melt, except that interactions between
segments are modified by the presence of a solvent.
## 2 Brownian Motion
Almost all of polymer dynamics can be described by means of classical
mechanics. Quantum mechanics seems relevant only for the explanation of
certain low temperature anomalies that are beyond the scope of this lecture.
Since we aim at understanding macroscopically relevant average properties of a
large number of molecules, the appropriate methods are provided by statistical
mechanics.
Some fundamental methods of classical statistical mechanics were originally
developed for the description of Brownian motion [1]. In this
introductory section, we will review two of them: the Langevin equation for
the evaluation of particle trajectories, and the Smoluchowski equation for a
higher-level modelling in terms of probability densities. Both methods are
ultimately equivalent, but depending on the application there can be a huge
computational advantage in using the one or the other.111The following is
standard material, covered by many statistical mechanics textbooks and by a
number of monographs. For Brownian motion, see e. g. [2], for the Langevin
equation, [3].
### 2.1 The Langevin equation
We consider a colloidal (mesoscopic) particle $P$ suspended in a liquid $L$.
Molecules of $L$ frequently collide with $P$, thereby exerting a random force
$\mbox{\boldmath$F$}(t)$. On average the impacts from opposite directions
cancel, so that
$\langle\mbox{\boldmath$F$}(t)\rangle=0.$ (1)
For this reason it was long believed that $L-P$ collisions cannot be
responsible for the random motion of $P$. By noting the analogy with the
fluctuations of good or bad luck in gambling, Smoluchowski (1906) showed that
this argument is fallacious: After $n$ collisions, the expectation value of
total momentum transfer is indeed 0, but for each single history of $n$
collisions one can nevertheless expect with high probability that the momentum
transfer has summed up to a value different from 0.
This random walk argument was put on a firm basis by Langevin (1908).222K. Razi
Naqvi [arXiv:physics/0502141v1] contests this received opinion, arguing that
Langevin’s “analysis is at best incomplete, and at worst a mere tautology.”
Let us write down the Newtonian equation of motion for the center-of-mass
coordinate $\mbox{\boldmath$r$}(t)$ of $P$:
$m\partial^{2}_{t}\mbox{\boldmath$r$}=-\zeta\partial_{t}\mbox{\boldmath$r$}+\mbox{\boldmath$F$},$
(2)
or for the velocity $\mbox{\boldmath$v$}=\partial_{t}\mbox{\boldmath$r$}$:
$m\partial_{t}\mbox{\boldmath$v$}=-\zeta\mbox{\boldmath$v$}+\mbox{\boldmath$F$}.$
(3)
If $P$ is a spherical particle of radius $a$, and $L$ has a shear viscosity
$\eta_{\rm s}$, then the friction coefficient is (Stokes 1851) [4, § 20]
$\zeta=6\pi\eta_{\rm s}a.$ (4)
Eq. (3) can be easily integrated:
$\mbox{\boldmath$v$}(t)=m^{-1}\int_{-\infty}^{t}\\!{\rm
d}t^{\prime}\,\mbox{\boldmath$F$}(t^{\prime}){\rm
e}^{-(t-t^{\prime})/{\tau_{m}}}.$ (5)
$P$ may be said to have a memory of past collisions that decays with a
relaxation time
$\tau_{m}:=\frac{m}{\zeta}.$ (6)
To compute averages like $\langle\mbox{\boldmath$v^{2}$}\rangle$ we must now
specify the expectation value of random force correlation. We assume that
different Cartesian components of $F$ are uncorrelated with each other,
$\langle F_{\alpha}F_{\beta}\rangle=0$ for $\alpha\neq\beta$, and that random
forces acting at different times $t,t^{\prime}$ are uncorrelated. The second
moment of $F$ then takes the form
$\langle
F_{\alpha}(t)F_{\beta}(t^{\prime})\rangle=A\delta_{\alpha\beta}\delta(t-t^{\prime})$
(7)
where the delta function is a convenient approximation for a memory function
that extends over no more than the duration of one $L-P$ collision.
By virtue of (1), the average velocity is
$\langle\mbox{\boldmath$v$}\rangle=0$. However, as argued above, for each
single history of $F$ one can expect that the integral in (5) results in a
nonzero velocity. After a short calculation we find that the second moment has
the time-independent value
$\langle\mbox{\boldmath$v$}^{2}(t)\rangle=\frac{3A}{2m\zeta}.$ (8)
On the other hand, this moment is well known from the equipartition theorem
$\frac{m}{2}\langle\mbox{\boldmath$v^{2}$}\rangle=\frac{3}{2}k_{\rm B}T,$ (9)
so that we obtain the coefficient $A=2k_{\rm B}T\zeta$ that was left
unspecified in (7).
In the next step, we can compute the mean squared displacement of $P$ within a
time span $t$, which we will abbreviate as
$\langle
r^{2}(t)\rangle:=\left\langle{\left[\mbox{\boldmath$r$}(t)-\mbox{\boldmath$r$}(0)\right]}^{2}\right\rangle.$
(10)
We determine $\mbox{\boldmath$r$}(t)$ by explicit integration of (5), and
after some calculation [3] we find
$\langle r^{2}(t)\rangle=6Dt-6D\tau_{m}\left(1-{\rm e}^{-t/\tau_{m}}\right)$
(11)
where we have introduced
$D:=\frac{k_{\rm B}T}{\zeta}.$ (12)
From the long-time limit $\langle r^{2}(t)\rangle\simeq 6Dt$, we identify $D$
as the diffusion coefficient (Sutherland 1905, Einstein 1905). In the opposite
limit $t\ll\tau_{m}$, ballistic motion is found: $\langle
r^{2}(t)\rangle\doteq\langle{(\mbox{\boldmath$v$}(0)t)}^{2}\rangle$.
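Both limits of Eq. (11) are easy to verify numerically. The following sketch uses arbitrary illustrative parameter values (not taken from the text) and checks that the diffusive limit $6Dt$ and the ballistic limit $\langle v^{2}\rangle t^{2}=(3k_{\rm B}T/m)t^{2}$ emerge:

```python
import math

# Illustrative parameters (SI units); not taken from the text
kT = 4.14e-21      # k_B T at ~300 K, in J
m = 1.0e-15        # particle mass, kg
zeta = 1.0e-8      # friction coefficient, kg/s
tau_m = m / zeta   # momentum relaxation time, Eq. (6)
D = kT / zeta      # Sutherland-Einstein diffusion coefficient, Eq. (12)

def msd(t):
    """Mean squared displacement, Eq. (11)."""
    return 6*D*t - 6*D*tau_m*(1 - math.exp(-t/tau_m))

# Long-time limit: diffusive, <r^2> ~ 6 D t
t_long = 1000 * tau_m
assert abs(msd(t_long) / (6*D*t_long) - 1) < 1e-2

# Short-time limit: ballistic, <r^2> ~ <v^2> t^2 = (3 kT / m) t^2
t_short = 1e-3 * tau_m
assert abs(msd(t_short) / (3*kT/m * t_short**2) - 1) < 1e-2
```

The crossover between the two regimes occurs around $t\simeq\tau_{m}$, as expected from Eq. (6).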
### 2.2 The Smoluchowski equation
Alternatively, Brownian motion can be analysed in terms of the space-time
distribution $\rho(\mbox{\boldmath$r$},t)$ of suspended particles. The
continuity equation is
$\partial_{t}\rho+\mbox{\boldmath$\nabla$}\mbox{\boldmath$j$}=0.$ (13)
As usual, the current $j$ has a diffusive component (Fick 1855)
$\mbox{\boldmath$j$}^{\rm diff}=-D\mbox{\boldmath$\nabla$}\rho.$ (14)
We now assume that a potential $U(\mbox{\boldmath$r$})$ is acting upon the
particles. In the stationary state, the force $-\mbox{\boldmath$\nabla$}U$ is
just cancelled by the friction $-\zeta\mbox{\boldmath$v$}$. Accordingly, the drift
component of $j$ is
$\mbox{\boldmath$j$}^{\rm
drift}=\rho\mbox{\boldmath$v$}=-\zeta^{-1}\rho\mbox{\boldmath$\nabla$}U.$ (15)
Collecting everything, we obtain the Smoluchowski equation (1915)
$\partial_{t}\rho=D\mbox{\boldmath$\nabla$}^{2}\rho+\zeta^{-1}\mbox{\boldmath$\nabla$}(\rho\mbox{\boldmath$\nabla$}U).$
(16)
For a velocity distribution or for the more general case of a phase-space
distribution it is known as the Fokker-Planck equation (1914/1917).
In the stationary state $\partial_{t}\rho=0$, we expect to find a Boltzmann
distribution $\rho\propto\exp(-U/k_{\rm B}T)$. It is easily verified that this
holds provided $D$ and $\zeta$ are related by the Sutherland-Einstein relation
(12). Combined with (4), the Stokes-Einstein relation
$D=\frac{k_{\rm B}T}{6\pi\eta_{\rm s}a}$ (17)
is obtained. Surprisingly, this relation holds not only for mesoscopic
suspended particles, but even for the motion of one liquid molecule among
others (the hydrodynamic radius $a$ is then treated as an adjustable
parameter). Only in the highly viscous supercooled state is the proportionality
$D\propto T/\eta_{\rm s}$ found to break down.
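For orientation, Eq. (17) can be evaluated with assumed values (not given in the text) for a sphere of radius 1 nm in water at room temperature:

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # temperature, K (assumed)
eta_s = 1.0e-3      # shear viscosity of water, Pa s (assumed)
a = 1.0e-9          # hydrodynamic radius, m (assumed)

# Stokes-Einstein relation, Eq. (17)
D = kB * T / (6 * math.pi * eta_s * a)
print(D)  # roughly 2.2e-10 m^2/s
```

This is the typical order of magnitude for molecular diffusion in simple liquids.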
## 3 Segmental Relaxation and the Glass Transition
### 3.1 The glass transition
It is generally believed that the ground state of condensed matter is always
an ordered one. However, in many supercooled liquids this ordered state is
kinetically inaccessible. Crystallization through homogeneous nucleation
requires the spontaneous formation of nuclei of a certain critical size. If
the crystalline unit cell is complicated and the energy gain small, then this
critical size is rather large. If at the same time a high viscosity entails a
low molecular mobility, then the formation of a stable nucleus may be so
improbable that crystallization is practically excluded. On further
supercooling the liquid becomes an amorphous solid, a glass.
Glass formation is observed in quite different classes of materials: in
covalent networks like quartz glass (SiO2) or industrial glass (SiO2 with
additives), in molecular liquids, in ionic mixtures, in aqueous solutions, and
others. In polymers, glass formation is the rule, not the exception. As said
in Sect. 1, some plastic materials are completely amorphous; in others
crystalline domains are intercalated between amorphous regions.
Fig. 1: Temperature dependence of the specific volume of a glass forming
system. Depending on the cooling rate, the cross-over from liquid-like to
glass-like slopes is observed at different temperatures, and it results in
different glass states. If the system is cooled rapidly to $T_{2}$ and then
kept at that temperature, it relaxes towards states that are normally obtained
by slow cooling. Similar behavior is found for the enthalpy.
The glass transition is not a usual thermodynamic phase transition of first or
second order, as can be seen from the temperature dependence of enthalpy or
density (Fig. 1): the cross-over from liquid-like to solid-like slopes is
smooth, and it depends on the cooling rate. When the cooling is interrupted
within the transition range, structural relaxation towards a denser state is
observed. This shows that glass is not an equilibrium state: on the contrary,
it can be described as a liquid that has fallen out of equilibrium.333You may
object that the supercooled liquid is already out of equilibrium — with
respect to crystallization. However, the crystalline ground state is a null
set in phase space: beyond nucleation it has no impact upon the dynamics of
the liquid. The glass transition, in contrast, can be described as an
ergodicity breaking in phase space. The thermodynamics of the glass transition
is the subject of ongoing debate. A recent attempt to clarify confused concepts
is [5].
### 3.2 Structural relaxation
Fig. 2: Dielectric spectrum in the viscous liquid or rubber state. As long as
molecular dipoles are able to follow an external electric field modulation,
the permittivity
$\epsilon(\omega)=\epsilon^{\prime}(\omega)+i\epsilon^{\prime\prime}(\omega)$
has the elevated low-frequency value of a polar liquid, here
$\epsilon_{0}=60$. On the other hand, when probed with a high-frequency field
modulation, dipoles appear frozen, resulting in the low permittivity of a
solid, here $\epsilon_{\infty}=3$. As required by the Kramers-Kronig relation,
the dispersion of $\epsilon^{\prime}(\omega)$ is accompanied by a dissipation
peak in $\epsilon^{\prime\prime}(\omega)$. The dispersion step and the loss
peak are stretched when compared to Debye’s Eq. (82) (dashed).
Relaxation can also be probed by applying a periodic perturbation. This is
done for instance in dielectric spectroscopy (Fig. 2). For reference, the most
elementary theory of dielectric relaxation (Debye 1913) is derived in Appendix
B. It leads to a complex permittivity
$\epsilon(\omega)=\epsilon_{\infty}+\frac{(\epsilon_{0}-\epsilon_{\infty})}{\left(1-(i\omega\tau)^{\alpha}\right)^{\gamma}}$
(18)
with $\alpha=\gamma=1$. Empirically the cross-over from liquid-like to solid-
like behavior extends over a much wider frequency range than this formula
predicts: it is stretched. Often, this stretching is well described if the
exponents anticipated in Eq. (18) are allowed to take values below 1 (Cole,
Cole 1941, Cole, Davidson 1951, Havriliak, Negami 1967). Alternatively
(Williams, Watts 1970), $\epsilon(\omega)$ can be fitted by the Fourier
transform of the stretched exponential function
$\Phi_{\rm K}(t)=\exp(-(t/\tau)^{\beta})$ (19)
originally introduced to describe relaxation in the time domain (R. Kohlrausch
1854, F. Kohlrausch 1863).
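The meaning of "stretched" in Eq. (19) can be made quantitative: the time window over which $\Phi_{\rm K}$ decays from 0.9 to 0.1 spans $1/\beta$ times more decades than for a simple exponential. A short sketch (with arbitrary $\tau$):

```python
import math

def kww(t, tau=1.0, beta=1.0):
    """Kohlrausch stretched exponential, Eq. (19)."""
    return math.exp(-(t/tau)**beta)

def decay_window_decades(beta):
    """Decades of time over which Phi_K falls from 0.9 to 0.1 (tau = 1)."""
    t1 = math.log(1/0.9)**(1/beta)   # kww(t1, beta=beta) == 0.9
    t2 = math.log(1/0.1)**(1/beta)   # kww(t2, beta=beta) == 0.1
    return math.log10(t2/t1)

assert abs(kww(math.log(1/0.9)**(1/0.45), beta=0.45) - 0.9) < 1e-12
# Simple exponential: about 1.34 decades; beta = 0.45 (as in Fig. 3) widens
# the decay window by exactly a factor 1/beta
assert abs(decay_window_decades(0.45) - decay_window_decades(1.0)/0.45) < 1e-9
```

With $\beta=0.45$, the decay extends over roughly three decades in time, as seen in the master curve of Fig. 3(b).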
Generally, the relaxation time $\tau$ depends much more strongly on temperature
than the stretching exponents $\alpha,\gamma$ or $\beta$. This can be
expressed as a scaling law, often called time-temperature superposition
principle: Permittivities $\epsilon(\omega;T)$ measured at different
temperatures $T$ can be rescaled with times $\tau(T)$ to fall onto a common
master curve $\hat{\epsilon}(\tau(T)\omega)$.
Fig. 3: (a) Correlation function $\Phi(q,t)$ of deuterated polybutadiene,
measured by neutron spin-echo at $q=1.5$ Å-1 (close to the maximum of $S(q)$).
Solid lines: fits with $f_{q}\exp(-(t/\tau)^{\beta})$ with fixed $\beta=0.45$.
(b) Master curve. Data from [6].
Quite similar results are obtained for other response functions. For instance,
the longitudinal mechanical modulus, a linear combination of the more
fundamental shear and bulk moduli, can be probed by ultrasonic propagation and
attenuation in a kHz…MHz range, or by light scattering in a “hypersonic” GHz
range (Brillouin 1922, Mandelstam 1926).
Particularly interesting is the pure shear modulus $G(\omega)$, which can be
probed by torsional spectroscopy. Since a liquid cannot sustain static shear,
there is no constant term in the low-frequency expansion
$G(\omega)=i\eta\omega+{\cal O}(\omega^{2}).$ (20)
The coefficient $\eta$ is the macroscopic shear viscosity. If $G(\omega)$
obeys time-temperature superposition, then $\eta(T)$ is proportional to a
shear relaxation time $\tau_{\eta}(T)$.
By virtue of the fluctuation-dissipation theorem (Appendix A), relaxations can
also be studied in equilibrium, via correlation functions that can be measured
in inelastic scattering experiments. Fig. 3 shows the normalized correlation
function
$\Phi(q,t):=S(q,t)/S(q,0)$ (21)
of a glass-forming polymer, measured in the time domain by neutron spin echo.
Scaling is demonstrated by construction of a master curve, stretching by fits
with a Kohlrausch function.
### 3.3 Relaxation map and secondary relaxation
Fig. 4: Relaxation times in glycerol, determined by various spectroscopies.
This plot can be read as a dynamic phase diagram: whether the material reacts
like a liquid or a solid depends not only on temperature, but also on the time
scale of observation. Adapted from [7].
At this point it is interesting to compare the outcome of different
experimental methods. It is found that the susceptibility master curve is not
universal; different spectroscopies generally yield different stretching
exponents. Relaxation times may vary by factors of 2 or more. However, with good
accuracy all relaxation times have the same temperature dependence. This is
demonstrated in Fig. 4 for a particularly well studied molecular liquid. Upon
cooling from 290 to 190 K, the relaxation times increase in parallel by almost
12 decades. When a viscosity of $10^{13}$ Poise ($10^{12}$ Pa s) or a
relaxation time of about $10^{3}$ s is reached, relaxation falls out of
equilibrium on the time scale of a human observer. As anticipated above, this
is the glass transition. All relaxation modes that follow the temperature
dependence of viscosity are conventionally called $\upalpha$ relaxation.
The temperature dependence of relaxation times is often discussed with
reference to the simplest physical model that comes to mind: thermally
activated jumps over an energy barrier $E_{\rm A}$,
$\tau=\tau_{0}\exp\frac{E_{\rm A}}{k_{\rm B}T}$ (22)
(van ’t Hoff 1884, Arrhenius 1889). Therefore, data are often plotted as
$\log\tau$ versus $1/T$ or $T_{\rm g}/T$ so that a straight line is obtained
if (22) holds. The $\upalpha$ relaxation in glass forming liquids, however,
almost never follows (22); its trace in the Arrhenius plot is more or less
concave. Many fitting formulæ have been proposed; for most applications it is
sufficient to extend (22) by just one more parameter, as in the Vogel-Fulcher-
Tammann equation (for polymers also named after Williams, Landel, Ferry)
$\tau=\tau_{0}\exp\frac{A}{T-T_{0}}.$ (23)
The singularity $T_{0}$ lies below $T_{\rm g}$, and it is unclear whether it
has any physical meaning.
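The concavity described above can be checked numerically: on an equidistant grid in $1/T$, the Arrhenius law (22) gives a vanishing second difference of $\log\tau$ (a straight line in the Arrhenius plot), while the VFT law (23) does not. The parameter values below are purely illustrative:

```python
import math

def tau_arrhenius(T, tau0=1e-14, EA_over_kB=5000.0):
    """Eq. (22), thermally activated jumps (illustrative parameters)."""
    return tau0 * math.exp(EA_over_kB / T)

def tau_vft(T, tau0=1e-14, A=2000.0, T0=150.0):
    """Eq. (23), Vogel-Fulcher-Tammann (illustrative parameters)."""
    return tau0 * math.exp(A / (T - T0))

xs = [0.0034, 0.0036, 0.0038]   # equidistant grid in 1/T (units 1/K)

def second_diff(tau_fn):
    """Discrete second difference of log10(tau) on the 1/T grid."""
    y = [math.log10(tau_fn(1/x)) for x in xs]
    return y[0] - 2*y[1] + y[2]

assert abs(second_diff(tau_arrhenius)) < 1e-9   # straight line in the Arrhenius plot
assert second_diff(tau_vft) > 1e-3              # curved: non-Arrhenius
```

Since $\log\tau_{0}$ drops out of the second difference, the test is insensitive to the prefactor.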
In many materials, there is more than just one relaxation process. If the
additional, secondary processes are faster than $\upalpha$ relaxation, they
are conventionally labelled $\upbeta$, $\upgamma$, …. In rarer cases, a slower
process is found, designated as $\upalpha^{\prime}$ and tentatively explained
by weak intermolecular associations. While $\upgamma$ and higher order
relaxations are always attributed to innermolecular or side-chain motion, the
situation is less clear for $\upbeta$ relaxation. There are theoretical
arguments (Goldstein 1969) and experimental findings (Johari 1970) to support
the belief that $\upbeta$ relaxation is a universal property of glass-forming
systems. In any case, all secondary relaxations are well described by the
Arrhenius law (22). This implies that at some temperature they merge with the
$\upalpha$ relaxation. This is well confirmed, principally by dielectric
spectroscopy, which has the advantage of a particularly broad bandwidth. Fig.
5 shows an example for this merger or decoupling of $\upalpha$ and $\upbeta$
relaxation; such an Arrhenius plot is also called relaxation map.
Fig. 5: $\upalpha$ and $\upbeta$ relaxation times of cross-linked
polyurethane, obtained by differential mechanical-thermal analysis (DMTA),
photon correlation spectroscopy (PCS), dielectric spectroscopy and neutron
scattering. Data from [8].
### 3.4 The mode-coupling crossover
At present, there exists no satisfactory microscopic theory of relaxation near
the glass transition. At moderate to low viscosities the situation is slightly
better, since basic phenomena can be explained to some extent by a mode-
coupling theory (Götze et al. 1984–).444The standard reference for mode-
coupling theory is the comprehensive book [9]. More accessible introductions
are provided by [10, 11]. This theory attacks the microscopic dynamics at the
level of the normalized density pair correlation function (21). An equation of
motion is written in the form
$0=\ddot{\Phi}_{q}(t)+\nu_{q}\dot{\Phi}_{q}(t)+\Omega_{q}^{2}\Phi_{q}(t)+\Omega_{q}^{2}\int_{0}^{t}\\!{\rm
d}\tau\,\dot{\Phi}_{q}(t-\tau)m_{q}\\{\Phi(\tau)\\},$ (24)
which guarantees that subsequent approximations do not violate conservation
laws. In a systematic, though uncontrolled expansion the memory kernel $m_{q}$
is then projected back onto products of pair correlations,
$m_{q}\\{\Phi(t)\\}\simeq\sum_{k+p=q}V_{kpq}(T)\Phi_{k}(t)\Phi_{p}(t).$ (25)
The coupling coefficients $V_{kpq}$ depend only on the static structure factor
$S(q)$, which in turn depends weakly on the temperature. This temperature
dependence, however, is sufficient to trigger a transition from ergodic,
liquid-like solutions to non-ergodic, glassy ones:
$\begin{array}[]{ll}\Phi_{q}(t\to\infty)\to 0&\mbox{ for }T>T_{\rm c},\\\
\Phi_{q}(t\to\infty)\to f_{q}>0&\mbox{ for }T<T_{\rm c}.\end{array}$ (26)
On cooling towards $T_{\rm c}$, a critical slowing down is predicted that is
characterized by two diverging timescales,
$\begin{array}[]{ll}t_{\sigma}=t_{0}\,\sigma^{-1/(2a)},\\\
\tau_{\sigma}=t_{0}\,\sigma^{-1/(2a)-1/(2b)},\end{array}$ (27)
with the reduced temperature $\sigma:=T/T_{\rm c}-1$. The microscopic
timescale $t_{0}$ is of the order $\Omega_{q}^{-1}$. The exponents fulfill
$0<a<b<1$; they depend on just one lineshape parameter called $\lambda$. The
pair correlation function passes through the following scaling regimes:
$\begin{array}[]{ll}\Phi_{q}(t)\simeq
f_{q}+h_{q}\sigma^{1/2}(t/t_{\sigma})^{-a}&\mbox{ for }t_{0}\ll t\ll
t_{\sigma},\\\ \Phi_{q}(t)\simeq
f_{q}-h_{q}\sigma^{1/2}B(t/t_{\sigma})^{b}&\mbox{ for }t_{\sigma}\ll
t\lesssim\tau_{\sigma},\\\
\Phi_{q}(t)\simeq\hat{\Phi}_{q}(t/\tau_{\sigma})&\mbox{ for
}\tau_{\sigma}\lesssim t.\\\ \end{array}$ (28)
The regime delimited by the first two equations has been given the
unfortunate name “fast $\upbeta$ relaxation” (the Johari-Goldstein process is
then called “slow $\upbeta$ relaxation” though it is faster than $\upalpha$
relaxation).
The $t^{-a}$ power law is a genuine theoretical prediction; it has been
searched for in many scattering experiments, and it has actually shown up in a
number of liquids. With the $t^{b}$ power law, the theory explains
experimental facts that have long been known (v. Schweidler 1907). This power law is
also compatible with the short-time limit of Kohlrausch’s stretched
exponential; it leads over to the $\upalpha$ relaxation master curve implied
by the third equation of (28).
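In the mode-coupling literature, the exponents $a$ and $b$ are determined from the lineshape parameter $\lambda$ by the transcendental relation $\Gamma(1-a)^{2}/\Gamma(1-2a)=\lambda=\Gamma(1+b)^{2}/\Gamma(1+2b)$, valid for $1/2\leq\lambda<1$ (a standard MCT result, stated here without derivation). A bisection sketch:

```python
import math

def bisect(f, lo, hi):
    """Find a root of f on [lo, hi], assuming a sign change."""
    flo = f(lo)
    for _ in range(100):
        mid = 0.5*(lo + hi)
        if f(mid)*flo > 0:
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5*(lo + hi)

def mct_exponents(lam):
    """Exponents a, b of Eq. (27) from the lineshape parameter lambda, via
    Gamma(1-a)^2/Gamma(1-2a) = lambda = Gamma(1+b)^2/Gamma(1+2b)."""
    g = math.gamma
    a = bisect(lambda x: g(1-x)**2/g(1-2*x) - lam, 1e-9, 0.5 - 1e-9)
    b = bisect(lambda x: g(1+x)**2/g(1+2*x) - lam, 1e-9, 1.0)
    return a, b

a, b = mct_exponents(0.7)
assert 0 < a < 0.5 and a < b <= 1   # 0 < a < b < 1 as stated in the text
```

For $\lambda=0.7$ this gives $a\approx 0.33$ and $b\approx 0.64$; $\lambda=1/2$ corresponds to the simple-exponential limit $b=1$.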
Fig. 6: Incoherent intermediate scattering function $S_{\rm self}(q,t)$ of the
molecular glass former ortho-terphenyl, obtained by combining Fourier
transformed and resolution corrected neutron scattering spectra from three
different spectrometers. Fits with the mode-coupling scaling law of fast
$\upbeta$ relaxation. Adapted from [12].
Mode-coupling predictions have been confirmed with impressive accuracy by
light scattering studies of the density driven glass transition in colloidal
suspensions (van Megen et al. 1991–). On the other hand, in conventional glass
formers mode coupling is not the full story. By numerically solving simplified
versions of (24), it is possible to fit relaxational spectra of normal or
slightly supercooled liquids. On further supercooling, in favorable cases
(such as shown in Fig. 6) the power law asymptotes of slow $\upbeta$
relaxation appear, and extrapolations yield a consistent estimate of $T_{\rm
c}$. Typically, this $T_{\rm c}$ is located 15% to 20% above $T_{\rm g}$. This
implies that the ergodicity breaking of Eq. (26) does not explain the glass
transition; nor does the power law (27) fit the divergence of viscosities or
relaxation times near $T_{\rm g}$.
This leads to the view that $T_{\rm c}$ marks a crossover between two dynamic
regimes: a mode-coupling regime at elevated temperature and low viscosity, and
a “hopping” regime in highly viscous, deeply supercooled liquids. Experimental
support for the significance of this crossover comes from a possible change in
the functional form of $\eta(T)$ or $\tau(T)$, from the aforementioned
breakdown of the Stokes-Einstein relation (17), and from the merger of
$\upalpha$ and slow $\upbeta$ relaxation.
## 4 Dynamics of a Free Chain
### 4.1 The spring-bead model
While the short-time dynamics of a polymer melt is quite similar to that of
any other viscous liquid, on longer time scales the chain structure of the
polymer makes a decisive difference, imposing strong constraints on the
segmental motion. To study the motion of an entire chain, we neglect all
details of chemical structure. We approximate the polymer as a sequence of $N$
beads at positions $\mbox{\boldmath$r$}_{n}$ ($n=1,\ldots,N$), connected by
$N-1$ springs. Each bead represents several monomeric units so that there is
no preferred bond angle (Kuhn 1934[?], Fig. 7).
Fig. 7: Spring-bead model of a flexible polymer. Since one bead represents
several monomeric units, there is no preferential bond angle at the level of
this model. Note also that the forces represented by the springs are entropic.
The time-averaged equilibrium configuration of the polymer is assumed to be
given by a Gaussian distribution of bead-to-bead vectors
$\mbox{\boldmath$r$}_{n}-\mbox{\boldmath$r$}_{n-1}$,
$P\\{\mbox{\boldmath$r$}\\}\propto\exp\left(-\kappa\sum_{n=2}^{N}{\left(\mbox{\boldmath$r$}_{n}-\mbox{\boldmath$r$}_{n-1}\right)}^{2}\right),$
(29)
where the force constant
$\kappa:=\frac{3k_{\rm B}T}{b^{2}}$ (30)
ensures an average squared spring length of $b^{2}$.
The size of a polymer coil can be characterized by a mean squared radius. The
most common measures are the end-to-end distance $R_{\rm e}$,
$R_{\rm
e}^{2}:=\langle{(\mbox{\boldmath$r$}_{N}-\mbox{\boldmath$r$}_{1})}^{2}\rangle=Nb^{2}$
(31)
and the gyration radius
$R_{\rm
g}^{2}:=N^{-1}\sum_{n}\langle{(\mbox{\boldmath$r$}_{n}-\mbox{\boldmath$r$}_{\rm
G})}^{2}\rangle\simeq Nb^{2}/6,$ (32)
with the center of mass
$\mbox{\boldmath$r$}_{\rm G}:=N^{-1}\sum\mbox{\boldmath$r$}_{n}.$ (33)
The expressions (31), (32) hold in polymer melts and in $\Theta$ solutions. In
other cases, both expressions are still valid approximations if the factor $N$
is replaced by $N^{2\nu}$. In good solvents, the exponent is $\nu=3/5$.
### 4.2 The Rouse model
The Gaussian distribution (29) is based on the assumption that the free energy
$A=U-TS$ is dominated by the entropy $S=k_{\rm B}\ln P$ so that the internal
energy $U$ can be neglected. Hence each bead experiences an entropic force
$\mbox{\boldmath$F$}_{n}^{\rm
coil}=-\frac{\partial}{\partial\mbox{\boldmath$r$}_{n}}A=-\kappa\left(-\mbox{\boldmath$r$}_{n-1}+2\mbox{\boldmath$r$}_{n}-\mbox{\boldmath$r$}_{n+1}\right),$
(34)
with obvious modifications for $n=1,N$. This force strives to minimize the
distances between beads, thereby maximizing the coiling of the polymer.
The coupling to the heat bath shall be modelled by a random force
$\mbox{\boldmath$F$}_{n}^{\rm heat}$. Its second moment is given by an obvious
extension of (7),
$\langle F^{\rm heat}_{n\alpha}(t)F^{\rm
heat}_{m\beta}(t^{\prime})\rangle=2\zeta k_{\rm
B}T\delta_{nm}\delta_{\alpha\beta}\delta(t-t^{\prime}).$ (35)
Finally, moving beads experience a friction
$-\zeta\partial_{t}\mbox{\boldmath$r$}_{n}$. These three forces make up the
Rouse model, which is a key reference in polymer physics (Rouse 1953).555Rouse
theory is covered in many textbooks, most often using a continuum
approximation probably due to de Gennes [13]. The short but well-written
chapter in [10] comes with a nice selection of experimental results. Among
dedicated polymer physics textbooks, I found [14] indispensable though largely
indigestible for its coverage of computational details, and [15] inspiring
though sometimes suspicious for its cursory outline of theory. Accordingly,
the Langevin equation is
$m\partial_{t}^{2}\mbox{\boldmath$r$}_{n}=-\zeta\partial_{t}\mbox{\boldmath$r$}_{n}+\mbox{\boldmath$F$}^{\rm
coil}_{n}+\mbox{\boldmath$F$}^{\rm heat}_{n}.$ (36)
In polymer solutions, the simple linear friction term is no longer adequate;
it must be replaced by a hydrodynamic interaction. This interaction is handled
reasonably well by a theory (Zimm 1956) outlined in Appendix C.
At this point, let us indicate some orders of magnitude. Assuming a spring
length $b=1$ nm and equating it with the hydrodynamic radius in (4), and
assuming further a microscopic viscosity $\eta_{\rm s}\simeq 10$ Pa$\cdot$s,
we find a friction coefficient $\zeta$ of the order of $10^{-7}$ to $10^{-6}$
Ns/m, in agreement with empirical data for polyisobutylene, polymethyl
acrylate and natural rubber [16]. Assuming a bead mass of 100 Da, the single-
bead collision relaxation time (6) is $\tau_{m}=m/\zeta\simeq 10^{-18}$ s,
which means that inertia is completely negligible on all relevant time scales.
At $T=300$ K, thermal motion is of the order $\langle v^{2}\rangle^{1/2}\simeq
300$ m/s, and the force constant is about $\kappa\simeq 10^{-2}$ N/m.
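These estimates are easy to reproduce. The sketch below (my own; it assumes that (4) is the Stokes friction $\zeta=6\pi\eta_{\rm s}b$ with hydrodynamic radius $b$, and that the Gaussian spring constant is $\kappa=3k_{\rm B}T/b^{2}$) recovers all four numbers:

```python
import math

kB  = 1.380649e-23      # J/K
Da  = 1.66054e-27       # kg
T   = 300.0             # K
b   = 1e-9              # spring length, m
eta = 10.0              # microscopic viscosity, Pa s
m   = 100 * Da          # bead mass

zeta  = 6 * math.pi * eta * b          # assumed Stokes form of (4)
tau_m = m / zeta                       # collision relaxation time (6)
v_th  = math.sqrt(3 * kB * T / m)      # thermal velocity <v^2>^(1/2)
kappa = 3 * kB * T / b**2              # assumed entropic spring constant

print(f"zeta  = {zeta:.1e} N s/m")     # order 1e-7
print(f"tau_m = {tau_m:.1e} s")        # order 1e-18
print(f"v_th  = {v_th:.0f} m/s")       # a few hundred m/s
print(f"kappa = {kappa:.1e} N/m")      # order 1e-2
```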
### 4.3 Rouse modes
We will solve (36) by transforming to normal coordinates. To begin, we note
that there is no coupling whatsoever between the three Cartesian components.
Therefore, we need to consider just one of them. Let us write $x$ and $f$ for
an arbitrary component of $r$ and $\mbox{\boldmath$F$}^{\rm heat}$. Then the
Langevin equation reads
$m\partial_{t}^{2}x_{n}=-\zeta\partial_{t}x_{n}-\kappa\left(-x_{n-1}+2x_{n}-x_{n+1}\right)+f_{n}.$
(37)
Introducing the vector notation $\underline{x}:=(x_{1},\ldots,x_{N})^{\rm T}$,
Eq. (37) takes the form
$m\partial_{t}^{2}\underline{x}=-\zeta\partial_{t}\underline{x}-\kappa\,\underline{\underline{K}}\,\underline{x}+\underline{f}$
(38)
with the force matrix
$\underline{\underline{K}}=\left(\begin{array}[]{cccccc}+1&-1&0&\cdots&0&0\\\
-1&+2&-1&\ddots&0&0\\\ 0&-1&+2&\ddots&0&0\\\
\vdots&\ddots&\ddots&\ddots&-1&0\\\ 0&0&0&-1&+2&-1\\\
0&0&0&0&-1&+1\end{array}\right)_{N\times N}.$ (39)
The entries $+1$ at both extremities of the diagonal reflect the necessary modification of Eq. (37) for $n=1,N$. In the well-known derivation of phonon dispersion, this complication at the boundary is usually ignored or superseded by an unphysical periodicity. It is not widely known that the correct $\underline{\underline{K}}$ can be diagonalized quite easily without any approximation. The eigenvalues are
$\lambda_{p}=2-2\cos\frac{p\pi}{N}=4\sin^{2}\frac{p\pi}{2N},\quad
p=0,\ldots,N-1,$ (40)
and the normalized eigenvectors $\underline{\hat{v}}_{p}$ have components
$\hat{v}_{pn}=\left\\{\begin{array}[]{ll}\displaystyle N^{-1/2}&\mbox{ for
}p=0,\\\ \displaystyle(N/2)^{-1/2}\cos\frac{p(n-\frac{1}{2})\pi}{N}&\mbox{ for
all other $p$},\end{array}\right.$ (41)
where $n=1,\ldots,N$. The proof requires no more than a straightforward verification of $\underline{\underline{K}}\,\underline{\hat{v}}_{p}=\lambda_{p}\underline{\hat{v}}_{p}$.
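The verification is also quickly done numerically. The following check (mine; illustrative $N$) builds $\underline{\underline{K}}$ from (39) and confirms (40) and (41):

```python
import numpy as np

N = 12
# force matrix (39): tridiagonal, with +1 in the two corners
K = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
K[0, 0] = K[-1, -1] = 1

n = np.arange(1, N + 1)
for p in range(N):
    lam = 4 * np.sin(p * np.pi / (2 * N))**2          # eigenvalue (40)
    if p == 0:
        v = np.full(N, N**-0.5)                       # eigenvector (41), p = 0
    else:
        v = (N / 2)**-0.5 * np.cos(p * (n - 0.5) * np.pi / N)
    assert np.allclose(K @ v, lam * v)                # K v_p = lambda_p v_p
    assert np.isclose(v @ v, 1.0)                     # normalization
print("eigen-decomposition (40), (41) verified")
```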
Collecting the normalized eigenvectors into an orthogonal matrix
$\underline{\underline{A}}:=(\underline{\hat{v}}_{0},\ldots,\underline{\hat{v}}_{N-1})$,
we introduce normal coordinates
$\underline{\tilde{x}}:=\underline{\underline{A}}^{\rm
T}\underline{x},\quad\underline{x}=\underline{\underline{A}}\,\underline{\tilde{x}},$
(42)
and similarly for $\underline{f}$. It is easily seen that the average random
force correlation (35) is still diagonal,
$\langle\tilde{f}_{p}(t)\tilde{f}_{q}(t^{\prime})\rangle=2\zeta k_{\rm
B}T\delta_{pq}\delta(t-t^{\prime}).$ (43)
In consequence, for each normal mode one obtains a decoupled Langevin equation
$m\partial_{t}^{2}\tilde{x}_{p}=-\zeta\partial_{t}\tilde{x}_{p}-\kappa\lambda_{p}\tilde{x}_{p}+\tilde{f}_{p}.$
(44)
At this point we must distinguish the eigenmode with the special eigenvalue
$\lambda_{0}=0$ from all the others.
The eigenmode $p=0$ describes the motion of the center of mass
$\mbox{\boldmath$r$}_{\rm G}=N^{-1/2}\mbox{\boldmath$\tilde{r}$}_{0}$. Since
$\lambda_{0}=0$, (44) is identical with the Langevin equation for Brownian
motion (2). Accordingly, the long-time evolution of the mean squared
displacement is $\langle r_{\rm G}^{2}(t)\rangle\simeq 6D_{\rm R}t$, where the
macromolecular Rouse diffusion coefficient is given by a rescaled version of
the Sutherland-Einstein relation (12):
$D_{\rm R}=\frac{k_{\rm B}T}{\zeta N}.$ (45)
Turning to the Rouse modes with $p>0$, we neglect the inertial term in the
Langevin equation (44). Integration is then straightforward:
$\tilde{x}_{p}(t)=\zeta^{-1}\int_{-\infty}^{t}\\!{\rm d}t^{\prime}\,{\rm
e}^{-(t-t^{\prime})/\tau_{p}}\tilde{f}_{p}(t^{\prime}),$ (46)
introducing the mode relaxation time $\tau_{p}:=\zeta/(\kappa\lambda_{p})$. It
can be approximated for $p\ll N$ as
$\tau_{p}\simeq\frac{\tau_{\rm R}}{p^{2}}.$ (47)
The relaxation time of the fundamental mode $p=1$ is known as the Rouse time:
$\tau_{\rm R}:=\frac{L^{2}\zeta}{3\pi^{2}k_{\rm B}T}.$ (48)
In this expression, $N$ and $b$ enter only via the extended chain length
$L=Nb$, which does not change if we change our spring-bead model to consist of
$N/x$ segments of length $bx$. This justifies ex post that we have ignored all
details of microscopic structure. A simple alternative expression for
$\tau_{\rm R}$ is
$\tau_{\rm R}:=\frac{2R_{\rm g}^{2}}{\pi^{2}D_{\rm R}}$ (49)
with the gyration radius (32).
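As a numerical aside (not part of the original text), and assuming the Gaussian spring constant $\kappa=3k_{\rm B}T/b^{2}$ together with $R_{\rm g}^{2}=Nb^{2}/6$, one can check that (48) and (49) coincide and that the approximation (47) is accurate to better than one percent for $p\ll N$:

```python
import numpy as np

N, b, zeta, kT = 100, 1.0, 1.0, 1.0
kappa = 3 * kT / b**2                             # assumed Gaussian spring constant
L = N * b

p = np.arange(1, N)
lam = 4 * np.sin(p * np.pi / (2 * N))**2          # eigenvalues (40)
tau = zeta / (kappa * lam)                        # exact mode relaxation times

tau_R = L**2 * zeta / (3 * np.pi**2 * kT)         # Rouse time (48)
Rg2 = N * b**2 / 6                                # assumed gyration radius squared
D_R = kT / (zeta * N)                             # Rouse diffusion coefficient (45)
assert np.isclose(tau_R, 2 * Rg2 / (np.pi**2 * D_R))   # alternative form (49)

for pp in range(1, 6):
    rel_err = abs(tau[pp - 1] - tau_R / pp**2) / tau[pp - 1]
    assert rel_err < 0.01                         # approximation (47) for p << N
print("tau_1 =", tau[0], " tau_R =", tau_R)
```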
### 4.4 Macroscopic consequences: dielectric and shear response
Fig. 8: (a) Cis-polyisoprene consists of monomers that possess a dipole moment
in direction of the polymeric bonds. (b) Peak frequencies $\omega$ of
dielectric loss spectra in cis-polyisoprene samples of different molecular
weight $M$. The normal mode resonance depends strongly on $M$, whereas the
$\alpha$ relaxation does not. (c) Normal mode relaxation times
$\tau=\omega^{-1}$ versus $M$, showing a crossover from the Rouse regime
$\tau_{\rm R}\sim M^{2}$ to a stronger $M$ dependence above the entanglement
onset $M_{c}$. Data from [17].
In favorable cases, Rouse relaxation can be observed by dielectric spectroscopy. In the simplest case, the monomers possess a dipole moment $\mu$ in the direction of the polymer bond. This requires the absence of a perpendicular mirror plane, which is the case e.g. for cis-polyisoprene (Fig. 8a). Then the overall dipole moment of the macromolecule is
$\mbox{\boldmath$\mu$}=\sum_{n=2}^{N}(\mbox{\boldmath$r$}_{n}-\mbox{\boldmath$r$}_{n-1})\mu=(\mbox{\boldmath$r$}_{N}-\mbox{\boldmath$r$}_{1})\mu.$ (50)
From (41) we infer that only eigenmodes with odd $p$ contribute to (50). For
$t\gtrsim\tau_{\rm R}$ pair correlations are dominated by the $p=1$ mode:
$\langle\mbox{\boldmath$\mu$}(0)\mbox{\boldmath$\mu$}(t)\rangle\propto\langle\mbox{\boldmath$\tilde{r}$}_{1}(0)\mbox{\boldmath$\tilde{r}$}_{1}(t)\rangle\propto{\rm
e}^{-t/\tau_{\rm R}}.$ (51)
According to the fluctuation-dissipation theorem (73), this correlation is
proportional to a linear response function. After Fourier transform, one finds
that the dielectric permittivity $\epsilon(\omega)$ has a Debye resonance
around $\omega\sim\tau_{\rm R}^{-1}$.
Fig. 8b,c shows results of dielectric spectroscopy in cis-polyisoprene melts
with different extended chain lengths $L=Nb$. In experimental reports the
chain length is of course expressed as molecular weight $M$. The $\alpha$
relaxation peak, discussed above in Sect. 3.2, is perfectly independent of
$M$, which confirms that it is due to intrasegmental motion. In contrast, the normal mode relaxation time depends strongly on $M$. For low $M$, the Rouse prediction $\tau_{\rm R}\sim M^{2}$ is confirmed. However, at $M_{c}\simeq 10$ kDa, there is a rather sharp crossover to a steeper power law $M^{3.7}$ that is ascribed to entanglement effects.
Mechanical spectroscopy has the advantage that it also works if monomers are too symmetric for dielectric measurements. As already mentioned in Sect. 3.2, torsional spectroscopy probes a system’s response to shear. Quite often this response is nonlinear, due to non-Newtonian flow phenomena that are beyond the scope of the present lecture. As long as the response is linear, it can be described by the frequency-dependent shear modulus $G(\omega)$. Its high-frequency limit $G_{\infty}$ quantifies the shear stress needed to cause a given strain. In a liquid, the low-frequency limit $G_{0}$ is zero: stationary shear is not able to build up a lasting stress. Sometimes this is seen as the defining property of the liquid state. Instead of a static strain, a flow gradient is needed to maintain stress. The proportionality coefficient is the shear viscosity $\eta$, which is the low-frequency limit of $G(\omega)/(i\omega)$, as anticipated in (20).
To calculate $G(\omega)$ in the frame of Rouse theory, it is convenient to
invoke a Green-Kubo relation (a variant of the fluctuation-dissipation
theorem) according to which $G(\omega)$ is proportional to the Fourier
transform of a stress autocorrelation function
$\langle\sigma_{xz}(t)\sigma_{xz}(0)\rangle$. (The following is no more than a speculative summary of obscure calculations in [14] and [15].) The stress component $\sigma_{xz}$ can be expressed through the displacements of
individual beads. For weak elongations, the autocorrelation can be factorized
so that $G(t)$ is proportional to the square of a normalized one-dimensional
displacement autocorrelation function
$\sum_{n}\frac{\langle\delta x_{n}(t)\delta x_{n}(0)\rangle}{\langle\delta
x_{n}^{2}\rangle}=\sum_{p\geq
1}\frac{\langle\tilde{x}_{p}(t)\tilde{x}_{p}(0)\rangle}{\langle\tilde{x}_{p}^{2}\rangle}.$
(52)
Using (46) to compute the normal mode autocorrelation we find
$G(t)\sim\sum_{p}{\rm e}^{-2t/\tau_{p}}.$ (53)
With $\tau_{p}\propto p^{-2}$ and replacing the sum by an integral we obtain
the approximation
$G(t)\sim\sum_{p\geq 1}{\rm e}^{-2p^{2}t/\tau_{\rm
R}}\sim\int_{0}^{\infty}\\!{\rm d}p\,{\rm e}^{-2p^{2}t/\tau_{\rm
R}}\sim{\left(\frac{\tau_{\rm R}}{t}\right)}^{1/2}.$ (54)
Fourier transform yields the power law
$G(\omega)\sim\omega^{1/2}.$ (55)
For small $t$, many eigenmodes contribute to (53), so that neither the low-$p$ expansion (47) nor the extension of the summation to $\infty$ is justified. Therefore $G(\omega)$ must cross over from (55) to a constant high-frequency limit $G_{\infty}$.
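How good is the replacement of the sum by an integral? A quick numerical check (my own, in units $\tau_{\rm R}=1$) compares the mode sum in (54) with its continuum limit:

```python
import numpy as np

tau_R = 1.0
p = np.arange(1, 100001)          # enough modes to saturate the sum at small t

def G_sum(t):
    """Mode sum in (54), using the approximation tau_p = tau_R / p**2."""
    return np.sum(np.exp(-2 * p**2 * t / tau_R))

def G_integral(t):
    """Continuum limit: integral_0^infinity dp exp(-2 p^2 t / tau_R)."""
    return 0.5 * np.sqrt(np.pi * tau_R / (2 * t))

for t in (1e-4, 1e-3, 1e-2):
    print(f"t/tau_R = {t:g}: sum/integral = {G_sum(t) / G_integral(t):.4f}")
```

The ratio approaches one as $t/\tau_{\rm R}\to 0$ and deteriorates as $t$ grows, in line with the caveats discussed in the text.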
Fig. 9: Real part $G^{\prime}(\omega)$ of the shear modulus of polystyrene
samples with different narrow molecular weight distributions. Experiments were
done in a frequency range from $10^{-1.5}$ to $10^{0.5}$ s$^{-1}$ and in a temperature range from 120 to 260 °C. Then, time-temperature superposition was
used to construct the master curves shown here. Data from [18].
For large $t$, only a few eigenmodes contribute to (53), so that the passage to a continuous $p$ becomes invalid. In this case, it is more appropriate to
compute the Fourier transform term by term, which yields a sum of Maxwell-
Debye resonances
$G(\omega)\sim\sum_{p}\frac{1}{1-i\omega\tau_{p}}.$ (56)
In the low-frequency limit we obtain a constant term of doubtful significance
plus a linear term proportional to the viscosity
$\eta\sim\sum\tau_{p}\sim\tau_{\rm R}.$ (57)
From (48), we have $\tau_{\rm R}\sim N^{2}$, but there is a prefactor $N^{-1}$
in $G(\omega)$ omitted in our sloppy derivation so that finally the Rouse
model predicts $\eta\sim N$.
In Fig. 9 experimental data are shown. For moderate chain lengths, $G^{\prime}(\omega)$ is indeed found to cross over from $\omega^{2}$ (the lowest non-constant real term in the expansion of (56)) to $\omega^{1/2}$. The
ultimate limit $G_{\infty}$ has not been reached in this experiment. For
longer polymer chains, a flat plateau appears between the liquid-like
$\omega^{2}$ and the Rouse regime $\omega^{1/2}$. Such a constant value of
$G(\omega)$ implies instantaneous, memory-free response, which is
characteristic of rubber elasticity; it is caused by entanglement.
### 4.5 Microscopic verification: neutron spin echo
For direct, microscopic measurement of chain conformation fluctuations one
must access length and time scales of the order of nm and ns. The most
powerful instrument in this domain is the neutron spin echo spectrometer. The
recent book [19] provides a comprehensive review of spin echo studies on
polymer dynamics.
Usually, spin echo experiments require deuterated samples to avoid the
otherwise dominant incoherent scattering from protons. The experiments then
yield the coherent dynamic structure factor $S(q,t)$. However, for a simple,
intuitive data analysis the incoherent scattering function $S_{\rm i}(q,t)$ is
preferable. It is also denoted $S_{\rm self}(q,t)$ since it reveals the self
correlation of a tagged particle. In Gaussian approximation
$S_{\rm
i}(q,t)=\langle\exp\left(iq(\mbox{\boldmath$r$}(t)-\mbox{\boldmath$r$}(0))\right)\rangle\simeq\exp\left(-q^{2}\langle
r^{2}(t)\rangle/6\right),$ (58)
it yields the mean squared displacement (10).
Measuring $S_{\rm i}(q,t)$ by neutron spin echo is difficult because the
random spin flips associated with incoherent scattering destroy 2/3 of the
incoming polarization. Nevertheless, thanks to progress in instrumentation, it
is nowadays possible to obtain self correlation functions of decent quality from undeuterated (“protonated”) samples. Alternatively, self correlations can
be measured with the data quality of coherent scattering if short protonated
sequences are intercalated at random in deuterated chains.
Within the Rouse model, and neglecting ballistic short-time terms, the mean
squared displacement is given by
$\langle
r^{2}(t)\rangle=\frac{6}{N}\sum_{p=0}^{N-1}\left[\langle\mbox{\boldmath$\tilde{x}$}_{p}^{2}\rangle-\langle\mbox{\boldmath$\tilde{x}$}_{p}(t)\mbox{\boldmath$\tilde{x}$}_{p}(0)\rangle\right]=6D_{\rm
R}\left\\{t+\sum_{p=1}^{N-1}\tau_{p}\,\left[1-{\rm
e}^{-t/\tau_{p}}\right]\right\\}.$ (59)
With the same technique as in (54), one obtains the approximation
$\langle r^{2}(t)\rangle\simeq 6D_{\rm R}\left\\{t+{\left(\pi\tau_{\rm
R}t\right)}^{1/2}\right\\}.$ (60)
At about $t\sim\tau_{\rm R}$, there is a cross-over from a $t^{1/2}$ regime
dominated by conformational fluctuations to the $t^{1}$ diffusion limit.
Inserting the asymptotic $\langle r^{2}(t)\rangle\sim t^{1/2}$ into (58) one
obtains an expression that agrees with the Kohlrausch function (19) with a
stretching exponent $\beta=1/2$. This indicates that the high-$p$ limit of the
Rouse modes is more physical than might have been expected; it seems to
capture even some aspects of segmental $\alpha$ relaxation. Be that as it may,
the $t^{1/2}$ prediction has been impressively confirmed in neutron scattering
experiments.
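The crossover can be made explicit by evaluating the exact mode sum (59) numerically and comparing with (60). The sketch below (mine) uses illustrative units $\zeta=\kappa=k_{\rm B}T=1$, so that $\tau_{p}=1/\lambda_{p}$ and $\tau_{\rm R}=N^{2}/\pi^{2}$:

```python
import numpy as np

N = 200
lam = 4 * np.sin(np.arange(1, N) * np.pi / (2 * N))**2   # eigenvalues (40)
tau = 1.0 / lam                   # tau_p in units zeta = kappa = 1
tau_R = N**2 / np.pi**2           # Rouse time in the same units
D_R = 1.0 / N                     # diffusion coefficient (45), kT = zeta = 1

def msd_exact(t):
    """Exact mean squared displacement (59)."""
    return 6 * D_R * (t + np.sum(tau * (1 - np.exp(-t / tau))))

def msd_approx(t):
    """Approximation (60)."""
    return 6 * D_R * (t + np.sqrt(np.pi * tau_R * t))

for t in (tau_R / 100, tau_R, 100 * tau_R):
    print(f"t/tau_R = {t/tau_R:g}: exact/approx = {msd_exact(t)/msd_approx(t):.3f}")
```

Well below $\tau_{\rm R}$ the two expressions agree closely; well above $\tau_{\rm R}$ the mode sum saturates and only the diffusive $t^{1}$ term survives.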
Fig. 10: Single-chain coherent normalized dynamic structure factor of
polyethylene melts at 509 K. Lines are predictions of the Rouse model. They
fit for short chains (left), but not for long chains (right); note the
different time scales. Data points and lines from [19].
The coherent dynamic structure factor is more involved than (58). In general,
it contains contributions from interchain as well as from intrachain
correlations. Single-chain dynamics can be isolated by contrast variation,
using a mixture of about 10% protonated and 90% deuterated polymer. Fig. 10
shows the single-chain dynamic structure factor of two polyethylene melts with
different chain lengths [19]. For short chains, the data are in perfect
agreement with the Rouse model. For long chains, however, correlations decay
much slower than predicted by the Rouse model. This is yet another indication
of entanglement.
## 5 Entanglement and Reptation
Entanglement means that the conformational dynamics of a chain is hindered by the presence of other chains. Entanglement is a topological constraint, due to the
simple fact that chains cannot cross each other (Fig. 11). We have already
encountered experimental results that provide clear evidence for the relevance
of entanglement for polymer chains that exceed a certain size $N_{\rm c}$:
* •
The dielectric relaxation time crosses over from the Rouse behavior
$\tau\propto N^{2}$ (48) to a steeper slope $\tau\propto N^{3.7}$ (Fig. 8c).
* •
In the shear modulus $G(\omega)$, there appears a plateau
$G(\omega)=\mbox{const}$ with rubber-like elasticity between the liquid limit
$G(\omega)\simeq i\eta\omega$ and the Rouse regime $G(\omega)\sim\omega^{1/2}$
(Fig. 9).
* •
Neutron scattering shows that correlations within long chains decay much
slower than predicted by the Rouse model (Fig. 10).
The rather sharp crossover at $N_{\rm c}$ implies that entanglement becomes
relevant if the coil radius $N^{1/2}b$ (up to a constant factor, depending on
definition) exceeds a certain value
$a:=N_{\rm c}^{1/2}b.$ (61)
Up to this length scale, the chains are heavily coiled with little mutual
penetration (Fig. 11a). On larger scales, the coarse-grained polymer chains
have the character of heavily entangled tubes (Fig. 11b). Each tube can be
modelled as an ideal random chain, consisting of beads of size $a$.
Fig. 11: Entanglement. (a) Local detail, with relatively few chain crossings.
(b) Coarse grained representation, showing heavy entanglement. Each
monochrome, tube-like region is filled with coiled subunits of the polymer.
For times between the entanglement time $t_{\rm e}$ and the disentanglement
time $t_{\rm d}$, polymer motion is confined to these tubes.
An entanglement time $t_{\rm e}$ can be defined by $\langle r^{2}(t_{\rm
e})\rangle\simeq a^{2}$. Using (60) in the limit $t\ll\tau_{\rm R}$, we find
up to numeric prefactors
$t_{\rm e}\sim\frac{L_{\rm c}^{2}}{D}$ (62)
with the critical extended chain length $L_{\rm c}=N_{\rm c}b$. For times
beyond $t_{\rm e}$, the dynamics is qualitatively different from the free
chain Rouse regime. Since a chain is basically confined to a tube, it can only
perform a one-dimensional, snake-like motion, called reptation (de Gennes
1971).
For a short outline of some scaling results, we concentrate on the mean
squared displacement. We will see that there are altogether no less than five
different regimes. They are summarized in Fig. 12.
The one-dimensional dynamics within a tube shall be described by the Rouse
model as before. Let $s$ be a coordinate along the tube. The mean squared
displacement in $s$ is just one third of the Rouse result (60) in $r$. Since
the tubes are ideal Gaussian random coils, an extended tube length
$s^{2}=N_{s}^{2}a^{2}$ corresponds to a squared real-space displacement of
$r^{2}=N_{s}a^{2}=as$. In the reptation regime $t_{\rm e}\ll t\ll\tau_{\rm R}$ we obtain, omitting prefactors,
$\langle r^{2}(t)\rangle\sim aD_{\rm R}^{1/2}{(\tau_{\rm R}t)}^{1/4}.$ (63)
This $t^{1/4}$ law is a key prediction of reptation theory; it has been
confirmed by neutron spin echo measurements [19].
For times beyond $\tau_{\rm R}$, the one-dimensional dynamics crosses over from intrachain Rouse fluctuations to center-of-mass diffusion. Accordingly, the
real-space mean squared displacement takes the form
$\langle r^{2}(t)\rangle\sim aD_{\rm R}^{1/2}t^{1/2}.$ (64)
This holds until the chain escapes from its tube, which happens when $s^{2}\sim N^{2}b^{2}$ or $r^{2}\sim aD_{\rm R}^{1/2}t_{\rm d}^{1/2}$. Using again $r^{2}=as$, we obtain the disentanglement time
$t_{\rm d}\sim\frac{N^{3}b^{2}}{D}.$ (65)
Finally, on time scales above $t_{\rm d}$, the chain, having diffused out of
its original tube, is free to try new conformations in three dimensions. This
is a center-of-mass random walk, described by
$\langle r^{2}(t)\rangle\simeq 6D_{\rm d}t.$ (66)
Matching (66) with (64) at $t_{\rm d}$, we obtain the disentangled diffusion
coefficient
$D_{\rm d}\sim\frac{aD}{bN^{2}}.$ (67)
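For reference, the successive power laws of Fig. 12b can be collected in a small lookup function (my own summary of the scaling picture; all prefactors omitted, and the time scales are illustrative):

```python
def msd_exponent(t, t_e, tau_R, t_d):
    """Predicted power-law exponent alpha in <r^2> ~ t^alpha for an
    entangled chain in the reptation picture, by time regime
    (ballistic short-time motion not included)."""
    if t < t_e:        # free Rouse fluctuations, Eq. (60)
        return 0.5
    elif t < tau_R:    # reptation inside the tube, Eq. (63)
        return 0.25
    elif t < t_d:      # tube-confined center-of-mass diffusion, Eq. (64)
        return 0.5
    else:              # free center-of-mass diffusion, Eq. (66)
        return 1.0

# illustrative, well separated time scales
t_e, tau_R, t_d = 1.0, 1e3, 1e6
for t in (1e-1, 1e1, 1e4, 1e8):
    print(f"t = {t:g}: <r^2> ~ t^{msd_exponent(t, t_e, tau_R, t_d)}")
```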
Fig. 12: Time evolution of the mean squared displacement (10) on a double
logarithmic scale. (a) For short chains, as predicted by Rouse theory. (b) For
long chains, as predicted by de Gennes’ reptation theory.
The scaling laws $t_{\rm d}\sim N^{3}$ and $D_{\rm d}\sim N^{-2}$ are
important predictions. The disentanglement time $t_{\rm d}$ determines the
relaxation time $\tau$ observed in dielectric or mechanical spectroscopy.
Empirically, the molecular mass dependence of $\tau$ is even stronger than $N^{3}$. Typical exponents are 3.2 to 3.6; in Fig. 8c it was even 3.7. This
discrepancy shows that one-dimensional diffusion in fixed tubes is not the
full story. It is necessary to take into account fluctuations of the
neighbouring chains (contour length fluctuations). On the other hand, the
prediction $D_{\rm d}\sim N^{-2}$ has been confirmed by quite direct,
spatially resolved diffusion measurements [15].
## Appendices
## Appendix A Linear response theory: relaxation, dissipation, fluctuation
If a multi-particle system is exposed to a weak perturbation $A$, its response
$B$ is linear in $A$, as far as amplitudes are concerned. However, the
response may be delayed in time, assuming the character of relaxation.
Relaxation may be probed in time or in frequency, by spectroscopy (response to
external perturbation) or by scattering methods (fluctuations in equilibrium).
The relations between these probes are the subject of linear response theory,
to be briefly summarized in this appendix.
The linear response $B(t)$ to a perturbation $A(t)$ can be written as
$B(t)=\int_{-\infty}^{t}\\!{\rm d}t^{\prime}\,R(t-t^{\prime})\,A(t^{\prime}).$
(68)
Consider first the momentary perturbation $A(t)=\delta(t)$. The response is
$B(t)=R(t)$. Therefore, the memory kernel $R$ is identified as the response
function.
Consider next a perturbation $A(t)={\rm e}^{\eta t}\Theta(-t)$ that is slowly switched on and suddenly switched off ($\Theta$ is the Heaviside step function, $\eta$ is sent to $0^{+}$ at the end of the calculation). For $t>0$,
one obtains $B(t)=\Phi(t)$ where $\Phi$ is the negative primitive of the
response function,
$R(t)=-\partial_{t}\Phi(t).$ (69)
Since $\Phi$ describes the time evolution after an external perturbation has
been switched off, it is called the relaxation function. In the special case
of exponential (Lorentzian) relaxation, $R$ and $\Phi$ are equal (up to a
constant factor), which is a frequent source of confusion.
Consider finally a periodic perturbation that is switched on adiabatically,
$A(t)=\exp(-i\omega t+\eta t)$, implying again the limit $\eta\to 0^{+}$. The
response can be written $B(t)=\chi(\omega)A(t)$, introducing a dynamic
susceptibility
$\chi(\omega):=\int_{0}^{\infty}\\!{\rm d}t\,{\rm
e}^{i(\omega+i\eta)t}\,R(t).$ (70)
This motivates the definition
$F(\omega):=\int_{0}^{\infty}\\!{\rm d}t\,{\rm e}^{i\omega t}\,\Phi(t)$ (71)
of the one-sided Fourier transform $F(\omega)$ of the relaxation function $\Phi(t)$. Because of (69), there is a simple relation between $\chi$ and $F$:
$\chi(\omega)=\Phi(0)+i\omega F(\omega).$ (72)
In consequence, the imaginary part of the susceptibility, which typically
describes the loss peak in a spectroscopic experiment, is given by the real
part of the Fourier transform of the relaxation function,
$\mbox{Im~{}}\chi=\omega\mbox{Re~{}}F(\omega)$. Conversely, dispersion is
described by $\mbox{Re~{}}\chi=\Phi(0)-\omega\mbox{Im~{}}F(\omega)$.
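For exponential relaxation all these quantities are available in closed form, which makes a handy numerical consistency check of (70)-(72) (my own sketch, with $\Phi(t)={\rm e}^{-t/\tau}$ and a trapezoidal quadrature standing in for the $\eta\to 0^{+}$ limit):

```python
import numpy as np

tau = 2.0
t = np.linspace(0.0, 200.0, 400001)   # t_max >> tau, fine grid
Phi = np.exp(-t / tau)                # relaxation function
R = Phi / tau                         # response function, R = -dPhi/dt per (69)

def one_sided_ft(f, t, omega):
    """Trapezoidal approximation of int_0^infinity dt e^{i omega t} f(t)."""
    g = np.exp(1j * omega * t) * f
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))

for omega in (0.1, 1.0, 10.0):
    chi = one_sided_ft(R, t, omega)   # dynamic susceptibility (70)
    F = one_sided_ft(Phi, t, omega)   # one-sided transform (71)
    assert np.isclose(chi, 1 / (1 - 1j * omega * tau), atol=1e-3)  # Debye form
    assert np.isclose(chi, Phi[0] + 1j * omega * F, atol=1e-3)     # relation (72)
print("relations (70)-(72) verified for exponential relaxation")
```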
Up to this point, the only physical input has been Eq. (68). To make a
connection with correlation functions, more substantial input is needed. Using
the full apparatus of statistical mechanics (Poisson brackets, Liouville
equation, Boltzmann distribution, Yvon’s theorem), it is found [20] that for
classical systems
$\langle A(t)B(0)\rangle=k_{\rm B}T\Phi(t).$ (73)
This is an expression of the fluctuation-dissipation theorem (Nyquist 1928; Callen and Welton 1951): the left side describes fluctuations in equilibrium; the
right side relaxation towards equilibrium, which is inevitably accompanied by
dissipation (loss peak in $\mbox{Im~{}}\chi$).
Pair correlation functions are typically measured in scattering experiments.
For instance, inelastic neutron scattering at wavenumber $q$ measures the
scattering law $S(q,\omega)$, which is the Fourier transform of the density
correlation function,
$S(q,\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\\!{\rm d}t\,{\rm
e}^{i\omega t}\langle\rho(q,t)^{*}\rho(q,0)\rangle.$ (74)
In contrast to (71) and (70), this is a normal, two-sided Fourier transform.
In consequence, if we let $\langle\rho(q,t)^{*}\rho(q,0)\rangle=\Phi(t)$, then
the scattering law $S(q,\omega)$ is proportional to the real part
$\mbox{Re~{}}F(\omega)$ of the one-sided Fourier transform of $\Phi(t)$.
## Appendix B Debye’s theory of dielectric relaxation
In modern terms, Debye’s theory of dipolar relaxation is based on a
Smoluchowski equation that describes the time evolution of the probability
distribution $f(\vartheta,\phi,t)$ of dipole orientations $\vartheta,\phi$
($0\leq\vartheta\leq\pi$, $0\leq\phi<2\pi$):
$\zeta\partial_{t}f=\nabla(\beta^{-1}\nabla f+f\nabla U)$ (75)
where $\zeta$ is a friction coefficient, $U$ is an external potential, and
$\beta\equiv 1/(k_{\rm B}T)$. Inserting spherical coordinates, ignoring
$\phi$, and keeping $r$ constant, we obtain
$\zeta^{\prime}\partial_{t}f=(\sin\vartheta)^{-1}\partial_{\vartheta}\sin\vartheta(\beta^{-1}\partial_{\vartheta}f+f\partial_{\vartheta}U)$
(76)
with $\zeta^{\prime}=r^{2}\zeta$. An electric field in $z$ direction causes a
potential
$U(t)=-\mu E(t)\cos\vartheta$ (77)
that is proportional to the dipole moment $\mu$.
With the ansatz
$f(\vartheta,t)=1+\beta g(t)\cos\vartheta,$ (78)
and introducing the relaxation time $\tau:=\beta\zeta^{\prime}/2$, Eq. (76)
simplifies to
$\tau\partial_{t}g(t)=-g(t)+\mu E(t)+{\cal O}(g\beta\mu E).$ (79)
Under realistic experimental conditions, we always have $\mu E\ll\beta^{-1}$
so that the last term is negligible. The remaining linear differential
equation shall be rewritten for a macroscopic observable, the polarization
$P(t)=\int\\!\frac{{\rm d}\Omega}{4\pi}\,\mu\cos\vartheta
f(\vartheta,t)=\frac{\mu\beta g(t)}{3}.$ (80)
We obtain
$(1+\tau\partial_{t})P(t)=\frac{\mu^{2}\beta}{3}E(t).$ (81)
In the simplest time-dependent experiment, the electric field is adiabatically switched on, then suddenly switched off at $t=0$. The polarization then relaxes exponentially, $P(t)\propto\exp(-t/\tau)$. In a frequency-dependent experiment, a periodic perturbation $E(t)\propto\exp(-i\omega t)$ is applied, as in Appendix A. This yields the susceptibility
$\chi_{\rm dipolar}(\omega)=\frac{P(\omega)}{E(\omega)}=\frac{\mu^{2}\beta/3}{1-i\omega\tau}.$ (82)
The relative electric permittivity is then
$\epsilon(\omega)=1+\chi_{\rm dipolar}(\omega)+\chi_{\rm other}(\omega)$ (83)
where the “other” contribution comes mainly from the electronic
polarizability. The dashed lines in Fig. 2 show the dispersion step in the
real part $\epsilon^{\prime}(\omega)$ and the dissipation maximum in the
imaginary part $\epsilon^{\prime\prime}(\omega)$.
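A minimal numerical illustration (mine; arbitrary units, with the static amplitude $\mu^{2}\beta/3$ set to one) integrates (81) through the switch-off experiment and locates the loss peak of (82):

```python
import numpy as np

tau, c = 1.0, 1.0          # relaxation time; c stands for mu^2 beta / 3
dt, nsteps = 1e-4, 10000

# switch-off experiment: start from the stationary state P = c E with E = 1,
# then set E = 0 at t = 0 and integrate (1 + tau d/dt) P = c E by Euler steps
P = c
trace = [P]
for _ in range(nsteps):
    P += (dt / tau) * (-P)             # E = 0 for t > 0
    trace.append(P)
t_end = dt * nsteps
assert np.isclose(trace[-1], c * np.exp(-t_end / tau), rtol=1e-2)

# susceptibility (82): dissipation maximum of Im chi at omega tau = 1
omega = np.logspace(-2, 2, 801)
chi = c / (1 - 1j * omega * tau)
assert np.isclose(omega[np.argmax(chi.imag)] * tau, 1.0, rtol=0.02)
print("Debye relaxation: exponential decay, loss peak at omega tau = 1")
```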
## Appendix C Zimm’s theory of chain dynamics in solution
Starting with equation (34), the entire Rouse model is based on the assumption that the chain conformation is driven by entropy only. This is a good approximation for melts, but generally not for solutions, except in the $\Theta$ condition. To account for the swelling of a polymer in solution, one needs at least to model the mutual steric exclusion of different chain segments. The simplest approximation for this excluded volume interaction is the repulsive potential
the repulsive potential
$U_{\rm ex}\\{\mbox{\boldmath$r$}\\}=k_{\rm B}Tv_{\rm ex}\sum_{n\neq
m}\delta(\mbox{\boldmath$r$}_{n}-\mbox{\boldmath$r$}_{m}).$ (84)
As described in Sect. 4.1, its effect upon the equilibrium structure is
limited to the modification of the exponent $\nu$ in the scaling laws (31),
(32) for the coil radius.
For the dynamics, another modification of the Rouse model is even more
important: one has to include the hydrodynamic interaction between the polymer
and the solvent. The motion of a polymer bead drags the surrounding solvent
with it, thereby creating a flow pattern, which in turn exerts a force upon
other beads. If inertia is neglected, the friction term assumed in the Rouse
model implies $\mbox{\boldmath$v$}=\zeta^{-1}\mbox{\boldmath$F$}$. To account
for hydrodynamic interactions, this equation must be replaced by
$\mbox{\boldmath$v$}_{n}=\sum_{m}\mbox{\boldmath$H$}_{nm}\mbox{\boldmath$F$}_{m}.$
(85)
To estimate the coupling coefficients $\mbox{\boldmath$H$}_{nm}$, one usually
refers to a simple case for which the hydrodynamic interaction can be obtained
from first principles: for a point particle, located at
$\mbox{\boldmath$r$}_{1}(t)$ and dragged by a force $\mbox{\boldmath$F$}_{1}$,
one can solve the Navier-Stokes equations to obtain the flow field
$\mbox{\boldmath$v$}(r)=\mbox{\boldmath$H$}(\mbox{\boldmath$r$}-\mbox{\boldmath$r$}_{1})\mbox{\boldmath$F$}_{1}$
with the Oseen tensor (I am unable to trace this result back to C. W. Oseen, whose 1910 papers in Ark. Mat. Astr. Fys. are frankly unreadable; Zimm (1956) takes the tensor from Kirkwood and Riseman (1948), who cite a report by Burgers (1938) that is not easily available)
$\mbox{\boldmath$H$}(\mbox{\boldmath$r$})=\frac{1}{8\pi\eta
r}\left(\mbox{\boldmath$1$}+\mbox{\boldmath$\hat{r}$}\otimes\mbox{\boldmath$\hat{r}$}\right),$
(86)
which is then used to approximate
$\mbox{\boldmath$H$}_{nm}\simeq\left\\{\begin{array}[]{ll}\zeta^{-1}\mbox{\boldmath$1$}&\mbox{
for }n=m,\\\
\mbox{\boldmath$H$}(\mbox{\boldmath$r$}_{n}-\mbox{\boldmath$r$}_{m})&\mbox{
else.}\end{array}\right.$ (87)
Unfortunately, the $r$ dependence of $H$ makes the modified Langevin equation
nonlinear. This obstacle is overcome in the Zimm theory (1956) by a
preaveraging step that is basically a mean field approximation:
$\mbox{\boldmath$H$}_{nm}$ is replaced by its average under the equilibrium
distribution $P\\{\mbox{\boldmath$r$}\\}$. In the $\Theta$ condition, one
obtains for $n\neq m$
$\langle\mbox{\boldmath$H$}_{nm}\rangle\simeq\frac{\mbox{\boldmath$1$}}{{(6\pi^{3}|n-m|)}^{1/2}\eta
b},$ (88)
which is a rather long-ranged interaction. In other conditions, a modified
distribution $P\\{\mbox{\boldmath$r$}\\}$ might be used, leading to a modified
power law $|n-m|^{-\nu}$.
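The preaverage (88) amounts to $\langle 1/r\rangle\,\mbox{\boldmath$1$}/(6\pi\eta)$ for a Gaussian separation with $\langle r^{2}\rangle=|n-m|b^{2}$, since isotropy gives $\langle\mbox{\boldmath$\hat{r}$}\otimes\mbox{\boldmath$\hat{r}$}/r\rangle=\langle 1/r\rangle\mbox{\boldmath$1$}/3$. A seeded Monte Carlo check of the averaged Oseen tensor (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
eta, b = 1.0, 1.0
s = 25                                  # contour separation |n - m|
sigma = np.sqrt(s * b**2 / 3)           # per-component std, so <r^2> = s b^2

r = rng.normal(0.0, sigma, size=(400000, 3))
rn = np.linalg.norm(r, axis=1)
rhat = r / rn[:, None]
# Oseen tensor (86), averaged over the Gaussian separation
H = np.mean((np.eye(3) + np.einsum('ki,kj->kij', rhat, rhat))
            / (8 * np.pi * eta * rn)[:, None, None], axis=0)

H_theory = np.eye(3) / (np.sqrt(6 * np.pi**3 * s) * eta * b)   # preaverage (88)
assert np.allclose(H, H_theory, rtol=0.02, atol=1e-4)
print(np.diag(H), "vs", H_theory[0, 0])
```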
The preaveraging linearization makes it possible to rewrite the one-dimensional Rouse mode Langevin equation (44) with hydrodynamic interaction as
$\partial_{t}\underline{\tilde{x}}=\underline{\underline{\tilde{H}}}\left(-m\partial_{t}^{2}\underline{\tilde{x}}-\kappa\,\underline{\underline{\Lambda}}\,\underline{\tilde{x}}+\underline{\tilde{f}}\right)$
(89)
with the diagonal matrix $\Lambda_{pq}=\delta_{pq}\lambda_{p}$ and with
$\underline{\underline{\tilde{H}}}=\underline{\underline{A}}^{\rm
T}\,\underline{\underline{H}}\,\underline{\underline{A}}$, which in the
$\Theta$ condition is in good approximation
$\tilde{H}_{pq}\simeq\left\\{\begin{array}[]{ll}0&\mbox{ for }p\neq
q,\\\\[4.73611pt] \displaystyle\frac{8N^{1/2}}{{3(6\pi^{3})}^{1/2}\eta
b}&\mbox{ for }p=q=0,\\\\[14.63881pt]
\displaystyle\frac{N^{1/2}}{{(3\pi^{3}p)}^{1/2}\eta b}&\mbox{
else.}\end{array}\right.$ (90)
For the $p=0$ eigenmode, we find again Brownian motion, with the Zimm diffusion constant
$D_{\rm Z}=\frac{k_{\rm B}T\tilde{H}_{00}}{N}\propto N^{-1/2},$ (91)
which differs from $D_{\rm R}\sim N^{-1}$ in the Rouse model. Allowing for
excluded volume interaction in a good solvent, the aforementioned
generalisation of (88) leads to $D_{\rm Z}\sim N^{-\nu}$. Comparing with (32),
we find that the diffusion constant is in both cases determined by the coil
radius: $D_{\rm Z}\sim R^{-1}$. This result is routinely used in photon
correlation spectroscopy (somewhat misleadingly also called dynamic light
scattering), where the diffusion coefficient of dilute macromolecules is
measured in order to determine their gyration radius.
For nonzero eigenmodes, the Rouse time is replaced in the $\Theta$ condition
by
$\tau_{\rm Z}:=\frac{\eta b^{3}N^{3/2}}{(3\pi)^{1/2}k_{\rm B}T},$ (92)
and the mode relaxation times become
$\tau_{p}\simeq\frac{\tau_{\rm Z}}{p^{3/2}}.$ (93)
Again, the $N$ dependence can be generalized towards a dependence on the coil
radius, $\tau_{\rm Z}\sim R^{3}$.
## References
* [1] B. Duplantier, Séminaire Poincaré 1, 155 (2005).
* [2] R. M. Mazo, Brownian Motion. Fluctuations, Dynamics, and Applications, Oxford University Press: Oxford (2002).
* [3] W. T. Coffey, Yu. P. Kalmykov, and J. T. Waldron, The Langevin Equation, 2nd ed., World Scientific: Singapore (2004).
* [4] L. D. Landau and E. M. Lifshitz, Course of Theoretical Physics. Vol 6. Fluid Mechanics. Translated from Russian, also available in other languages.
* [5] L. Leuzzi and T. M. Nieuwenhuizen, Thermodynamics of the Glassy State, Taylor & Francis: New York (2008).
* [6] D. Richter, B. Frick and B. Farago, Phys. Rev. Lett. 61, 2465 (1988).
* [7] J. Wuttke, Adv. Solid State Phys. (Festkörperprobleme) 40, 481 (2000).
* [8] C. Levelut, A. Faivre, J. Pelous, B. Johnson and D. Durand, 276–278, 431 (2000).
* [9] W. Götze, Complex Dynamics of Glass-Forming Liquids. A Mode-Coupling Theory, Oxford University Press: Oxford (2009).
* [10] K. Binder and W. Kob, Glassy Materials and Disordered Solids: An Introduction to their Statistical Mechanics, World Scientific: Singapore (2005).
* [11] T. Voigtmann, in Soft Matter. From Synthetic to Biological Materials, edited by J. K. G. Dhont et al. (Lecture Notes of the 39th Spring School), Forschungszentrum Jülich: Jülich (2008).
* [12] J. Wuttke, M. Kiebel, E. Bartsch, F. Fujara, W. Petry and H. Sillescu, Z. Phys. B 91, 357 (1993).
* [13] P.-G. de Gennes, Scaling Concepts in Polymer Physics, Cornell University Press: Ithaca (1979).
* [14] M. Doi and S. F. Edwards, The Theory of Polymer Dynamics, Clarendon: Oxford (1986).
* [15] G. Strobl, The Physics of Polymers, Springer: Berlin (1996).
* [16] J. D. Ferry, Viscoelastic Properties of Polymers, J. Wiley: New York (1961, 31980).
* [17] D. Boese and F. Kremer, Macromolecules 23, 829 (1990).
* [18] S. Onogi, T. Masuda and K. Kitagawa, Macromolecules 3, 109 (1970).
* [19] D. Richter, M. Monkenbusch, A. Arbe and J. Colmenero, Neutron Spin Echo in Polymer Systems (Adv. Polym. Sci., Vol. 174), Springer: Berlin (2005).
* [20] R. Kubo, Rep. Progr. Phys. 29, 255 (1966).
## Index
* amorphous solid §1, §3.1
* Arrhenius plot §3.3
* bakelite §1
* Brillouin-Mandelstam scattering §3.2
* Brownian motion §2
* bulk modulus §3.2
* cis-polyisoprene, see polyisoprene
* coherent neutron scattering §4.5
* Cole-Cole function §3.2
* Cole-Davidson function §3.2
* contour length fluctuation §5
* contrast variation §4.5
* correlation function Appendix A
* critical slowing down §3.4
* Debye relaxation Appendix B, §3.2
* dielectric spectroscopy §3.2, §3.3
* diffusion coefficient §2.1, §4.3, §5
* disentanglement time §5
* dispersion Appendix A, Appendix B
* dissipation Appendix A, Appendix B
* duroplast §1
* dynamic light scattering, see photon correlation spectroscopy
* dynamic phase diagram Fig. 4
* Einstein relation, see Sutherland-Einstein relation
* elastomer §1
* end-to-end distance §4.1
* entanglement §1, §4.4, §4.4, §5
* entanglement time §5
* epoxy resin §1
* ergodicity footnote 3, §3.4
* excluded volume Appendix C
* exponential relaxation Appendix A, Appendix B
* extended chain length §4.3, §5
* Fick’s law §2.2
* fluctuation-dissipation theorem Appendix A
* Fokker-Planck equation §2.2
* friction coefficient §2.1, §4.2
* Gaussian approximation §4.5
* glass §1, §3.1
* glass transition §1, §3, §3.3
* Green-Kubo relation §4.4
* gyration radius §4.1
* Havriliak-Negami function §3.2
* hydrodynamic interaction Appendix C, §4.2
* hydrodynamic radius §2.2
* hypersound §3.2
* incoherent neutron scattering §4.5
* Johari-Goldstein relaxation §3.3, §3.4
* Kohlrausch function, see stretched exponential function
* Kuhn model §4.1
* Langevin equation §2.1
* latex §1
* light scattering §3.2
* linear response Appendix A
* longitudinal modulus §3.2
* Lorentzian, see exponential relaxation
* master curve §3.2
* mean squared displacement §2.1, §4.3, §4.5, §5
* memory §2.1
* memory kernel §3.4
* mode-coupling theory §3.4
* neutron spin echo §3.2, §4.5
* non-Newtonian flow §4.4
* nucleation §3.1
* Oseen tensor Appendix C
* photon correlation spectroscopy Appendix C
* plastic §1
* plexiglas, see poly(methyl methacrylate)
* polybutadiene Fig. 3
* polycarbonate §1
* polyethylene §1, Fig. 10
* polyethylene terephthalate §1
* polyisobutylene §4.2
* polyisoprene §1, §4.4
* polymer dynamics §1
* polymer melt §1
* polymer radius §4.1
* polymer solution Appendix C, §1
* polymethyl acrylate §4.2
* poly(methyl methacrylate) §1
* polystyrene §1
* polyurethane Fig. 5
* random walk §2.1
* relaxation Appendix A
* relaxation map §3.3
* reptation §5
* resin §1
* Rouse mode §4.3
* Rouse model §4.2
* Rouse time Appendix C, §4.3
* rubber §1, §4.2, §4.4
* scaling §3.2, §3.4, §5
* secondary relaxation §3.3
* segmental dynamics §3
* self correlation §4.5
* semicrystalline §1
* shear modulus §3.2, §4.4
* shear viscosity §2.1, §3.2, §4.4, §4.4
* Smoluchowski equation §2.2
* Stokes law §2.1
* Stokes-Einstein relation §2.2, §3.4
* stretched exponential function §3.2, §3.4, §4.5
* stretching §3.2
* structural relaxation §3.1
* supercooled liquid §3.1
* susceptibility Appendix B
* Sutherland-Einstein relation §2.2
* swelling Appendix C
* $T_{\rm g}$ §1, §1, §3.4
* tagged particle §4.5
* thermoplastic §1
* thermosetting §1
* time-temperature superposition §3.2
* torsional spectroscopy §4.4
* tube model §5
* ultrasound §3.2
* viscosity, see shear viscosity
* vulcanization §1
* Williams-Watts function §3.2
* Zimm theory Appendix C, §4.2
* $\upalpha$ relaxation §3.3
* $\upbeta$ relaxation §3.3, §3.4
# Generators of simple Lie superalgebras in characteristic zero
Wende Liu1,2 and Liming Tang1,2††thanks: Supported by the NSF for
Distinguished Young Scholars, HLJ Province (JC201004) and the NSF of China
(10871057). Correspondence: wendeliu@ustc.edu.cn (W. Liu), limingaaa2@sina.com
(L. Tang).
1Department of Mathematics, Harbin Institute of Technology
Harbin 150006, China
2School of Mathematical Sciences, Harbin Normal University
Harbin 150025, China
> Abstract: It is shown that any finite dimensional simple Lie superalgebra
> over an algebraically closed field of characteristic 0 is generated by 2
> elements.
>
> Keywords: Classical Lie superalgebra; Cartan Lie superalgebra; generator
>
> Mathematics Subject Classification 2000: 17B05, 17B20, 17B70
## 0\. Introduction
Our principal aim is to determine the minimal number of generators for a
finite-dimensional simple Lie superalgebra over an algebraically closed field
of characteristic 0. The present work depends on the classification
theorem due to Kac [4], which states that a simple Lie superalgebra (excluding
simple Lie algebras) is isomorphic to either a classical Lie superalgebra or a
Cartan Lie superalgebra (see also [6]). In 2009, Bois [1] proved that a simple
Lie algebra in arbitrary characteristic $p\neq 2,3$ is generated by 2
elements. In 1976, Ionescu [3] proved that a simple Lie algebra $L$ over the
field of complex numbers is generated by $1.5$ elements, that is, given any
nonzero $x,$ there exists $y\in L$ such that the pair $(x,y)$ generates $L.$
In 1951, Kuranishi [5] proved that a semi-simple Lie algebra in characteristic
0 is generated by 2 elements.
As mentioned above, the simple Lie superalgebras split into two series:
classical Lie superalgebras and Cartan Lie superalgebras. The Lie algebra
(even part) of a classical Lie superalgebra is reductive, and the Cartan Lie
superalgebras in characteristic 0 are structurally similar to the simple
graded Lie algebras of Cartan type in characteristic $p$. Motivated by Bois’s
paper [1] and by these observations, we began this work in 2009. In the
process we benefited much from the literature cited above, especially from
[1], which contains a considerable amount of information in both
characteristic 0 and characteristic $p$. We also use certain facts about
classical Lie superalgebras from [7].
Throughout we work over an algebraically closed field $\mathbb{F}$ of
characteristic 0 and all the vector spaces and algebras are finite
dimensional. The main result is that any simple Lie superalgebra is generated
by 2 elements.
## 1\. Classical Lie superalgebras
### 1.1. Basics
A classical Lie superalgebra by definition is a simple Lie superalgebra for
which the representation of its Lie algebra (its even part) on the odd part is
completely reducible [4, 6]. Throughout this section, we always write
$L=L_{\bar{0}}\oplus L_{\bar{1}}$ for a classical Lie superalgebra. Our aim is
to determine the minimal number of generators for a classical Lie superalgebra
$L$. The strategy is as follows. First, using the results in Lie algebras [1,
3], we show that the Lie algebra $L_{\bar{0}}$ is generated by 2 elements.
Then, from the structure of semi-simple Lie algebras and their simple modules,
we prove that each classical Lie superalgebra is generated by 2 elements.
A classical Lie superalgebra is determined by its Lie algebra in a sense.
###### Proposition 1.1.
[6, p.101, Theorem 1] A simple Lie superalgebra is classical if and only if
its Lie algebra is reductive.
The following facts including Table 1.1 may be found in [4, 6]. The odd part
$L_{\bar{1}}$ as $L_{\bar{0}}$-module is completely reducible and
$L_{\bar{1}}$ decomposes into at most two irreducible components. By
Proposition 1.1, $L_{\bar{0}}=C(L_{\bar{0}})\oplus[L_{\bar{0}},L_{\bar{0}}].$
If the center $C(L_{\bar{0}})$ is nonzero, then $\dim C(L_{\bar{0}})=1$ and
$L_{\bar{1}}=L_{\bar{1}}^{1}\oplus L_{\bar{1}}^{2}$ is a direct sum of two
irreducible $L_{\bar{0}}$-submodules. For further information the reader is
referred to [4, 6].
Classical Lie superalgebras (Table 1.1)
---
$L$ | $L_{\bar{0}}$ | $L_{\bar{1}}$ as $L_{\bar{0}}$-module
${\rm{A}}(m,n),\;m,n\geq 0,n\neq m$ | ${\rm{A}}_{m}\oplus{\rm{A}}_{n}\oplus\mathbb{F}$ | $~{}~{}\mathfrak{sl}_{m+1}\otimes\mathfrak{sl}_{n+1}\otimes\mathbb{F}\oplus(\mbox{its dual})$
${\rm{A}}(n,n),\;n>0$ | ${\rm{A}}_{n}\oplus{\rm{A}}_{n}$ | $~{}~{}\mathfrak{sl}_{n+1}\otimes\mathfrak{sl}_{n+1}\oplus(\mbox{its dual})$
${\rm{B}}(m,n),\;m\geq 0,n>0$ | ${\rm{B}}_{m}\oplus{\rm{C}}_{n}$ | $~{}~{}\mathfrak{so}_{2m+1}\otimes\mathfrak{sp}_{2n}$
${\rm{D}}(m,n),\;m\geq 2,n>0$ | ${\rm{D}}_{m}\oplus{\rm{C}}_{n}$ | $\mathfrak{so}_{2m}\otimes\mathfrak{sp}_{2n}$
${\rm{C}}(n),\;n\geq 2$ | ${\rm{C}}_{n-1}\oplus\mathbb{F}$ | $~{}~{}\mathfrak{csp}_{2n-2}\oplus(\mbox{its dual})$
${\rm{P}}(n),\;n\geq 2$ | ${\rm{A}}_{n}$ | $~{}~{}\Lambda^{2}\mathfrak{sl}^{*}_{n+1}\oplus{\rm{S}}^{2}\mathfrak{sl}_{n+1}$
${\rm{Q}}(n),\;n\geq 2$ | ${\rm{A}}_{n}$ | $~{}~{}{\rm{ad}}\mathfrak{sl}_{n+1}$
${\rm{D}}(2,1;\alpha),\;\alpha\in\mathbb{F}\setminus\\{-1,0\\}$ | ${\rm{A}}_{1}\oplus{\rm{A}}_{1}\oplus{\rm{A}}_{1}$ | $~{}~{}\mathfrak{sl}_{2}\otimes\mathfrak{sl}_{2}\otimes\mathfrak{sl}_{2}$
${\rm{G}}(3)$ | $\mathfrak{G}_{2}\oplus{\rm{A}}_{1}$ | $\mathfrak{G}_{2}\otimes\mathfrak{sl}_{2}$
${\rm{F}}(4)$ | ${\rm{B}}_{3}\oplus{\rm{A}}_{1}$ | $\mathfrak{spin}_{7}\otimes\mathfrak{sl}_{2}$
### 1.2. Even parts
Let $\mathfrak{g}$ be a semi-simple Lie algebra. Consider the root
decomposition relative to a Cartan subalgebra $\mathfrak{h}$:
$\mathfrak{g}=\mathfrak{h}\oplus\bigoplus_{\alpha\in\Phi}\mathfrak{g}^{\alpha}.$
For $x\in\mathfrak{g}$ we write
$x=x_{\mathfrak{h}}+\sum_{\alpha\in\Phi}x^{\alpha}$ for the corresponding root
space decomposition. It is well-known that [2]
$\displaystyle{\rm{dim}}\mathfrak{g}^{\alpha}=1\quad\mbox{for
all}\;\alpha\in\Phi,$ (1.1)
$\displaystyle\mathfrak{h}=\sum_{\alpha\in\Phi}[\mathfrak{g}^{\alpha},\mathfrak{g}^{-\alpha}],$
(1.2)
$\displaystyle[\mathfrak{g}^{\alpha},\mathfrak{g}^{\beta}]=\mathfrak{g}^{\alpha+\beta}\quad\mbox{whenever}\;\alpha,\beta,\alpha+\beta\in\Phi.$
(1.3)
Let $V$ be a vector space and $\mathfrak{F}:=\\{f_{1},\ldots,f_{n}\\}$ a
finite set of non-zero linear functions on $V$. Write
$\Omega_{\mathfrak{F}}:=\\{v\in V\mid\prod_{i=1}^{n}f_{i}(v)\cdot\prod_{1\leq
i\neq j\leq n}(f_{i}-f_{j})(v)\neq 0\\},$
the set of points at which the $f_{i}$ take pairwise distinct non-zero values.
###### Lemma 1.2.
Suppose $\mathfrak{F}$ is a finite set of non-zero functions in $V^{*}$. Then
$\Omega_{\mathfrak{F}}\neq\emptyset.$ If $\mathfrak{G}\subset\mathfrak{F}$
then $\Omega_{\mathfrak{F}}\subset\Omega_{\mathfrak{G}}.$
###### Proof.
The first statement is from [1, Lemma 2.2.1] and the second is
straightforward. ∎
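Lemma 1.2 reflects the fact that $\Omega_{\mathfrak{F}}$ is the complement of finitely many hyperplanes, so a randomly chosen point lies in it with probability 1. A small numeric sketch (the three functionals below are made-up illustrations, represented by their coefficient rows):

```python
import numpy as np

rng = np.random.default_rng(1)

# three pairwise distinct linear functionals on R^3
F = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0]])

v = rng.standard_normal(3)     # a generic point of V = R^3
vals = F @ v                   # f_1(v), f_2(v), f_3(v)

# v lies in Omega_F: all pairwise differences (f_i - f_j)(v) are nonzero
diffs = [vals[i] - vals[j] for i in range(3) for j in range(3) if i != j]
print(all(abs(d) > 1e-12 for d in diffs))
```

Since $f_{i}-f_{j}$ is a nonzero functional for $i\neq j$, a Gaussian draw avoids each kernel almost surely.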
This lemma will usually be applied in the special situation where $V$ is a
Cartan subalgebra of a simple Lie superalgebra.
An element $x$ in a semi-simple Lie algebra $\mathfrak{g}$ is called balanced
if it has no zero components with respect to the standard decomposition of
simple Lie algebras. If $\mathfrak{h}$ is a Cartan subalgebra of
$\mathfrak{g}$, $x\in\mathfrak{g}$ is called $\mathfrak{h}$-balanced provided
that $x^{\alpha}\neq 0$ for all $\alpha\in\Phi.$
###### Lemma 1.3.
[1] An element of a semi-simple Lie algebra $\mathfrak{g}$ is balanced if and
only if it is $\mathfrak{h}$-balanced for some Cartan subalgebra
$\mathfrak{h}$.
###### Proof.
One direction is obvious. Suppose $x\in\mathfrak{g}$ is balanced and let
$\mathfrak{h}^{\prime}$ be a Cartan subalgebra of $\mathfrak{g}$. From the
proof of [1, Theorem 2.2.3], there exists an automorphism $\varphi$ of
$\mathfrak{g}$ such that $\varphi(x)$ is $\mathfrak{h}^{\prime}$-balanced.
Letting $\mathfrak{h}=\varphi^{-1}(\mathfrak{h}^{\prime})$, one sees that
$\mathfrak{h}$ is a Cartan subalgebra and $x$ is $\mathfrak{h}$-balanced. ∎
For an algebra $\mathfrak{A}$ and $x,y\in\mathfrak{A}$, we write $\langle
x,y\rangle$ for the subalgebra generated by $x$ and $y$. We should notice that
for a Lie superalgebra $\langle x,y\rangle$ is not necessarily a
$\mathbb{Z}_{2}$-graded subalgebra (hence not necessarily a sub-Lie
superalgebra). The following technical lemma will be frequently used.
###### Lemma 1.4.
Let $\mathfrak{A}$ be an algebra. For $a\in\mathfrak{A}$ write $L_{a}$ for the
left-multiplication operator given by $a$. Suppose
$x=x_{1}+x_{2}+\cdots+x_{n}$ is a sum of eigenvectors of $L_{a}$ associated
with mutually distinct eigenvalues. Then all the $x_{i}$ lie in the subalgebra
$\langle a,x\rangle$.
###### Proof.
Let $\lambda_{i}$ be the eigenvalue of $L_{a}$ corresponding to $x_{i}$. Since
the $\lambda_{i}$ are mutually distinct,
$(L_{a})^{k}(x)=\lambda_{1}^{k}x_{1}+\lambda_{2}^{k}x_{2}+\cdots+\lambda_{n}^{k}x_{n}\quad\mbox{for}\;k\geq
0,$
and the Vandermonde determinant given by
$\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$ is nonzero. Hence each $x_{i}$ is
a linear combination of $x,L_{a}(x),\ldots,(L_{a})^{n-1}(x)$, all of which lie
in $\langle a,x\rangle$. ∎
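The Vandermonde argument can be made concrete: collecting the iterates $(L_{a})^{k}(x)$ into a linear system in the unknown components $x_{i}$ and solving it recovers each $x_{i}$. A numeric sketch with made-up eigenvalues and components:

```python
import numpy as np

lam = np.array([2.0, -1.0, 5.0])                  # mutually distinct eigenvalues
comps = np.array([[1.0, 1.0, 0.0],                # the components x_1, x_2, x_3
                  [0.0, 2.0, 1.0],
                  [3.0, 0.0, 1.0]])
x = comps.sum(axis=0)

# iterates (L_a)^k(x) = sum_i lam_i^k x_i for k = 0, 1, 2
iterates = np.array([(lam[:, None] ** k * comps).sum(axis=0) for k in range(3)])

V = np.vander(lam, increasing=True).T             # V[k, i] = lam_i^k, invertible
recovered = np.linalg.solve(V, iterates)          # solve V @ comps = iterates
print(np.allclose(recovered, comps))              # True
```

Invertibility of $V$ is exactly the nonvanishing of the Vandermonde determinant in the distinct $\lambda_{i}$.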
We record a lemma from [1, Theorem B and Corollary 2.2.5] and the references
therein; it is also a consequence of Lemmas 1.2, 1.3 and 1.4.
###### Lemma 1.5.
Let $\mathfrak{g}$ be a semi-simple Lie algebra. If $x\in\mathfrak{g}$ is
balanced then for a suitable Cartan subalgebra $\mathfrak{h}$ and the
corresponding root system $\Phi$ we have $\mathfrak{g}=\langle x,h\rangle$ for
all $h\in\Omega_{\Phi}$.
Denote by $\Pi:=\\{\alpha_{1},\ldots,\alpha_{n}\\}$ the system of simple roots
of a semi-simple Lie algebra $\mathfrak{g}$ relative to a Cartan subalgebra
$\mathfrak{h}$. As above, $x\in\mathfrak{g}$ is referred to as $\Pi$-balanced
if $x$ is a sum of nonzero root vectors for all the simple roots and their
negatives, that is, $x=\sum_{\alpha\in\Pi}(x^{\alpha}+x^{-\alpha}),$ where
$x^{\pm\alpha}$ is a nonzero root vector of $\pm\alpha$. (The negative simple
roots are needed so that, by (1.2), $\mathfrak{h}$ can be recovered from the
brackets $[x^{\alpha},x^{-\alpha}]$.) Recall that $\Omega_{\Pi}\neq\emptyset$
by Lemma 1.2.
###### Corollary 1.6.
A semi-simple Lie algebra $\mathfrak{g}$ is generated by a $\Pi$-balanced
element and an element in $\Omega_{\Pi}$.
###### Proof.
This is a consequence of Lemma 1.4 and the facts (1.1), (1.2) and (1.3). ∎
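Corollary 1.6 can be checked by machine for $\mathfrak{sl}(3)$: take $x$ a sum of root vectors for the simple roots and their negatives (as in Example 1.14 below), take $h$ diagonal so that the simple roots take distinct nonzero values on it, and close the span of $\\{x,h\\}$ under the commutator. A sketch (the particular matrices are illustrative choices):

```python
import numpy as np

def E(i, j, n=3):
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def lie_closure(gens):
    """Smallest subspace containing gens and closed under [a,b] = ab - ba."""
    basis = []

    def add(m):
        stack = np.array([v.flatten() for v in basis + [m]])
        if np.linalg.matrix_rank(stack, tol=1e-9) > len(basis):
            basis.append(m)
            return True
        return False

    for g in gens:
        add(g)
    grew = True
    while grew:
        grew = False
        for a in list(basis):
            for b in list(basis):
                if add(a @ b - b @ a):
                    grew = True
    return basis

# Pi-balanced: root vectors for both simple roots of sl(3) and their negatives
x = E(0, 1) + E(1, 0) + E(1, 2) + E(2, 1)
# alpha_1(h) = 2 and alpha_2(h) = 5 are distinct and nonzero
h = np.diag([3.0, 1.0, -4.0])

print(len(lie_closure([x, h])))   # 8 = dim sl(3)
```

The closure recovers the individual root vectors via Lemma 1.4 and then all of $\mathfrak{sl}(3)$ via (1.1)–(1.3).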
###### Proposition 1.7.
The Lie algebra of a classical Lie superalgebra is generated by 2 elements.
###### Proof.
Let $L=L_{\bar{0}}\oplus L_{\bar{1}}$ be a classical Lie superalgebra. By
Proposition 1.1, $L_{\bar{0}}$ is reductive, that is,
$[L_{\bar{0}},L_{\bar{0}}]$ is semi-simple and
$L_{\bar{0}}=C(L_{\bar{0}})\oplus[L_{\bar{0}},L_{\bar{0}}].$ (1.4)
If $C(L_{\bar{0}})=0$, the conclusion follows immediately from Lemma 1.5. If
$C(L_{\bar{0}})$ is nonzero, then $C(L_{\bar{0}})=\mathbb{F}z$ is
1-dimensional. Choose a balanced element $x\in[L_{\bar{0}},L_{\bar{0}}]$. By
Lemma 1.5, there exists $h\in[L_{\bar{0}},L_{\bar{0}}]$ such that
$[L_{\bar{0}},L_{\bar{0}}]=\langle x,h\rangle.$ Claim that
$L_{\bar{0}}=\langle x,h+z\rangle.$
Indeed, considering the projection of $L_{\bar{0}}$ onto
$[L_{\bar{0}},L_{\bar{0}}]$ with respect to the decomposition (1.4), denoted
by $\pi$, which is a homomorphism of Lie algebras, we have
$\pi(\langle x,h+z\rangle)=\langle\pi(x),\pi(h+z)\rangle=\langle
x,h\rangle=[L_{\bar{0}},L_{\bar{0}}].$
Hence only two possibilities might occur: $\langle x,h+z\rangle=L_{\bar{0}}$
or ${\rm{dim}}\langle x,h+z\rangle={\rm{dim}}[L_{\bar{0}},L_{\bar{0}}].$ The
first case is the desired. Let us show that the second does not occur. Assume
the contrary. Then $\pi$ restricting to $\langle x,h+z\rangle$ is an
isomorphism and thereby $\langle x,h+z\rangle$ is semi-simple. Thus
$\langle x,h+z\rangle=[\langle x,h+z\rangle,\langle x,h+z\rangle]=[\langle
x,h\rangle,\langle x,h\rangle]=\langle x,h\rangle.$
Hence $h\in\langle x,h+z\rangle$. It follows that
$z\in\langle x,h+z\rangle=\langle x,h\rangle=[L_{\bar{0}},L_{\bar{0}}],$
contradicting (1.4). ∎
###### Remark 1.8.
By Corollary 1.6, $\mathfrak{sl}(n)$ is generated by a $\Pi$-balanced element
$x$ and an element $y$ in $\Omega_{\Pi}$. As in the proof of Proposition 1.7,
one may prove that $\mathfrak{gl}(n)$ is generated by $y$ and $x+z$, where $z$
is a nonzero central element of $\mathfrak{gl}(n)$.
### 1.3. Classical Lie superalgebras
Suppose $L$ is a classical Lie superalgebra with the standard Cartan
subalgebra $H$. The corresponding weight (root) space decompositions are
$\displaystyle
L_{\bar{0}}=H\oplus\bigoplus_{\alpha\in\Delta_{\bar{0}}}L_{\bar{0}}^{\alpha},\qquad
L_{\bar{1}}=\bigoplus_{\beta\in\Delta_{\bar{1}}}L_{\bar{1}}^{\beta};$
$\displaystyle
L=H\oplus\bigoplus_{\alpha\in\Delta_{\bar{0}}}L_{\bar{0}}^{\alpha}\oplus\bigoplus_{\beta\in\Delta_{\bar{1}}}L_{\bar{1}}^{\beta}.$
(1.5)
Every $x\in L$ has a unique decomposition with respect to (1.5):
$x=x_{H}+\sum_{\alpha\in\Delta_{\bar{0}}}x_{\bar{0}}^{\alpha}+\sum_{\beta\in\Delta_{\bar{1}}}x_{\bar{1}}^{\beta},$
(1.6)
where $x_{H}\in H,$ $x_{\bar{0}}^{\alpha}\in L_{\bar{0}}^{\alpha}$,
$x_{\bar{1}}^{\beta}\in L_{\bar{1}}^{\beta}$. Write
$\Delta:=\Delta_{\bar{0}}\cup\Delta_{\bar{1}}\quad\mbox{and}\quad
L^{\gamma}:=L_{\bar{0}}^{\gamma}\oplus
L_{\bar{1}}^{\gamma}\quad\mbox{for}\quad\gamma\in\Delta.$
Note that the standard Cartan subalgebra of a classical Lie superalgebra acts
diagonally:
$\mathrm{ad}h(x)=\gamma(h)x\quad\mbox{for all}\;\;h\in H,\;x\in
L^{\gamma},\;\gamma\in\Delta.$ (1.7)
For $x\in L$, write
$\mathbf{supp}(x):=\\{\gamma\in\Delta\mid x_{\gamma}\neq 0\\}.$ (1.8)
For $x=x_{\bar{0}}+x_{\bar{1}}\in L,$
$\mathbf{supp}(x)=\mathbf{supp}(x_{\bar{0}})\cup\mathbf{supp}(x_{\bar{1}}).$
###### Lemma 1.9.
* $\mathrm{(1)}$
If $L\neq{\rm{Q}}(n)$ then $0\notin\Delta_{\bar{1}}$ and
$\Delta_{\bar{0}}\cap\Delta_{\bar{1}}=\emptyset.$
* $\mathrm{(2)}$
If $L={\rm{Q}}(n)$ then $\Delta_{\bar{1}}=\\{0\\}\cup\Delta_{\bar{0}}.$
* $\mathrm{(3)}$
If $L\neq\mathrm{A}(1,1)$, $\mathrm{Q}(n)$ or $\mathrm{P}(3)$ then
$\mathrm{dim}L^{\gamma}=1$ for every $\gamma\in\Delta.$
* $\mathrm{(4)}$
Suppose $L={\rm{A}}(m,n)$, ${\rm{A}}(n,n),{\rm{C}}(n)$ or ${\rm{P}}(n),$ where
$m\neq n$.
* $\mathrm{(a)}$
$L_{\bar{1}}=L_{\bar{1}}^{1}\oplus L_{\bar{1}}^{2}$ is a direct sum of two
irreducible $L_{\bar{0}}$-submodules.
* $\mathrm{(b)}$
Let $\Delta_{\bar{1}}^{i}$ be the weight set of $L_{\bar{1}}^{i}$ relative to
$H,$ $i=1,2$. Then there exist $\alpha_{\bar{1}}^{i}\in\Delta_{\bar{1}}^{i}$
such that $\alpha_{\bar{1}}^{1}\neq\alpha_{\bar{1}}^{2}.$
###### Proof.
(1), (2) and (3) follow from [6, Proposition 1, p.137]. (4)(a) follows from
Table 1.1. Let us consider (4)(b). For $L={\rm{A}}(m,n),{\rm{A}}(n,n)$ or
${\rm{C}}(n),$ it follows from the fact that the $L_{0}$-modules $L_{-1}$ and
$L_{1}$ are contragredient. For $L={\rm{P}}(n),$ a direct computation shows
that $-\varepsilon_{1}-\varepsilon_{2}\in\Delta_{\bar{1}}^{1}$ and
$2\varepsilon_{1}\in\Delta_{\bar{1}}^{2}.$ ∎
###### Theorem 1.10.
A classical Lie superalgebra is generated by 2 elements.
###### Proof.
Let $L=L_{\bar{0}}\oplus L_{\bar{1}}$ be a classical Lie superalgebra.
Case 1. Suppose ${\rm{dim}}C(L_{\bar{0}})=1.$ In this case $L={\rm{C}}(n)$ or
${\rm{A}}(m,n)$ with $m\neq n$ (see Table 1.1). Then
$L_{\bar{1}}=L_{\bar{1}}^{1}\oplus L_{\bar{1}}^{2}$ is a direct sum of two
irreducible $L_{\bar{0}}$-submodules and $[L_{\bar{0}},L_{\bar{0}}]$ is simple
or a direct sum of two simple Lie algebras. Let $x_{\bar{0}}$ be a balanced
element in $[L_{\bar{0}},L_{\bar{0}}].$ From Lemma 1.3, there exists a Cartan
subalgebra $\mathfrak{h}$ of $[L_{\bar{0}},L_{\bar{0}}]$ such that
${\bf{supp}}(x_{\bar{0}})=\Delta_{\bar{0}},$ where the latter is viewed as the
root system relative to $\mathfrak{h}.$ By Lemma 1.5, we have
$[L_{\bar{0}},L_{\bar{0}}]=\langle x_{\bar{0}},h\rangle$ for all
$h\in\Omega_{\Delta_{\bar{0}}}$. Furthermore, from the proof of Proposition
1.7 it follows that $L_{\bar{0}}=\langle x_{\bar{0}},h+z\rangle$ for $0\neq
z\in C(L_{\bar{0}}).$ By Lemma 1.9(1) and (4), there exist
$\alpha_{\bar{1}}^{1}\in\Delta_{\bar{1}}^{1}$ and
$\alpha_{\bar{1}}^{2}\in\Delta_{\bar{1}}^{2}$ such that
$\alpha_{\bar{1}}^{1}\neq\alpha_{\bar{1}}^{2}$ and
$\alpha_{\bar{1}}^{1},\alpha_{\bar{1}}^{2}\notin\Delta_{\bar{0}}.$ Set
$x:=x_{\bar{0}}+x_{\bar{1}}^{\alpha_{\bar{1}}^{1}}+x_{\bar{1}}^{\alpha_{\bar{1}}^{2}}+z$
for some weight vectors $x_{\bar{1}}^{\alpha_{\bar{1}}^{i}}\in
L_{\bar{1}}^{\alpha_{\bar{1}}^{i}},\;i=1,2.$ Then
$x=(x_{\mathfrak{h}}+z)+\sum_{\alpha\in\Delta_{\bar{0}}}x_{\bar{0}}^{\alpha}+x_{\bar{1}}^{\alpha_{\bar{1}}^{1}}+x_{\bar{1}}^{\alpha_{\bar{1}}^{2}}.$
Write
$\Phi:=\Delta_{\bar{0}}\cup\\{\alpha_{\bar{1}}^{1}\\}\cup\\{\alpha_{\bar{1}}^{2}\\}$
and choose an element $h^{\prime}\in\Omega_{\Phi}.$ We assert that $\langle
x,h^{\prime}\rangle=L.$ To show this, write $L^{\prime}:=\langle
x,h^{\prime}\rangle.$ Lemma 1.4 implies all components $x_{\bar{0}}^{\alpha},$
$x_{\bar{1}}^{\alpha_{\bar{1}}^{1}}$, $x_{\bar{1}}^{\alpha_{\bar{1}}^{2}}$ and
$x_{H}+z$ belong to $L^{\prime}$. Since $x_{\bar{0}}^{\alpha}\in L^{\prime}$
for all $\alpha\in\Delta_{\bar{0}},$ from (1.2) we have $x_{H}\in L^{\prime}$
and then $z\in L^{\prime}.$ As
$h^{\prime}\in\Omega_{\Phi}\subset\Omega_{\Delta_{\bar{0}}},$ we obtain
$\langle x_{\bar{0}},h^{\prime}+z\rangle=L_{\bar{0}}\subset L^{\prime}.$ Since
$x_{\bar{1}}^{\alpha_{\bar{1}}^{i}}\in L^{\prime}$ and
$L_{\bar{1}}^{i}$ is an irreducible $L_{\bar{0}}$-module, we have
$L_{\bar{1}}^{i}\subset L^{\prime},$ where $i=1,2.$ Therefore, $L=L^{\prime}.$
Case 2. Suppose $C(L_{\bar{0}})=0.$ Then $L_{\bar{0}}$ is a semi-simple Lie
algebra and $L_{\bar{1}}$ decomposes into at most two irreducible components
(see Table 1.1).
Subcase 2.1. Suppose $L_{\bar{1}}$ is an irreducible $L_{\bar{0}}$-module.
Note that in this subcase, $L$ is of type $\mathrm{B}(m,n)$,
$\mathrm{D}(m,n)$, ${\rm{D}}(2,1;\alpha),$ $\mathrm{Q}(n),$ $\mathrm{G}(3)$ or
$\mathrm{F}(4).$ We choose a weight vector $x_{\bar{1}}^{\alpha_{\bar{1}}}\in
L_{\bar{1}}^{\alpha_{\bar{1}}}$ ($\alpha_{\bar{1}}\neq 0$) and any balanced
element $x_{\bar{0}}$ in $L_{\bar{0}}.$ By Lemma 1.3, we may assume that
${\bf{supp}}(x_{\bar{0}})=\Delta_{\bar{0}}.$
If $L\neq{\rm{Q}}(n),$ according to Lemma 1.9(1),
$\alpha_{\bar{1}}\notin\Delta_{\bar{0}}.$ Let
$x=x_{\bar{0}}+x_{\bar{1}}^{\alpha_{\bar{1}}}.$ Then
$x=x_{H}+\sum_{\alpha\in{\Delta_{\bar{0}}}}x_{\bar{0}}^{\alpha}+x_{\bar{1}}^{\alpha_{\bar{1}}}$
is the root-vector decomposition. Let
$\Phi=\Delta_{\bar{0}}\cup\\{\alpha_{\bar{1}}\\}$. By Lemmas 1.2 and 1.4, all
components $x_{H}$, $x_{\bar{0}}^{\alpha}$ and
$x_{\bar{1}}^{\alpha_{\bar{1}}}$ belong to $\langle x,h\rangle$ for
$h\in\Omega_{\Phi}\subset H$. By (1.1) and (1.2), this yields
$L_{\bar{0}}=\langle x_{\bar{0}},h\rangle\subset\langle x,h\rangle.$ Since
$x_{\bar{1}}^{\alpha_{\bar{1}}}\in\langle x,h\rangle$ and $L_{\bar{1}}$ is
irreducible as $L_{\bar{0}}$-module, we have $L=\langle x,h\rangle.$
Suppose $L={\rm{Q}}(n).$ Denote by
$\Pi:=\\{\delta_{1},\delta_{2},\ldots,\delta_{n}\\}$ the set of simple roots
of $L_{\bar{0}}$ relative to the Cartan subalgebra $H.$ According to Lemma
1.9(2), without loss of generality we may assume that
$\alpha_{\bar{1}}:=\delta_{1}+\delta_{2}.$ Let
$x=x_{\bar{0}}+x_{\bar{1}}^{\alpha_{\bar{1}}}.$ Then
$x=x_{H}+\sum_{\alpha\in{\Delta_{\bar{0}}}\setminus\\{\alpha_{\bar{1}}\\}}x_{\bar{0}}^{\alpha}+(x_{\bar{0}}^{\alpha_{\bar{1}}}+x_{\bar{1}}^{\alpha_{\bar{1}}}).$
By Lemma 1.4, all components $x_{H}$, $x_{\bar{0}}^{\alpha}$
($\alpha\in\Delta_{\bar{0}}\setminus\\{\alpha_{\bar{1}}\\}$), and
$x_{\bar{0}}^{\alpha_{\bar{1}}}+x_{\bar{1}}^{\alpha_{\bar{1}}}$ belong to
$\langle x,h\rangle$, where $h\in\Omega_{\Delta_{\bar{0}}}\subset H.$ From
(1.3) and (1.1) we conclude that
$x_{\bar{0}}^{\alpha_{\bar{1}}}\in\mathbb{F}[x_{\bar{0}}^{\delta_{1}},x_{\bar{0}}^{\delta_{2}}]\subset\langle
x,h\rangle$ and then $x_{\bar{1}}^{\alpha_{\bar{1}}}\in\langle x,h\rangle.$ As
above, the irreducibility of $L_{{\bar{1}}}$ yields $L=\langle x,h\rangle.$
Subcase 2.2. Suppose $L_{\bar{1}}=L_{\bar{1}}^{1}\oplus L_{\bar{1}}^{2}$ is a
direct sum of two irreducible $L_{\bar{0}}$-submodules. In this case,
$L={\rm{A}}(n,n)$ or ${\rm{P}}(n).$ Choose any balanced element
$x_{\bar{0}}\in L_{\bar{0}}$ and weight vectors
$x_{\bar{1}}^{\alpha_{\bar{1}}^{i}}\in L_{\bar{1}}^{\alpha_{\bar{1}}^{i}},$
where ${\alpha_{\bar{1}}^{1}}$ and ${\alpha_{\bar{1}}^{2}}$ are different
nonzero weights and $\alpha_{\bar{1}}^{i}\notin\Delta_{\bar{0}}$ (Lemma 1.9(1)
and (4)). Lemma 1.3 allows us to assume that
${\bf{supp}}(x_{\bar{0}})=\Delta_{\bar{0}}.$ Let
$x:=x_{\bar{0}}+x_{\bar{1}}^{\alpha_{\bar{1}}^{1}}+x_{\bar{1}}^{\alpha_{\bar{1}}^{2}}$
and
$\Phi:=\Delta_{\bar{0}}\cup\\{\alpha_{\bar{1}}^{1}\\}\cup\\{\alpha_{\bar{1}}^{2}\\}.$
As before, we are able to deduce that $L_{\bar{0}}\subset\langle x,h\rangle$
and
$x_{\bar{1}}^{\alpha_{\bar{1}}^{1}},x_{\bar{1}}^{\alpha_{\bar{1}}^{2}}\in\langle
x,h\rangle$ for $h\in\Omega_{\Phi}\subset\Omega_{\Delta_{\bar{0}}}\subset H.$
Thanks to the irreducibility of $L_{\bar{1}}^{1}$ and $L_{\bar{1}}^{2}$, we
have $L=\langle x,h\rangle$. The proof is complete. ∎
###### Remark 1.11.
In view of the proof of Theorem 1.10, starting from any balanced element in
the semi-simple part of the Lie algebra of a classical Lie superalgebra $L$ we
are able to find two elements generating $L.$
By Theorem 1.10, as in the proof of Proposition 1.7, one is able to prove the
following
###### Corollary 1.12.
The general linear Lie superalgebra $\mathfrak{gl}(m,n)$ is generated by 2
elements.
As a subsidiary result, let us show that a classical Lie superalgebra, except
for $\mathrm{A}(1,1)$, $\mathrm{Q}(n)$ or $\mathrm{P}(3)$, is generated by 2
homogeneous elements. By Lemma 1.9(3), for such a classical Lie superalgebra,
all the odd-weight subspaces are 1-dimensional. Here we give a more general
description in Remark 1.13. As before, an element $x\in L$ is called
$\Delta_{\bar{1}}$-balanced if $x$ is a sum of all the odd-weight vectors,
namely, $x=\sum_{\gamma\in\Delta_{\bar{1}}}x_{\bar{1}}^{\gamma},$ where
$x_{\bar{1}}^{\gamma}$ is a weight vector of $\gamma$.
###### Remark 1.13.
A finite dimensional simple Lie superalgebra (not necessarily classical) for
which all the odd weight spaces are $1$-dimensional is generated by 2
homogeneous elements.
###### Proof.
Let $L$ be such a Lie superalgebra. Choose a $\Delta_{\bar{1}}$-balanced
element $x=\sum_{\gamma\in\Delta_{\bar{1}}}x_{\bar{1}}^{\gamma}$ and any
$h\in\Omega_{\Delta_{\bar{1}}}\subset H.$ By Lemmas 1.2 and 1.4, all
components $x_{\bar{1}}^{\gamma}$ belong to $\langle x,h\rangle$ for
$h\in\Omega_{\Delta_{\bar{1}}}\subset H$. Since $\mathrm{dim}L^{\gamma}=1,$ we
conclude that $L^{\gamma}\subset\langle x,h\rangle$ for all
$\gamma\in\Delta_{\bar{1}}.$ By [4, Proposition 1.2.7(1), p.20],
$L_{\overline{0}}=[L_{\overline{0}},L_{\overline{0}}]$ and then $\langle
x,h\rangle=L$. ∎
Finally we give an example to explain how to find the pairs of generators in
Theorem 1.10 and Remark 1.13.
###### Example 1.14.
Let $\mathrm{A}={\rm{A}}(1;0)$. Find the generators of $\mathrm{A}$ as in
Theorem 1.10 and Remark 1.13.
Recall that ${\rm{A}}=\\{x\in\mathfrak{gl}(2;1)\mid{\rm{str}}(x)=0\\}$. Its
Lie algebra is a direct sum of the $1$-dimensional center and the semi-simple
part:
${\rm{A}}_{\bar{0}}=\mathbb{F}(e_{11}+e_{22}+2e_{33})\oplus[{\rm{A}}_{\bar{0}},{\rm{A}}_{\bar{0}}],$
where
$[{\rm{A}}_{\bar{0}},{\rm{A}}_{\bar{0}}]=\mathrm{span}_{\mathbb{F}}\\{e_{11}-e_{22},e_{12},e_{21}\\}.$
The odd part is a direct sum of two irreducible
$\mathrm{A}_{\bar{0}}$-submodules:
$\displaystyle{\rm{A}}_{\bar{1}}={\rm{A}}_{\bar{1}}^{1}\oplus{\rm{A}}_{\bar{1}}^{2}=\mathrm{span}_{\mathbb{F}}\\{e_{13},e_{23}\\}\oplus\mathrm{span}_{\mathbb{F}}\\{e_{31},e_{32}\\}.$
The standard Cartan subalgebra is
$H=\mathrm{span}_{\mathbb{F}}\\{e_{11}-e_{22},e_{11}+e_{22}+2e_{33}\\}.$
Table 1.2 gives all the roots and the corresponding root vectors.
Table 1.2
---
roots | $\varepsilon_{1}-\varepsilon_{2}$ | $\varepsilon_{2}-\varepsilon_{1}$ | $\varepsilon_{1}-2\varepsilon_{3}$ | $\varepsilon_{2}-2\varepsilon_{3}$ | $-\varepsilon_{1}+2\varepsilon_{3}$ | $-\varepsilon_{2}+2\varepsilon_{3}$
vectors | $\hfill e_{12}\hfill$ | $\hfill e_{21}\hfill$ | $\hfill e_{13}\hfill$ | $\hfill e_{23}\hfill$ | $\hfill e_{31}\hfill$ | $\hfill e_{32}\hfill$
* •
Theorem 1.10-Version. Put
$x:=(e_{12}+e_{21})+e_{13}+e_{31}+(e_{11}+e_{22}+2e_{33})$ and
$h:=3e_{11}+e_{22}+4e_{33}.$ From Table 1.2, the weight values corresponding
to $e_{12},e_{21},e_{13},e_{31}$ are $2,-2,-5,5,$ respectively. As in the
proof of Theorem 1.10, we have
$e_{12},e_{21},e_{13},e_{31},e_{11}+e_{22}+2e_{33}\in\langle x,h\rangle.$
Furthermore,
$\displaystyle\langle
e_{12}+e_{21},h+(e_{11}+e_{22}+2e_{33})\rangle={\rm{A}}_{\bar{0}}\subset\langle
x,h\rangle.$
Since ${\rm{A}}_{\bar{1}}^{i}$ is an irreducible ${\rm{A}}_{\bar{0}}$-module,
${\rm{A}}_{\bar{1}}^{i}\subset\langle x,h\rangle$, $i=1,2.$ Hence
$\mathrm{A}=\langle x,h\rangle$.
* •
Remark 1.13-Version. Consider the $\Delta_{\overline{1}}$-balanced element
$x:=e_{13}+e_{31}+e_{23}+e_{32}$ and write $h:=e_{11}+e_{33}.$ By Table 1.2,
the weight values corresponding to $e_{13},e_{31},e_{23},e_{32}$ are
$-1,1,-2,2,$ respectively. As in the proof of Remark 1.13, we have
$e_{13},e_{31},e_{23},e_{32}\in\langle x,h\rangle.$ Since
$\mathrm{dim}\,\mathrm{A}_{\bar{1}}^{\lambda}=1$ for
$\lambda\in\Delta_{\bar{1}}$ and
$[\mathrm{A}_{\bar{1}},\mathrm{A}_{\bar{1}}]=\mathrm{A}_{\bar{0}},$ we
obtain $\mathrm{A}=\langle x,h\rangle.$
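The Theorem 1.10-version pair can also be verified numerically: realize $\mathrm{A}(1;0)$ inside $\mathfrak{gl}(2;1)$, use the supercommutator $[a,b]=ab-(-1)^{|a||b|}ba$ extended bilinearly over the even/odd block decomposition, and close the span of $\\{x,h\\}$ under it. All that matters for Lemma 1.4 is that the components of $x$ have mutually distinct $\mathrm{ad}\,h$-eigenvalues, which they do for the matrices of the example. A sketch:

```python
import numpy as np

n, m = 2, 1   # gl(2|1): even part = diagonal blocks, odd part = off blocks

def split(a):
    a0, a1 = np.zeros_like(a), np.zeros_like(a)
    a0[:n, :n], a0[n:, n:] = a[:n, :n], a[n:, n:]
    a1[:n, n:], a1[n:, :n] = a[:n, n:], a[n:, :n]
    return a0, a1

def sbracket(a, b):
    """Supercommutator [a,b] = ab - (-1)^{|a||b|} ba, extended bilinearly."""
    a0, a1 = split(a)
    b0, b1 = split(b)
    comm = lambda u, v: u @ v - v @ u
    return comm(a0, b0) + comm(a0, b1) + comm(a1, b0) + (a1 @ b1 + b1 @ a1)

def closure(gens):
    """Smallest subspace containing gens and closed under sbracket."""
    basis = []

    def add(v):
        stack = np.array([w.flatten() for w in basis + [v]])
        if np.linalg.matrix_rank(stack, tol=1e-9) > len(basis):
            basis.append(v)
            return True
        return False

    for g in gens:
        add(g)
    grew = True
    while grew:
        grew = False
        for a in list(basis):
            for b in list(basis):
                if add(sbracket(a, b)):
                    grew = True
    return basis

def E(i, j):
    v = np.zeros((3, 3))
    v[i - 1, j - 1] = 1.0
    return v

# the generators of Example 1.14, Theorem 1.10-version
x = E(1, 2) + E(2, 1) + E(1, 3) + E(3, 1) + (E(1, 1) + E(2, 2) + 2 * E(3, 3))
h = 3 * E(1, 1) + E(2, 2) + 4 * E(3, 3)

print(len(closure([x, h])))   # 8 = dim A(1;0)
```

Note that the closure is computed as a plain (not necessarily $\mathbb{Z}_{2}$-graded) subalgebra, in line with the caveat after Lemma 1.3; here it comes out to be all of $\mathrm{A}(1;0)$.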
## 2\. Cartan Lie superalgebras
All the Cartan Lie superalgebras are listed below [4, 6]:
* $W(n)$ ($n\geq 3$), $S(n)$ ($n\geq 4$), $\widetilde{S}(2m)$ ($m\geq 2$), $H(n)$ ($n\geq 5$).
Let $\Lambda(n)$ be the Grassmann superalgebra with $n$ generators
$\xi_{1},\ldots,\xi_{n}$. For a $k$-shuffle $u:=(i_{1},i_{2},\ldots,i_{k})$,
that is, a strictly increasing sequence of integers between $1$ and $n$, we
write $|u|:=k$ and $x^{u}:=\xi_{i_{1}}\xi_{i_{2}}\cdots\xi_{i_{k}}.$ Letting
${\rm{deg}}\xi_{i}=1,\;i=1,\ldots,n,$ we obtain the so-called standard
$\mathbb{Z}$-grading of $\Lambda(n).$ Let us briefly describe the Cartan Lie
superalgebras.
* •
$W(n)={\rm{der}}\Lambda(n)$ is $\mathbb{Z}$-graded,
$W(n)=\oplus_{k=-1}^{n-1}W(n)_{k},$
$W(n)_{k}={\rm{span}}_{\mathbb{F}}\\{x^{u}\partial/\partial\xi_{i}\mid|u|=k+1,\;1\leq
i\leq n\\}.$
* •
$S(n)=\oplus_{k=-1}^{n-2}S(n)_{k}$ is a $\mathbb{Z}$-graded subalgebra of
$W(n)$,
$S(n)_{k}={\rm{span}}_{\mathbb{F}}\\{\mathrm{D}_{ij}(x^{u})\mid|u|=k+2,\,\
1\leq i,j\leq n\\}.$
Hereafter,
${{\mathrm{D}_{ij}}}(f):=\partial(f)/\partial\xi_{i}\partial/\partial\xi_{j}+\partial(f)/\partial\xi_{j}\partial/\partial\xi_{i}$
for $f\in\Lambda(n).$
* •
$\widetilde{S}(2m)$ ($m\geq 2$) is a subalgebra of $W(2m)$ and as a
$\mathbb{Z}$-graded subspace,
$\widetilde{S}(2m)=\oplus_{k=-1}^{2m-2}\widetilde{S}(2m)_{k},$
where
$\displaystyle\widetilde{S}(2m)_{-1}={\rm{span}}_{\mathbb{F}}\\{(1+\xi_{1}\cdots\xi_{2m})\partial/\partial\xi_{j}\mid
1\leq j\leq 2m\\},$ $\displaystyle\widetilde{S}(2m)_{k}=S(2m)_{k},\;0\leq
k\leq 2m-2.$
Notice that $\widetilde{S}(2m)$ is not a $\mathbb{Z}$-graded subalgebra of
$W(2m)$.
* •
$H(n)=\oplus_{k=-1}^{n-3}H(n)_{k}$ is a $\mathbb{Z}$-graded subalgebra of
$W(n)$, where
$H(n)_{k}={\rm{span}}_{\mathbb{F}}\\{{\rm{D_{H}}}(x^{u})\mid|u|=k+2\\}.$
To explain the linear mapping ${\rm{D_{H}}}:\Lambda(n)\longrightarrow W(n)$,
write $n=2m$ $(m\geq 3)$ or $2m+1$ $(m\geq 2).$ By definition,
${\rm{D_{H}}}(x^{u}):=(-1)^{|u|}\sum_{i=1}^{n}\partial(x^{u})/\partial\xi_{i}\partial/\partial\xi_{i^{\prime}}$
for any shuffle $u,$ where $\prime$ is the involution of the index set
$\\{1,\ldots,n\\}$ satisfying $i^{\prime}=i+m$ for $i\leq m$.
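As a sanity check on the gradings above, the basis description of $W(n)_{k}$ gives $\dim W(n)_{k}=n\binom{n}{k+1}$, and summing over $k=-1,\ldots,n-1$ recovers $\dim W(n)=n\cdot 2^{n}$. A small enumeration sketch (Python; the helper name `W_basis` is ours, not from the paper):

```python
from itertools import combinations
from math import comb

def W_basis(n):
    """Basis x^u d/d(xi_i) of W(n), keyed by the degree k = |u| - 1."""
    basis = {}
    for k in range(-1, n):
        shuffles = list(combinations(range(1, n + 1), k + 1))  # |u| = k + 1
        basis[k] = [(u, i) for u in shuffles for i in range(1, n + 1)]
    return basis

n = 4
basis = W_basis(n)
for k, elems in basis.items():
    assert len(elems) == n * comb(n, k + 1)       # dim W(n)_k = n * C(n, k+1)
assert sum(map(len, basis.values())) == n * 2**n  # dim W(n) = n * 2^n
```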
For simplicity we usually write $W,S,\widetilde{S},H$ for
$W(n),S(n),\widetilde{S}(2m),H(n),$ respectively. Throughout this section $L$
denotes one of the Cartan Lie superalgebras. Consider its decomposition into
the subspaces mentioned above:
$L=L_{-1}\oplus\cdots\oplus L_{s}.$ (2.1)
For $W,S,\widetilde{S}$ and $H$, the height $s$ is $n-1,$ $n-2,$ $2m-2$ or
$n-3,$ respectively. Note that $S$ and $H$ are $\mathbb{Z}$-graded subalgebras
of $W$ with respect to (2.1), but $\widetilde{S}$ is not. The null component $L_{0}$ is
isomorphic to
$\mathfrak{gl}(n),\mathfrak{sl}(n),\mathfrak{sl}(2m),\mathfrak{so}(n)$ for
$L=W,S,\widetilde{S},H,$ respectively.
###### Lemma 2.1.
* $\mathrm{(1)}$
$L_{-1}$ and $L_{s}$ are irreducible as $L_{0}$-modules.
* $\mathrm{(2)}$
$L_{1}$ is an irreducible $L_{0}$-module for $L=S,\widetilde{S}$ or $H,$
except for $H(6).$ For $L=H(6),$ $L_{1}$ is a direct sum of two irreducible
$L_{0}$-submodules.
* $\mathrm{(3)}$
$L$ is generated by the local part $L_{-1}\oplus L_{0}\oplus L_{1}.$
* $\mathrm{(4)}$
$L$ is generated by $L_{-1}$ and $L_{s}$ for $L=W$, $S$ or $H$.
###### Proof.
All the statements are standard (see [4, 6] for example), except that
$\widetilde{S}_{-1}$ is irreducible as an $\widetilde{S}_{0}$-module. Indeed, a
direct verification shows that $\widetilde{S}_{-1}$ is an
$\widetilde{S}_{0}$-module and the irreducibility follows from the canonical
isomorphism of $S_{0}$-modules
$\varphi:S_{-1}\longrightarrow\widetilde{S}_{-1}$ assigning
$\partial/\partial\xi_{i}$ to
$(1+\xi_{1}\cdots\xi_{2m})\partial/\partial\xi_{i}$ for $1\leq i\leq 2m.$ ∎
The following is a list of bases of the standard Cartan subalgebras
$\mathfrak{h}_{L_{0}}$ of $L_{0}.$
Table 2.1

$L$ | basis of $\mathfrak{h}_{L_{0}}$
---|---
$W(n)$ | $\xi_{i}\partial/\partial\xi_{i},$ $1\leq i\leq n$
$S(n)$ | $\xi_{1}\partial/\partial\xi_{1}-\xi_{j}\partial/\partial\xi_{j},$ $2\leq j\leq n$
$\widetilde{S}(2m)$ | $\xi_{1}\partial/\partial\xi_{1}-\xi_{j}\partial/\partial\xi_{j},$ $2\leq j\leq 2m$
$H(2m)$ | $\xi_{i}\partial/\partial\xi_{i}-\xi_{m+i}\partial/\partial\xi_{m+i},$ $1\leq i\leq m$
$H(2m+1)$ | $\xi_{i+1}\partial/\partial\xi_{i+1}-\xi_{m+i}\partial/\partial\xi_{m+i},$ $1\leq i\leq m$
The weight space decomposition of the component $L_{k}$ relative to
$\mathfrak{h}_{L_{0}}$ is:
$L_{k}=\delta_{k,0}\mathfrak{h}_{L_{0}}\oplus\bigoplus_{\alpha\in\Delta_{k}}L_{k}^{\alpha},\quad\mbox{where}\quad-1\leq k\leq s.$
By Lemma 2.1(2), $H(6)_{1}$ is a direct sum of two irreducible
$H(6)_{0}$-modules
$H(6)_{1}=H(6)_{1}^{1}\oplus H(6)_{1}^{2}.$
Let $\Delta_{1}^{i}$ be the weight set of $H(6)_{1}^{i},$ $i=1,2.$ Write $\Pi$
for the set of simple roots of $L_{0}$ relative to the Cartan subalgebra
$\mathfrak{h}_{L_{0}}$. We have
###### Lemma 2.2.
* $\mathrm{(1)}$
If $L=W$ or $S$ then
$\Pi\cap\Delta_{-1}=\Pi\cap\Delta_{s}=\Delta_{-1}\cap\Delta_{s}=\emptyset.$
* $\mathrm{(2)}$
If $L=\widetilde{S}$ then
$\Pi\cap\Delta_{-1}=\Pi\cap\Delta_{1}=\Delta_{-1}\cap\Delta_{1}=\emptyset.$
* $\mathrm{(3)}$
If $L=H(2m)$ then $\Pi\cap\Delta_{-1}=\Pi\cap\Delta_{1}=\emptyset$ and
$\Delta_{-1}\neq\Delta_{1}$.
* $\mathrm{(4)}$
If $L=H(2m+1)$ then $0\in\Delta_{-1},$ $\Pi\neq\Delta_{1}$ and
$\Delta_{-1}\neq\Delta_{1}.$
* $\mathrm{(5)}$
There exist nonzero weights $\alpha_{1}^{i}\in\Delta_{1}^{i}$ such that
$\alpha_{1}^{1}\neq\alpha_{1}^{2}.$
###### Proof.
We first compute the weight sets of the desired components and the system of
simple roots of $L_{0}.$ For $W(n)$,
$\displaystyle\Delta_{-1}=\\{-\varepsilon_{j}\mid 1\leq j\leq n\\},\qquad\Delta_{0}=\\{\varepsilon_{i}-\varepsilon_{j}\mid 1\leq i\neq j\leq n\\},$
$\displaystyle\Pi=\\{\varepsilon_{i}-\varepsilon_{i+1}\mid 1\leq i\leq n-1\\},\qquad\Delta_{s}=\bigg{\\{}\sum_{k=1}^{n}\varepsilon_{k}-\varepsilon_{j}\mid 1\leq j\leq n\bigg{\\}}.$
For $S(n)$ and $\widetilde{S}(n),$
$\displaystyle\Delta_{-1}=\\{-\varepsilon_{j}\mid 1\leq j\leq n\\},\qquad\Delta_{0}=\\{\varepsilon_{i}-\varepsilon_{j}\mid 1\leq i\neq j\leq n\\},$
$\displaystyle\Pi=\\{\varepsilon_{i}-\varepsilon_{i+1}\mid 1\leq i\leq n-1\\},\qquad\Delta_{1}=\big{\\{}\varepsilon_{k}+\varepsilon_{l}-\varepsilon_{j}\mid 1\leq k,l,j\leq n\big{\\}},$
$\displaystyle\Delta_{s}=\bigg{\\{}\sum_{i=1}^{n}\varepsilon_{i}-\varepsilon_{j}-\varepsilon_{k}\mid 1\leq j,k\leq n\bigg{\\}}.$
For $H(2m)$,
$\displaystyle\Delta_{-1}=\\{\pm\varepsilon_{j}\mid 1\leq j\leq m\\},\qquad\Delta_{0}=\\{\pm(\varepsilon_{i}+\varepsilon_{j}),\pm(\varepsilon_{i}-\varepsilon_{j})\mid 1\leq i<j\leq m\\},$
$\displaystyle\Pi=\\{\varepsilon_{i}-\varepsilon_{i+1},\varepsilon_{m-1}+\varepsilon_{m}\mid 1\leq i<m\\},$
$\displaystyle\Delta_{1}=\\{\pm(\varepsilon_{i}+\varepsilon_{j})\pm\varepsilon_{k},\pm(\varepsilon_{i}-\varepsilon_{j})\pm\varepsilon_{k}\mid 1\leq i<j<k\leq m\\}\cup\\{\pm\varepsilon_{l}\mid 1\leq l\leq m\\}.$ (2.2)
For $H(2m+1)$, write $\varepsilon_{i}^{\prime}=\varepsilon_{i+1}$ for $1\leq
i\leq m.$ We have
$\displaystyle\Delta_{-1}=\\{0\\}\cup\\{\pm\varepsilon_{i}^{\prime}\mid 1\leq i\leq m\\},$
$\displaystyle\Delta_{0}=\\{\pm\varepsilon_{k}^{\prime},\pm(\varepsilon_{i}^{\prime}+\varepsilon_{j}^{\prime}),\pm(\varepsilon_{i}^{\prime}-\varepsilon_{j}^{\prime})\mid 1\leq k\leq m,1\leq i<j\leq m\\},$
$\displaystyle\Pi=\\{\varepsilon_{i}^{\prime}-\varepsilon_{i+1}^{\prime},\varepsilon_{m}^{\prime}\mid 1\leq i<m\\},$
$\displaystyle\Delta_{1}=\\{0\\}\cup\\{\pm\varepsilon_{l}^{\prime},\pm(\varepsilon_{i}^{\prime}+\varepsilon_{j}^{\prime}),\pm(\varepsilon_{i}^{\prime}-\varepsilon_{j}^{\prime})\mid 1\leq l\leq m,1\leq i<j\leq m\\}\cup\\{\pm(\varepsilon_{i}^{\prime}+\varepsilon_{j}^{\prime})\pm\varepsilon_{k}^{\prime},\pm(\varepsilon_{i}^{\prime}-\varepsilon_{j}^{\prime})\pm\varepsilon_{k}^{\prime}\mid 1\leq i<j<k\leq m\\}.$
All the statements follow directly, except (5) for $L=H(6).$ In this special
case, from (2.2) one sees that $0\notin\Delta_{1}$ and $|\Delta_{1}|>1$.
Consequently, (5) holds. ∎
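Lemma 2.2(1) for $L=W(n)$ can also be verified mechanically: encode each weight as an integer coordinate vector in the basis $\varepsilon_{1},\ldots,\varepsilon_{n}$ and check that the three pairwise intersections are empty. A sketch (the encoding and helper names are ours, not from the paper):

```python
def eps(j, n):
    """Weight epsilon_j as an integer coordinate vector."""
    v = [0] * n
    v[j - 1] = 1
    return tuple(v)

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def neg(u):
    return tuple(-a for a in u)

n = 5
Delta_m1 = {neg(eps(j, n)) for j in range(1, n + 1)}            # weights of W(n)_{-1}
Pi = {add(eps(i, n), neg(eps(i + 1, n))) for i in range(1, n)}  # simple roots of gl(n)
top = tuple(1 for _ in range(n))                                # sum_k epsilon_k
Delta_s = {add(top, neg(eps(j, n))) for j in range(1, n + 1)}   # weights of W(n)_{n-1}
# Lemma 2.2(1) for L = W(n): the three sets are pairwise disjoint
assert not (Pi & Delta_m1) and not (Pi & Delta_s) and not (Delta_m1 & Delta_s)
```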
Recall that an element $x\in\mathfrak{g}$ is referred to as $\Pi$-balanced if
$x$ is a sum of all the simple-root vectors.
###### Theorem 2.3.
A Cartan Lie superalgebra is generated by 2 elements.
###### Proof.
Recall that the null component $L_{0}$ is isomorphic to
$\mathfrak{gl}(n),\mathfrak{sl}(n),\mathfrak{sl}(2m)$ or $\mathfrak{so}(n)$.
From Remark 1.8 and Corollary 1.6, for a $\Pi$-balanced element $x_{0}\in
L_{0}$ and $h_{0}\in\Omega_{\Pi}\subset\mathfrak{h}_{L_{0}}$ we have
$L_{0}=\langle x_{0}+\delta_{L,W}z,h_{0}\rangle,$ where $z$ is a central
element in $\mathfrak{gl}(n).$
For simplicity, write $t:=s$ for $L=W$ or $S$ and $t:=1$ for $L=\widetilde{S}$
or $H$. Suppose first that $L\neq H(6)$ and $L\neq H(2m+1).$ According to Lemma 2.2, we
are able to choose nonzero weights $\alpha_{-1}\in\Delta_{-1}$ and
$\alpha_{t}\in\Delta_{t}$ such that $\alpha_{-1}\neq\alpha_{t}$,
$\alpha_{-1}\notin{\Pi},$ and $\alpha_{t}\notin{\Pi}.$ Put
$x:=x_{-1}+x_{0}+\delta_{L,W}z+x_{t}$ for some weight vectors $x_{-1}\in
L_{-1}^{\alpha_{-1}}$ and $x_{t}\in L_{t}^{\alpha_{t}}.$ Now set
$\Phi:=\Pi\cup\\{\alpha_{-1}\\}\cup\\{\alpha_{t}\\}\subset\mathfrak{h}_{L_{0}}^{*}$
and choose an element $h_{0}\in\Omega_{\Phi}.$ We claim that $\langle
x,h_{0}\rangle=L.$ Lemma 1.4 implies that the components $x_{-1}$, $x_{0}$,
$\delta_{L,W}z$ and $x_{t}$ all belong to $\langle x,h_{0}\rangle.$ As
$h_{0}\in\Omega_{\Phi}\subset\Omega_{\Pi},$ we obtain $L_{0}=\langle
x_{0}+\delta_{L,W}z,h_{0}\rangle\subset\langle x,h_{0}\rangle.$ By Lemma
2.1(1) and (2), since $L_{-1}$ and $L_{t}$ are irreducible $L_{0}$-modules, we
have $L_{-1}+L_{t}\subset\langle x,h_{0}\rangle.$ From Lemma 2.1(3) and (4) it
follows that $L=\langle x,h_{0}\rangle.$
If $L=H(6),$ by Lemma 2.2(3) and (5), we are able to choose
$\alpha_{-1}\in\Delta_{-1},$ $\alpha_{1}^{1}\in\Delta_{1}^{1}$ and
$\alpha_{1}^{2}\in\Delta_{1}^{2}$ such that
$\alpha_{-1},\alpha_{1}^{1},\alpha_{1}^{2}$ are pairwise distinct and
$\alpha_{-1}\notin\Pi,$ $\alpha_{1}^{1}\notin\Pi$ and
$\alpha_{1}^{2}\notin\Pi$. Put $x:=x_{-1}+x_{0}+x_{1}^{1}+x_{1}^{2}$ for some
weight vectors $x_{-1}\in L_{-1}^{\alpha_{-1}}$ and $x_{1}^{i}\in
L_{1}^{\alpha_{1}^{i}},$ $i=1,2.$ Write
$\Phi:=\Pi\cup\\{\alpha_{-1}\\}\cup\\{\alpha_{1}^{1}\\}\cup\\{\alpha_{1}^{2}\\}.$
For $h_{0}\in\Omega_{\Phi}\subset\Omega_{\Pi}$, as in the above, one may show
that $L=\langle x,h_{0}\rangle$.
If $L=H(2m+1),$ by Lemma 2.2(4), choose $\alpha_{-1}\in\Delta_{-1},$
$\alpha_{1}\in\Delta_{1}$ such that $\alpha_{-1}=0,$ $\alpha_{1}\notin\Pi.$
Set $x:=x_{-1}+x_{0}+x_{1}$ for some weight vectors $x_{-1}\in
L_{-1}^{\alpha_{-1}}$ and $x_{1}\in L_{t}^{\alpha_{1}}.$ Now put
$\Phi:=\Pi\cup\\{\alpha_{-1}\\}\cup\\{\alpha_{1}\\}\subset\mathfrak{h}_{L_{0}}^{*}.$
Let $h_{0}\in\Omega_{\Phi}\subset\Omega_{\Pi}$ and claim that $L=\langle
x,h_{0}\rangle.$ By Lemma 1.4, $x_{0},$ $x_{-1}$ and $x_{1}\in\langle
x,h_{0}\rangle.$ Consequently, $L_{0}\subset\langle x,h_{0}\rangle$. The irreducibility of
$L_{-1}$ and $L_{1}$ ensures $L_{-1}+L_{1}\subset\langle x,h_{0}\rangle.$ By
Lemma 2.1(3), the claim holds. The proof is complete. ∎
Theorems 1.10 and 2.3 combine to give the main result of this paper:
###### Theorem 2.4.
Any simple Lie superalgebra is generated by 2 elements.
## References
* [1] J.-M. Bois. Generators of simple Lie algebras in arbitrary characteristics. Math. Z. 262 (2009): 715-741.
* [2] J. E. Humphreys. Introduction to Lie Algebras and Representation Theory. Springer-Verlag. New York, 1972.
* [3] T. Ionescu. On the generators of semi-simple Lie algebras. Linear Algebra Appl. 15 (3), (1976): 271-292.
* [4] V.G. Kac. Lie superalgebras. Adv. Math. 26 (1977): 8-96.
* [5] M. Kuranishi. On everywhere dense imbedding of free groups in Lie groups. Nagoya Math. J. 2 (1951): 63-71.
* [6] M. Scheunert. Theory of Lie superalgebras. Lecture Notes Math. 716 (1979), Springer-Verlag.
* [7] R. B. Zhang. Serre presentations of Lie superalgebras. arXiv:1101.3114v1 [math.RT], 2011.
|
arxiv-papers
| 2011-03-22T10:43:58 |
2024-09-04T02:49:17.850678
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Wende Liu and Liming Tang",
"submitter": "Tang Liming",
"url": "https://arxiv.org/abs/1103.4242"
}
|
1103.4313
|
# Graphene valley filter using a line defect
D. Gunlycke and C. T. White, Naval Research Laboratory, Washington, D.C. 20375, USA
###### Abstract
With its two degenerate valleys at the Fermi level, the band structure of
graphene provides the opportunity to develop unconventional electronic
applications. Herein, we show that electron and hole quasiparticles in
graphene can be filtered according to which valley they occupy without the
need to introduce confinement. The proposed valley filter is based on
scattering off a recently observed line defect in graphene. Quantum transport
calculations show that the line defect is semitransparent and that
quasiparticles arriving at the line defect with a high angle of incidence are
transmitted with a valley polarization near 100%.
###### pacs:
73.22.Pr, 73.61.Wp, 73.63.Bd, 85.75.-d
Owing to its exceptional electron [Bolo08_1] and thermal [Bala08_1] transport
properties, graphene [Novo04_1] is a promising material for advanced low-energy
electronic applications. As graphene is a semimetal with no band gap [Wall47_1],
it cannot in its intrinsic form replace silicon and other semiconductors in
conventional electronics. However, with its bands at the Fermi level defining
two conical valleys, graphene might instead offer novel electronic
applications. Before such applications can be realized, control over transport
in graphene needs to improve. Understanding the effects that a recently
observed line defect [Lahi10] and grain boundaries [Huan11] have on the
transport properties might be the key. While several theoretical studies have
investigated grain boundaries [Malo10, Yazy10, Liu10, Yazy10a, Huan11], the
line defect has so far remained relatively unexplored, even though it is always
straight and its adjoining grains are aligned, arguably making it a more
suitable structure for controlled transport in graphene.
This Letter investigates the transport properties of the atomically precise,
self-assembled graphene line defect shown in Fig. 1 and shows that this line
defect is semitransparent and can be used as a valley filter.
Figure 1: Extended line defect in graphene. (a) Arrangement of the carbon
atoms around the line defect highlighted in gray. The structure exhibits
translational symmetry along the defect with a primitive cell shown in beige.
The two sublattices in graphene indicated by blue and green atoms reverse upon
reflection at the line defect, as can be clearly seen in the enlarged overlay.
(b) Scanning tunneling microscope image of the line defect in graphene (image
adapted from Ref. [Lahi10]).
Electron and hole quasiparticles can either transmit through the line defect
without changing direction or reflect following the law of specular
reflection. The transmission and reflection probabilities depend on the valley
degree of freedom, thus allowing the quasiparticles to be filtered according
to their valley degree of freedom with a polarization near 100% for
quasiparticles arriving at the line defect with a high angle of incidence.
This filter differs markedly from the valley filter proposed by Rycerz et
al. [Ryce07], which relies on the isolation of a few one-dimensional channels
in a narrowly confined region. The latter filter has not yet been demonstrated,
presumably owing to challenges in fabricating the structure, which requires a
sub-10 nm constriction with saturated zigzag edges. In contrast, the filter
proposed herein relies on the two-dimensional geometry of graphene, and its
required structure has already been observed [Lahi10], as demonstrated by the
micrograph in Fig. 1. The valley filter is expected to be a central component
in valleytronics [Ryce07], just as the spin filter is central in spintronics.
Electronics that makes use of the two valleys in graphene is attractive because
the valleys are separated by a large wave vector, making valley information
robust against scattering from slowly varying potentials [Ando98], including
scattering caused by intravalley acoustic phonons that often limit coherent
low-bias devices to low-temperature operation. Therefore, the valley
information generated by the filter proposed herein could in principle be
preserved even in a diffusive charge-transport regime.
Figure 2: Valley states scattering off the line defect. (a) Energies close to
the Fermi level in graphene have dark contours and are located in the corners
of the first Brillouin zone contained within the gold hexagon. The two
valleys, K and K′, are identified as the two disjoint low-energy regions in
the reciprocal primitive cell enclosed by the blue rectangle. (b) An incident
quasiparticle state is defined by the valley index $\tau$ and wave vector
$\vec{q}$, where the latter points in the direction $\hat{q}$ given by the
angle of incidence $\alpha$. (c) Owing to energy and momentum conservation
along the line defect, there are only two nonevanescent scattered states
allowed. (d) The sublattice symmetric $|+\rangle$ and antisymmetric
$|-\rangle$ components of the incident state $|\Phi_{\tau}\rangle$ are
transmitted and reflected, respectively. The thickness of each arrow indicates
the probability the quasiparticle will follow the respective path.
Consider a low-energy electron (hole) quasiparticle with energy $\varepsilon$
and valley index $\tau$ approaching the line defect from the left (right) at
the angle of incidence $\alpha$. Asymptotically far from the line defect, the
quasiparticle occupies a graphene state $|\Phi_{\tau}\rangle$, where $\tau=\pm
1$ is a valley index. See Fig. 2. Let $\vec{q}=(q_{x},q_{y})$ be the
quasiparticle wave vector measured from the center of the occupied valley,
located at $\vec{K}_{\tau}=4\pi\tau\hat{y}/3a$, where $a$ is
the graphene lattice constant and $\hat{y}$ is the unit vector along the line
defect. To first order in $q\equiv|\vec{q}|$, the nearest-neighbor
tight-binding Hamiltonian of graphene [Wall47_1] can be expressed [Slon57] as
$H_{\tau}=\hbar v_{F}\left(q_{x}\sigma_{y}+\tau q_{y}\sigma_{x}\right)$, where
$v_{F}=\sqrt{3}|\gamma|a/2\hbar$ is the Fermi velocity with the nearest-
neighbor hopping parameter $\gamma\approx-2.6$ eV, and $\sigma_{x}$ and
$\sigma_{y}$ are Pauli matrices. The Hamiltonian has energy eigenvalues
$E=\eta\varepsilon$, where $\varepsilon=\hbar v_{F}q$ and $\eta=+1$ ($-1$) if
the quasiparticle is an electron (hole). From the quasiparticle energy
dispersion $\varepsilon$, it follows that the quasiparticle group velocity is
$v_{F}\hat{q}$, where $\hat{q}$ is the unit vector in the direction $\vec{q}$.
Because the quasiparticle travels in the direction $\vec{q}$, $q_{x}=\eta
q\cos\alpha$ and $q_{y}=\eta q\sin\alpha$. Using these relations, the
eigenstate of the graphene Hamiltonian for a given $\tau$ and $\alpha$ can be
expressed as
$|\Phi_{\tau}\rangle=\frac{1}{\sqrt{2}}\left(|A\rangle+ie^{-i\tau\alpha}|B\rangle\right),$
(1)
where $|A\rangle$ and $|B\rangle$ refer to the two sublattices in graphene.
The structure in Fig. 1 exhibits a useful symmetry line through the line
defect. In the limit $q\rightarrow 0$, the reflection operator commutes with
the graphene translation operator perpendicular to the line defect. Therefore,
symmetry-adapted states $|\pm\rangle$ can be constructed that are simultaneous
eigenstates of the graphene Hamiltonian and the reflection operator. As the
reflection operator maps $A$ sites onto $B$ sites, and vice versa, it can be
represented by the operator $\sigma_{x}$ acting on the two sublattices. From
the eigenstates of $\sigma_{x}$, one obtains
$|\pm\rangle=\frac{1}{\sqrt{2}}\left(|A\rangle\pm|B\rangle\right).$ (2)
The graphene state (1) expressed in the symmetry-adapted basis is
$|\Phi_{\tau}\rangle=\frac{1+ie^{-i\tau\alpha}}{2}|+\rangle+\frac{1-ie^{-i\tau\alpha}}{2}|-\rangle.$
(3)
The full Hamiltonian describing the system can be divided into three terms,
$\mathcal{H}_{\tau}=H_{\tau}+H_{D}+V$, which represent graphene, the isolated
line defect, and the interaction between graphene and the line defect,
respectively. As each term commutes with the reflection operator, the full
Hamiltonian must commute with the reflection operator, and thus, the
eigenstates of $\mathcal{H}_{\tau}$ in the symmetry-adapted basis are either
symmetric or antisymmetric about the line defect. Antisymmetric states have a
node at the line defect, and as a result, there are no matrix elements within
the nearest-neighbor model coupling the left and right sides. Therefore,
antisymmetric states cannot contribute to any transmission across the line
defect. As shown below, however, there are two symmetric states at the Fermi
level without a node on the line defect. As these states are extended
eigenstates of the full Hamiltonian, they carry quasiparticles across the line
defect without scattering. Thus, we can conclude that the transmission
probability of the quasiparticle approaching the line defect is
$T_{\tau}=\left|\langle+|\Phi_{\tau}\rangle\right|^{2}=\frac{1}{2}\left(1+\tau\sin\alpha\right).$
(4)
As the sum of the transmission probabilities in Eq. (4) over the two valleys
is $\sum_{\tau}T_{\tau}=1$, we can also conclude that the line defect is
semitransparent. The semitransparency follows from the relation
$\langle+|\Phi_{-\tau}\rangle=\langle-|\Phi_{\tau}\rangle^{*}$ and the
normalization of $|\Phi_{\tau}\rangle$. See Fig. 2. Fig. 3 shows that the
transmission probability of a quasiparticle varies significantly with its
angle of incidence $\alpha$.
Figure 3: The probability that an incident quasiparticle at the Fermi level
with valley index $\tau$ and angle of incidence $\alpha$ will transmit through
the line defect.
At a high angle of incidence, there is almost full transmission or reflection,
depending on the valley index $\tau$.
Owing to the semitransparency, given an unpolarized beam of incident
quasiparticles, the probability $P_{\tau}$ that a transmitted quasiparticle
has valley index $\tau$ is given by $T_{\tau}$. The polarization
$\mathcal{P}\equiv\langle\tau\rangle=P_{+1}-P_{-1}$ of the transmitted beam is
$\mathcal{P}=\sin\alpha.$ (5)
This expression shows that an unpolarized beam of quasiparticles approaching
the line defect at a high angle of incidence will lead to outgoing transmitted
and reflected beams that are almost completely polarized.
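The algebra behind Eqs. (4) and (5) is short enough to check numerically: projecting the incident state of Eq. (3) onto $|+\rangle$ gives $T_{\tau}$, the two valley transmissions sum to one, and the transmitted polarization is $\sin\alpha$. A minimal sketch:

```python
import cmath
import math

def T(tau, alpha):
    # T_tau = |<+|Phi_tau>|^2, with the |+> coefficient read off Eq. (3)
    c_plus = (1 + 1j * cmath.exp(-1j * tau * alpha)) / 2
    return abs(c_plus) ** 2

for alpha in (-1.2, -0.5, 0.0, 0.3, 0.9, 1.5):
    assert math.isclose(T(+1, alpha), 0.5 * (1 + math.sin(alpha)))  # Eq. (4)
    assert math.isclose(T(+1, alpha) + T(-1, alpha), 1.0)           # semitransparency
    assert math.isclose(T(+1, alpha) - T(-1, alpha), math.sin(alpha),
                        abs_tol=1e-12)                              # Eq. (5)
```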
The results presented above are based on symmetry arguments that are valid
only in the limit as the quasiparticle energy $\varepsilon\rightarrow 0$. The
results, however, hold to an excellent approximation as long as
$\varepsilon\ll\hbar v_{F}/a\approx 2.3$ eV. To show this, we have performed
numerical transport calculations that are treated exactly within the nearest-
neighbor tight-binding model. The transmission probability is shown in Fig. 4
as a function of the components of the wave vector $\vec{q}$.
Figure 4: Transmission probability for a quasiparticle with finite energy. (a)
The transmission probability for a quasiparticle approaching the line defect
at an angle of incidence $\alpha$ is indicated by the brightness, ranging from
0 to 1 in increments of 0.05. For the angle $\alpha$ shown, the transmission
is near one if the quasiparticle has valley index $\tau=+1$, as illustrated on
the right, where the filled (open) circle represents an electron (hole)
quasiparticle. (b) The transmission probability for the corresponding
quasiparticle with valley index $\tau=-1$.
Tracing the transmission probability in the figure along any constant energy
contour, which is almost perfectly circular due to the approximately conic
dispersion, yields a dependence on the angle of incidence that is virtually
indistinguishable from that in Fig. 3, thus confirming that almost full
polarization can be achieved as long as $qa\ll 1$ or $\varepsilon\ll\hbar
v_{F}/a$. To further test the robustness of the valley filter, we have
performed calculations with interactions across the line defect and with
potentials on the line defect and their neighboring sites. None of these tests
led to results qualitatively different from those presented in Fig. 4.
Because the structure in Fig. 1(a) exhibits a symmetry plane through the
center of the primitive cell, there is time-reversal symmetry in the direction
along the line defect. This time-reversal symmetry implies that the
transmission probability of a quasiparticle with valley index $-\tau$ can be
obtained from that of a quasiparticle with valley index $\tau$ by letting
$q_{y}\rightarrow-q_{y}$. This relationship between $\tau=\pm 1$ can be seen
in Fig. 4. Note, however, that the transmission probabilities in Fig. 4 are
not symmetric about $q_{y}$, and thus one can conclude that the scattering of
a quasiparticle depends on the valley index, which is a necessary requirement
for a valley filter. As both graphene and the line defect have Hamiltonians
exhibiting electron–hole symmetry, one might expect that
$T_{\tau}(\vec{q})=T_{\tau}(-\vec{q})$. In general, however, the scattering
does not obey electron–hole symmetry. That the condition is satisfied by
Eq. (4) is in part a consequence of the neglect of evanescent waves. These
evanescent waves are accounted for in the numerically obtained transmission
probability in Fig. 4. Note, for instance, that
$T_{\tau}(0.005\pi/a,0.01\pi/a)\neq T_{\tau}(-0.005\pi/a,-0.01\pi/a)$. To
understand the lack of electron–hole symmetry in the combined graphene–line
defect system, we note that the structure in Fig. 1(a) is not bipartite; in
particular, the sites participating in the pentagons at the line defect cannot
be divided into two types where one type has only nearest neighbors of the
other type.
To gain further insight into the numerical calculations and how they lead to
Eq. (4), it is useful to perform the transport calculations analytically in
the limit of small $q$. As there is translational symmetry along the line
defect, one can focus on those atoms within the primitive cell shown in Fig.
1. The Hamiltonian of the isolated line defect is then
$H_{D}=\gamma\left(\begin{array}[]{cc}0&1\\\ 1&0\end{array}\right).$ (6)
Next, we seek a retarded self energy $\Sigma$ that accounts for the coupling
of the two line defect atoms to the semi-infinite portion of graphene on each
side of the line defect. As shown in Fig. 1, there are two atoms neighboring
the line defect on each side. Expressed in the basis of the atoms parallel to
the line defect, the graphene state with valley index $\tau$ is given by
$|\tau\rangle=\frac{1}{\sqrt{2}}\left(\begin{array}[]{c}1\\\ e^{-2\pi
i\tau/3}\end{array}\right).$ (7)
As the basis contains two atoms belonging to the same sublattice, the graphene
state above can be folded onto another graphene state with the same wave
vector in the full system. This latter state, however, is evanescent near the
Fermi level and can be neglected. Requiring $\Sigma$ to be retarded fixes the
relative phase between the atoms on the line defect and their neighbors,
resulting in the relation $\Sigma\langle B|\Phi_{\tau}\rangle=\gamma\langle
A|\Phi_{\tau}\rangle|\tau\rangle\langle\tau|$, from which one obtains
$\Sigma=-\frac{i\gamma}{2}e^{i\tau\alpha}\left(\begin{array}[]{cc}1&e^{2\pi
i\tau/3}\\\ e^{-2\pi i\tau/3}&1\end{array}\right).$ (8)
Equipped with $H_{D}$ describing the interactions within the line defect and
$\Sigma$ describing the coupling to the semi-infinite graphene on each side,
one can calculate the retarded Green function on the line defect,
$G=\left(\eta\varepsilon I-H_{D}-2\Sigma\right)^{-1}$, where $I$ is the unit
matrix. To zeroth order in $q$,
$G=\frac{-\gamma^{-1}}{1+ie^{i\tau\alpha}}\left(\begin{array}[]{cc}ie^{i\tau\alpha}&1-ie^{i\tau(\alpha+2\pi/3)}\\\
1-ie^{i\tau(\alpha-2\pi/3)}&ie^{i\tau\alpha}\end{array}\right).$ (9)
The probability that the quasiparticle will transmit through the line defect
is given by $T_{\tau}=\langle\tau|\Gamma G\Gamma G^{\dagger}|\tau\rangle$,
where $\Gamma\equiv i\left(\Sigma-\Sigma^{\dagger}\right)$. Inserting Eqs.
(7–9) into this equation, one recovers Eq. (4) exactly.
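The chain from Eqs. (6)–(8) to Eq. (4) can be reproduced in a few lines of code: build $H_{D}$ and $\Sigma$, invert for the Green function at $\varepsilon\to 0$, and evaluate $T_{\tau}=\langle\tau|\Gamma G\Gamma G^{\dagger}|\tau\rangle$. A sketch (stdlib only; the value of $\gamma$ drops out of the final probability):

```python
import cmath
import math

def transmission(tau, alpha, gamma=-2.6):
    """Fermi-level transmission T_tau = <tau| Gamma G Gamma G^dagger |tau>."""
    w = cmath.exp(2j * math.pi * tau / 3)
    ket = [1 / math.sqrt(2), w.conjugate() / math.sqrt(2)]      # |tau>, Eq. (7)
    s = -0.5j * gamma * cmath.exp(1j * tau * alpha)
    Sigma = [[s, s * w], [s * w.conjugate(), s]]                # Eq. (8)
    H_D = [[0.0, gamma], [gamma, 0.0]]                          # Eq. (6)
    # G = (-H_D - 2 Sigma)^(-1) at eps -> 0, inverted by hand (2x2)
    M = [[-H_D[i][j] - 2 * Sigma[i][j] for j in range(2)] for i in range(2)]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    G = [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]
    Gamma = [[1j * (Sigma[i][j] - Sigma[j][i].conjugate()) for j in range(2)]
             for i in range(2)]
    Gdag = [[G[j][i].conjugate() for j in range(2)] for i in range(2)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    P = matmul(matmul(Gamma, G), matmul(Gamma, Gdag))
    return sum(ket[i].conjugate() * P[i][j] * ket[j]
               for i in range(2) for j in range(2)).real

# check against Eq. (4): T_tau = (1 + tau sin(alpha)) / 2
for alpha in (-1.0, -0.3, 0.0, 0.4, 1.2):
    for tau in (+1, -1):
        assert math.isclose(transmission(tau, alpha),
                            0.5 * (1 + tau * math.sin(alpha)), abs_tol=1e-12)
```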
In the initial analysis leading to Eq. (4), an assertion was made that there
are two symmetric states at the Fermi level without a node on the line defect.
That claim can now be verified using Eqs. (1), (6), and (8). According to Eq. (1), a
symmetric state must satisfy $\tau\alpha=\pi/2$. When this condition is
satisfied, one finds that the determinant
$\operatorname{det}\left(H_{D}+2\Sigma\right)=0$, which implies that there are
exactly two symmetric states at the Fermi level, one for each valley index
$\tau$. The corresponding eigenstates are $|\Psi_{\tau}\rangle=|-\tau\rangle$,
confirming that the states have no node at the line defect.
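Both assertions reduce to the identity $1+\omega+\omega^{2}=0$ for $\omega=e^{2\pi i/3}$ and can be checked directly: at $\tau\alpha=\pi/2$ the matrix $H_{D}+2\Sigma$ is singular, and $|-\tau\rangle$ spans its kernel with nonzero weight on both line-defect atoms. A sketch (the encoding is ours):

```python
import cmath
import math

def H_plus_2Sigma(tau, alpha, gamma=-2.6):
    """H_D + 2*Sigma from Eqs. (6) and (8)."""
    w = cmath.exp(2j * math.pi * tau / 3)
    s = -0.5j * gamma * cmath.exp(1j * tau * alpha)
    return [[2 * s, gamma + 2 * s * w],
            [gamma + 2 * s * w.conjugate(), 2 * s]]

for tau in (+1, -1):
    alpha = tau * math.pi / 2        # symmetric-state condition tau*alpha = pi/2
    M = H_plus_2Sigma(tau, alpha)
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    assert abs(det) < 1e-12          # a state exists exactly at the Fermi level
    # |Psi_tau> = |-tau> = (1, e^{2 pi i tau/3})/sqrt(2) is annihilated by M;
    # both entries are nonzero, so the state has no node on the line defect
    w = cmath.exp(2j * math.pi * tau / 3)
    psi = [1 / math.sqrt(2), w / math.sqrt(2)]
    out = [M[0][0] * psi[0] + M[0][1] * psi[1],
           M[1][0] * psi[0] + M[1][1] * psi[1]]
    assert all(abs(x) < 1e-12 for x in out)
```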
Rather than forming isolated Bloch waves, experiments exploiting the valley
filter will likely construct more complex wave patterns. The dimensions of the
system should be chosen such that the mean-free path is longer than the
distance between the source and the line defect. For low-energy
quasiparticles, the wavelength $\lambda=2\pi/q$ is much greater than the
repeating length $2a$ of the line defect. As long as this repeating length is
also much shorter than any spatial features of the waves, to an excellent
approximation the scattering can be treated within ray optics, where rays
travel in straight lines and only scatter at the line defect, where they will
either transmit with a probability approximately given by Eq. (4) or reflect
while obeying the law of specular reflection.
The filter can be used to create a valley-polarized beam of electrons or
holes. By probing the current passing through the line defect at a particular
angle, one could also measure the valley polarization of the incident
quasiparticles. Demonstration of these components should significantly
accelerate research on graphene valleytronics.
###### Acknowledgements.
The authors acknowledge support from the U.S. Office of Naval Research,
directly and through the U.S. Naval Research Laboratory. D.G. thanks C.W.J.
Beenakker for helpful comments.
## References
* (1) K. I. Bolotin, K. J. Sikes, Z. Jiang, G. Fudenberg, J. Hone, P. Kim, and H. L. Stormer, Solid State Commun. 146, 351 (2008)
* (2) A. A. Balandin, S. Ghosh, W. Bao, I. Calizo, D. Teweldebrhan, F. Miao, and C. N. Lau, Nano Lett. 8, 902 (2008)
* (3) K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Science 306, 666 (2004)
* (4) P. R. Wallace, Phys. Rev. 71, 622 (1947)
* (5) J. Lahiri, Y. Lin, P. Bozkurt, I. I. Oleynik, and M. Batzill, Nat. Nanotech. 5, 326 (2010)
* (6) P. Y. Huang, C. S. Ruiz-Vargas, A. M. van der Zande, W. S. Whitney, M. P. Levendorf, J. W. Kevek, S. Garg, J. S. Alden, C. J. Hustedt, Y. Zhu, J. Park, P. L. McEuen, and D. A. Muller, Nature 469, 389 (2011)
* (7) S. Malola, H. Häkkinen, and P. Koskinen, Phys. Rev. B 81, 165447 (2010)
* (8) O. V. Yazyev and S. G. Louie, Phys. Rev. B 81, 195420 (2010)
* (9) Y. Liu and B. I. Yakobson, Nano Lett. 10, 2178 (2010)
* (10) O. V. Yazyev and S. G. Louie, Nat. Mat. 9, 806 (2010)
* (11) A. Rycerz, J. Tworzydło, and C. W. J. Beenakker, Nat. Phys. 3, 172 (2007)
* (12) T. Ando and T. Nakanishi, J. Phys. Soc. Jpn. 67, 1704 (1998)
* (13) J. C. Slonczewski and P. R. Weiss, Phys. Rev. 109, 272 (1958)
|
arxiv-papers
| 2011-03-22T17:05:05 |
2024-09-04T02:49:17.857359
|
{
"license": "Public Domain",
"authors": "Daniel Gunlycke and Carter T. White",
"submitter": "Daniel Gunlycke",
"url": "https://arxiv.org/abs/1103.4313"
}
|
1103.4372
|
# Laplace-isospectral hyperbolic 2-orbifolds are representation-equivalent
Peter G. Doyle Dartmouth College. Juan Pablo Rossetti FaMAF-CIEM, Univ. Nac.
Córdoba.
(Version 1.0 dated 22 March 2011
No Copyright††thanks: The authors hereby waive all copyright and related or
neighboring rights to this work, and dedicate it to the public domain. This
applies worldwide. )
###### Abstract
Using the Selberg trace formula, we show that for a hyperbolic 2-orbifold, the
spectrum of the Laplacian acting on functions determines, and is determined
by, the following data: the volume; the total length of the mirror boundary;
the number of conepoints of each order, counting a mirror corner as half a
conepoint; and the number of primitive closed geodesics of each length and
orientability class, counting a geodesic running along the boundary as half
orientation-preserving and half orientation-reversing, and discounting
imprimitive geodesics appropriately. This implies that Laplace-isospectral
hyperbolic 2-orbifolds determine equivalent linear representations of
$\mathrm{Isom}(H^{2})$, and are isospectral for any natural operator.
## 1 Statement
We consider compact hyperbolic 2-orbifolds $M$, not necessarily connected.
Denote the eigenvalues of the Laplacian acting on functions on $M$ by
$0=\lambda_{0}\leq\lambda_{1}\leq\ldots.$
We call the sequence $(\lambda_{0},\lambda_{1},\ldots)$ the _Laplace spectrum_
of $M$. If two spaces have the same Laplace spectrum we call them _Laplace-
isospectral_. (Note that we don’t simply call them ‘isospectral’, because this
term is used in different ways by different authors.)
Our goal here will be to prove:
###### Theorem 1.
Let $M$ be a compact hyperbolic $2$-orbifold, not necessarily connected. The
Laplace spectrum of $M$ determines, and is determined by, the following data:
1. 1.
the volume;
2. 2.
the total length of the mirror boundary;
3. 3.
the number of conepoints of each order, counting a mirror corner as half a
conepoint of the corresponding order;
4. 4.
the number of closed geodesics of each length and orientability class,
counting a geodesic running along the boundary as half orientation-preserving
and half orientation-reversing, and counting the $k$-fold iterate of a
primitive geodesic as worth $\frac{1}{k}$ of a primitive geodesic of the same
length and orientability.
Of course the Laplace spectrum determines other data as well, for example the
number of connected components. The data we list here determine those other
data, since they determine the spectrum.
Theorem 1 can be recast less picturesquely as follows. Associated to a
2-orbifold $M$ is a linear representation $\rho_{\bar{M}}$ of
$\mathrm{Isom}(H^{2})$ on functions on the frame bundle ${\bar{M}}$ of $M$.
Associated to this representation is its character $\chi_{\bar{M}}$, a
function on the set of conjugacy classes of $\mathrm{Isom}(H^{2})$. The
geometrical data listed in Theorem 1 are just a way of describing
geometrically the information conveyed by the character $\chi_{\bar{M}}$.
Selberg tells us that the character determines the Laplace spectrum of $M$.
This is a very general fact. The special thing Theorem 1 tells us is that
for hyperbolic $2$-orbifolds, we can get back from the Laplace spectrum to the
character.
Once we have the character, we get by general principles the linear
equivalence class of the representation $\rho_{\bar{M}}$, hence the title,
‘Laplace-isospectral hyperbolic 2-orbifolds are representation-equivalent’. We
also get the spectrum of any natural operator on any natural bundle. We will
discuss these matters further in Section 9 below; for now we concentrate on Theorem 1.
## 2 Plan
This paper is rather longer than you might think it needs to be, even if you
disregard the large Appendix. As we observed in [5], this theorem for
orbifolds is a short step from the result about manifolds proven there. We
carry on at such length because we are hoping to sell readers on the
usefulness of the counting kernel technique, and to provide a general
background on Selberg’s methods for those not familiar with them. We also take
some excursions which we hope will prove interesting.
We will begin with some examples of Theorem 1 in action; fill in the necessary
background on Selberg’s method, and its particular application to what we call
the _counting kernel_ ; outline the proof of Theorem 1; fill in details; and
discuss the implications for linear equivalence and strong isospectrality.
Then we’ll give examples to show how the Theorem breaks down in the flat
case; show how the Selberg formula works in practice; and finish up by
discussing some conjectures.
## 3 Examples
In this section, we give examples to show that the trade-offs between boundary
and interior features that are allowed for in the statement of Theorem 1 do
actually take place: Boundary corners on one orbifold may appear as conepoints
on the other, while boundary geodesics may migrate to the interior.
All the examples here will be obtained by glueing bunches of congruent
hyperbolic triangles. The glueing patterns arise from transplantable pairs, as
described by Buser et al. [2]. In Appendix A below we reproduce a large
catalog of such transplantable pairs, as computed by John Conway. We will
refer to this catalog to identify specific glueing patterns.
Examples of trading boundary and interior geodesics abound. A variation on the
famous example of Gordon, Webb, and Wolpert [8] yields a pair of planar
hyperbolic 2-orbifolds of types $*224236$ and $*224623$, shown in Figure 1.
(This is presumably the simplest pair of this kind—see Section 12 below.) Each
member of the pair is glued together from $7$ copies of a so-called _$346$
triangle_: a hyperbolic triangle with angles $\pi/3$, $\pi/4$, $\pi/6$. The
glueing pattern appears in Appendix A below as pattern $7(3)$.
Figure 1: The pair $*224236$, $*224623$.
There is no issue here with geodesics passing through the interior of any of
the triangles that make up these two orbifolds: These can be matched so as to
preserve length, orientability, and index of imprimitivity. But when it comes
to geodesics that run along the edges of the triangles, whether along the
mirror boundary or in the interior of the orbifold, it is necessary to balance
boundary geodesics on one orbifold against interior geodesics on the other, as
provided for in Theorem 1. To see this, look at Figure 1, and count geodesics
on the two sides. You’ll find that getting the count right is tricky, but fun.
Beware that boundary geodesics turn back at corners of even order, but
continue along around the boundary at corners of odd order. Beware also of the
way interior geodesics bounce when they hit the boundary. The answers are
indicated in Table 1. The names ‘recto’ and ‘verso’ are short for
‘orientation-preserving’ and ‘orientation-reversing’, in analogy with the
names for the front and back of a printed page. The table only shows lengths
for which there is at least one imprimitive geodesic. Trade-offs between
boundary and interior geodesics continue at multiples of these lengths.
Geodesics for $*224236$:
$\begin{array}[]{c||cc|cc|cc}&\multicolumn{2}{c|}{\mbox{boundary}}&\multicolumn{2}{c|}{\mbox{interior}}&\multicolumn{2}{c}{\mbox{total}}\\\
\mbox{length}&\mbox{recto}&\mbox{verso}&\mbox{recto}&\mbox{verso}&\mbox{recto}&\mbox{verso}\\\
\hline\cr 2c&\frac{3}{2}&\frac{3}{2}&&&\frac{3}{2}&\frac{3}{2}\\\
2a+2b&\frac{1}{2}&\frac{1}{2}&1&1&\frac{3}{2}&\frac{3}{2}\\\
4c&\frac{3}{2}\cdot\frac{1}{2}&\frac{3}{2}\cdot\frac{1}{2}&1&&\frac{7}{4}&\frac{3}{4}\\\
4a+4b&\frac{1}{2}+\frac{1}{2}\cdot\frac{1}{2}&\frac{1}{2}+\frac{1}{2}\cdot\frac{1}{2}&2\cdot\frac{1}{2}&&\frac{7}{4}&\frac{3}{4}\\\
\end{array}$
Geodesics for $*224623$:
$\begin{array}[]{c||cc|cc|cc}&\multicolumn{2}{c|}{\mbox{boundary}}&\multicolumn{2}{c|}{\mbox{interior}}&\multicolumn{2}{c}{\mbox{total}}\\\
\mbox{length}&\mbox{recto}&\mbox{verso}&\mbox{recto}&\mbox{verso}&\mbox{recto}&\mbox{verso}\\\
\hline\cr 2c&\frac{1}{2}&\frac{1}{2}&1&1&\frac{3}{2}&\frac{3}{2}\\\
2a+2b&\frac{3}{2}&\frac{3}{2}&&&\frac{3}{2}&\frac{3}{2}\\\
4c&\frac{1}{2}+\frac{1}{2}\cdot\frac{1}{2}&\frac{1}{2}+\frac{1}{2}\cdot\frac{1}{2}&2\cdot\frac{1}{2}&&\frac{7}{4}&\frac{3}{4}\\\
4a+4b&\frac{3}{2}\cdot\frac{1}{2}&\frac{3}{2}\cdot\frac{1}{2}&1&&\frac{7}{4}&\frac{3}{4}\\\
\end{array}$
Table 1: Counting geodesics.
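The bookkeeping in Table 1 can be checked mechanically: for each length, the total recto and verso counts (boundary plus interior) must agree between the two orbifolds. A sketch in exact rational arithmetic, with the data transcribed from Table 1:

```python
from fractions import Fraction as F

# (boundary recto, boundary verso, interior recto, interior verso) per length,
# transcribed from Table 1.  Entries like 3/2 * 1/2 reflect the half-weighting
# of boundary geodesics and the 1/k discount for imprimitive geodesics.
table_224236 = {
    "2c":    (F(3, 2), F(3, 2), F(0), F(0)),
    "2a+2b": (F(1, 2), F(1, 2), F(1), F(1)),
    "4c":    (F(3, 2) * F(1, 2), F(3, 2) * F(1, 2), F(1), F(0)),
    "4a+4b": (F(1, 2) + F(1, 4), F(1, 2) + F(1, 4), F(2) * F(1, 2), F(0)),
}
table_224623 = {
    "2c":    (F(1, 2), F(1, 2), F(1), F(1)),
    "2a+2b": (F(3, 2), F(3, 2), F(0), F(0)),
    "4c":    (F(1, 2) + F(1, 4), F(1, 2) + F(1, 4), F(2) * F(1, 2), F(0)),
    "4a+4b": (F(3, 2) * F(1, 2), F(3, 2) * F(1, 2), F(1), F(0)),
}

def totals(row):
    br, bv, ir, iv = row
    return (br + ir, bv + iv)   # (total recto, total verso)

# The totals must match length by length across the isospectral pair.
for length in table_224236:
    assert totals(table_224236[length]) == totals(table_224623[length])
```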
Examples of trading corners for conepoints are not as omnipresent as examples
of trading boundary and interior geodesics, but they are still plentiful.
Figures 2 and 3 show how to construct a pair $6*2232233$,$23*22366$ from
transplantable pair $11g(3)$ of Appendix A. Note how the order-6 conepoint
moves to the boundary going one way, while the order-2 and order-3 conepoints
move to the boundary going the other way. Many other examples of cone-trading
can be produced using the diagrams of Appendix A, some of them much readier to
hand than this one.
Figure 2: The transplantable pair $11g(3)$.
Figure 3: Replace the prototype equilateral triangle in Figure 2 by a $366$
triangle so that six 3-vertices come together in the left-hand diagram. This
yields a Laplace-isospectral pair of hyperbolic 2-orbifolds of types
$6*2232233$ and $23*22366$.
## 4 Background
Huber [9] proved the result of Theorem 1 in the case of orientable hyperbolic
2-manifolds. Huber used what would nowadays be seen as a version of the
Selberg trace formula, which allows us to read off the lengths of geodesics
from the spectrum in a straightforward way. Doyle and Rossetti [5] extended
the result to non-orientable hyperbolic 2-manifolds. In this case we can’t
simply read off the data about geodesics using the trace formula, because of
interference between the spectral contributions of orientation-preserving and
orientation-reversing geodesics of the same length. However, it turns out that
any possible scenario for matching spectral contributions would require too
many geodesics.
What we will show here is that, as we indicated in [5], it is a short step
from non-orientable surfaces to general orbifolds. The reason is that the
Selberg formula permits us to read off the data about orbifold features, just
as Huber read off the data about geodesics in the case of orientable
manifolds. Then we have only to check that there is still no scenario for
matching the spectral contributions of the geodesics.
There are other ways to approach showing that the Laplace spectrum determines
the data about orbifold features. By looking at the wave trace, Dryden and
Strohmaier [7] proved the result of Theorem 1 for orientable hyperbolic
2-orbifolds. While they did not consider non-orientable surfaces or orbifolds,
it seems likely that wave techniques could be used to show that the Laplace
spectrum determines all the orbifold data. The argument would be essentially
equivalent to the argument we give here, only more complicated.
Another possible approach is via the heat equation. By looking at short-time
asymptotics of the heat trace, Dryden et al. [6] got information about the
singular set of general orbifolds (for example, the volume of the reflecting
boundary). Their results yield information about orbifolds of variable
curvature, and in any dimension. Restricted to hyperbolic $2$-orbifolds, the
results they state don’t yield complete information about the singular set.
All this information is there in the short-time asymptotics of the heat trace,
however, and presumably it could be extracted using their approach, by looking
at higher and higher terms in the asymptotic expansion.
The beauty of the wave and heat approaches is that they can work in great
generality. The Selberg method depends on having spaces whose underlying
geometry is homogeneous, such as manifolds and orbifolds of constant
curvature. For studying such spaces, it is a good bet that the Selberg method
will beat the wave and heat approaches. Of course, the Selberg method can be
used to treat the heat and wave kernels; the bet is that you will do better to
consider a simpler kernel, like the counting kernel that we use here.
## 5 What we need from Selberg
Here we assemble what we will need from Selberg for the proof of Theorem 1.
All the ideas come from Selberg [13].
Let $G=\mathrm{Isom}(H^{2})$ be the group of isometries of $H^{2}$. Note that
$G$ has two components, corresponding to orientation-preserving and
orientation-reversing isometries. A hyperbolic $2$-orbifold can be written as
a union of quotients of $H^{2}$ by discrete cocompact subgroups
$\Gamma_{j}\subset G$, one for each connected component of $M$:
$M=\cup_{j}\,\Gamma_{j}\backslash H^{2}.$
We’ll denote by $F(\Gamma_{j})$ a fundamental domain for $\Gamma_{j}$.
In what follows, we could assume that $M$ is connected, and write
$M=\Gamma\backslash H^{2},$
because the extension to the disconnected case presents no difficulties. We
prefer not to do this, in part to combat the common prejudice against
disconnected spaces.
Define the _counting kernel_ on $H^{2}$ by
$c(x,y;s)=\left\\{\begin{array}[]{ll}1,&d(x,y)\leq s\\\
0,&d(x,y)>s\end{array}\right..$
It tells when the hyperbolic distance $d(x,y)$ is at most $s$. Define the
_counting trace_
$\displaystyle C(s)$ $\displaystyle=$
$\displaystyle\sum_{j}\int\limits_{F(\Gamma_{j})}\sum_{\gamma\in\Gamma_{j}}c(x,\gamma x;s)\,dx$ $\displaystyle=$
$\displaystyle\sum_{j}\int\limits_{F(\Gamma_{j})}\\#\\{\gamma\in\Gamma_{j}:d(x,\gamma x)\leq s\\}\,dx.$
The counting trace tells (after dividing by the volume of $M$) the average
number of broken geodesic loops on $M$ of length at most $s$.
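To make this concrete, here is a small computational sketch of the counting kernel in the upper half-plane model of $H^{2}$ (the model, sample points, and checks are ours, not part of the paper):

```python
import math

def hyp_dist(z, w):
    """Hyperbolic distance between points of the upper half-plane model,
    d(z, w) = arccosh(1 + |z - w|^2 / (2 Im z Im w))."""
    return math.acosh(1 + abs(z - w)**2 / (2 * z.imag * w.imag))

def counting_kernel(z, w, s):
    """c(z, w; s): 1 if d(z, w) <= s, else 0."""
    return 1 if hyp_dist(z, w) <= s else 0

# Sanity checks on sample points.
z, w = 1 + 2j, -0.5 + 0.3j
assert hyp_dist(z, z) == 0
assert abs(hyp_dist(z, w) - hyp_dist(w, z)) < 1e-12
# The points i and i*e^r are at distance exactly r along the imaginary axis.
r = 0.7
assert abs(hyp_dist(1j, 1j * math.e**r) - r) < 1e-12
assert counting_kernel(1j, 1j * math.e**r, 0.7 + 1e-9) == 1
assert counting_kernel(1j, 1j * math.e**r, 0.5) == 0
```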
The reason for the name ‘counting trace’ is that formally, $C(s)$ is the trace
of the linear operator $L_{s}$ whose kernel is the counting kernel $c(x,y;s)$
pushed down to $M$. $L_{s}$ associates to a function $f:M\to R$ the function
whose value at $x\in M$ is the integral over a ball of radius $s$ of the lift
of $f$ to the universal cover $H^{2}$. $L_{s}$ is not actually a trace class
operator, because of the discontinuity of the counting kernel, but our
expression for $C(s)$ is still well-defined.
We will need the following:
###### Proposition 1.
The Laplace spectrum determines the counting trace.
Proof. This is a standard application of the methods of Selberg [13], which by
now are ‘classical’. If you aren’t familiar with these methods, you should
look at [13], or failing that, look at section 6 below. The only wrinkle here
is that the kernel $c(x,y;s)$ is not continuous, so the Selberg trace formula
doesn’t apply directly. One way to deal with this is to approximate by a
sequence of smooth positive kernels that approach the counting kernel from
above; use the trace formula to see that the Laplace spectrum determines the
trace of each approximation; and then take a limit, bearing in mind that as a
function of $s$, $C(s)$ is positive, increasing, and right-continuous.
$\quad\qed$
The other thing we will need is the ‘other side’ of the trace formula, which
will allow us to isolate the contributions to the counting trace arising from
reflectors and conepoints.
Let $\mathrm{Cl}(\gamma,\Gamma)$ denote the conjugacy class of $\gamma$ in
$\Gamma$, and let $\mathrm{Z}(\gamma,\Gamma)$ denote the centralizer. We have
the usual one-to-one correspondence between the set of cosets
$\mathrm{Z}(\gamma,\Gamma)\backslash\Gamma$ and the conjugacy class
$\mathrm{Cl}(\gamma,\Gamma)$, where to the coset
$\mathrm{Z}(\gamma,\Gamma)\delta$ we associate the conjugate
$\delta^{-1}\gamma\delta$.
###### Proposition 2.
The counting trace can be expressed as a sum of contributions from conjugacy
classes of $\Gamma$:
$C(s)=\sum_{j}\sum_{\mathrm{Cl}(\gamma,\Gamma_{j})}\mathrm{Vol}(\\{x\in
F(\mathrm{Z}(\gamma,\Gamma_{j})):d(x,\gamma x)\leq s\\}).$
Proof. It’s possible to visualize how this works—for a warm-up, think about
the case of $1$-orbifolds and flat $2$-orbifolds. The trick is to group terms
belonging to the same conjugacy class:
$\displaystyle C(s)$ $\displaystyle=$
$\displaystyle\sum_{j}\int\limits_{F(\Gamma_{j})}\sum_{\gamma\in\Gamma_{j}}c(x,\gamma x;s)\,dx$ $\displaystyle=$
$\displaystyle\sum_{j}\sum_{\gamma\in\Gamma_{j}}\int\limits_{F(\Gamma_{j})}c(x,\gamma x;s)\,dx$ $\displaystyle=$
$\displaystyle\sum_{j}\sum_{\mathrm{Cl}(\gamma,\Gamma_{j})}\sum_{\mathrm{Z}(\gamma,\Gamma_{j})\delta}\int\limits_{F(\Gamma_{j})}c(x,\delta^{-1}\gamma\delta x;s)\,dx$
$\displaystyle=$
$\displaystyle\sum_{j}\sum_{\mathrm{Cl}(\gamma,\Gamma_{j})}\sum_{\mathrm{Z}(\gamma,\Gamma_{j})\delta}\int\limits_{F(\Gamma_{j})}c(\delta x,\gamma\delta x;s)\,dx$ $\displaystyle=$
$\displaystyle\sum_{j}\sum_{\mathrm{Cl}(\gamma,\Gamma_{j})}\sum_{\mathrm{Z}(\gamma,\Gamma_{j})\delta}\int\limits_{\delta F(\Gamma_{j})}c(x,\gamma x;s)\,dx$ $\displaystyle=$
$\displaystyle\sum_{j}\sum_{\mathrm{Cl}(\gamma,\Gamma_{j})}\int\limits_{F(\mathrm{Z}(\gamma,\Gamma_{j}))}c(x,\gamma x;s)\,dx$ $\displaystyle=$
$\displaystyle\sum_{j}\sum_{\mathrm{Cl}(\gamma,\Gamma_{j})}\mathrm{Vol}(\\{x\in F(\mathrm{Z}(\gamma,\Gamma_{j})):d(x,\gamma x)\leq s\\}).\quad\qed$
The virtue of this proposition is that we can evaluate the summands using
simple hyperbolic trigonometry.
## 6 The counting trace and the Laplace spectrum
Here we extract from Selberg [13] the beautiful ideas behind the fact, so
briefly disposed of in the last section, that the Laplace spectrum determines
the counting trace.
We start by going in the opposite direction:
###### Proposition 3.
The counting trace determines the Laplace spectrum.
Proof. Denote the heat kernel on $H^{2}$ by $k(x,y;t)=h(d(x,y),t)$. The trace
of the heat kernel on $M$ is the Laplace transform of the Laplace spectrum:
$\displaystyle K(t)$ $\displaystyle=$
$\displaystyle\sum_{j}\int\limits_{F(\Gamma_{j})}\sum_{\gamma\in\Gamma_{j}}k(x,\gamma x;t)\,dx$ $\displaystyle=$
$\displaystyle\sum_{i}e^{-\lambda_{i}t}.$
We can write the heat trace $K(t)$ in terms of the counting trace $C(s)$ by
means of a Stieltjes integral:
$K(t)=\int h(s,t)dC(s).$
This is nothing more than Kelvin’s method of images. So the counting trace
determines the heat trace, and hence the Laplace spectrum. $\quad\qed$
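A toy numerical illustration of the Stieltjes formula $K(t)=\int h(s,t)\,dC(s)$, for a step-function $C$ (the jump data and the Gaussian stand-in for $h$ are invented for illustration):

```python
import math

# Toy step-function "counting trace": jumps of weight w_i at lengths l_i.
jumps = [(0.5, 2.0), (1.3, 1.0), (2.1, 4.0)]   # (l_i, w_i), invented data
C = lambda s: sum(w for l, w in jumps if l <= s)

def h(s, t):
    # Stand-in for the radial heat kernel profile; any smooth h works here.
    return math.exp(-s * s / (4 * t))

def stieltjes(h, C, t, lo=0.0, hi=3.0, n=100000):
    """Riemann-Stieltjes sum approximating the integral of h(s,t) dC(s)."""
    total, prev = 0.0, C(lo)
    for k in range(1, n + 1):
        s = lo + (hi - lo) * k / n
        cur = C(s)
        total += h(s, t) * (cur - prev)
        prev = cur
    return total

# For a pure step function, the integral just picks out the jumps.
t = 0.8
exact = sum(w * h(l, t) for l, w in jumps)
assert abs(stieltjes(h, C, t) - exact) < 1e-3
```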
What we will need here is the converse: The Laplace spectrum determines the
counting trace. This is hardly surprising—indeed, it would be astonishing if
it did not do so. That would be like discovering that you could not determine
a well-behaved initial temperature distribution on a semi-infinite bar by
keeping track of the temperature at the end of the bar. Heat at nearby points
makes itself felt sooner, and this should be enough to allow us to solve the
inverse problem, at least in principle. This isn’t how our proof is going to
work, and we don’t know if it is possible to prove the result this way. But it
should be.
The beauty and power of Selberg’s approach is that it gives us a formula for
how the Laplace spectrum determines the counting trace. But bear in mind that
we need only the fact, not the formula.
Here, briefly, is how it works.
Consider the operator $L_{s}$ that associates to a function $f:M\to R$ the
function whose value at $x\in M$ is the integral over a ball of radius $s$ of
the lift of $f$ to the universal cover $H^{2}$. This operator has kernel
$c(x,y;s)$, with $s$ fixed.
$L_{s}$ commutes with the Laplacian, so we can expect to be able to choose a
basis of eigenfunctions $\phi_{0},\phi_{1},\ldots$ of the Laplacian on $M$
that are simultaneously eigenfunctions for $L_{s}$:
$\Delta\phi_{i}=\lambda_{i}\phi_{i};$ $L_{s}\phi_{i}=\mu_{i}\phi_{i}.$
(If you are worried that the kernel is not sufficiently smooth, approximate by
a smooth function.) Selberg’s great observation is that any eigenfunction
$\phi$ of $\Delta$ on $H^{2}$ is automatically an eigenfunction for $L_{s}$:
If
$\Delta\phi=\lambda\phi$
then
$L_{s}\phi=\theta_{s}(\lambda)\phi,$
where $\theta_{s}$ is a function that is fixed, known in advance, and independent
of $M$. (We’ll give a formula for it presently.) This means that any
eigenbasis for $\Delta$ on $M$ is automatically an eigenbasis for $L_{s}$, and
that the Laplace spectrum determines the spectrum (and hence the trace) of
$L_{s}$.
###### Lemma 1 (Selberg).
For any $\lambda$ and any $x_{0}\in H^{2}$, there is a unique eigenfunction
$\omega_{\lambda,x_{0}}(x)$ of $\Delta$ that is radially symmetric about
$x_{0}$ and normalized so that $\omega_{\lambda,x_{0}}(x_{0})=1$.
Proof. Write
$\omega_{\lambda,x_{0}}(x)=f(d(x,x_{0})),$
and
$\Delta\omega_{\lambda,x_{0}}(x)=Lf(d(x,x_{0})),$
where $L$ is some second order differential operator that we won’t bother to
write down. Then we have $Lf=\lambda f$; $f(0)=1$; $f^{\prime}(0)=0$. This
determines $f$ uniquely. (It’s some kind of Bessel function.) $\quad\qed$
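For radial functions on $H^{2}$ (metric $dr^{2}+\sinh^{2}r\,d\theta^{2}$) the operator $L$ works out, up to sign convention, to $Lf=f''+\coth(r)\,f'$. A numerical sketch of the radial eigenfunction (the solver details are ours); since $\cosh r$ satisfies $Lf=2f$ with $f(0)=1$, $f'(0)=0$, it makes a convenient exact check:

```python
import math

def radial_eigenfunction(lam, r_end, h=1e-3, r0=0.01):
    """Integrate f'' + coth(r) f' = lam * f with f(0)=1, f'(0)=0 by RK4.

    Starts at a small r0 using the series f ~ 1 + lam*r^2/4 to sidestep
    the coordinate singularity of coth at r = 0."""
    def deriv(r, f, fp):
        return fp, lam * f - (math.cosh(r) / math.sinh(r)) * fp

    f, fp = 1 + lam * r0**2 / 4, lam * r0 / 2
    n = round((r_end - r0) / h)
    for i in range(n):
        r = r0 + i * h
        k1 = deriv(r, f, fp)
        k2 = deriv(r + h/2, f + h/2 * k1[0], fp + h/2 * k1[1])
        k3 = deriv(r + h/2, f + h/2 * k2[0], fp + h/2 * k2[1])
        k4 = deriv(r + h, f + h * k3[0], fp + h * k3[1])
        f += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        fp += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return f

# lam = 0 forces the constant eigenfunction,
assert abs(radial_eigenfunction(0.0, 1.0) - 1.0) < 1e-9
# and cosh(r) is the radial eigenfunction for lam = 2.
assert abs(radial_eigenfunction(2.0, 1.0) - math.cosh(1.0)) < 1e-5
```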
Let
$\theta_{s}(\lambda)=[L_{s}\omega_{\lambda,x_{0}}](x_{0}),$
and note that this doesn’t depend on $x_{0}$. (It’s determined by the function
we called $f$ in the proof above.)
###### Theorem 2 (Selberg).
If $\phi$ is any eigenfunction of $\Delta$ on $H^{2}$ with eigenvalue
$\lambda$, then
$L_{s}\phi=\theta_{s}(\lambda)\phi.$
Note. While we’ve formulated this result for operators derived from the
counting kernel, it applies to any kernel $k(x,y)$ depending only on the
distance $d(x,y)$.
Proof. Fix $x_{0}$. Denote by $\phi_{x_{0}}$ the result of making $\phi$
radially symmetric about $x_{0}$ by averaging its rotations. This
symmetrization process commutes with $\Delta$, so
$\Delta\phi_{x_{0}}=\lambda\phi_{x_{0}}$
and hence by the Lemma
$\phi_{x_{0}}=\phi_{x_{0}}(x_{0})\omega_{\lambda,x_{0}}=\phi(x_{0})\omega_{\lambda,x_{0}}.$
But symmetrization about $x_{0}$ does not change the integral of $\phi$ over a
ball about $x_{0}$, so
$\displaystyle L_{s}\phi(x_{0})$ $\displaystyle=$ $\displaystyle
L_{s}\phi_{x_{0}}(x_{0})$ $\displaystyle=$
$\displaystyle[L_{s}\phi(x_{0})\omega_{\lambda,x_{0}}](x_{0})$
$\displaystyle=$
$\displaystyle\phi(x_{0})[L_{s}\omega_{\lambda,x_{0}}](x_{0})$
$\displaystyle=$ $\displaystyle\theta_{s}(\lambda)\phi(x_{0}).\quad\qed$
Note. In light of this result, we may say that $L_{s}$ is ‘a function of the
Laplacian’, and write $L_{s}=\theta_{s}(\Delta)$.
This completes the background for the proof that the Laplace spectrum
determines the counting trace (Proposition 1). As we observed there, the
kernel $c(x,y;s)$ is discontinuous, so the operators $L_{s}$ aren’t trace
class, and the formal series for $C(s)$ is not guaranteed to be absolutely
convergent. To get around this, we suggested applying the trace formula to a
sequence of smooth approximations to $c(x,y;s)$.
An alternative approach is to integrate the counting kernel $c(x,y;s)$ with
respect to $s$ to get the continuous kernel
$\tilde{c}(x,y;s)=\left\\{\begin{array}[]{ll}s-d(x,y),&d(x,y)\leq s\\\
0,&d(x,y)>s\end{array}\right..$
The trace of this kernel is
$D(s)=\int_{0}^{s}C(u)\,du.$
Since $\tilde{c}(x,y;s)$ is still not smooth, the corresponding trace formula series
is still not guaranteed to be absolutely convergent. But now we can
approximate by smoother kernels whose traces will converge pointwise to the
value of $D(s)$. (This amounts to an alternative way of summing the series for
$D(s)$: See Selberg [13, p. 68].) So the Laplace spectrum determines $D(s)$,
and hence also $C(s)$.
Now, in practice the formal sum for $C(s)$ might converge pretty well with
perhaps a little coddling. At least that is the case for Euclidean
orbifolds—see Section 11 below.
## 7 Outline of the proof
The counting trace $C(s)$ is built up of contributions from the conjugacy
classes of $\mathrm{Isom}(H^{2})$. The contribution from the identity is
independent of $s$, and equal to the volume of $M$. The remaining conjugacy
classes are of four kinds: reflections, rotations, translations, and glide
reflections. Their contributions all vanish for $s=0$, which means that $C(0)$
is the volume of $M$. Translations and glide reflections make no contribution
for $s$ smaller than the length of the shortest closed geodesic, so for small
$s$ the only contributions are from the identity, reflections, and rotations.
Begin by reading off the volume $C(0)$, and subtracting this constant from
$C(s)$. In what is left, reflections make a contribution of first order in
$s$, proportional to the total length of the mirror boundary. Rotations
contribute only to second order, so we can read off the length of the mirror
boundary, subtract out its contribution, and what is left is (for small $s$)
entirely the result of rotations.
A simple computation now shows that the contributions of rotations through
different angles are linearly independent. This allows us to read off the
orders of the conepoints, and subtract out their contributions. Once we’ve
zapped the contributions of the identity, reflections, and rotations, what is
left of the counting trace vanishes identically for small $s$, and overall is
entirely the result of translations and glide reflections, corresponding in
the quotient to closed geodesics.
Here we run into the main difficulty in the proof, which is that the
contributions of orientation-preserving and orientation-reversing geodesics of
a given length are not linearly independent. Indeed, they are proportional,
with a constant of proportionality depending on the length of the geodesic. So
we can’t simply read off these lengths. Fortunately, we have already
established that the spectrum determines the lengths and twists of geodesics
for hyperbolic 2-manifolds (see Doyle and Rossetti [5]), and the argument
carries over directly to the orbifold case, provided we take care to count
geodesics along the boundary as half orientation-preserving and half
orientation-reversing.
## 8 Details
Denote by $R$ the combined length of all reflecting boundaries.
###### Lemma 2.
The combined contribution of the reflecting boundaries to the counting trace
$C(s)$ is
$R\sinh\frac{s}{2}.$
Proof. Let $\rho\in\Gamma_{j}$ be a reflection. There are two possibilities
for $\mathrm{Z}(\rho,\Gamma_{j})\backslash H^{2}$: a funnel or a planar
domain. (See Figure 4.)
Figure 4: Possible quotients for the centralizer of a reflection.
In either case, the contribution to $C(s)$ is $R_{\rho}\sinh\frac{s}{2}$,
where $R_{\rho}$ is the portion of $R$ attributable to
$\mathrm{Cl}(\rho,\Gamma_{j})$, because this measures points in the quotient
that are within $\frac{s}{2}$ of the axis of $\rho$, and hence within $s$ of
their images under the reflection $\rho$. (See Figure 5.)
Figure 5: The contribution of a reflection to the counting trace.
Summing over the conjugacy classes of reflections in all the groups
$\Gamma_{j}$ gives $R\sinh\frac{s}{2}$. $\quad\qed$
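The key identity here, $d(x,\rho x)=2\,d(x,\mathrm{axis})$, together with the Fermi-coordinate area element $\cosh t\,dt\,dl$ (so that $\int_{0}^{s/2}\cosh t\,dt=\sinh\frac{s}{2}$ per unit length of boundary), is easy to check numerically in the upper half-plane model with the imaginary axis as mirror. A sketch (model and helper formulas are ours):

```python
import math

def hyp_dist(z, w):
    """Hyperbolic distance in the upper half-plane model."""
    return math.acosh(1 + abs(z - w)**2 / (2 * z.imag * w.imag))

def dist_to_imaginary_axis(z):
    """Distance from z to the geodesic {iy : y > 0} is arcsinh(|Re z| / Im z)."""
    return math.asinh(abs(z.real) / z.imag)

reflect = lambda z: -z.conjugate()   # reflection in the imaginary axis

# d(x, rho x) = 2 * (distance from x to the mirror), at sample points.
for z in (0.3 + 1.0j, -2.0 + 0.5j, 0.01 + 3.0j):
    assert abs(hyp_dist(z, reflect(z)) - 2 * dist_to_imaginary_axis(z)) < 1e-9

# Hence {x : d(x, rho x) <= s} is the collar of half-width s/2 about the
# mirror, and its area per unit boundary length is the integral of cosh t.
s = 1.2
quad = sum(math.cosh((k + 0.5) * (s/2) / 10000) * (s/2) / 10000
           for k in range(10000))
assert abs(quad - math.sinh(s / 2)) < 1e-6
```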
Now we turn to conjugacy classes of rotations. Each such is associated to a
unique conepoint or boundary corner. From a conepoint of order $n$ we get
$n-1$ rotations through angles $\frac{2\pi k}{n}$, $k=1,\ldots,n-1$. For each
such rotation $\gamma$ the centralizer $\mathrm{Z}(\gamma,\Gamma_{j})$ is
cyclic of order $n$, consisting of just these $n-1$ rotations, together with
the identity. The quotient $\mathrm{Z}(\gamma,\Gamma_{j})\backslash H^{2}$ is (surprise!) an
infinite cone of cone angle $\frac{2\pi}{n}$. For a boundary corner of angle
$\frac{\pi}{n}$ the centralizer is a dihedral group of order $2n$, and the
quotient is an infinite sector of angle $\frac{\pi}{n}$. This quotient is half
of a cone of angle $\frac{\pi}{n}$, and it should be clear that the
contribution to $C(s)$ of rotations associated to a boundary corner is just
half what it is for a conepoint—which is why a boundary corner counts as half
a cone-point. So we can forget about boundary corners, and consider just
conepoints.
Denote by $g_{n}(s)$ the combined contribution to the counting trace of a
conepoint of order $n$. This contribution is the sum of the contributions of
the $n-1$ non-trivial conjugacy classes of rotations associated to the
conepoint. We denote these individual contributions by $g_{k,n}(s)$:
$g_{n}(s)=\sum_{k=1}^{n-1}g_{k,n}(s).$
###### Lemma 3.
$g_{k,n}(s)=\frac{1}{n}f_{\frac{2\pi k}{n}}(s),$
where
$f_{\theta}(s)=2\pi\left(\sqrt{1+\frac{\sinh^{2}\frac{s}{2}}{\sin^{2}\frac{\theta}{2}}}-1\right).$
Proof. It should be clear that
$g_{k,n}(s)=\frac{1}{n}f_{\frac{2\pi k}{n}}(s),$
where $f_{\theta}(s)$ is the area $2\pi(\cosh r-1)$ of a hyperbolic disk of
such a radius $r$ that a chord of length $s$ subtends angle $\theta$. (A
fundamental domain for the cone hits a fraction $\frac{1}{n}$ of any disk
about the center of rotation.) We just have to make sure that we have the
correct formula for $f_{\theta}$. The quantities $r,\theta,s$ satisfy
$\sinh r=\frac{\sinh\frac{s}{2}}{\sin\frac{\theta}{2}}.$
(See Figure 6.)
Figure 6: The circle whose chord of length $s$ subtends angle $\theta$
Thus
$\displaystyle f_{\theta}(s)$ $\displaystyle=$ $\displaystyle 2\pi(\cosh r-1)$
$\displaystyle=$ $\displaystyle 2\pi\left(\sqrt{1+\sinh^{2}r}-1\right)$
$\displaystyle=$ $\displaystyle
2\pi\left(\sqrt{1+\frac{\sinh^{2}\frac{s}{2}}{\sin^{2}\frac{\theta}{2}}}-1\right).\quad\qed$
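The relation between $r$, $\theta$, and $s$ can be checked directly: in the upper half-plane model, rotation by $\theta$ about $i$ is a Möbius map, and $ie^{r}$ lies at distance $r$ from $i$. A numerical sketch (the model choices are ours):

```python
import math

def hyp_dist(z, w):
    """Hyperbolic distance in the upper half-plane model."""
    return math.acosh(1 + abs(z - w)**2 / (2 * z.imag * w.imag))

def rotation_about_i(theta):
    """Mobius map rotating the hyperbolic plane by theta about the point i."""
    a = theta / 2
    c, s_ = math.cos(a), math.sin(a)
    return lambda z: (c * z + s_) / (-s_ * z + c)

theta, r = 2 * math.pi / 5, 0.9
gamma = rotation_about_i(theta)
z = 1j * math.e**r              # a point at distance r from i
s = hyp_dist(z, gamma(z))       # the chord length under the rotation
# The isoceles-triangle relation: sinh(s/2) = sinh(r) * sin(theta/2).
assert abs(math.sinh(s/2) - math.sinh(r) * math.sin(theta/2)) < 1e-9
# And f_theta(s) computed from s agrees with the disk area 2*pi*(cosh r - 1).
f = 2 * math.pi * (math.sqrt(1 + math.sinh(s/2)**2 / math.sin(theta/2)**2) - 1)
assert abs(f - 2 * math.pi * (math.cosh(r) - 1)) < 1e-8
```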
Hopefully it will seem absolutely incredible that there might be any non-
trivial linear relation between these functions $f_{\theta}$. There isn’t:
###### Lemma 4.
The functions $f_{\theta},0<\theta\leq\pi$ are linearly independent: No non-
trivial linear combination of any finite subset of these functions vanishes
identically on any interval $0<s<S$.
Proof. It suffices to show linear independence of the family of functions
$\sqrt{1+au}-1$, $1<a<\infty$, on all intervals $0<u<U$. (Set
$u=\sinh^{2}\frac{s}{2}$ and $a=\frac{1}{\sin^{2}\frac{\theta}{2}}$.) Actually
functions of this form remain independent when $a$ is allowed to range
throughout the complex plane, and not just over the specified real interval. A
quick way to see this is to write in Taylor series
$g(u)=\sqrt{1+u}-1=b_{1}u+\frac{b_{2}}{2}u^{2}+\frac{b_{3}}{6}u^{3}+\ldots$
and observe (look up?) that only the constant term vanishes:
$b_{1},b_{2},\ldots\neq 0.$
If a linear combination $\sum_{i=1}^{n}c_{i}g(a_{i}u)$ is to vanish on
$0<u<U$, then all its derivatives must vanish at $u=0$:
$\sum_{i=1}^{n}c_{i}a_{i}^{k}b_{k}=0,\quad k=1,2,\ldots.$
(We ignore the constant term because all the functions involved here vanish at
$u=0$.) Dividing by $b_{k}$ gives
$\sum_{i=1}^{n}c_{i}a_{i}^{k}=0,\quad k=1,2,\ldots.$
Could this system of equations for $c_{1},\ldots,c_{n}$ have a non-trivial
solution? If so, then the subsystem consisting of only the first $n$ of these
equations would have a non-trivial solution:
$\sum_{i=1}^{n}c_{i}a_{i}^{k}=0,\quad k=1,\ldots,n.$
Let $d_{i}=c_{i}a_{i}$, so that
$\sum_{i=1}^{n}d_{i}a_{i}^{k-1}=0,\quad k=1,\ldots,n.$
The matrix $(a_{i}^{k-1})$ of this system of linear equations is a Vandermonde
matrix, with determinant $\prod_{i<j}(a_{i}-a_{j})$, so as long as the $a_{i}$
are distinct the system has no non-trivial solution. $\quad\qed$
Note. In appealing to the fact that none of the derivatives of $\sqrt{1+u}$
happen to vanish at $u=0$, we are seizing upon an accidental feature of the
problem. We should give a more robust proof.
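The Vandermonde step can be verified exactly in rational arithmetic (the sample values $a_{i}$ are arbitrary):

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Exact determinant by Leibniz expansion (fine for small matrices)."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        # Count inversions to get the sign of the permutation.
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = Fraction(1)
        for i in range(n):
            term *= M[i][perm[i]]
        total += sign * term
    return total

a = [Fraction(4, 3), Fraction(2), Fraction(3), Fraction(7, 2)]  # distinct a_i
V = [[ai**k for k in range(len(a))] for ai in a]                # (a_i^{k-1})
expected = Fraction(1)
for i in range(len(a)):
    for j in range(i + 1, len(a)):
        expected *= a[j] - a[i]
# Nonzero determinant: only the trivial solution, as in the proof of Lemma 4.
assert det(V) == expected and expected != 0
```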
Proof of Theorem 1. Subtract from the counting trace $C(s)$ the volume
$C(0)$ and the contributions of the reflectors. Because of the linear
independence of the functions $f_{\theta}$, we can identify the number of
conepoints of highest order. (It should go without saying that this includes
the half-conepoints at boundary corners.) Subtract out their combined
contribution from $C(s)$. Now we can identify the number of conepoints of the
next highest order, and subtract out their contribution. Proceed until all
conepoint contributions have been removed. What remains arises from the closed
geodesics. In the manifold case [5], it is shown that the characteristics of the
closed geodesics are determined by the counting trace, and hence by the
spectrum. The proof carries over here essentially without change. $\quad\qed$
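The peeling procedure in this proof can be simulated: synthesize the small-$s$ part of $C(s)$ from chosen data, then solve a linear system to recover the volume, mirror length, and conepoint counts. A sketch under invented data (the sample orbifold and sample points are ours, and we pretend the sample $s$ values lie below the shortest geodesic length):

```python
import math

def f_theta(s, theta):
    return 2*math.pi*(math.sqrt(1 + math.sinh(s/2)**2 / math.sin(theta/2)**2) - 1)

def g(n, s):
    """Contribution of one conepoint of order n to the counting trace."""
    return sum(f_theta(s, 2*math.pi*k/n) for k in range(1, n)) / n

def solve(A, b):
    """Gaussian elimination with partial pivoting, pure Python."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Invented ground truth: volume, mirror length, conepoint counts for n = 2, 3, 4.
truth = [2 * math.pi, 3.0, 4.0, 2.0, 1.0]
samples = [0.5, 1.0, 1.5, 2.0, 2.5]   # pretend: below the shortest geodesic
rows = [[1.0, math.sinh(s/2), g(2, s), g(3, s), g(4, s)] for s in samples]
C = [sum(r * t for r, t in zip(row, truth)) for row in rows]
recovered = solve(rows, C)
assert all(abs(x - t) < 1e-4 for x, t in zip(recovered, truth))
```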
## 9 Implications
Caveat. The strange and infelicitous notation we use here arises from trying
to handle disconnected spaces. This leads to things like considering the frame
bundle of a connected orientable space to be disconnected. There could be
mistakes in the notation, though hopefully not in the ideas it is meant to
express.
Let $G=\mathrm{Isom}(H^{2})$ denote the group of isometries of $H^{2}$. Note
that $G$ has two connected components, corresponding to orientation-preserving
and orientation-reversing isometries. A hyperbolic $2$-orbifold $M$ can be
written as a union of quotients of $H^{2}$ by discrete cocompact subgroups
$\Gamma_{j}\subset G$, one for each connected component of $M$:
$M=\cup_{j}\,\Gamma_{j}\backslash H^{2}.$
$M$ is naturally covered by
${\bar{M}}=\cup_{j}\,\Gamma_{j}\backslash G.$
${\bar{M}}$ is the bundle of all orthonormal frames of $M$, not just those
that have a particular orientation; every point $x$ of $M$ is covered by two
disjoint circles in ${\bar{M}}$, corresponding to the two orientation classes
of frames of the tangent space $T_{x}M$. Note that there will be two connected
components of ${\bar{M}}$ for each orientable component of $M$, and one for
each non-orientable component, because dragging a frame around an
orientation-reversing loop (e.g. one that simply bumps off a mirror
boundary) will take you from one orientation class of frames to the other.
Note. Even if $M$ happens to be orientable, we should not think of it as
oriented, because we are discussing manifolds up to isometry, not up to
orientation-preserving isometry. The Laplace spectrum cannot detect
orientation, so it makes no sense to discuss oriented manifolds, and since we
can’t orient our manifolds, there is no obvious good reason to consider only
orientable manifolds, as has commonly been done in the past when discussing
isospectrality. But then again, there is no obvious good reason to consider
only manifolds, as opposed to general orbifolds—possibly disconnected.
$G$ acts naturally on the right on ${\bar{M}}$, and hence on
$L^{2}({\bar{M}})$. This linear representation of $G$ is analogous to the
matrix representation of a finite permutation group. A finite permutation
representation $\rho$ is determined up to linear equivalence by its character
$\chi$, with $\chi(g)=\mathrm{tr}\rho(g)$ counting the fixed points of $g$.
The situation here should be exactly analogous, the only question being
exactly how to define the character. The answer comes from the Selberg trace
formula.
For now let’s extend the discussion to an arbitrary unimodular Lie group $G$,
possibly disconnected. Let $\Gamma<G$ be a discrete subgroup with compact
quotient ${\bar{M}}=\Gamma\backslash G$. (Notice that we make no mention here
of $M$, though in the intended application ${\bar{M}}$ will arise from a
homogeneous quotient $M=\Gamma\backslash G/K$.) Denote by $\mathrm{Cl}(g,G)$
the conjugacy class of $g$ in $G$, and by $\mathrm{Z}(g,G)$ the centralizer of
$g$ in $G$. Introduce Haar measure $\rho^{g}$ on $\mathrm{Z}(g,G)$, normalized in a
consistent (i.e., $G$-equivariant) way. Attribute to
$\mathrm{Cl}(\gamma,\Gamma)$ the _weight_
$\rho^{\gamma}(\mathrm{Z}(\gamma,\Gamma)\backslash\mathrm{Z}(\gamma,G))$, and
define the _character_ $\chi_{\bar{M}}$ to be the function associating to
$g\in G$ the total weight of all conjugacy classes
$\mathrm{Cl}(\gamma,\Gamma)$ for which
$\mathrm{Cl}(\gamma,G)=\mathrm{Cl}(g,G)$. Extend the definition of
$\chi_{\bar{M}}$ to ${\bar{M}}=\cup_{j}\,\Gamma_{j}\backslash G$ by linearity.
As in the finite case, the character is a _central function_ , which means
that $\chi_{\bar{M}}(gh)=\chi_{\bar{M}}(hg)$ for all $g,h\in G$, or what is
the same, $\chi_{\bar{M}}(g)$ depends only on the conjugacy class
$\mathrm{Cl}(g,G)$. In analogy to the finite case, we can think of
$\chi_{\bar{M}}(g)$ as measuring (in appropriate units) the size of the fixed
point set $\\{x\in{\bar{M}}:xg=x\\}$.
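The fixed-point character of a finite permutation representation, and its centrality, can be illustrated in a few lines (a sketch, using $S_3$ as a stand-in example):

```python
from itertools import permutations

# All elements of S3, written as tuples: g maps i to g[i].
S3 = list(permutations(range(3)))

def compose(g, h):
    # (g h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

def chi(g):
    # Character of the permutation representation: the number of fixed points.
    return sum(1 for i in range(3) if g[i] == i)

# The character is a central function: chi(gh) = chi(hg) for all g, h.
assert all(chi(compose(g, h)) == chi(compose(h, g)) for g in S3 for h in S3)
```

Centrality holds because $gh$ and $hg$ are conjugate, and conjugate permutations have the same cycle type, hence the same number of fixed points.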
###### Proposition 4 (Bérard).
The character $\chi_{\bar{M}}$ determines the representation
$L^{2}({\bar{M}})$ up to linear equivalence.
Proof. Using a version of Selberg’s trace formula we can write the trace of
the integral operator associated to any smooth function of compact support on
$G$ in terms of the values of the character $\chi_{\bar{M}}$. (Cf. Wallach
[15, Theorem 2.1], Selberg [13, (2.10) on p. 66].) By standard representation
theory, these traces determine the representation. For details, see Bérard
[1]. $\quad\qed$
###### Proposition 5 (DeTurck and Gordon).
The character $\chi_{\bar{M}}$ determines the trace of any natural operator on
any natural vector bundle over $M$.
Proof. Selberg again. (Cf. DeTurck and Gordon [4].) $\quad\qed$
With this preparation, we have the following corollaries of Theorem 1.
###### Corollary 1 (Character is determined).
The Laplace spectrum of a hyperbolic 2-orbifold $M$ determines the character
$\chi_{\bar{M}}$. $\quad\qed$
###### Corollary 2 (Representation-equivalence).
Laplace-isospectral hyperbolic $2$-orbifolds determine equivalent
representations of $\mathrm{Isom}(H^{2})$. $\quad\qed$
###### Corollary 3 (Strong isospectrality).
If two compact hyperbolic 2-orbifolds are Laplace-isospectral then they have
the same spectrum for any natural operator acting on sections of any natural
bundle. $\quad\qed$
## 10 Counterexamples
The analog of Theorem 1 fails for flat 2-orbifolds. In the connected case it
does go through more or less by accident, just because there aren’t many
connected flat 2-orbifolds. But there are counterexamples among disconnected
flat 2-orbifolds. We described such examples briefly in [5], and we recall
them here. Someday we will write about them at greater length.
Here’s our premier example, which we call _the $1/2+1/6=2/3$ example_. Let
$H_{1}$ denote the standard hexagonal flat torus
$\langle(1,0),(-\frac{1}{2},\frac{\sqrt{3}}{2})\rangle\backslash\mathbf{R}^{2}$.
$H_{1}$ has as 2-, 3-, and 6-fold quotients a $2222$ orbifold $H_{2}$ (this is
a regular tetrahedron); a $333$ orbifold $H_{3}$; and a $236$ orbifold
$H_{6}$. (See Figures 7 and 8.)
Figure 7: The universal covers of the standard hexagonal torus $H_{1}$ and its
quotient orbifolds $H_{2}$, $H_{3}$, and $H_{6}$.
Figure 8: The orbifolds $H_{1}=\bigcirc$, $H_{2}=2222$, $H_{3}=333$, and
$H_{6}=236$.
Spectrally,
$H_{2}+H_{6}\equiv 2H_{3},$ (1)
meaning that these spaces have the same Laplace spectrum. (In fact, as we’ll
discuss below, they are isospectral for the Laplacian on $k$-forms for
$k=0,1,2$.) Of course these two spaces match as to volume (1/2+1/6=2/3) and
number of components (1+1=2). But conepoints do not match. On the left we have
a $2222$ and a $236$, so five conepoints of order $2$; one of order $3$; one
of order $6$. On the right we have two $333$s, so nine conepoints of order
$3$.
The reason this example is possible is that in the flat case, the
contributions of rotations to the counting kernel differ only by a
multiplicative constant. Lumping together the contribution of all the
rotations associated with a conepoint of order $n$, we get something
proportional to $\frac{n^{2}-1}{n}$. (This nice simple formula was discovered
by Dryden et al. [6].) For a general 2-orbifold with variable curvature, it
measures the contribution of conepoints to the short-time asymptotics of the
heat trace. In the flat case, the short-time asymptotics determine the entire
contribution, because the contributions of all rotations are exactly
proportional.
Combining the contributions of all conepoints belonging to connected orbifolds
of the various kinds, we get the following totals:
$\begin{array}[]{rc}2222:&4\cdot\frac{3}{2}=6\\\ 333:&3\cdot\frac{8}{3}=8\\\
244:&\frac{3}{2}+2\cdot\frac{15}{4}=9\\\
236:&\frac{3}{2}+\frac{8}{3}+\frac{35}{6}=10.\end{array}$
The isospectrality $H_{2}+H_{6}\equiv 2H_{3}$ arises because $6+10=2\cdot 8$
makes conepoint contributions match; we’ve already observed that the volumes
match; and geodesic contributions match because on both sides the covering
manifolds are $2H_{1}$.
We get a second isospectrality
$H_{1}+H_{3}+H_{6}\equiv 3H_{2}$
because $8+10=3\cdot 6$, $1+\frac{1}{3}+\frac{1}{6}=3\cdot\frac{1}{2}$, and on
both sides the covering manifolds are $3H_{1}$.
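The totals above and the two isospectrality relations can be checked with exact rational arithmetic (a sketch; the proportionality constant in the Dryden et al. weight is dropped):

```python
from fractions import Fraction as F

def cone(n):
    # Contribution of an order-n conepoint, proportional to (n^2 - 1)/n.
    return F(n * n - 1, n)

# Conepoint totals for the connected flat orbifolds in question.
totals = {
    '2222': 4 * cone(2),                   # 6
    '333':  3 * cone(3),                   # 8
    '244':  cone(2) + 2 * cone(4),         # 9
    '236':  cone(2) + cone(3) + cone(6),   # 10
}
assert totals['2222'] == 6 and totals['333'] == 8
assert totals['244'] == 9 and totals['236'] == 10

# H2 + H6 = 2 H3: conepoints 6 + 10 = 2*8, volumes 1/2 + 1/6 = 2*(1/3).
assert totals['2222'] + totals['236'] == 2 * totals['333']
assert F(1, 2) + F(1, 6) == 2 * F(1, 3)

# H1 + H3 + H6 = 3 H2: conepoints 0 + 8 + 10 = 3*6,
# volumes 1 + 1/3 + 1/6 = 3*(1/2).
assert totals['333'] + totals['236'] == 3 * totals['2222']
assert F(1) + F(1, 3) + F(1, 6) == 3 * F(1, 2)
```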
Combining these two relations yields other Laplace-isospectral pairs, e.g.:
$H_{1}+3H_{3}\equiv 4H_{2};$ $H_{1}+4H_{6}\equiv 5H_{3};$ $2H_{1}+3H_{6}\equiv
5H_{2}.$
By linearity, all these relations satisfy the condition of having equal
volume, number of components, and contributions from conepoints. But look
here: We’re talking, in effect, about formal combinations of
$H_{1},H_{2},H_{3},H_{6}$, so we’re in a space of dimension $4$. We have three
linear conditions, but to our surprise, the subspace they determine has
dimension $2$. Our three linear conditions are not independent. If we match
volume and number of components, agreement of conepoint contributions follows
for free. But why? We don’t know.
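The dependence is easy to confirm: assemble the three conditions as rows of a matrix over the coefficients of $H_{1},H_{2},H_{3},H_{6}$ and compute its rank exactly (a sketch):

```python
from fractions import Fraction as F

# Columns: H1, H2, H3, H6. Rows: volume, number of components,
# conepoint contribution.
M = [
    [F(1), F(1, 2), F(1, 3), F(1, 6)],   # volume
    [1, 1, 1, 1],                        # components
    [0, 6, 8, 10],                       # conepoints
]

def rank(rows):
    # Fraction-exact Gaussian elimination.
    rows = [[F(x) for x in r] for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

assert rank(M) == 2  # the three conditions are linearly dependent

# The dependence made explicit: for these four columns the conepoint row
# happens to equal 12 * (components - volume).
assert all(M[2][j] == 12 * (M[1][j] - M[0][j]) for j in range(4))
```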
A similar coincidence happens for square tori. Let $T_{1}$ denote the standard
square torus $\mathbf{Z}^{2}\backslash\mathbf{R}^{2}$. $T_{1}$ has as 2-, and
4-fold quotients a $2222$ orbifold $T_{2}$ and a $244$ orbifold $T_{4}$. We’re
in a 3-dimensional space, so we expect to be out of luck when we impose
three constraints, but in fact we have the relation
$T_{1}+2T_{4}\equiv 3T_{2}.$
We check equality of volume $1+\frac{2}{4}=\frac{3}{2}$ and number of
components $1+2=3$, and then we find that equality of conepoint contributions
follows for free: $2\cdot 9=3\cdot 6$. Again, why?
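The square-torus coincidence can be verified the same way (a sketch, with values from the text; the final assertion is an empirical observation, not something the text claims):

```python
from fractions import Fraction as F

# T1 + 2*T4 = 3*T2, checked against volume, components, and conepoints.
vol   = {'T1': F(1), 'T2': F(1, 2), 'T4': F(1, 4)}
comps = {'T1': 1, 'T2': 1, 'T4': 1}
cones = {'T1': 0, 'T2': 6, 'T4': 9}   # 2222 -> 6, 244 -> 9

assert vol['T1'] + 2 * vol['T4'] == 3 * vol['T2']        # 3/2 = 3/2
assert comps['T1'] + 2 * comps['T4'] == 3 * comps['T2']  # 3 = 3
assert cones['T1'] + 2 * cones['T4'] == 3 * cones['T2']  # 18 = 18

# As with the hexagonal family, the conepoint totals equal
# 12 * (components - volume), which makes the third condition
# linearly dependent on the first two.
assert all(cones[t] == 12 * (comps[t] - vol[t]) for t in vol)
```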
Conjecturally, up to Sunada-equivalence the relations we’ve given for
hexagonal and square tori are all there are: See Section 12 below.
These relations apply only to the spectrum of the Laplacian on functions, not
$1$-forms. When when we apply the Selberg formula to $1$-forms, there is still
interference between spectral contributions of conepoints, which allows our
$1/2+1/6=2/3$ example to continue to slip through: It is isospectral for
$1$-forms as well as for functions. But we get additional constraints, which
taken with those coming from the $0$-spectrum force there to be the same
number of torus components on each side, and this wipes out the other
examples. The reason we have to have the same number of torus components is
that this is just half the dimension of the space of ‘constant’ or ‘harmonic’
$1$-forms, meaning those belonging to the $0$-eigenspace of the Laplacian. And
since we are expecting just one more linear constraint, this must be it. So,
no additional mystery here, though when you do the computation it still seems
a little surprising.
The isospectrality of the $1/2+1/6=2/3$ example does not hold for all natural
operators, because the left side admits harmonic differentials that the right
side does not.
## 11 Putting the Selberg formula to work
Here we compute the counting trace for some flat orbifolds, using the Selberg
formula.
Figure 9 shows a plot of the counting trace of a $*333$ orbifold, computed
approximately from the Laplace spectrum.
Figure 9: The counting trace for the $*333$ orbifold with side length
$\frac{1}{\sqrt{3}}$, computed numerically by summing the contributions of the
$11626$ eigenvalues with wave number $k<1000$. To combat the Gibbs phenomenon
(also called ‘ringing’), we have used the windowing function
$1-(k/k_{\mathrm{max}})^{2}$ to reduce the contribution of the sum from larger
eigenvalues. The upper line is the counting trace; the middle is the
theoretical contributions from orbifold features (reflectors and boundary
corners); the lower line is the difference, attributable to the identity and
other translations. Observe the jump at $0$; the bend at $\frac{\sqrt{3}}{2}$;
the jumps at $1$ and $2$; the jump plus bend at $\sqrt{3}$.
We can think of $*333$ as an equilateral triangle with Neumann boundary
conditions. We have chosen the side length to be $\frac{1}{\sqrt{3}}$, so that
the covering torus is a standard hexagonal torus with shortest geodesics of
length $1$. The shortest geodesic in the quotient $*333$ is the $3$-bounce
geodesic in the middle, which has length $\frac{\sqrt{3}}{2}$. Next shortest
are the torus geodesics of length $1$. Then at length $\sqrt{3}$ we get the
second power of the middle geodesic, which is orientation preserving and hence
a torus geodesic, plus geodesics parallel to the boundary, which count as both
orientation-preserving and orientation-reversing.
Figure 10 shows another plot of the counting kernel as reconstructed from the
Laplace spectrum.
Figure 10: The counting trace of an equilateral triangle with side length
$\frac{1}{\sqrt{3}}$, with Dirichlet boundary conditions.
This time the space is an equilateral triangle with Dirichlet boundary
conditions. Such a space is a generalized orbifold, whose functions change
sign under some deck transformation (in this case, just those which reverse
orientation). Kelvin in effect worked in such a space when computing the field
of a point charge near an infinite conducting planar boundary. The counting
kernel counts images with sign $\pm 1$. Note that we see the same jumps and
bends as in the Neumann case above, only now the bends are downward rather
than upward.
Now let’s try computing the counting trace of a nonorbifold triangle, using
the same formula as for orbifolds. This can be justified by defining the
counting kernel to be the Green’s function for a variant of the wave equation
that we may call the _counting wave equation_. Figures 11 and 12 show Neumann
and Dirichlet counting trace plots for an isosceles triangle with angles
$\frac{\pi}{5},\frac{\pi}{5},\frac{3\pi}{5}$ (side lengths $1,1,\phi$).
Because we are using fewer eigenvalues than in the graphs above, the bends and
jumps are less distinct here. It would be great to have more eigenvalues, but
these are surprisingly hard to come by! The 611 Neumann eigenvalues and 591
Dirichlet eigenvalues used here were kindly supplied by Alex Barnett.
Figure 11: The Neumann and Dirichlet counting traces for a triangle with
angles $\frac{\pi}{5},\frac{\pi}{5},\frac{3\pi}{5}$ and sides $1,1,\phi$. 611
Neumann and 591 Dirichlet eigenvalues courtesy of Alex Barnett.
Figure 12: The Neumann and Dirichlet counting traces from Figure 11, together
with their average and half their difference.
In a non-orbifold triangle we expect bends in the counting trace graphs
corresponding to orbits that are diffracted at the non-orbifold vertices. For
example, here we see a bend at length $2\sin\frac{\pi}{5}$, corresponding to
the orbit that goes back and forth between the $\frac{3\pi}{5}$ vertex and the
midpoint of the opposite edge. (This is most visible in the Dirichlet graph,
but it is present in the Neumann graph as well.) You might enjoy identifying
the closed orbits that are the source of the other bends and jumps that are
visible in these graphs.
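That length is a quick exercise: the triangle is isosceles with apex angle $\frac{3\pi}{5}$ between the two unit legs, so the distance from the apex to the midpoint of the opposite edge is the altitude $\sin\frac{\pi}{5}$, and the back-and-forth orbit has length $2\sin\frac{\pi}{5}$:

```python
import math

apex = 3 * math.pi / 5    # angle at the 3*pi/5 vertex; the legs have length 1
base = math.sqrt(2 - 2 * math.cos(apex))          # law of cosines
assert abs(base - (1 + math.sqrt(5)) / 2) < 1e-12  # base is the golden ratio

# Altitude from the apex to the midpoint of the base:
h = math.sqrt(1 - (base / 2) ** 2)
assert abs(h - math.sin(math.pi / 5)) < 1e-12

# The back-and-forth diffracted orbit has length 2h = 2*sin(pi/5).
assert abs(2 * h - 2 * math.sin(math.pi / 5)) < 1e-12
```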
Note. Three of the four graphs in Figure 12 appear to be increasing, just as
they would be in the case of an orbifold. We don’t know if these curves really
are monotonic, and if so, whether this illustrates a general phenomenon. To
investigate this, we will want to know how a counting wavefront diffracts when
it encounters a non-orbifold vertex. We could try some numerical experiments,
if we had eigenfunctions to go along with our list of eigenvalues.
Alternatively, we could try using the analog for counting waves of
Sommerfeld’s explicit formula for the Green’s function of the classical wave
equation in a wedge.
## 12 Conjectures
As we observed in section 10, we conjecture, or at least hypothesize, that the
only spectral relations between flat orbifolds beyond those coming from
Sunada’s method are those relations we have found for hexagonal and square
tori; of course these come in all sizes. Now, at this point, Peter doesn’t
remember checking orbifolds with reflectors, though he can’t imagine why he
wouldn’t have. But, the hard thing here is not going to be dealing with
relations between orbifolds. There are two real issues. The first is to see
that any isospectrality breaks apart into commensurable pieces. The second is
going to be dealing with the case of tori. Specifically, it seems like for
tori, the only relations should be those implicit in the theory of Hecke
operators. The simplest example of a Hecke relation is the isospectrality
between ‘two $1$-by-$1$s and a $2$-by-$2$’ and ‘two $2$-by-$1$s and a roo-by-roo’.
Here ‘roo’ is my (Peter’s) proposed abbreviation for ‘the square root of two’.
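This Hecke relation can be checked numerically: the Laplace eigenvalues of an $a$-by-$b$ flat torus are $4\pi^{2}(m^{2}/a^{2}+n^{2}/b^{2})$, so, dropping the factor $4\pi^{2}$ and working with exact rationals, we can compare the eigenvalue multisets of the two sides up to a cutoff (a sketch):

```python
from fractions import Fraction as F
from collections import Counter

def spectrum(a2, b2, cutoff):
    """Eigenvalues (divided by 4*pi^2) of an a-by-b flat torus:
    the multiset of m^2/a^2 + n^2/b^2 up to the cutoff, where
    a2 = a^2 and b2 = b^2 are the squared side lengths."""
    spec = Counter()
    m = 0
    while F(m * m, a2) <= cutoff:
        n = 0
        while True:
            ev = F(m * m, a2) + F(n * n, b2)
            if ev > cutoff:
                break
            # Count all sign choices of (m, n).
            spec[ev] += (1 if m == 0 else 2) * (1 if n == 0 else 2)
            n += 1
        m += 1
    return spec

CUT = 30
# Two 1-by-1 tori and one 2-by-2 torus ...
side1 = spectrum(1, 1, CUT) + spectrum(1, 1, CUT) + spectrum(4, 4, CUT)
# ... versus two 2-by-1 tori and one roo-by-roo torus.
side2 = spectrum(4, 1, CUT) + spectrum(4, 1, CUT) + spectrum(2, 2, CUT)
assert side1 == side2  # identical eigenvalue multisets up to the cutoff
```

The multiplicity of $0$ on each side is $3$, one per component, matching the count of components.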
Moving now to the hyperbolic case, Theorem 1 implies that all hyperbolic
triangular orbifolds $*abc$ (e.g. $*237$) are determined by their Laplace
spectrum. It seems likely that all hyperbolic quadrilateral and pentagonal
orbifolds are also determined. Note that in light of Theorem 1, we can treat
this as a purely geometrical question. This question could most likely be
settled by a computer-aided investigation, along the lines of the computer-
aided proof that flat $3$-tori are determined by their Laplace spectrum. We
can expect complications in the neighborhood of the isometric pairs of
quadrilateral and pentagonal orbifolds to which the seven-triangle pairs
$7(3)$ and $7(2)$ of Appendix A degenerate when the triangles are made too
symmetrical. (See Figures 13 and 14.)
Figure 13: An isometric pair of type $*2334$, from glueing pattern $7(3)$.
Figure 14: An isometric pair of type $*23424$, from glueing pattern $7(2)$.
This is exactly the kind of issue Rich Schwartz and Pat Hooper faced down in
the proof of the ‘$100$ degree theorem’ [11] [12], which states that any flat
triangle with no angle larger than $100$ degrees has a closed billiard
trajectory. So maybe that investigation would be a better guide here than the
business about $3$-tori. We hope that this problem will turn out to be much
easier than the billiard problem.
It seems likely that the hexagons of Figure 1 constitute the simplest
isospectral pair of hyperbolic hexagonal orbifolds, and possible that all such
pairs arise from the glueing diagram $7(1)$. In trying to prove this we can
expect problems in the neighborhood of the isometric pair we get from glueing
pattern $7(1)$. (See Figure 15.)
Figure 15: An isometric pair of type $*242424$, from glueing pattern $7(1)$.
Note that this orbifold has 3-fold symmetry, as its Conway symbol suggests
(but does not imply).
Another problematic pair arises from pattern $13a(5)$.
A similar computer-assisted approach might show that hyperbolic surfaces of
genus $2$ and $3$ are determined by their Laplace spectra, as suggested by Bob
Brooks. Here again there is an obvious place to expect difficulties: where a
degenerate Sunada pair yields isometric surfaces.
One more thing: It seems likely that in any dimension, if two hyperbolic (or
spherical) orbifolds are isospectral on $k$-forms for all $k$, then they are
representation-equivalent. Pesce [10] showed that that representation-
equivalence follows from isospectrality for all natural self-adjoint elliptic
differential operators on natural bundles. In formulating his result, Pesce
simply says that representation equivalence follows from isospectrality of all
natural operators; if you allow non-self-adjoint operators, the problem
becomes much easier, because representation-equivalence just amounts to
isospectrality for the regular action of $G$ on functions on $G$, and a
function on $G$ is a section of a natural bundle over $\Gamma\backslash G$.
Here we are hoping to make do with only the Laplacian on forms. What made
Peter think this would be doable is the fact that for $2$-orbifolds, if you
know the spectrum on functions and $1$-forms, you can simply read off the
character $\chi$, using an extension of the ‘hard side’ of Selberg’s formula
to sections of bundles that are not flat. This works in the $2$-dimensional
case, where it turns out that if you know the spectrum of the Laplacian on
functions and on $1$-forms, you can just ‘read off’ the character without all
the difficulty you face when you try to use functions alone. (This is the
difficulty that never really arose in this paper, since we outsourced the crux
of the argument to our earlier paper on nonorientable hyperbolic
$2$-manifolds.) The issue comes down to this: Is an eigenfunction of the
Laplacian on a $k$-form automatically an eigenfunction for the averaging
operator where you restrict the form to a ball in the universal cover $H^{n}$
and translate the values back to the center of the ball using the exponential
map? If this is false, as it may well be when $n>2$, then the problem could be
harder than Peter had imagined.
## Appendix: Conway’s quilts
In this section we reproduce John Conway’s catalog of transplantable pairs
coming from small projective groups. Conway generated these pairs using his
theory of quilts, while Peter watched admiringly.
The 16 pairs of sizes 7, 13, and 15, which are treelike and thus give planar
isospectral domains, were reported in Buser et al. [2]. In [2] we also
presented the first of the size 21 pairs, which yields planar isospectral
domains for certain triangle shapes. (The second of these size 21 pairs can
also yield planar domains, so, despite what some authors have mistakenly
concluded, there is no special role for the number 17 here.) The rest of these
examples have been languishing on my webpage.
There is much to be said about these examples and where they come from, but as
our purpose here is mainly to bring them to light, we make just a few remarks.
A transplantable pair can be thought of as arising from a pair of finite
permutation actions of the free group on generators $a,b,c$. These actions are
equivalent as linear representations but (if the pair is to be of any use) not
as permutation representations. In the examples at hand, each representing
permutation is an involution, so these representations factor through the
quotient $F=\langle a,b,c:a^{2}=b^{2}=c^{2}=1\rangle$, the free product of
three copies of the group of order $2$.
From any transplantable pair we can get other pairs through the process of
braiding, which amounts to precomposing the permutation representations with
automorphisms of $F$, called $L$ and $R$:
$L:(a,b,c)\mapsto(aba^{-1},a,c);$ $R:(a,b,c)\mapsto(a,c,cbc^{-1}).$
Left-braiding a permutation representation $\rho$ can be viewed as first
conjugating $\rho(b)$ by $\rho(a)$, i.e. applying the permutation $\rho(a)$ to
each index in the cycle representation of $\rho(b)$, and then switching
$\rho(a)$ with the new $\rho(b)$. Right-braiding is the same, only with $c$
taking over the role of $a$.
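The braiding operations are straightforward to implement on triples of permutations (a sketch; the test triple is a made-up example, not one of the quilt pairs):

```python
def mul(p, q):
    # Composition: (p q)(i) = p(q(i)).
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    out = [0] * len(p)
    for i, j in enumerate(p):
        out[j] = i
    return tuple(out)

def conj(p, q):
    # q p q^{-1}
    return mul(mul(q, p), inv(q))

def left_braid(a, b, c):
    # L: (a, b, c) -> (a b a^{-1}, a, c)
    return (conj(b, a), a, c)

def right_braid(a, b, c):
    # R: (a, b, c) -> (a, c, c b c^{-1})
    return (a, c, conj(b, c))

# Example involutions on 4 points (hypothetical, just to exercise the code).
a = (1, 0, 2, 3)   # (0 1)
b = (0, 2, 1, 3)   # (1 2)
c = (0, 1, 3, 2)   # (2 3)

la, lb, lc = left_braid(a, b, c)
assert lb == a and lc == c
# Braiding by conjugation preserves the relation a^2 = b^2 = c^2 = 1.
assert all(mul(p, p) == (0, 1, 2, 3) for p in (la, lb, lc))
```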
Pairs of permutations that are equivalent in this way belong to the same
_quilt_. We identify pairs that differ only by permuting $a,b,c$, or by
reversing the pair. A quilt has extra structure which we are ignoring here:
See Conway and Hsu [3]. This structure makes it easier to understand and
enumerate the pairs. But the computer has no trouble churning out pairs
belonging to the same quilt.
In the diagrams to follow, the permutations corresponding to $a,b,c$ are
represented by dotted, dashed, and solid lines. Thin lines connect elements
that are interchanged, while thick lines delimit fixed points. Thick lines in
the interior of the diagram separate points that are each fixed, rather than
interchanged. Sometimes thin lines occur on the boundary of the diagram, which
means that the boundary must be glued up. We have taken care to lay out the
diagrams so that there is at most one pair of thin boundary lines of each type
(dotted, dashed, or solid), so that even without the usual glueing arrows
there is no ambiguity of how the boundary is to be glued up.
The label below a pair tells the sequence of braiding operations that gets you
this pair from the first pair of the quilt. The labels read left-to-right, so
that $\\{L,L,R,L\\}$ means two lefts, a right, then another left. In most but
not all cases, the quilt has been explored in ‘left-first search’ order.
To refer to these pairs, we will write $7(1)$ for the first pair of quilt 7,
$13a(5)$ for the fifth pair of quilt $13a$, etc. As this is not very
canonical, we reserve the right to replace the parenthesized number by a list
(possibly empty) of $L$s and $R$s, so that for example $13a(5)$ would become
$13aLLRL$, and $7(1)$ would become simply $7$.
For historical reasons, the four quilts of size 11 are called
$11f,11g,11h,11i$.
Figure 16: Quilt 7
Figure 17: Quilt 13a
Figure 18: Quilt 13b
Figure 19: Quilt 15
Figure 20: Quilt 11f
Figure 21: Quilt 11g
Figure 22: Quilt 11h
Figure 23: Quilt 11i
Figure 24: Quilt 21
## Acknowledgements
We are grateful to Alex Barnett for supplying the spectral data for non-
orbifold triangles that we used in section 11, and for much helpful advice
about waves and the like.
## References
* [1] Pierre Bérard. Transplantation et isospectralité. II. J. London Math. Soc. (2), 48(3):565–576, 1993.
* [2] Peter Buser, John Conway, Peter Doyle, and Klaus-Dieter Semmler. Some planar isospectral domains. Internat. Math. Res. Notices, 9:391ff., approx. 9 pp. (electronic), 1994.
* [3] John H. Conway and Tim Hsu. Quilts and $T$-systems. J. Algebra, 174(3):856–908, 1995.
* [4] Dennis M. DeTurck and Carolyn S. Gordon. Isospectral deformations. II. Trace formulas, metrics, and potentials. Comm. Pure Appl. Math., 42(8):1067–1095, 1989. With an appendix by Kyung Bai Lee.
* [5] Peter G. Doyle and Juan Pablo Rossetti. Isospectral hyperbolic surfaces have matching geodesics, 2008, arXiv:math/0605765v2 [math.DG].
* [6] Emily B. Dryden, Carolyn S. Gordon, and Sarah J. Greenwald. Asymptotic expansion of the heat kernel for orbifolds, 2008.
* [7] Emily B. Dryden and Alexander Strohmaier. Huber’s theorem for hyperbolic orbisurfaces, 2005, arXiv:math.SP/0504571.
* [8] C. Gordon, D. Webb, and S. Wolpert. One cannot hear the shape of a drum. Bull. Amer. Math. Soc., 27:134–138, 1992.
* [9] Heinz Huber. Zur analytischen Theorie hyperbolischen Raumformen und Bewegungsgruppen. Math. Ann., 138:1–26, 1959.
* [10] Hubert Pesce. Variétés hyperboliques et elliptiques fortement isospectrales. J. Funct. Anal., 134(2):363–391, 1995.
* [11] Richard Evan Schwartz. Obtuse triangular billiards. I. Near the $(2,3,6)$ triangle. Experiment. Math., 15(2):161–182, 2006.
* [12] Richard Evan Schwartz. Obtuse triangular billiards. II. One hundred degrees worth of periodic trajectories. Experiment. Math., 18(2):137–171, 2009.
* [13] Atle Selberg. Harmonic analysis and discontinuous groups in weakly symmetric Riemannian spaces with applications to Dirichlet series. J. Indian Math. Soc. B, 20:47–87, 1956. Reprinted in [14].
* [14] Atle Selberg. Harmonic analysis and discontinuous groups in weakly symmetric Riemannian spaces with applications to Dirichlet series. In Collected Papers, vol. 1, pages 423–463. Springer, 1989.
* [15] Nolan R. Wallach. On the Selberg trace formula in the case of compact quotient. Bull. Amer. Math. Soc., 82(2):171–195, 1976.
# Dansgaard-Oeschger events: tipping points in the climate system
A. A. Cimatoribus cimatori@knmi.nl Royal Netherlands Meteorological
Institute, Wilhelminalaan 10, 3732GK De Bilt, Netherlands S. S. Drijfhout
Royal Netherlands Meteorological Institute, Wilhelminalaan 10, 3732GK De Bilt,
Netherlands V. Livina School of Environmental Sciences, University of East
Anglia, Norwich NR4 7TJ, UK G. van der Schrier Royal Netherlands
Meteorological Institute, Wilhelminalaan 10, 3732GK De Bilt, Netherlands
###### Abstract
Dansgaard–Oeschger events are a prominent mode of variability in the records
of the last glacial cycle. Various prototype models have been proposed to
explain these rapid climate fluctuations, and no agreement has emerged on
which may be the more correct for describing the paleoclimatic signal. In this
work, we assess the bimodality of the system reconstructing the topology of
the multi–dimensional attractor over which the climate system evolves. We use
high–resolution ice core isotope data to investigate the statistical
properties of the climate fluctuations in the period before the onset of the
abrupt change. We show that Dansgaard–Oeschger events have weak early warning
signals if the ensemble of events is considered. We find that the statistics
are consistent with the switches between two different climate equilibrium
states in response to a changing external forcing (e.g. solar, ice sheets…),
either forcing directly the transition or pacing it through stochastic
resonance. These findings are most consistent with a model that associates
Dansgaard–Oeschger events with changing boundary conditions, and with the presence of
a bifurcation point.
## I Introduction
A dominant mode of temperature variability over the last sixty thousand years
is connected with Dansgaard–Oeschger events (DOs) (Dansgaard et al., 1993;
EPICA Community Members, 2004). These are fast warming episodes (in the North
Atlantic region $5$–$10\,^{\circ}\mathrm{C}$ in a few decades), followed by
a gradual cooling that lasts from hundreds to thousands of years, often with a
final jump back to stadial condition. The spectral power of their time series
shows a peak at approximately 1,500 years and integer multiples of this value,
suggesting the presence of such a periodicity in the glacial climate (Alley
and Anandakrishnan, 2001; Ganopolski and Rahmstorf, 2002). The relation
between DOs and large changes in the Atlantic meridional overturning
circulation is generally considered well established, despite the limitations
of the paleoclimatic records (Keigwin et al., 1994; Sarnthein et al., 1994;
Sakai and Peltier, 1999; Ganopolski and Rahmstorf, 2001; Timmermann et al.,
2003). Different low–dimensional models have been proposed to explain these
rapid climate fluctuations (Sakai and Peltier, 1999; Ganopolski and Rahmstorf,
2001; Timmermann et al., 2003; Ditlevsen and Johnsen, 2010), linking DOs to
ocean and ice sheet dynamics. No clear evidence in favour of one of them has
emerged from observational data. Here, we use high–resolution ice core isotope
data (North Greenland Ice Core Project members, 2004; Svensson et al., 2008)
to investigate the statistical properties of the climate fluctuations (Held
and Kleinen, 2004; Livina and Lenton, 2007; Scheffer et al., 2009) in the
period before the onset of the abrupt change. We analyse $\delta^{18}O$
isotope data from the NGRIP ice core (North Greenland Ice Core Project
members, 2004; Svensson et al., 2008). The data spans the time interval from 59,420
to 14,777 years before 2,000 A.D. (years b2k) with an average resolution of
2.7 years (more than 80% of the data has a resolution better than 3.5 years).
$\delta^{18}O$ can be interpreted as a proxy for atmospheric temperature over
Greenland. We use the dating for the onset of the interstadials given in
Svensson et al. (2008). The dataset then spans the events numbered 2 to 16.
In the first part of the paper, we discuss the bimodality of the time series,
already demonstrated in other works (Livina et al., 2011, 2010; Wunsch, 2003).
In particular, we establish that bimodality is a robust feature of the climate
system that produced it, and not an artifact of the projection of complex
dynamics onto a scalar quantity. This is done using a phase embedding
technique well known in the non–linear dynamics community, but not often used
in climate sciences.
In the second part of the paper, we show that the statistical properties of
the paleoclimatic time series can be used to distinguish between the various
models that have been proposed to explain DOs. We suggest that considering the
properties of an ensemble of events may uncover signals otherwise hidden, and
we show that this seems to be the case for the DOs in the time series
considered here. Within the limitations of the available data and of the
techniques used, we find that the statistics are most compatible with a system
that switches between two different climate equilibrium states in response to
a changing external forcing (Paillard and Labeyriet, 1994), or with stochastic
resonance (Ganopolski and Rahmstorf, 2002). In both cases, the external
forcing controls the distance of the system from a bifurcation point. It must
be clarified here that we use “external forcing” in a purely mathematical
sense: the presence of an external forcing means that the climate system can
be described by a non–autonomous system of equations, i.e. the evolution of
the system explicitly depends on time, as a forcing term is present (see e.g.
Arfken and Weber, 2005). No assumption on the nature of this forcing is made.
Other hypotheses (noise-induced transitions and autonomous oscillations,
suggested by Timmermann et al., 2003; Sakai and Peltier, 1999; Ditlevsen and
Johnsen, 2010) are less compatible with the data.
## II Bimodality
The bimodality hypothesis for this climate proxy has been tested in previous
works (Livina et al., 2011, 2010; Wunsch, 2003), but these make the strong
assumption that the bimodality of the recorded time series can be directly
ascribed to the climate system that produced it. This is not necessarily the
case if the evolution of a system with many degrees of freedom, as the real
climate system, is projected on a scalar record. Exploiting Takens’ embedding
theorem (Broer and Takens, 2011) for reconstructing phase space (the space
that includes all the states of a system), we assess bimodality without making
any assumption on the underlying model beforehand. This approach has the
advantage of avoiding artifacts that may arise from the projection of a higher
dimensional system onto a scalar time series.
### II.1 Phase space reconstruction technique
The technique that we apply is outlined in Abarbanel (1996), and has some
points of contact with the Singular Spectrum Analysis described e.g. in Ghil
et al. (2002). First, the irregular time series is linearly interpolated to a
regular time step, 4 years in the results shown here. Data after the last DO
is discarded for producing the phase space reconstruction. After detrending,
an optimal time lag has to be empirically chosen, guided by the first minimum
of the average mutual information function (20 years in our case). With copies
of the regular time series lagged by multiples of the optimal time lag as
coordinates, a phase space reconstruction is obtained. The dimension of the
embedding space is, in general, greater than or equal to that of the
physical system which produced the signal, and the dynamics of the system are
thus completely unfolded. The number of dimensions is chosen as the minimum
one that brings the fraction of empirical global false neighbours to the
background value (Kennel et al., 1992). In our reconstructions, four
dimensions are used. We define as global false neighbours in dimension $n$
those couples of points whose distance increases more than 15 times, or more
than two times the standard deviation of the data, when the embedding
dimension is increased by one. The Scientific Tools for Python (SciPy)
implementation of the kd–tree algorithm is used for computing distances in the
_n_ –dimensional space. The 4–dimensional phase space reconstruction is
presented in Figure 1, performed separately for the data before and after year
22,000 b2k. The results are converted to a polar coordinate system and the
anomaly from the average is considered, following Stephenson et al. (2004).
Subsequently, the PDFs for the coordinates distributions are computed through
a gaussian kernel estimation (blue line in Fig. 1). The error of the PDFs is
defined as the standard deviation of an ensemble of 50,000 synthetic datasets
(shaded region in the figure), obtained by bootstrapping (see e.g. Efron,
1982). The theoretical distributions (black lines) refer to a multidimensional
normal distribution (the simplest null hypothesis, following Stephenson et al.
(2004)).
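The delay-coordinate construction described above can be sketched as follows. This is an illustrative snippet, not the authors' code: the synthetic series, the `delay_embed` helper and its names are our own, and the lag of 5 samples corresponds to the 20-year optimal lag on the 4-year interpolation grid.

```python
import numpy as np

def delay_embed(x, dim, lag):
    """Build a delay-coordinate embedding of a scalar series x.

    Each row is a point in the reconstructed phase space:
    [x(t), x(t + lag), ..., x(t + (dim - 1) * lag)].
    """
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

# Hypothetical regular series with a 4-year step; lag of 20 years = 5 samples,
# four embedding dimensions, as in the reconstruction described above.
t = np.arange(0, 4000, 4.0)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * t / 800.0) + 0.1 * rng.normal(size=t.size)
emb = delay_embed(x, dim=4, lag=5)
print(emb.shape)  # (n_points, 4)
```

Each point of `emb` can then be fed to a kd-tree for the false-neighbour test, and the coordinate distributions can be estimated with a Gaussian kernel as in the text.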
### II.2 Results: bimodality and bifurcation
The $\delta^{18}O$ isotope data possess a distinct trend that marks the drift
of the glacial climate towards the last glacial maximum (approximately 22,000
years ago), see Figure 2 (top). Also, associated with the drift is a gradual
increase in variance that makes up an early warning signal (EWS) for the
abrupt change towards the last glacial termination (approximately 18,000 years
ago, Dakos et al., 2008; Scheffer et al., 2009). It turns out that the trend
enhances the bimodality of the data but detrending is required to
unequivocally attribute this bimodality to the DOs, as the trend is connected
with a longer time scale process.
In the four left panels of Fig. 1 (between the beginning of the time series,
year 59,000 b2k, and year 22,000 b2k), the normal distribution clearly lies
outside the error bars for all four dimensions, and has to be rejected. The
bimodality of the data is especially evident from the PDFs of the angular
coordinates. The radial distribution is well below the normal one only at
intermediate distances from the barycenter ($\rho^{2}\approx 5$), while it
seems to be systematically above the normal distribution close to zero and in
the tail, but the error bars include the normal distribution here. These
features are very robust, and they can be unambiguously detected also in phase
space reconstructions using different interpolation steps, time lags and
number of dimensions. In the four right panels (after year 22,000 b2k until
the end of the time series, year 15,000 b2k), the deviations from normality are
much smaller, and become indistinguishable from a normal distribution when the
errors are taken into account. The only statistically significant
deviations from normality after year 22,000 b2k are seen in panel h, which
represents the longitude coordinate of the polar system. However, no
bimodality is observed, but only a non spherically–symmetrical distribution of
the data. On this basis, we can confirm the claim of Livina et al. (2011,
2010); around year 22,000 b2k the climate undergoes a transition from a
bimodal to a unimodal behaviour. The climate system remains unimodal until the
end of the time series (15,000 b2k); the behaviour after this time can not be
assessed using this time series. The cause of the transition at year 22,000
b2k may be a shift of the bifurcation diagram of the system towards a more
stable regime (see Fig. 3a), either externally forced or due to internal
dynamics, but may as well be caused by a reduction of the strength of
stochastic perturbations.
## III External forcing, noise or autonomous dynamics?
In several recent works (Held and Kleinen, 2004; Livina and Lenton, 2007;
Scheffer et al., 2009; Ditlevsen and Johnsen, 2010), an abrupt transition is
treated as the jump between different steady states in a system that can be
decomposed in a low dimensional non–linear component and a small additive
white noise component. The jump is considered to be due to the approach to a
bifurcation point, usually a fold bifurcation (see e.g. Guckenheimer and
Holmes, 1983, for a discussion of fold bifurcations), in which a steady state
loses its stability in response to a change in the boundary conditions. If a
system perturbed by a small amount of additive white noise approaches a fold
bifurcation point, an increase in the variance ($\sigma^{2}$) is observed, as
well as in its lag–1 autocorrelation (Kuehn, 2011; Ditlevsen and Johnsen,
2010; Scheffer et al., 2009; Held and Kleinen, 2004). In addition, Detrended
Fluctuation Analysis (DFA) can be used to detect the approach to a bifurcation
point (Livina and Lenton, 2007). This tool, equivalent to others that measure
the decay rate of fluctuations in a nonlinear system (Heneghan and McDarby,
2000; Peng et al., 1994), has the clear advantage over autocorrelation and
variance of inherently distinguishing between the actual fluctuations of the
system and trends that may be part of the signal. This follows from the fact
that the detrending is part of the DFA procedure itself, and different orders
of detrending can be tested, provided that a sufficient amount of data is
available to sustain the higher order fit. Approaching a bifurcation point,
the lag–1 autocorrelation coefficient ($c$) and the DFA exponent ($\alpha$)
both tend to increase, ideally towards $1$ and $3/2$ respectively (Held and
Kleinen, 2004; Livina and Lenton, 2007). This reflects the fact that the
fluctuations in the system are increasingly correlated as the bifurcation
point is approached: they become more similar to a Brownian motion as the
restoring force of the system approaches zero. It should be kept in mind that all
the results obtained using these techniques are based on strong assumptions on
the nature of the noise. The noise in the signal is generally assumed to be of
additive nature, to be colorless and to be recorded in the time series through
the same processes that give rise to the slower dynamics. A clear separation
of time scales between the noise and the low–dimensional dynamics is also
assumed. These assumptions may not be satisfied by the climate system, and
this important caveat has to be considered when evaluating the results.
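The DFA procedure summarised above can be illustrated with a minimal sketch (not the authors' implementation): the profile of the mean-removed series is split into boxes, a polynomial trend is removed in each box, and the exponent $\alpha$ is the slope of the RMS fluctuation versus box size in log-log space. Box sizes of 4 to 13 samples mimic the 16 to 52 year range on a 4-year grid mentioned later in the text.

```python
import numpy as np

def dfa_exponent(x, box_sizes, order=1):
    """Detrended Fluctuation Analysis: return the scaling exponent alpha.

    The profile (cumulative sum of the mean-removed series) is split into
    non-overlapping boxes; in each box a polynomial trend of the given
    order is removed and the RMS residual is averaged over boxes.
    alpha is the slope of log F(n) versus log n.
    """
    y = np.cumsum(x - np.mean(x))  # profile
    F = []
    for n in box_sizes:
        nbox = len(y) // n
        tt = np.arange(n)
        resid = []
        for b in range(nbox):
            seg = y[b * n:(b + 1) * n]
            coeffs = np.polyfit(tt, seg, order)
            resid.append(np.mean((seg - np.polyval(coeffs, tt)) ** 2))
        F.append(np.sqrt(np.mean(resid)))
    alpha, _ = np.polyfit(np.log(list(box_sizes)), np.log(F), 1)
    return alpha

# White noise should give alpha near 0.5; a random walk (the ideal limit
# at a bifurcation point) approaches 3/2.
rng = np.random.default_rng(1)
white = rng.normal(size=20_000)
walk = np.cumsum(rng.normal(size=20_000))
print(dfa_exponent(white, range(4, 14)), dfa_exponent(walk, range(4, 14)))
```

The two reference values (0.5 for uncorrelated noise, 3/2 for Brownian motion) are the limits quoted in the text for a system far from and at a bifurcation point, respectively.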
Our aim is to analyse the statistical properties of the shorter time–scale
fluctuations in the ice core record that occur before the onset of DOs. The
quality of the paleoclimatic time series and the intrinsic limits of the EWS
technique are a serious constraint to the conclusions that can be drawn.
Still, the analysis identifies EWSs before the onset of DOs. This suggests
that the onset of DOs is connected with the approach to a point of reduced
stability in the system. This finding is compatible only with some of the
models proposed as an explanation of DOs. Such a selection is possible since
the different prototype models have different, peculiar, “noise signatures”
before an abrupt transition takes place. It must be stressed that the problem
is an inverse one; multiple mathematical models qualitatively consistent with
the “noise signatures” can be developed, and only physical mechanisms can
guide the final choice.
We will focus on the three characteristics of the data mentioned above: the
correlation coefficient of the time series at lag–1, the variance, and the DFA
exponent. They are plotted, together with paleoclimatic time series used, in
fig. 2. A similar type of analysis has been used before in the search of EWSs
for abrupt climate transitions (see e.g. Held and Kleinen, 2004; Livina and
Lenton, 2007; Scheffer et al., 2009; Ditlevsen and Johnsen, 2010). We use here
EWSs to put observational constraints to the various prototype models for DOs,
considering only the properties of the ensemble of events. In contrast to
earlier studies we consider an EWS not as a precursor of a single event, but
analyse instead the average characteristics of the EWS found in the ensemble
of DOs. It must be stressed that Ditlevsen and Johnsen (2010) did consider the
ensemble of events, but did not compute any average quantity. Given the low
signal–to–noise ratio, computing the mean of the ensemble and an error
estimate of it may uncover signals otherwise too weak to be detected. This
approach is more robust than the one which considers each realisation (each
DO) separately, since EWSs have a predictable behaviour only when computed as
an average over an ensemble of events, as demonstrated by Kuehn (2011). We
argue that only by considering the mean of the events can we hope to draw general
conclusions on the nature of these climate signals.
### III.1 Prototype models and early warning signals
The reason why we can discriminate between various prototype models is that,
in the case of a system that crosses, or comes close to a bifurcation point,
$\sigma^{2}$, $c$ and $\alpha$ all increase before abrupt change occurs, while
for other prototype models this is not the case.
For each prototype model that we considered, the relevant equation, or set of
equations, is integrated with an Euler–Maruyama method (Higham, 2001). The
associated time series is then analysed with respect to lag–1 autocorrelation,
variance, and the DFA exponent (details of the analyses are given in section
III.2).
#### III.1.1 Bifurcation point in a double well
EWSs are visible if the DOs are caused by a slowly changing external forcing,
in which a critical parameter, like northward freshwater transport, varies
over a large enough range for the meridional overturning circulation of the
ocean to cross the two bifurcation points that mark the regime of multiple
equilibria. This idea, in the context of paleoclimate, dates back to Paillard
and Labeyriet (1994) who suggested it for explaining Heinrich events, and is
often invoked more in general as a paradigm for understanding the meridional
overturning circulation stability (see e.g. Rahmstorf et al., 2005). In
physical terms, this model implies that a changing external forcing is causing
a previously stable state to disappear. In Figure 3a this situation is
sketched.
As a prototype for a system crossing a bifurcation point, the same equation
used in Ditlevsen and Johnsen (2010) is used:
$\dot{x}=-x^{3}+x+q+\sigma\eta(t),$ (1)
where $x$ is the state variable (time dependent), $\dot{x}$ is its time
derivative, $q$ is the only system parameter and $\sigma\eta(t)$ represents
white noise (normally distributed) with standard deviation $\sigma$.
In this case, the evolution of the system can be thought of as the motion of a
sphere on a potential surface that has either one or two stable equilibrium
states, divided by a potential ridge. This potential surface can change in
time, due to slow changes in the parameter $q$. In this case, $q$ will be time
dependent, and will represent a forcing external to the dynamics of the
system, represented only by the scalar function $x$. When $q$ crosses one of
the two bifurcation points ($q_{0}=\pm 2\sqrt{3}/9$), the system changes from
having two possible equilibrium states to having only one. At these points,
one of the two states loses its stability and the system can jump to the
second equilibrium (Figure 4, shown with $\sigma=0.1$ and $q$ going from
$-0.5$ to $0.5$ during the integration).
#### III.1.2 Noise induced transition
If $q$ is instead constant in time (and the system is thus an autonomous one),
abrupt transitions can still occur if more than one equilibrium state is
available, and noise is sufficiently strong to trigger the transition. An
example of this case, in the simple case of a fixed double well with constant
white noise, is shown in Fig. 5 (in this case, $\sigma=0.3$ and $q=0$). In
this model, the system is normally in the cold stadial state but can jump due
to the noise in the forcing, say atmospheric weather, to a warmer,
interstadial state. In this case no trend is visible in the statistical
properties of the time series, thus no EWS can be detected. The two cases
described are those considered by Ditlevsen and Johnsen (2010), to which we
refer for a more detailed discussion.
#### III.1.3 Stochastic resonance
A slightly different case, again based on the model of Eq. 1, is that
discussed by Alley and Anandakrishnan (2001); Ganopolski and Rahmstorf (2001,
2002). They suggest that DOs may be connected to stochastic resonance, where
the parameter $q$ in Eq. 1 becomes a periodic function of time. In this case,
transitions from one state to the other take place preferentially when $q$ is
closer to the bifurcation point, but are still forced by the noise term rather
than by $q$ actually reaching its bifurcation point. In Fig. 6 this case is
explored, using $q\equiv q(t)=q_{0}\sin(\frac{2\pi t}{\tau})$ with
$q_{0}=0.1$ and $\tau=1000$, the noise level is $\sigma=0.35$. In this case,
EWSs are clearly present before some of the events (e.g. the second and the
fourth), while for others no clear EWS is detected.111The upwards and
downwards transitions are, on average, equivalent, similarly to the case of
noise induced transitions with $q=0$. EWSs are present in some cases, while
absent in others, since transitions take place preferentially when $q$ is closer
to the bifurcation point, but not only in this phase. Transitions can still
take place purely due to the presence of noise, and in those cases no clear
EWS will be detected. If the average from a sufficiently large ensemble of
events is considered EWSs can be detected unambiguously. A very similar
situation will be found in the paleoclimatic data, and the presence of EWSs
will be shown there in the ensemble average.
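The stochastic resonance case can be sketched by making $q$ periodic, with the sub-threshold amplitude $q_{0}=0.1$, period $\tau=1000$ and noise level $\sigma=0.35$ quoted above; the integration step, length and jump-counting heuristic are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 0.05, 200_000
tau, q0, sigma = 1000.0, 0.1, 0.35
t = np.arange(n) * dt
x = np.empty(n)
x[0] = -1.0
dW = rng.normal(scale=np.sqrt(dt), size=n - 1)
for i in range(n - 1):
    q = q0 * np.sin(2 * np.pi * t[i] / tau)  # sub-threshold periodic forcing
    x[i + 1] = x[i] + (-x[i] ** 3 + x[i] + q) * dt + sigma * dW[i]

# Count well-to-well transitions via sign changes of a smoothed state:
# with q0 below the bifurcation value, every jump is noise-triggered,
# but jumps cluster near the phases where q approaches the threshold.
smooth = np.convolve(x, np.ones(200) / 200, mode="same")
crossings = np.count_nonzero(np.diff(np.sign(smooth)) != 0)
print(crossings)
```

Because the forcing never reaches the bifurcation point, individual realisations may or may not show an EWS before a given jump, which is exactly the behaviour described in the footnote above.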
#### III.1.4 Other models
We will show that other models proposed in the literature for explaining DOs
do not possess any EWS. No EWS can be detected if DOs are due to an unforced
(ocean) mode of oscillation (Sakai and Peltier, 1999; de Verdière et al.,
2006) since, for an autonomous oscillation, there are in general no changes in
the stability properties of the system while it evolves in time (see e.g.
Guckenheimer and Holmes, 1983). DOs have been linked to oscillations close to
a “homoclinic orbit” by Timmermann et al. (2003). A homoclinic orbit is the
limiting case of a periodic solution to a system of ordinary differential
equations whose period tends to infinity, but in the presence of a small
amount of noise the system features finite return times. A sketch of this
prototype model is shown in Figure 3b.
To describe this model the set of Ordinary Differential Equations given in
Crommelin et al. (2004) is used. This minimal mathematical model has been
suggested before in the context of atmosphere regime behaviour Charney and
DeVore (1979), but it is mathematically equivalent to the mechanism for DOs
suggested in Timmermann et al. (2003); Abshagen and Timmermann (2004).
To investigate EWSs for a periodic cycle close to a homoclinic orbit, the
following system of ordinary differential equations (Crommelin et al., 2004)
is integrated (see Figure 7):
$\begin{split}\dot{x_{1}}&=\widetilde{\gamma}_{1}x_{3}-C(x_{1}-x_{1}^{*})\\\
\dot{x_{2}}&=-(\alpha_{1}x_{1}-\beta_{1})x_{3}-Cx_{2}-\delta_{1}x_{4}x_{6}\\\
\dot{x_{3}}&=(\alpha_{1}x_{1}-\beta_{1})x_{2}-\gamma_{1}x_{1}-Cx_{3}+\delta_{1}x_{4}x_{5}\\\
\dot{x_{4}}&=\widetilde{\gamma}_{2}x_{6}-C(x_{4}-x_{4}^{*})+\varepsilon(x_{2}x_{6}-x_{3}x_{5})\\\
\dot{x_{5}}&=-(\alpha_{2}x_{1}-\beta_{2})x_{6}-Cx_{5}-\delta_{2}x_{3}x_{4}\\\
\dot{x_{6}}&=(\alpha_{2}x_{1}-\beta_{2})x_{5}-\gamma_{2}x_{4}-Cx_{6}+\delta_{2}x_{2}x_{4}.\\\
\end{split}$
The system has six dimensions ($x_{i}$) with parameters as in Crommelin et al.
(2004), chosen as they produce the abrupt, quasi periodic transitions observed
in Fig. 7, with $\widetilde{\gamma}_{1}=0.06$, $C=0.1$, $x_{1}^{*}=0.89$,
$\alpha_{1}=0.24$, $\beta_{1}=0.25$, $\delta_{1}=0.384$, $\gamma_{1}=0.048$,
$\widetilde{\gamma}_{2}=0.024$, $x_{4}^{*}=-0.82325$, $\varepsilon=1.44$,
$\alpha_{2}=0.734$, $\beta_{2}=0.0735$, $\delta_{2}=-1.243$. White noise is
added to each component, the noise has standard deviation of $0.001$. This
model is chosen as it provides a simple description of the results of
Timmermann et al. (2003), having a very similar bifurcation diagram (the
fundamental element is the presence of a fold–Hopf bifurcation). As the time
series is strongly non–stationary, linear DFA is not a good measure of the
fluctuation decay in this case, and higher order detrending is needed (orders
higher than 2 give results consistent with the case of quadratic detrending).
After integrating these equations, again no precursor for abrupt changes can
be detected (Fig. 7). It must be stressed that, in order to correctly compute
the statistical properties of the time series, it is of fundamental importance
that the data are detrended (linear or higher order detrending), to ensure
that the findings can be ascribed to random fluctuations and not to the
non–stationarity of the data. In particular, in the case of the DFA exponent
computation for the latter prototype model, the strong non–stationarity of the
data requires a quadratic detrending.
The results from other models that have been suggested as a prototype for
oscillations close to a homoclinic orbit (Welander, 1982; Stone and
Armbruster, 1999) consistently show no EWS before abrupt transitions (not
shown).
### III.2 Fluctuations analysis
We now have seen that EWSs are characteristic of abrupt changes connected with
the approach to a bifurcation point. The bifurcation point is either crossed
while a parameter of the system is changed (Sec. III.1.1) or is only
approached, without actually crossing it, as in the case of stochastic
resonance (Sec. III.1.3). If the ice core record also features EWSs, the other
prototype models are falsified on these bases. The ice core data are shown in
Figure 2. The EWSs of the time series are computed in a sliding window 250
years wide, and the results are plotted at the right end of the sliding window
to avoid contamination from points after the DO onset. This window size
corresponds on average to 100 data points for the variance computation. For
$c$ and $\alpha$, the time series is first linearly interpolated to a step of
4 years and approximately 60 points are used in the window. The time step is
chosen in order to avoid overfitting the data even in the parts of the time
series with lower resolution. The window width choice is a compromise between
the need to minimise noise (larger window) and to maximise the number of
independent points for the computation of trends (smaller window). The data is
linearly detrended in the window before the computation, to compensate for
potential nonstationarities in the time series. In the computation of the DFA
exponent (Peng et al., 1994), 10 box lengths were used, ranging from 16 to 52
years. The number of box lengths is limited for computational reasons.
Different time steps for the interpolation and box lengths were tested. The
results show small sensitivity to these choices, as long as the time step for
the interpolation is kept below approximately 10 years.
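The sliding-window computation of variance and lag-1 autocorrelation described above can be sketched as follows; the function and the synthetic test series are illustrative, not the authors' code. The 250-year window and 4-year interpolation step follow the text, and each window is linearly detrended before the indicators are computed.

```python
import numpy as np

def ews_indicators(t, x, window=250.0, step=4.0):
    """Variance and lag-1 autocorrelation in a sliding window.

    The series is linearly interpolated to a regular step, linearly
    detrended inside each window, and the result is assigned to the
    right end of the window, as described in the text.
    """
    tr = np.arange(t.min(), t.max(), step)
    xr = np.interp(tr, t, x)
    w = int(window / step)
    idx = np.arange(w)
    out_t, var, ac1 = [], [], []
    for i in range(w, len(xr)):
        seg = xr[i - w:i]
        trend = np.polyval(np.polyfit(idx, seg, 1), idx)
        r = seg - trend
        out_t.append(tr[i - 1])
        var.append(np.var(r))
        ac1.append(np.corrcoef(r[:-1], r[1:])[0, 1])
    return np.array(out_t), np.array(var), np.array(ac1)

# An AR(1) process whose memory increases in time should show rising
# variance and autocorrelation: the critical-slowing-down signature.
rng = np.random.default_rng(4)
n = 4000
phi = np.linspace(0.2, 0.95, n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi[i] * x[i - 1] + rng.normal()
t = np.arange(n) * 4.0
wt, var, ac1 = ews_indicators(t, x)
print(ac1[-1] > ac1[0], var[-1] > var[0])
```

Assigning the result to the right end of the window reproduces the precaution mentioned above: no point after a DO onset contaminates the indicators computed before it.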
If one considers the successive DOs one by one, it is clear that for some of
the DOs a marked increase is found (e.g. the one at approximately 45,000 years
b2k), implying an EWS, but for other DOs no EWSs can be detected (e.g. the
last one). This has to be expected in general from a stochastic system for
which only the average behaviour is predictable (Kuehn, 2011), and may also be
the fingerprint of stochastic resonance, as discussed in Sec. III.1.3.
### III.3 Results
As discussed by Kuehn (2011), analysing the properties of an ensemble of
events, instead of a single realisation, is a more robust approach and better
justified on theoretical grounds. We thus use EWSs to characterise the
properties of the “mean DO” instead of trying to predict the onset of the
following transition. For this reason, we consider the whole ensemble of DOs,
instead of each single DO. With this aim, the time series is cut into slices
that end 100 years after the transition onset and start either 100 years after
the previous DO onset or, if the time span between two DOs is sufficiently
long, 2,900 years before the following DO onset (Figure 8a). The onset of the
transitions then are translated to year -100. In this way, an ensemble of DOs
is defined, being composed of slices of the original time series of variable
length, synchronised at the onset of each DO. If the quantities used for EWS
detection are then averaged for the whole ensemble, a moderate but significant
increase is observed in all three fluctuation properties, starting
approximately from year -1,800 until year -250 (Figure 8b–d). The standard
deviation of the ensemble is large, in particular at times far from the DO
onset because the ensemble has fewer members there (at times smaller than
-1000 years), but the trend can not be discarded as a random fluctuation for
all three cases. To test this, a linear least square fit of the data in the
interval -1,800 to -250 is performed, only using independent points from the
ensemble (thus at a distance of 250 years from each other, given the window
width of 250 years), and obtaining an error interval from a bootstrapped
ensemble of 50,000 members. The results of this fitting procedure are reported
in Table 1. In all three cases the linear trends are well above their standard
deviation, providing a strong indication of the robustness of the signal. In
order to check the robustness of the findings on the order of detrending for
DFA, the computation has been repeated with a quadratic detrending, actually
giving a slightly stronger trend (see Table 1). These results are consistent
with a scenario that connects DO onset with either the crossing or the
approach of a bifurcation point. In other words, these findings are consistent
with either a model where the system is forced to shift from a steady state to
a different one, or with the stochastic resonance mechanism, where transitions
take place preferentially closer to the bifurcation point, even if this one is
not actually reached, and transitions are due to the noise.
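The trend-significance test described above can be sketched as follows. The pair-resampling bootstrap, the synthetic trend and noise level are illustrative choices, not the authors' data or code; only the spacing of independent points (every 250 years between year -1,800 and -250) and the idea of a 50,000-member bootstrap follow the text.

```python
import numpy as np

def bootstrap_trend(t, y, n_boot=50_000, seed=5):
    """Least-squares slope of y(t) with a bootstrap standard deviation.

    Pairs (t_i, y_i) are resampled with replacement, mimicking the
    bootstrap error estimate used for the trends in Table 1.
    """
    rng = np.random.default_rng(seed)
    slope = np.polyfit(t, y, 1)[0]
    idx = rng.integers(0, len(t), size=(n_boot, len(t)))
    slopes = np.array([np.polyfit(t[i], y[i], 1)[0] for i in idx])
    return slope, slopes.std()

# Independent points every 250 years between year -1,800 and -250, with a
# weak synthetic trend buried in noise (illustration only).
rng = np.random.default_rng(6)
t = np.arange(-1800.0, -250.0, 250.0)
y = 1e-4 * t + rng.normal(scale=0.01, size=t.size)
s, err = bootstrap_trend(t, y, n_boot=5000)
print(s / err)  # trend in units of its bootstrap standard deviation
```

A ratio of two to three, as found for the indicators in Table 1, corresponds to the significance levels discussed in the summary.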
The fact that $c$ and $\alpha$ do not reach their theoretical values for a
bifurcation point, respectively $1$ and $3/2$, can easily be explained: the
noise in the system triggers the transition before the actual bifurcation
point is reached. This clearly must be the case for stochastic resonance, but
is relevant also if the bifurcation point is crossed (Meunier and Verga,
1988). Also, the three quantities $c$, $\sigma^{2}$ and $\alpha$ decrease
again in the last few hundred years before the DO onset. This may be a bias in
the relatively small ensemble considered here, but a similar decrease has been
shown by Kuehn (2011) for various bifurcations in fast–slow systems. For
several idealised systems containing different forms of bifurcation points
(including the fold bifurcation considered here), a decrease in variance in the
immediate vicinity of the bifurcation point is found, while the well known
increase is observed when the bifurcation point is farther away in the
parameter space. To confirm that the decrease observed in the data is
consistent with the one discussed by Kuehn (2011), the time scale of the
decrease should be linked to the distance of the bifurcation parameter from
the bifurcation point. Unfortunately, this is not possible, as we have no
information on the parameter range that may be spanned by the real system.
Variance in particular starts to decrease quite far from the DO onset
(approximately 700 years). This may indicate a weakness of the results (even
if, as discussed above, the increase is still statistically significant), but
it has been shown that variance may not be the most robust indicator of EWS,
and autocorrelation may be given more emphasis (see Dakos et al., 2012). The
observation of a decrease in variance just before the DO onset, after a longer
period of increase, is an important lesson for the practical application of
these techniques in the detection of approaching bifurcation points.
Our findings are in contrast with the previous suggestions of Ditlevsen and
Johnsen (2010). Ditlevsen and Johnsen (2010) considered all the events
together, considering neither the ensemble mean nor the individual
events. This approach may prevent one from uncovering a weak mean signal, as well
as the clear EWSs visible before some of the events. We have seen in particular
that in the case of stochastic resonance the presence of EWS is not guaranteed
before each abrupt transition. Furthermore, strong noise may hide the increase
in the indicators for some of the events even if a bifurcation is actually
crossed (Kuehn, 2011). We do not think that our different results may be
ascribed to a different choice in the parameters of the analysis, as several
different parameter sets have been used, and the results are robust.
Still, an important caveat must be mentioned, relevant to most of the works
dealing with EWSs. The signal detected as an EWS is described by the simple
models discussed, but other models may give a similar signal as well; here we
try to distinguish only among the models that have been proposed to describe
DOs. Furthermore, the effect of other types of noise other than white and
additive are not studied. This is the most common approach in the field (one
of the few exceptions, to our knowledge, is Bathiany et al., 2012). A closer
investigation of the effects of using red and multiplicative noise is an
interesting topic, but outside the scope of this paper.
From a broader perspective, the motivation for classifying DOs into three
groups222Stochastic resonance is a fourth group, which can be considered as a
hybrid between noise induced transitions and bifurcation induced transitions.
may seem unclear, but this is instead a very important point. If DOs are
associated with the presence (or the crossing) of a bifurcation point, as our
results suggest, this would mean that the climate can potentially show
hysteresis behaviour, i.e. abrupt transitions in opposite directions take
place under different forcing conditions, and abrupt transitions are thus to
some extent irreversible, with obvious implications e.g. for global warming.
## IV Summary
In this work, we performed two sets of analyses on a well-known paleoclimatic
record, the $\delta^{18}O$ isotope data from the NGRIP ice core (North
Greenland Ice Core Project members, 2004; Svensson et al., 2008), believed to
be representative of temperature over the North Atlantic sector. We assessed
bimodality of the system using a phase–space embedding technique, which
guarantees that the bimodality is not an artifact of the projection of the complex
dynamics of the climate system on a scalar time series. We confirm with this
technique the claim of Livina et al. (2011, 2010), that a switch from bimodal
to unimodal behaviour is observed in the time series around year 22,000 b2k.
Secondly, we analysed the statistical properties of the fluctuations in the
paleoclimatic time series, before the onset of DOs. In particular, we focused
on the average properties of the events considered as an ensemble instead of
each separately. Despite the high level of noise in the data, EWSs can be
detected in the ensemble average, consistently with the hypothesis that DOs
take place preferentially close to a bifurcation point in the climate system.
In particular, our findings seem to be particularly close to the stochastic
resonance scenario proposed by Alley and Anandakrishnan (2001); Ganopolski and
Rahmstorf (2001, 2002). Other prototype models that have been proposed
(Timmermann et al., 2003; Sakai and Peltier, 1999; Ditlevsen and Johnsen,
2010) are less consistent with the data, as their mechanisms do not involve
any transition from which EWSs can, at least in general, be expected.
Ditlevsen and Johnsen (2010) came to opposite conclusions, but they did not
consider the average behaviour of the ensemble, while we think that this may
be a step of fundamental importance.
A disclaimer has to be made: our conclusions hold for the ensemble of the
events, and are a probabilistic statement: we can only claim that a scenario
that does not include an approach to, or the crossing of, a bifurcation point
with EWS is unlikely. The trends of Table 1 are between two and three standard
deviations apart from zero. This means that the probability that the real
trend in each indicator is zero is low but non–zero, on the order of $1\%$,
assuming a normal distribution. Given the rather complex techniques used, we
can not rule out the possibility that the error estimates given in the Table
may be underestimated. Apart from the general limitations of the techniques
used, we also want to remind that we considered only models already discussed
in the literature in the context of DOs. Other models, giving similar signals,
may be developed, but given the inverse nature of the problem faced here, the
models considered must be limited in number.
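The quoted probability can be checked directly: under a normal distribution, the two-sided tail probability at $k$ standard deviations is $\mathrm{erfc}(k/\sqrt{2})$, which runs from roughly $5\%$ at $2\sigma$ to $0.3\%$ at $3\sigma$, i.e. on the order of $1\%$ near $2.5\sigma$.

```python
from math import erfc, sqrt

# Two-sided normal tail probability at k standard deviations:
# p = erfc(k / sqrt(2)).
for k in (2.0, 2.5, 3.0):
    print(k, erfc(k / sqrt(2)))
```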
A connection with the meridional overturning circulation instability remains
in the domain of speculation, but is a plausible hypothesis considering the
large body of evidence linking DO signals to rapid changes in the meridional
overturning circulation. Further investigation is needed to confirm this
hypothesis and, more importantly, to address the fundamental question that
remains open: if DOs are due to an external forcing, what is the nature of
this forcing? Furthermore, the relation between the variability recorded in
the $\delta^{18}O$ time series and the AMOC variability is still uncertain,
and the link between the two may be far from direct, involving e.g.
atmospheric or sea ice variability.
Looking beyond the results of the particular time–series used here, we suggest
that EWSs may provide a useful guide for discriminating between different
models meant to describe a climate signal. When data are scarce, the analysis
of the average properties of the fluctuations can give important hints on the
nature of the process which produced them.
###### Acknowledgements.
The authors acknowledge Peter Ditlevsen (University of Copenhagen) for
discussion and providing the data, Anders Svensson (University of Copenhagen)
for making the data available, Henk Dijkstra (University of Utrecht) for
suggesting the use of the Charney–DeVore model as a prototype for homoclinic orbits
and for pointing out some inconsistencies in an earlier version of the
manuscript, Timothy Lenton (University of Exeter) for valuable comments.
A.A.C. acknowledges the Netherlands Organization for Scientific Research for
funding in the ALW program. V.L. acknowledges NERC and AXA Research Fund for
funding. The authors would also like to thank the reviewers for their comments
and precious suggestions.
## References
* Abarbanel (1996) Abarbanel, H. D. I.: Analysis of Observed Chaotic Data, Springer-Verlag, New York, 1996.
* Abshagen and Timmermann (2004) Abshagen, J. and Timmermann, A.: An Organizing Center for Thermohaline Excitability, Journal of Physical Oceanography, 34, 2756–2760, 2004.
* Alley and Anandakrishnan (2001) Alley, R. B. and Anandakrishnan, S.: Stochastic resonance in the North Atlantic, Paleoceanography, 16, 190–198, 2001.
* Arfken and Weber (2005) Arfken, G. B. and Weber, H. J.: Mathematical Methods for Physicists, Elsevier Science, 2005.
* Bathiany et al. (2012) Bathiany, S., Claussen, M., and Fraedrich, K.: Detecting hotspots of atmosphere–vegetation interaction via slowing down; Part 1: A stochastic approach, Earth System Dynamics Discussions, 3, 643–682, 2012.
* Broer and Takens (2011) Broer, H. and Takens, F.: Dynamical Systems and Chaos, Springer Science+Business Media, New York, 2011.
* Charney and DeVore (1979) Charney, J. G. and DeVore, J. G.: Multiple Flow Equilibria in the Atmosphere and Blocking, Journal of the Atmospheric Sciences, 36, 1205, 1979.
* Crommelin et al. (2004) Crommelin, D. T., Opsteegh, J. D., and Verhulst, F.: A mechanism for atmospheric regime behavior, Journal of the Atmospheric Sciences, 61, 1406, 2004.
* Dakos et al. (2012) Dakos, V., van Nes, E. H., D’Odorico, P., and Scheffer, M.: Robustness of variance and autocorrelation as indicators of critical slowing down, Ecology, 93, 264–271, 2012.
* Dakos et al. (2008) Dakos, V., et al.: Slowing down as an early warning signal for abrupt climate change, Proceedings of the National Academy of Sciences of the United States of America, 105, 14308–14312, 2008.
* Dansgaard et al. (1993) Dansgaard, W., et al.: Evidence for general instability of past climate from a 250-kyr ice-core record, Nature, 364, 218–220, 1993.
* de Verdière et al. (2006) de Verdière, A. C., Jelloul, M. B., and Sévellec, F.: Bifurcation structure of thermohaline millennial oscillations, Journal of Climate, 19, 5777, 2006.
* Ditlevsen and Johnsen (2010) Ditlevsen, P. D. and Johnsen, S. J.: Tipping points: Early warning and wishful thinking, Geophysical Research Letters, 37, 2–5, 2010.
* Efron (1982) Efron, B.: The Jackknife, the Bootstrap and Other Resampling Plans, Society for Industrial and Applied Mathematics, Philadelphia, 1982.
* Ganopolski and Rahmstorf (2001) Ganopolski, A. and Rahmstorf, S.: Rapid changes of glacial climate simulated in a coupled climate model, Nature, 409, 153–158, 2001.
* Ganopolski and Rahmstorf (2002) Ganopolski, A. and Rahmstorf, S.: Abrupt Glacial Climate Changes due to Stochastic Resonance, Physical Review Letters, 88, 3–6, 2002.
* Ghil et al. (2002) Ghil, M., Allen, M. R., Dettinger, M. D., Ide, K., Kondrashov, D., Mann, M. E., Robertson, A. W., Saunders, A., Tian, Y., Varadi, F., and Yiou, P.: Advanced spectral methods for climatic time series, Reviews of Geophysics, 40, 1003, 2002.
* Guckenheimer and Holmes (1983) Guckenheimer, J. and Holmes, P.: Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields, Springer-Verlag, New York, 1983.
* Guckenheimer and Holmes (1988) Guckenheimer, J. and Holmes, P.: Structurally stable heteroclinic cycles, Mathematical Proceedings of the Cambridge Philosophical Society, 103, 189–192, 1988.
* Held and Kleinen (2004) Held, H. and Kleinen, T.: Detection of climate system bifurcations by degenerate fingerprinting, Geophysical Research Letters, 31, L23207, 2004.
* Heneghan and McDarby (2000) Heneghan, C. and McDarby, G.: Establishing the relation between detrended fluctuation analysis and power spectral density analysis for stochastic processes, Physical Review E, 62, 6103–6110, 2000.
* Higham (2001) Higham, D. J.: An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations, SIAM Review, 43, 525–546, 2001.
* Keigwin et al. (1994) Keigwin, L. D., Curry, W. B., Lehman, S. J., and Johnsen, S.: The role of the deep ocean in North Atlantic climate change between 70 and 130 kyr ago, Nature, 371, 323–326, 1994.
* Kennel et al. (1992) Kennel, M. B., Brown, R., and Abarbanel, H. D. I.: Determining embedding dimension for phase-space reconstruction using a geometrical construction, Physical Review A, 45, 3403–3411, 1992.
* Kuehn (2011) Kuehn, C.: A mathematical framework for critical transitions: Bifurcations, fast-slow systems and stochastic dynamics, Physica D, 240, 1020–1035, 2011.
* Livina et al. (2011) Livina, V., Kwasniok, F., Lohmann, G., Kantelhardt, J., and Lenton, T.: Changing climate states and stability: from Pliocene to present, Climate Dynamics, 37, 2437–2453, 2011.
* Livina and Lenton (2007) Livina, V. N. and Lenton, T. M.: A modified method for detecting incipient bifurcations in a dynamical system, Geophysical Research Letters, 34, 1–5, 2007.
* Livina et al. (2010) Livina, V. N., Kwasniok, F., and Lenton, T. M.: Potential analysis reveals changing number of climate states during the last 60 kyr, Climate of the Past, 6, 77–82, 2010.
* EPICA Community Members (2004) EPICA Community Members: Eight glacial cycles from an Antarctic ice core, Nature, 429, 623–628, 2004.
* Meunier and Verga (1988) Meunier, C. and Verga, A. D.: Noise and bifurcations, Journal of Statistical Physics, 50, 345–375, 1988.
* North Greenland Ice Core Project members (2004) North Greenland Ice Core Project members: High-resolution record of Northern Hemisphere climate extending into the last interglacial period, Nature, 431, 147–151, 2004.
* Paillard and Labeyrie (1994) Paillard, D. and Labeyrie, L.: Role of the thermohaline circulation in the abrupt warming after Heinrich events, Nature, 372, 162–164, 1994.
* Peng et al. (1994) Peng, C. K., et al.: Mosaic organization of DNA nucleotides, Physical Review E, 49, 1685–1689, 1994.
* Rahmstorf et al. (2005) Rahmstorf, S., et al.: Thermohaline circulation hysteresis: A model intercomparison, Geophysical Research Letters, 32, L23605, 2005.
* Sakai and Peltier (1999) Sakai, K. and Peltier, W. R.: A Dynamical Systems Model of the Dansgaard-Oeschger Oscillation and the Origin of the Bond Cycle, Journal of Climate, 12, 2238–2255, 1999.
* Sarnthein et al. (1994) Sarnthein, M., et al.: Changes in east Atlantic deepwater circulation over the last 30,000 years: Eight time slice reconstructions, Paleoceanography, 9, 209–267, 1994.
* Scheffer et al. (2009) Scheffer, M., et al.: Early-warning signals for critical transitions, Nature, 461, 53–59, 2009.
* Stephenson et al. (2004) Stephenson, D. B., Hannachi, A., and O’Neill, A.: On the existence of multiple climate regimes, Quarterly Journal of the Royal Meteorological Society, 130, 583–605, 2004.
* Stone and Armbruster (1999) Stone, E. and Armbruster, D.: Noise and O(1) amplitude effects on heteroclinic cycles, Chaos, 9, 499, 1999.
* Svensson et al. (2008) Svensson, A., et al.: A 60 000 year Greenland stratigraphic ice core chronology, Climate of the Past, 4, 47–57, 2008.
* Timmermann et al. (2003) Timmermann, A., Gildor, H., Schulz, M., and Tziperman, E.: Coherent Resonant Millennial-Scale Climate Oscillations Triggered by Massive Meltwater Pulses, Journal of Climate, 16, 2569–2585, 2003.
* Welander (1982) Welander, P.: A simple heat-salt oscillator, Dynamics of Atmospheres and Oceans, 6, 233–242, 1982.
* Wunsch (2003) Wunsch, C.: Greenland-Antarctic phase relations and millennial time-scale climate fluctuations in the Greenland ice-cores, Quaternary Science Reviews, 22, 1631–1646, 2003.
Quantity | Linear trend | Standard deviation
---|---|---
$c$ | $1.2\cdot 10^{-4}$ | $3\cdot 10^{-5}$
$\sigma^{2}$ | $5.1\cdot 10^{-4}$ | $2\cdot 10^{-4}$
$\alpha$ | $9.1\cdot 10^{-5}$ | $3\cdot 10^{-5}$
$\alpha$ (quadr.) | $1.2\cdot 10^{-4}$ | $4\cdot 10^{-5}$
Table 1: Linear trends of EWS. Results of the linear fit of the trends in the
ensemble mean for autocorrelation ($c$), variance ($\sigma^{2}$) and DFA
exponent ($\alpha$) in the time interval from -1800 to -250 years before the
DO onset. The last line refers to the result of the fit from the quadratic DFA
(see text). The error values are computed from a bootstrapped ensemble of
50,000 members.
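The bootstrap error estimate mentioned in the caption can be sketched as follows. This is an illustrative implementation in the spirit of Efron (1982), not the authors' code: the synthetic series, the fitted model and the ensemble size are our own assumptions (the demo uses a small ensemble instead of 50,000 members for speed).

```python
import numpy as np

def bootstrap_trend_std(t, y, n_boot=50_000, seed=0):
    """Standard deviation of a least-squares linear trend, estimated
    by resampling (t, y) pairs with replacement (bootstrap)."""
    rng = np.random.default_rng(seed)
    n = len(t)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)     # resample indices with replacement
        slopes[b] = np.polyfit(t[idx], y[idx], 1)[0]
    return slopes.std()

# Synthetic indicator series: a weak linear trend plus noise.
rng = np.random.default_rng(1)
t = np.arange(100.0)
y = 1e-4 * t + 0.01 * rng.standard_normal(100)

print(np.polyfit(t, y, 1)[0], bootstrap_trend_std(t, y, n_boot=2000))
```
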
Figure 1: Phase space reconstruction of linearly detrended data, before and
after year 22,000 b2k. Blue lines denote kernel estimators of PDFs, with
error margins shaded (see text). The left panels (a, b, c and d) and right
panels (e, f, g and h) show the phase space reconstruction for the portions
of data before and after year 22,000 b2k, respectively. PDFs for the four
coordinates of the phase space reconstruction are plotted. The origin of the
space is the barycentre of the data, and 4-dimensional polar coordinates are
used. Panels a and e show the PDF of the square of the radial coordinate
($\rho^{2}$). Panels b–c and f–g show the PDFs of the two “latitude-like”
coordinates. Panels d and h show the PDF of the “longitude-like” coordinate.
The black lines are the theoretical PDFs for a unimodal multinormal sample in
the polar reference system: for the square of the radius, a $\chi^{2}$
distribution with number of degrees of freedom equal to the number of
dimensions; for the angular coordinates, PDFs proportional to
$\mathrm{sin}^{2}(\phi_{1})$ and $\mathrm{sin}(\phi_{2})$ respectively; and a
uniform PDF in $\phi_{3}$.

Figure 2: Ice core data analysis. From top to bottom, the plots show the
original data, the lag-1 correlation coefficient, the variance and the linear
DFA exponent. The time scale follows the convention of years before 2,000
A.D. Dashed grey lines mark the DO onset dating as given in Svensson et al.
(2008).

Figure 3: Sketch of prototype DO models: bifurcation points and homoclinic
orbit. (a) Bifurcation points hypothesis. The system crosses the two
bifurcation points (green triangles) periodically, due to an external
forcing. The DOs give EWSs before the abrupt transition occurs. Overshooting
is possible when the overturning circulation recovers. (b) Homoclinic orbit
model (Timmermann et al., 2003). Here, DOs are due to the motion of the
climate system close to a homoclinic orbit (grey) that connects to a periodic
oscillation arising from a Hopf bifurcation (green circle) (Timmermann et
al., 2003; Abshagen and Timmermann, 2004; Crommelin et al., 2004).
Bifurcation points are present in the climate system, but they do not
determine the abrupt transitions. The climate is most of the time in stadial
conditions. A possible path followed during the oscillation is shown in black
for each case.

Figure 4: Bifurcation point mechanism: double well potential under changing
external forcing. An external parameter is slowly changed, forcing the system
to undergo an abrupt transition that shows clear EWSs. The top panel (blue
line) shows the time series for this system. Below, the lag-1 autocorrelation
($c$, green circles) and variance ($\sigma^{2}$, black circles) are shown.
The bottom panel shows the DFA exponent $\alpha$ for the linear (red circles)
and quadratic (green triangles) detrending cases. The grey vertical line
marks the abrupt transition onset; the red line marks a critical $c$ or
$\alpha$ value.

Figure 5: As in Figure 4, but for a noise-induced transition. The external
forcing is kept constant, but the noise can still trigger abrupt transitions
between the two available states. No trend in the computed statistical
properties is detected, thus no EWS is observed.

Figure 6: Stochastic resonance. As in Figure 4, but for the case of
stochastic resonance. The forcing oscillates, and transitions take place
preferentially when the system is closer to the bifurcation point. The
deterministic part of the forcing (see text) is shown as a grey line in the
top panel.

Figure 7: Charney and DeVore model. As in Figure 4, for the Charney and
DeVore model. In this case, the abrupt transitions are not connected with a
changing external forcing, but are an autonomous mode of the equations.
Dimension number 5 ($x_{5}$) of the 6-dimensional ODE system is considered.
It is evident that, for $\alpha$ (bottom panel), a detrending of order higher
than one is needed to remove the effect of non-stationarities in the time
series. After proper detrending, no EWSs are observed in $\alpha$. Red
circles (green triangles) refer to the linear (quadratic) DFA.

Figure 8: Ensemble analysis. (a) Time series of DOs number 2 to 16 from the
data of North Greenland Ice Core Project members (2004); Svensson et al.
(2008). The time series are aligned so that the DOs start at year -100 (grey
shading after event onset). Colours mark different events, from number 2
(blue) to number 16 (red). The time window used in the computations is drawn
in black. Below, the lag-1 autocorrelation (b), variance (c) and linear
detrended fluctuation analysis exponent (d) are shown. The blue line is the
ensemble average, and the shaded area marks the standard deviation. The
dashed red line shows a least-squares regression of the data over the marked
time range.
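The bifurcation-point mechanism sketched in Figure 4 can be reproduced with a minimal stochastic simulation: a particle in a double-well potential, integrated with the Euler–Maruyama scheme (cf. Higham, 2001), while the tilt of the potential is ramped quasi-statically through a saddle-node bifurcation. All parameter values below are our own illustrative choices, not the ones used to produce the figure.

```python
import numpy as np

def drift(x, a):
    # Double-well potential V(x) = x**4/4 - x**2/2 + a*x; drift = -V'(x).
    return -(x**3 - x + a)

rng = np.random.default_rng(2)
dt, n, sigma = 0.01, 200_000, 0.08
a = np.linspace(0.0, 0.5, n)   # slow ramp through the fold at a = 2/(3*sqrt(3))
x = np.empty(n)
x[0] = 1.0                     # start in the well that will disappear

# Euler-Maruyama integration of dx = -V'(x) dt + sigma dW.
for t in range(n - 1):
    x[t + 1] = (x[t] + drift(x[t], a[t]) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())

# The abrupt transition occurs near a ~ 0.385, where the right well
# vanishes; EWSs (rising autocorrelation and variance) would be
# computed in sliding windows over x before this point.
print(x[0], x[-1])
```
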
# Trihyperkähler reduction
and instanton bundles on $\mathbb{C}{\mathbb{P}^{3}}$
Marcos Jardim
IMECC - UNICAMP
Departamento de Matemática
Rua Sérgio Buarque de Holanda, 651
13083-859 Campinas, SP, Brazil
Misha Verbitsky
Laboratory of Algebraic Geometry,
Faculty of Mathematics, NRU HSE,
7 Vavilova Str. Moscow, Russia, and
Institute for the Physics and Mathematics of the Universe,
University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, 277-8583, Japan
###### Abstract
A trisymplectic structure on a complex $2n$-manifold is a triple of
holomorphic symplectic forms such that any linear combination of these forms
has rank $2n$, $n$ or 0. We show that a trisymplectic manifold is equipped
with a holomorphic 3-web and the Chern connection of this 3-web is
holomorphic, torsion-free, and preserves the three symplectic forms. We
construct a trisymplectic structure on the moduli of regular rational curves
in the twistor space of a hyperkähler manifold, and define a trisymplectic
reduction of a trisymplectic manifold, which is a complexified form of a
hyperkähler reduction. We prove that the trisymplectic reduction in the space
of regular rational curves on the twistor space of a hyperkähler manifold $M$
is compatible with the hyperkähler reduction on $M$.
As an application of these geometric ideas, we consider the ADHM construction
of instantons and show that the moduli space of rank $r$, charge $c$ framed
instanton bundles on $\mathbb{C}\mathbb{P}^{3}$ is a smooth, connected,
trisymplectic manifold of complex dimension $4rc$. In particular, it follows
that the moduli space of rank $2$, charge $c$ instanton bundles on
$\mathbb{C}\mathbb{P}^{3}$ is a smooth complex manifold of dimension $8c-3$,
thus settling part of a 30-year-old conjecture.
###### Contents
1 Introduction
  1.1 An overview
  1.2 3-webs, $SL(2)$-webs and trisymplectic structures
  1.3 Trisymplectic reduction
  1.4 Trihyperkähler reduction
  1.5 Framed instanton bundles on $\mathbb{C}{\mathbb{P}^{3}}$
2 $SL(2)$-webs on complex manifolds
  2.1 $SL(2)$-webs and twistor sections
  2.2 Hyperkähler manifolds
  2.3 An example: rational curves on a twistor space
3 Trisymplectic structures on vector spaces
  3.1 Trisymplectic structures and $\operatorname{Mat}(2)$-action
  3.2 Trisymplectic structures and invariant quadratic forms on vector spaces with $\operatorname{Mat}(2)$-action
4 $SL(2)$-webs and trisymplectic structures
  4.1 Trisymplectic structures on manifolds
  4.2 Chern connection on $SL(2)$-webs and trisymplectic structures
  4.3 Trisymplectic reduction
5 Trihyperkähler reduction
  5.1 Hyperkähler reduction
  5.2 Trisymplectic reduction on the space of twistor sections
  5.3 Trihyperkähler reduction on the space of twistor sections
6 Moment map on twistor sections
7 Trisymplectic reduction and hyperkähler reduction
  7.1 The tautological map $\tau:\;{\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G{\>\longrightarrow\>}{\operatorname{Sec}}(M{/\\!\\!/\\!\\!/}G)$
  7.2 Trihyperkähler reduction and homogeneous bundles on $\mathbb{C}{\mathbb{P}^{1}}$
8 Case study: moduli spaces of instantons
  8.1 Moduli space of framed instantons on $\mathbb{R}^{4}$
  8.2 Moduli space of framed instanton bundles on $\mathbb{C}{\mathbb{P}^{3}}$
  8.3 Moduli space of rank $\mathbf{2}$ instanton bundles on $\mathbb{C}{\mathbb{P}^{3}}$
Acknowledgments. The first named author is partially supported by the CNPq
grant number 305464/2007-8 and the FAPESP grant number 2005/04558-0. He thanks
Amar Henni and Renato Vidal Martins for several discussions related to
instanton bundles. The second named author was partially supported by the
FAPESP grant number 2009/12576-9, RFBR grant 10-01-93113-NCNIL-a, RFBR grant
09-01-00242-a, AG Laboratory SU-HSE, RF government grant, ag. 11.G34.31.0023,
and Science Foundation of the SU-HSE award No. 10-09-0015. We are grateful to
Alexander Tikhomirov for his insight and comments.
## 1 Introduction
### 1.1 An overview
In our previous paper [JV], we introduced the notion of holomorphic
$SL(2)$-webs, and argued that manifolds carrying such structures may be
regarded as complexifications of hypercomplex manifolds. We showed that a
manifold $M$ carrying such a structure has a canonical holomorphic
connection, called the _Chern connection_ , which is torsion-free and has
holonomy in $GL(n,\mathbb{C})$, where $\dim_{\mathbb{C}}M=2n$.
The main example of a holomorphic $SL(2)$-web is given by twistor theory:
for a hyperkähler manifold $M$, the space of regular holomorphic sections of
the twistor fibration
$\pi:\operatorname{Tw}(M)\to\mathbb{C}{\mathbb{P}^{1}}$ is equipped with a
holomorphic $SL(2)$-web. We then exploited this fact and the
Atiyah–Drinfeld–Hitchin–Manin (ADHM) construction of instantons to show that
the moduli space of framed instanton bundles on $\mathbb{C}{\mathbb{P}^{3}}$
carries a holomorphic $SL(2)$-web.
The present paper is a sequel to [JV]. Here, we expand on both aspects of our
previous paper. On one hand, we describe a new geometric structure on complex
manifolds, called a trisymplectic structure, which is an important special
case of a holomorphic $SL(2)$-web. For trisymplectic structures, we define a
reduction procedure, allowing us to define a trisymplectic quotient. Applying
these new ideas to the ADHM construction of instantons allows us to give a
better description of the moduli space of framed instanton bundles on
$\mathbb{C}{\mathbb{P}^{3}}$, and to prove its smoothness and connectedness.
This allows us to solve part of a 30-year-old conjecture regarding the moduli
space of rank $2$ instanton bundles on $\mathbb{C}{\mathbb{P}^{3}}$.
To be more precise, we begin by introducing the notion of _trisymplectic
structures on complex manifolds_ (see Definition 4.1 below), and show that
trisymplectic manifolds carry an induced holomorphic $SL(2)$-web. Our first
main goal is to introduce the notion of a _trisymplectic quotient_ of a
trisymplectic manifold, which would enable us to construct new examples of
trisymplectic manifolds out of known ones, e.g. flat ones.
Next, we introduce the notion of the _trihyperkähler quotient_
${\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G$ of a hyperkähler
manifold $M$ equipped with an action of a Lie group $G$, obtained by taking
the trisymplectic quotient of the space ${\operatorname{Sec}}_{0}(M)$ of
regular holomorphic sections of the twistor fibration of $M$.
Our first main result (Theorem 5.11) establishes the compatibility between
this procedure and the hyperkähler quotient, which we denote by
$M{/\\!\\!/\\!\\!/}G$. We
show that, under some reasonable conditions, the trihyperkähler reduction
${\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G$ admits an open
embedding to the space ${\operatorname{Sec}}_{0}(M{/\\!\\!/\\!\\!/}G)$ of
regular sections of the twistor fibration of the hyperkähler quotient
$M{/\\!\\!/\\!\\!/}G$. This shows, in particular, that (similarly to the
smoothness of the hyperkähler reduction) the trihyperkähler reduction of $M$
is a smooth trisymplectic manifold.
Our second main result provides an affirmative answer to a long-standing
conjecture regarding the smoothness and dimension of the moduli space of rank
$2$ instanton bundles on $\mathbb{C}{\mathbb{P}^{3}}$, a.k.a. mathematical
instanton bundles (see Section 8 for precise definitions). More precisely, the
moduli space of mathematical instanton bundles with second Chern class (or
_charge_) $c$ is conjectured to be an irreducible, nonsingular quasi-
projective variety of dimension $8c-3$ (c.f. [CTT, Conjecture 1.2]). The truth
of the conjecture for $c\leq 5$ was established by various authors in the past
four decades: Barth settled the $c=1$ case in 1977 [B1]; Hartshorne
established the case $c=2$ in 1978 [H]; Ellingsrud and Stromme settled the
$c=3$ case in 1981 [ES]; the irreducibility of the $c=4$ case was proved by
Barth in 1981 [B2], while the smoothness is due to Le Potier [LeP] (1983); and
the $c=5$ case was settled by Coanda–Tikhomirov–Trautmann (2003). More
recently, Tikhomirov has shown in [T] that irreducibility holds for odd
values of $c$.
In the present paper, we apply the geometric techniques established above to
the ADHM construction of instantons, and show that the moduli space of rank
$r$, charge $c$ _framed_ instanton bundles on $\mathbb{C}\mathbb{P}^{3}$ is a
smooth, trisymplectic manifold of complex dimension $4rc$ (see Theorem 8.4
below). It then follows easily (see Section 8.3 for the details) that the
moduli space of mathematical instanton bundles of charge $c$ is a smooth
complex manifold of dimension $8c-3$, thus settling the smoothness part of the
conjecture for all values of $c$.
### 1.2 3-webs, $SL(2)$-webs and trisymplectic structures
Let $M$ be a real analytic manifold equipped with an atlas
$\\{U_{i}\hookrightarrow\mathbb{R}^{n}\\}$ and real analytic transition
functions $\psi_{ij}$. A _complexification_ of $M$ is a germ of a complex
manifold, covered by open sets $\\{V_{i}\hookrightarrow\mathbb{C}^{n}\\}$
indexed by the same set as $\\{U_{i}\\}$, and with transition functions
$\psi_{ij}^{\mathbb{C}}$ obtained by analytic extension of the $\psi_{ij}$
into the complex domain.
Complexification can also be applied to a complex manifold, by first
considering it as a real analytic manifold. As shown by Kaledin and Feix (see
[F1], [K] and the argument in [JV, Section 1]), a complexification of a real
analytic Kähler manifold naturally gives a germ of a hyperkähler manifold. In
the paper [JV] we took the next step, looking at a complexification of a
hyperkähler manifold. We showed that such a complexification is equipped with
an interesting geometric structure, which we called a holomorphic
$SL(2)$-web.
A holomorphic $SL(2)$-web on a complex manifold $M$ is a collection of
involutive holomorphic sub-bundles $S_{t}\subset TM$, $\operatorname{\sf
rk}S_{t}=\frac{1}{2}\dim M$, parametrized by $t\in\mathbb{C}{\mathbb{P}^{1}}$,
and satisfying the following two conditions. First, $S_{t}\cap
S_{t^{\prime}}=0$ for $t\neq t^{\prime}$, and second, the projector operators
$\Pi_{t,t^{\prime}}$ of $TM$ onto $S_{t^{\prime}}$ along $S_{t}$ generate an
algebra isomorphic to the matrix algebra $\operatorname{Mat}(2)$ (Definition
2.1).
This structure is a special case of the notion of a 3-web, developed in the
1930s by Blaschke and Chern. Let $M$ be an even-dimensional manifold, and
$S_{1},S_{2},S_{3}$ a triple of pairwise non-intersecting involutive sub-
bundles of $TM$ of rank $\frac{1}{2}\dim M$. Then $S_{1},S_{2},S_{3}$ is
called a 3-web. Any 3-web on $M$ gives rise to a natural connection on $TM$,
called the Chern connection: the unique connection which preserves each
$S_{i}$ and whose torsion vanishes on $S_{1}\otimes S_{2}$; such a connection
exists, and is unique.
Let $a,b,c\in\mathbb{C}{\mathbb{P}^{1}}$ be three distinct points. For any
$SL(2)$-web, $S_{a},S_{b},S_{c}$ is clearly a 3-web. In [JV] we proved that
the corresponding Chern connection is torsion-free and holomorphic; moreover,
it is independent of the choice of $a,b,c\in\mathbb{C}{\mathbb{P}^{1}}$. We
also characterized such connections in terms of holonomy, and characterized an
$SL(2)$-web in terms of a connection with prescribed holonomy.
Furthermore, we constructed an $SL(2)$-web structure on a component of the
moduli space of rational curves on a twistor space of a hyperkähler manifold.
By interpreting the moduli space of framed instanton bundles on
$\mathbb{C}{\mathbb{P}^{3}}$ in terms of rational curves on the twistor space
of the moduli space of framed bundles on $\mathbb{C}{\mathbb{P}^{2}}$, we
obtained an $SL(2)$-web on the smooth part of the moduli space of framed
instanton bundles on $\mathbb{C}{\mathbb{P}^{3}}$.
In the present paper we explore this notion further, studying those
$SL(2)$-webs which appear as moduli spaces of rational lines in the twistor
space of a hyperkähler manifold.
It turns out that, in addition to the $SL(2)$-web structure, this space is
equipped with a so-called trisymplectic structure (see also Definition 4.1).
###### Definition 1.1.
A trisymplectic structure on a complex manifold $M$ is a 3-dimensional
subspace of $\Omega^{2}M$ generated by a triple of holomorphic symplectic
forms $\Omega_{1},\Omega_{2},\Omega_{3}$, such that any linear combination of
$\Omega_{1},\Omega_{2},\Omega_{3}$ has rank $n=\dim M$, $\frac{1}{2}n$, or 0.
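As a concrete flat example (our own illustration, not taken from the paper): on $\mathbb{C}^{4}$ take $\Omega_{1}=dz_{1}\wedge dz_{2}+dz_{3}\wedge dz_{4}$, $\Omega_{2}=dz_{1}\wedge dz_{3}-dz_{2}\wedge dz_{4}$, $\Omega_{3}=dz_{1}\wedge dz_{4}+dz_{2}\wedge dz_{3}$. The Pfaffian of $a\Omega_{1}+b\Omega_{2}+c\Omega_{3}$ is $a^{2}+b^{2}+c^{2}$, so a combination is degenerate exactly on a smooth conic in $\mathbb{C}{\mathbb{P}^{2}}$ (a rational curve $\simeq\mathbb{C}{\mathbb{P}^{1}}$), where the rank drops from 4 to 2. A numerical sanity check of this trichotomy:

```python
import numpy as np

def form(a, b, c):
    """Matrix of a*Omega1 + b*Omega2 + c*Omega3 in the basis dz1..dz4,
    where Omega1 = dz1^dz2 + dz3^dz4, Omega2 = dz1^dz3 - dz2^dz4,
    Omega3 = dz1^dz4 + dz2^dz3. Its Pfaffian is a^2 + b^2 + c^2."""
    M = np.zeros((4, 4), dtype=complex)
    M[0, 1] = M[2, 3] = a
    M[0, 2], M[1, 3] = b, -b
    M[0, 3] = M[1, 2] = c
    return M - M.T  # antisymmetrize

# Generic combination: Pfaffian nonzero, so the 2-form has full rank 4.
assert np.linalg.matrix_rank(form(1, 2, 3)) == 4
# On the conic a^2 + b^2 + c^2 = 0 the rank drops to 2 ...
assert np.linalg.matrix_rank(form(1, 1j, 0)) == 2
assert np.linalg.matrix_rank(form(0, 2, 2j)) == 2
# ... and the zero combination has rank 0: exactly the ranks n, n/2, 0
# (with n = 4) required by Definition 1.1.
assert np.linalg.matrix_rank(form(0, 0, 0)) == 0
```
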
In differential geometry, similar structures, known as hypersymplectic
structures, were studied by Arnol′d, Atiyah, Hitchin and others (see e.g.
[Ar]). Hypersymplectic manifolds are similar to hyperkähler ones, but instead
of the quaternions one deals with the algebra
$\operatorname{Mat}(2,\mathbb{R})$ of split quaternions. As one passes to
complex manifolds and complex-valued holomorphic symplectic forms, the
distinction between quaternions and split quaternions becomes irrelevant.
Therefore, trisymplectic structures serve as complexifications of both
hypersymplectic and hyperkähler structures.
Consider a trisymplectic manifold $(M,\Omega_{1},\Omega_{2},\Omega_{3})$. In
Theorem 4.4 we show that the set of degenerate linear combinations of
$\Omega_{i}$ is parametrized by $\mathbb{C}{\mathbb{P}^{1}}$ (up to a
constant), and the null-spaces of these 2-forms form an $SL(2)$-web. We also
prove that the Chern connection associated with this $SL(2)$-web preserves the
2-forms $\Omega_{i}$ (Theorem 4.6). This allows one to characterize
trisymplectic manifolds in terms of holonomy, similarly to what is done in
[JV] for $SL(2)$-webs.
###### Claim 1.2.
Let $M$ be a complex manifold. Then there is a bijective correspondence
between trisymplectic structures on $M$ and holomorphic connections with
holonomy in $G=Sp(n,\mathbb{C})$, acting on
$\mathbb{C}^{2n}\otimes_{\mathbb{C}}\mathbb{C}^{2}$ trivially on the second
tensor factor and in the standard way on $\mathbb{C}^{2n}$.
Proof: Follows immediately from Theorem 4.6.
### 1.3 Trisymplectic reduction
In complex geometry, the symplectic reduction is understood as a way of
constructing the GIT quotient geometrically. Consider a Kähler manifold $M$
equipped with an action of a compact Lie group $G$. Assume that $G$ acts by
holomorphic isometries, and admits an equivariant moment map
$M\stackrel{{\scriptstyle\mu}}{{{\>\longrightarrow\>}}}{\mathfrak{g}}^{*}$,
where ${\mathfrak{g}}^{*}$ is the dual of the Lie algebra of $G$. The
symplectic reduction $M{/\\!\\!/}G$ is the quotient of $\mu^{-1}(0)$ by $G$.
This quotient is a complex variety, Kähler outside of its singular points.
When $M$ is projective, one can identify $M{/\\!\\!/}G$ with the GIT quotient
of $M$ by the action of the complex Lie group.
A hyperkähler quotient is defined in a similar way. Recall that a hyperkähler
manifold is a Riemannian manifold equipped with a triple of complex structures
$I,J,K$ which are Kähler and satisfy the quaternionic relations. Suppose that
a compact Lie group acts on $(M,g)$ by isometries which are holomorphic with
respect to $I,J,K$; such maps are called _hyperkähler isometries_. Suppose,
moreover, that there exists a triple of moment maps
$\mu_{I},\mu_{J},\mu_{K}:\;M{\>\longrightarrow\>}{\mathfrak{g}}^{*}$
associated with the symplectic forms $\omega_{I},\omega_{J},\omega_{K}$
constructed from $g$ and $I,J,K$. The _hyperkähler quotient_ ([HKLR])
$M{/\\!\\!/\\!\\!/}G$ is defined as
$\big{(}\mu^{-1}_{I}(0)\cap\mu^{-1}_{J}(0)\cap\mu^{-1}_{K}(0)\big{)}/G$.
Similarly to the Kähler case, this quotient is known to be hyperkähler outside
of the singular locus.
This result is easy to explain if one looks at the 2-form
$\Omega:=\omega_{J}+{\sqrt{-1}}\omega_{K}$. This form is holomorphically
symplectic on $(M,I)$. Then the complex moment map
$\mu_{\mathbb{C}}:=\mu_{J}+{\sqrt{-1}}\mu_{K}$ is holomorphic on $(M,I)$, and
the quotient
$M{/\\!\\!/\\!\\!/}G:=\mu_{\mathbb{C}}^{-1}(0){/\\!\\!/}G_{\mathbb{C}}$ is a
Kähler manifold. Starting from $J$ and $K$ instead of $I$, we construct other
complex structures on $M{/\\!\\!/\\!\\!/}G$; an easy linear-algebraic argument
is applied to show that these three complex structures satisfy the
quaternionic relations.
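For the flat case this computation can be made completely explicit. The following standard example (our addition, following the hyperkähler quotient construction of [HKLR]) gives the real and complex moment maps for a circle action on $\mathbb{H}^{n}\cong T^{*}\mathbb{C}^{n}$:

```latex
% U(1) acts on (z, w) in C^n x C^n = H^n by (z, w) -> (e^{i\theta} z, e^{-i\theta} w).
% The three real moment maps combine into one real and one holomorphic map:
\mu_I(z, w) = \frac{1}{2}\left( |z|^2 - |w|^2 \right),
\qquad
\mu_{\mathbb{C}}(z, w) = \mu_J + \sqrt{-1}\,\mu_K = \sum_{j=1}^{n} z_j w_j .
% Here \mu_C is holomorphic on (M, I), as stated above, and the hyperkaehler
% quotient is (\mu_I^{-1}(\lambda) \cap \mu_C^{-1}(0)) / U(1); for n = 2 and
% \lambda > 0 this yields the Eguchi-Hanson space T^* CP^1.
```

The holomorphicity of $\mu_{\mathbb{C}}$ in this example is immediate, since $\sum_j z_j w_j$ is a polynomial in the complex coordinates of $(M,I)$.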
Carrying this argument a step further, we repeat it for trisymplectic
manifolds, as follows. Let $(M,\Omega_{1},\Omega_{2},\Omega_{3})$ be a
trisymplectic manifold, that is, a complex manifold equipped with a triple of
holomorphic symplectic forms satisfying the rank conditions of Definition 1.1,
and $G_{\mathbb{C}}$ a complex Lie group acting on $M$ by biholomorphisms
preserving $\Omega_{1},\Omega_{2},\Omega_{3}$. Denote by
$\mu_{1},\mu_{2},\mu_{3}$ the corresponding complex moment maps, which are
assumed to be equivariant. The _trisymplectic reduction_ is the quotient of
$\mu^{-1}_{1}(0)\cap\mu^{-1}_{2}(0)\cap\mu^{-1}_{3}(0)$ by $G_{\mathbb{C}}$.
Under some reasonable non-degeneracy assumptions, we can show that a
trisymplectic quotient is also a trisymplectic manifold (Theorem 4.9).
Notice that since $G_{\mathbb{C}}$ is non-compact, this quotient is not always
well-defined. To rectify this, a trisymplectic version of GIT quotient is
proposed (Subsection 5.3), under the name of _trihyperkähler reduction_.
### 1.4 Trihyperkähler reduction
Let $M$ be a hyperkähler manifold, and
$\operatorname{Tw}(M)\stackrel{{\scriptstyle\pi}}{{{\>\longrightarrow\>}}}\mathbb{C}{\mathbb{P}^{1}}$
its twistor space (Subsection 2.2). A holomorphic section of $\pi$ is called
_regular_ if the normal bundle to its image is isomorphic to a sum of $\dim M$
copies of ${\cal O}(1)$. Denote by ${\operatorname{Sec}}_{0}(M)$ the space of
regular sections of $\pi$ (Definition 2.11).
One may think of ${\operatorname{Sec}}_{0}(M)$ as a complexification of the
hyperkähler manifold $M$. It is the main example of a trisymplectic manifold
used in this paper.
The trisymplectic structure on ${\operatorname{Sec}}_{0}(M)$ is easy to obtain
explicitly. Let $L$ be a complex structure on $M$ induced by the quaternions
(Subsection 2.2), and $\Omega_{L}$ the corresponding holomorphic symplectic
form on $(M,L)$. Let
$\operatorname{\sf
ev}_{L}:\;{\operatorname{Sec}}_{0}(M){\>\longrightarrow\>}(M,L)$
be the evaluation map sending a twistor section
$s:\;\mathbb{C}{\mathbb{P}^{1}}{\>\longrightarrow\>}\operatorname{Tw}(M)$ to
$s(L)$ (we use the standard identification of the space of induced complex
structures with $\mathbb{C}{\mathbb{P}^{1}}$). Let
${\boldsymbol{\boldsymbol{\Omega}}}$ be a 3-dimensional space of holomorphic
2-forms on ${\operatorname{Sec}}_{0}(M)$ generated by $\operatorname{\sf
ev}_{I}^{*}(\Omega_{I})$, $\operatorname{\sf ev}_{J}^{*}(\Omega_{J})$ and
$\operatorname{\sf ev}_{K}^{*}(\Omega_{K})$. Then
${\boldsymbol{\boldsymbol{\Omega}}}$ defines a trisymplectic structure (Claim
5.4).
In this particular situation, the trisymplectic quotient can be defined using
a GIT-like construction as follows.
Let $G$ be a compact Lie group acting on a hyperkähler manifold $M$ by
hyperkähler isometries. Then $G$ acts on ${\operatorname{Sec}}_{0}(M)$
preserving the trisymplectic structure described above. Moreover, there is a
natural Kähler metric on ${\operatorname{Sec}}_{0}(M)$ constructed in [KV] as
follows. The twistor space $\operatorname{Tw}(M)$ is naturally isomorphic, as
a smooth manifold, to $M\times\mathbb{C}{\mathbb{P}^{1}}$. Consider the
product metric on $\operatorname{Tw}(M)$, and let
$\nu:\;{\operatorname{Sec}}_{0}(M){\>\longrightarrow\>}\mathbb{R}^{+}$ be a
map associating to a complex curve its total Riemannian volume. In [KV] it was
shown that $\nu$ is a Kähler potential, that is, $dd^{c}\nu$ is a Kähler form
on ${\operatorname{Sec}}_{0}(M)$.
Let ${\boldsymbol{\boldsymbol{\Omega}}}$ be the standard 3-dimensional space
of holomorphic 2-forms on
${\operatorname{Sec}}_{0}(M)$,
${\boldsymbol{\boldsymbol{\Omega}}}=\langle\operatorname{\sf
ev}_{I}^{*}(\Omega_{I}),\operatorname{\sf
ev}_{J}^{*}(\Omega_{J}),\operatorname{\sf ev}_{K}^{*}(\Omega_{K})\rangle.$
Then the corresponding triple of holomorphic moment maps is generated by
$\mu^{\mathbb{C}}_{I}\circ\operatorname{\sf ev}_{I}$,
$\mu^{\mathbb{C}}_{J}\circ\operatorname{\sf ev}_{J}$ and
$\mu^{\mathbb{C}}_{K}\circ\operatorname{\sf ev}_{K}$, where
$\mu^{\mathbb{C}}_{L}$ is a holomorphic moment map of $(M,L)$. This gives a
description of the zero set of the trisymplectic moment map
${\boldsymbol{\boldsymbol{\mu}}}:\;{\operatorname{Sec}}_{0}(M){\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes_{\mathbb{R}}\mathbb{C}^{3}.$
As follows from Proposition 5.5, a rational curve
$s\in{\operatorname{Sec}}_{0}(M)$ lies in
${\boldsymbol{\boldsymbol{\mu}}}^{-1}(0)$ if and only if $s$ lies in the set
of all pairs $(m,t)\in
M\times\mathbb{C}{\mathbb{P}^{1}}\simeq\operatorname{Tw}(M)$ satisfying
$\mu^{\mathbb{C}}_{t}(m)=0$, where
$\mu^{\mathbb{C}}_{t}:\;(M,t){\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes_{\mathbb{R}}\mathbb{C}$
is the holomorphic moment map corresponding to the complex structure $t$.
Now, the zero set ${\boldsymbol{\boldsymbol{\mu}}}^{-1}(0)$ of the
trisymplectic moment map is a Kähler manifold, with the Kähler metric
$dd^{c}\nu$ defined as above. Therefore, one can define the symplectic
quotient ${\boldsymbol{\boldsymbol{\mu}}}^{-1}(0){/\\!\\!/}G$. This quotient,
denoted by ${\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G$, is called
the _trihyperkähler quotient_ of ${\operatorname{Sec}}_{0}(M)$ (see Definition
5.9).
One of the main results of the present paper is the following theorem relating
the trihyperkähler quotient and the hyperkähler quotient.
###### Theorem 1.3.
Let $M$ be a flat hyperkähler manifold, and $G$ a compact Lie group acting on
$M$ by hyperkähler automorphisms. Suppose that the hyperkähler moment map
exists, and the hyperkähler quotient $M{/\\!\\!/\\!\\!/}G$ is smooth. Then
there exists an open embedding
${\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G\stackrel{{\scriptstyle\Psi}}{{{\>\longrightarrow\>}}}{\operatorname{Sec}}_{0}(M{/\\!\\!/\\!\\!/}G)$,
which is compatible with the trisymplectic structures on
${\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G$ and
${\operatorname{Sec}}_{0}(M{/\\!\\!/\\!\\!/}G)$.
Proof: This is Theorem 5.11.
The flatness of $M$, assumed in Theorem 1.3, does not seem to be necessary,
but we were unable to prove the theorem without this assumption.
### 1.5 Framed instanton bundles on $\mathbb{C}{\mathbb{P}^{3}}$
In Section 8, the geometric techniques introduced in the previous sections are
applied to the study of the moduli space of framed instanton bundles on
$\mathbb{C}{\mathbb{P}^{3}}$.
Recall that a holomorphic vector bundle $E\to\mathbb{C}{\mathbb{P}^{3}}$ is
called an instanton bundle if $c_{1}(E)=0$ and
$H^{0}(E(-1))=H^{1}(E(-2))=H^{2}(E(-2))=H^{3}(E(-3))=0$. The integer
$c:=c_{2}(E)$ is called the charge of $E$.
This nomenclature comes from the fact that instanton bundles which are trivial
on the lines of the twistor fibration $\mathbb{C}{\mathbb{P}^{3}}\to S^{4}$
(a.k.a. _real lines_) are in 1-1 correspondence, via the twistor transform,
with non-Hermitian anti-self-dual connections on $S^{4}$ (see [JV, Section
3]). Note, however, that there are instanton bundles which are not trivial on
every real line.
Moreover, given a line $\ell\subset{\mathbb{P}^{3}}$, a framing on $E$ at
$\ell$ is a choice of an isomorphism $\phi:E|_{\ell}\to{\cal
O}_{\ell}^{\oplus{\rm rk}E}$. A framed instanton bundle is a pair $(E,\phi)$
consisting of an instanton bundle $E$ restricting trivially to $\ell$ and a
framing $\phi$ at $\ell$. Two framed bundles $(E,\phi)$ and
$(E^{\prime},\phi^{\prime})$ are isomorphic if there exists a bundle
isomorphism $\Psi:E\to E^{\prime}$ such that
$\phi=\phi^{\prime}\circ(\Psi|_{\ell})$.
Frenkel and the first named author established in [FJ] a 1-1 correspondence
between isomorphism classes of framed instanton bundles on
$\mathbb{C}{\mathbb{P}^{3}}$ and solutions of the _complex ADHM equations_
(a.k.a. the _1-dimensional ADHM equations_, introduced in [J2]).
More precisely, let $V$ and $W$ be complex vector spaces of dimension $c$ and
$r$, respectively, and consider matrices ($k=1,2$) $A_{k},B_{k}\in{\rm
End}(V)$, $I_{k}\in{\rm Hom}(W,V)$ and $J_{k}\in{\rm Hom}(V,W)$. The
1-dimensional ADHM equations are
$\left\\{\begin{array}[]{l}~{}[A_{1},B_{1}]+I_{1}J_{1}=0\\\
~{}[A_{2},B_{2}]+I_{2}J_{2}=0\\\
~{}[A_{1},B_{2}]+[A_{2},B_{1}]+I_{1}J_{2}+I_{2}J_{1}=0\end{array}\right.$
(1.1)
One can show [FJ, Main Theorem] that the moduli space of framed instanton
bundles on $\mathbb{C}{\mathbb{P}^{3}}$ coincides with the set of _globally
regular_ solutions (see Definition 8.1 below) of the 1-dimensional ADHM
equations modulo the action of $GL(V)$.
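The shape of these equations can be sanity-checked numerically. The following sketch is our own illustration, with an ad hoc choice of matrices (nothing here is taken from [FJ]): it verifies the three equations of (1.1) for a simple solution with $c=2$, $r=1$, where the diagonal $A_{k},B_{k}$ commute and $J_{k}=0$ kills the remaining terms.

```python
import numpy as np

# Hypothetical solution of the 1-dimensional ADHM equations (1.1)
# with c = dim V = 2, r = dim W = 1: diagonal A_k, B_k commute,
# and J_k = 0, so every term of (1.1) vanishes.
c, r = 2, 1
A = [np.diag([1.0, 2.0]), np.diag([3.0, -1.0])]   # A_1, A_2 in End(V)
B = [np.diag([0.5, 4.0]), np.diag([2.0, 2.0])]    # B_1, B_2 in End(V)
I = [np.ones((c, r)), np.zeros((c, r))]           # I_1, I_2 in Hom(W, V)
J = [np.zeros((r, c)), np.zeros((r, c))]          # J_1, J_2 in Hom(V, W)

def comm(X, Y):
    return X @ Y - Y @ X

# The three components of (1.1):
eq1 = comm(A[0], B[0]) + I[0] @ J[0]
eq2 = comm(A[1], B[1]) + I[1] @ J[1]
eq3 = comm(A[0], B[1]) + comm(A[1], B[0]) + I[0] @ J[1] + I[1] @ J[0]

assert all(np.allclose(e, 0) for e in (eq1, eq2, eq3))
```

Of course, such a degenerate solution need not be globally regular; it only illustrates the structure of the equations.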
It turns out that the three equations in (1.1) are precisely the three
components of a trisymplectic moment map
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}:{\operatorname{Sec}}_{0}(M)\to{\mathfrak{u}}(V)^{*}\otimes_{\mathbb{R}}\Gamma({\cal
O}_{\mathbb{C}\mathbb{P}^{1}}(2))$ on (an open subset of) a flat hyperkähler
manifold $M$, so that the moduli space of framed instanton bundles coincides
with a trihyperkähler reduction of a flat space (Theorem 8.3). It then follows
that the moduli space of framed instanton bundles on
$\mathbb{C}{\mathbb{P}^{3}}$ of rank $r$ and charge $c$ is a smooth
trisymplectic manifold of dimension $4rc$.
On the other hand, a _mathematical instanton bundle_ is a rank $2$ stable
holomorphic vector bundle $E\to\mathbb{C}{\mathbb{P}^{3}}$ with $c_{1}(E)=0$
and $H^{1}(E(-2))=0$. It is easy to see, using Serre duality and stability,
that every mathematical instanton bundle is a rank $2$ instanton bundle.
Conversely, every rank $2$ instanton bundle is stable, and thus a mathematical
instanton bundle. We exploit this fact to complete the paper in Section 8.3 by
showing how the smoothness of the moduli space of framed rank $2$ instanton
bundles settles the smoothness part of the conjecture on the moduli space of
mathematical instanton bundles.
## 2 $SL(2)$-webs on complex manifolds
In this section, we recall basic results about $SL(2)$-webs on complex
manifolds, following [JV].
### 2.1 $SL(2)$-webs and twistor sections
The following notion is based on the classical notion of a 3-web, developed in
the 1930s by Blaschke and Chern, and much studied since then.
###### Definition 2.1.
Let $M$ be a complex manifold, $\dim_{\mathbb{C}}M=2n$, and $S_{t}\subset TM$
a family of $n$-dimensional holomorphic sub-bundles, parametrized by
$t\in\mathbb{C}{\mathbb{P}^{1}}$. This family is called a holomorphic
$SL(2)$-web if the following conditions are satisfied:
(i)
Each $S_{t}$ is involutive (integrable), that is, $[S_{t},S_{t}]\subset
S_{t}$.
(ii)
For any two distinct points $t,t^{\prime}\in\mathbb{C}{\mathbb{P}^{1}}$, the
foliations $S_{t}$, $S_{t^{\prime}}$ are transversal: $S_{t}\cap
S_{t^{\prime}}=0$.
(iii)
Let $P_{t,t^{\prime}}:\;TM{\>\longrightarrow\>}S_{t}\hookrightarrow TM$ be the
projection of $TM$ onto $S_{t}$ along $S_{t^{\prime}}$. Then the operators
$A(P_{t,t^{\prime}})\in\operatorname{End}(TM)$ generate a 3-dimensional sub-
bundle in $\operatorname{End}(TM)$, where
$A(P_{t,t^{\prime}})=P_{t,t^{\prime}}-\frac{1}{2}{\mathbf{1}}$
denotes the traceless part of $P_{t,t^{\prime}}$.
Since $S_{t}$ and $S_{t^{\prime}}$ are mid-dimensional, transversal
foliations, it follows that $T_{m}M=S_{t}(m)\oplus S_{t^{\prime}}(m)$ for each
point $m\in M$. According to this splitting, $P_{t,t^{\prime}}(m)$ is simply a
projection onto the first factor.
###### Remark 2.2.
The traceless parts $A(P_{t,t^{\prime}})\in\operatorname{End}(TM)$ generate a
Lie algebra isomorphic to $\mathfrak{sl}(2)$.
###### Definition 2.3.
(see e.g. [A]) Let $B$ be a holomorphic vector bundle over a complex manifold
$M$. A holomorphic connection on $B$ is a holomorphic differential operator
$\nabla:\;B{\>\longrightarrow\>}B\otimes\Omega^{1}M$ satisfying
$\nabla(fb)=b\otimes df+f\nabla(b)$, for any holomorphic function $f$ on $M$.
###### Remark 2.4.
Let $\nabla$ be a holomorphic connection on a holomorphic bundle, considered
as a map $\nabla:\;B{\>\longrightarrow\>}B\otimes\Lambda^{1,0}M$, and
$\bar{\partial}:\;B{\>\longrightarrow\>}B\otimes\Lambda^{0,1}M$ the
holomorphic structure operator. The sum $\nabla_{f}:=\nabla+\bar{\partial}$ is
clearly a connection. Since $\nabla$ is holomorphic,
$\nabla\bar{\partial}+\bar{\partial}\nabla=0$, hence the curvature
$\nabla_{f}^{2}$ is of type $(2,0)$. The converse is also true: the
$(1,0)$-part of a connection with curvature of type $(2,0)$ is always a
holomorphic connection.
###### Proposition 2.5.
([JV]) Let $S_{t},t\in\mathbb{C}{\mathbb{P}^{1}}$ be an $SL(2)$-web. Then
there exists a unique torsion-free holomorphic connection preserving $S_{t}$,
for all $t\in\mathbb{C}{\mathbb{P}^{1}}$.
###### Definition 2.6.
This connection is called the Chern connection of an $SL(2)$-web.
###### Theorem 2.7.
([JV]) Let $M$ be a manifold equipped with a holomorphic $SL(2)$-web. Then its
Chern connection is a torsion-free affine holomorphic connection with holonomy
in $GL(n,\mathbb{C})$ acting on $\mathbb{C}^{2n}$ as a centralizer of an
$SL(2)$-action, where $\mathbb{C}^{2n}$ is a direct sum of $n$ irreducible
$GL(2)$-representations of weight 1. Conversely, every connection with such
holonomy preserves a holomorphic $SL(2)$-web.
### 2.2 Hyperkähler manifolds
###### Definition 2.8.
Let $(M,g)$ be a Riemannian manifold, and $I,J,K$ endomorphisms of the tangent
bundle $TM$ satisfying the quaternionic relations
$I^{2}=J^{2}=K^{2}=IJK=-{\mathbf{1}}_{TM}.$
The triple $(I,J,K)$ together with the metric $g$ is called a hyperkähler
structure if $I,J$ and $K$ are integrable and Kähler with respect to $g$.
Consider the Kähler forms $\omega_{I},\omega_{J},\omega_{K}$ on $M$:
$\omega_{I}(\cdot,\cdot):=g(\cdot,I\cdot),\ \
\omega_{J}(\cdot,\cdot):=g(\cdot,J\cdot),\ \
\omega_{K}(\cdot,\cdot):=g(\cdot,K\cdot).$ (2.1)
An elementary linear-algebraic calculation implies that the 2-form
$\Omega:=\omega_{J}+{\sqrt{-1}}\omega_{K}$ (2.2)
is of Hodge type $(2,0)$ on $(M,I)$. This form is clearly closed and non-
degenerate, hence it is a holomorphic symplectic form.
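This elementary calculation is easy to reproduce numerically. The sketch below is our own illustration on flat $\mathbb{H}\simeq\mathbb{R}^{4}$ with the Euclidean metric: it takes $I,J,K$ to be left multiplication by the quaternions $i,j,k$ and checks both the quaternionic relations and the type-$(2,0)$ condition $\Omega(Ix,y)={\sqrt{-1}}\,\Omega(x,y)$.

```python
import numpy as np

# Left multiplication by i, j, k on H = R^4 with basis (1, i, j, k).
I = np.array([[0., -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])
J = np.array([[0., 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]])
K = np.array([[0., 0, 0, -1], [0, 0, -1, 0], [0, 1, 0, 0], [1, 0, 0, 0]])
E = np.eye(4)

# Quaternionic relations I^2 = J^2 = K^2 = IJK = -1.
assert np.allclose(I @ I, -E) and np.allclose(J @ J, -E)
assert np.allclose(K @ K, -E) and np.allclose(I @ J @ K, -E)

# For the Euclidean metric g, omega_L(x, y) = g(x, L y) has matrix L;
# these matrices are antisymmetric, as 2-forms should be.
for L in (I, J, K):
    assert np.allclose(L.T, -L)

# Omega = omega_J + sqrt(-1) omega_K is of type (2,0) with respect to I:
# Omega(I x, y) = sqrt(-1) Omega(x, y), i.e. I^T (J + iK) = i (J + iK).
Omega = J + 1j * K
assert np.allclose(I.T @ Omega, 1j * Omega)
```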
In algebraic geometry, the word “hyperkähler” is essentially synonymous with
“holomorphically symplectic”, due to the following theorem, which is implied
by Yau’s solution of the Calabi conjecture ([Bea, Bes]).
###### Theorem 2.9.
Let $M$ be a compact, Kähler, holomorphically symplectic manifold, $\omega$
its Kähler form, $\dim_{\mathbb{C}}M=2n$. Denote by $\Omega$ the holomorphic
symplectic form on $M$. Assume that
$\int_{M}\omega^{2n}=\int_{M}(\operatorname{Re}\Omega)^{2n}$. Then there
exists a unique hyperkähler metric $g$ within the same Kähler class as
$\omega$, and a unique hyperkähler structure $(I,J,K,g)$, with
$\omega_{J}=\operatorname{Re}\Omega$, $\omega_{K}={\rm Im}~{}\Omega$.
Every hyperkähler structure induces a whole 2-dimensional sphere of complex
structures on $M$, as follows. Consider a triple $a,b,c\in\mathbb{R}$,
$a^{2}+b^{2}+c^{2}=1$, and let $L:=aI+bJ+cK$ be the corresponding quaternion.
Quaternionic relations imply immediately that $L^{2}=-1$, hence $L$ is an
almost complex structure. Since $I,J,K$ are Kähler, they are parallel with
respect to the Levi-Civita connection. Therefore, $L$ is also parallel. Any
parallel complex structure is integrable, and Kähler. We call such a complex
structure $L=aI+bJ+cK$ a _complex structure induced by the hyperkähler
structure_. The corresponding complex manifold is denoted by $(M,L)$. There is
a 2-dimensional holomorphic family of induced complex structures, and the
total space of this family is called the _twistor space_ of a hyperkähler
manifold; it is constructed as follows.
Let $M$ be a hyperkähler manifold. Consider the product
$\operatorname{Tw}(M)=M\times S^{2}$. Embed the sphere
$S^{2}\subset{\mathbb{H}}$ into the quaternion algebra ${\mathbb{H}}$ as the
subset of all quaternions $J$ with $J^{2}=-1$. For every point $x=m\times J\in
X=M\times S^{2}$ the tangent space $T_{x}\operatorname{Tw}(M)$ is canonically
decomposed $T_{x}X=T_{m}M\oplus T_{J}S^{2}$. Identify $S^{2}$ with
$\mathbb{C}{\mathbb{P}^{1}}$, and let $I_{J}:T_{J}S^{2}\to T_{J}S^{2}$ be the
complex structure operator. Consider the complex structure $I_{m}:T_{m}M\to
T_{m}M$ on $M$ induced by $J\in S^{2}\subset{\mathbb{H}}$.
The operator $I_{\operatorname{Tw}}=I_{m}\oplus
I_{J}:T_{x}\operatorname{Tw}(M)\to T_{x}\operatorname{Tw}(M)$ satisfies
$I_{\operatorname{Tw}}\circ I_{\operatorname{Tw}}=-1$. It depends smoothly on
the point $x$, hence it defines an almost complex structure on
$\operatorname{Tw}(M)$. This almost complex structure is known to be
integrable (see e.g. [Sal]).
###### Definition 2.10.
The space $\operatorname{Tw}(M)$ constructed above is called the twistor space
of a hyperkähler manifold.
### 2.3 An example: rational curves on a twistor space
The basic example of holomorphic $SL(2)$-webs comes from hyperkähler geometry.
Let $M$ be a hyperkähler manifold, and $\operatorname{Tw}(M)$ its twistor
space. Denote by ${\operatorname{Sec}}(M)$ the space of holomorphic
sections of the twistor fibration ${\rm
Tw}(M)\stackrel{{\scriptstyle\pi}}{{{\>\longrightarrow\>}}}\mathbb{C}{\mathbb{P}^{1}}$.
We consider ${\operatorname{Sec}}(M)$ as a complex variety, with the complex
structure induced from the Douady space of rational curves on
$\operatorname{Tw}(M)$. Clearly, for any $C\in{\operatorname{Sec}}(M)$,
$T_{C}{\operatorname{Sec}}(M)$ is a subspace in the space of sections of the
normal bundle $N_{C}$. This normal bundle is naturally identified with
$T_{\pi}\operatorname{Tw}(M){\left|{}_{{\phantom{|}\\!\\!}_{C}}\right.}$,
where $T_{\pi}\operatorname{Tw}(M)$ denotes the vertical tangent space.
For each point $m\in M$, one has a horizontal section
$C_{m}:=\\{m\\}\times\mathbb{C}{\mathbb{P}^{1}}$ of $\pi$. The space of
horizontal sections of $\pi$ is denoted ${\operatorname{Sec}}_{hor}(M)$; it is
naturally identified with $M$. It is easy to check that $NC_{m}={\cal
O}(1)^{\dim M}$, hence some neighbourhood of
${\operatorname{Sec}}_{hor}(M)\subset{\operatorname{Sec}}(M)$ is a smooth
manifold of dimension $2\dim M$. It is easy to see that
${\operatorname{Sec}}(M)$ is a complexification of
$M\simeq{\operatorname{Sec}}_{hor}(M)$, considered as a real analytic manifold
(see [V2]).
###### Definition 2.11.
A twistor section $C\in{\operatorname{Sec}}(M)$ whose normal bundle $N_{C}$ is
isomorphic to ${\cal O}(1)^{\dim M}$ is called regular.
Let ${\operatorname{Sec}}_{0}(M)$ be the subset of ${\operatorname{Sec}}(M)$
consisting of regular sections. Clearly, ${\operatorname{Sec}}_{0}(M)$ is a
smooth, Zariski open subvariety in ${\operatorname{Sec}}(M)$, containing the
set ${\operatorname{Sec}}_{hor}(M)$ of horizontal sections.
The space ${\operatorname{Sec}}_{0}(M)$ admits the structure of a holomorphic
$SL(2)$-web, constructed as follows. For each
$C\in{\operatorname{Sec}}_{0}(M)$ and $t\in\mathbb{C}{\mathbb{P}^{1}}=C$,
define $S_{t}\subset TC=\Gamma_{C}(N_{C})$ as the space of all sections of
$N_{C}$ vanishing at $t\in C$.
It is not difficult to check that this is a holomorphic $SL(2)$-web.
Transversality of $S_{t}$ and $S_{t^{\prime}}$ is obvious, because a section
of ${\cal O}(1)$ vanishing at two points is zero. Integrability of $S_{t}$ is
also clear, since the leaves of $S_{t}$ are fibers of the evaluation map
$ev_{t}:\;{\operatorname{Sec}}(M){\>\longrightarrow\>}\operatorname{Tw}(M)$,
mapping
$C:\;\mathbb{C}{\mathbb{P}^{1}}{\>\longrightarrow\>}\operatorname{Tw}(M)$ to
$C(t)$. The last condition follows from the fact that
$\Gamma_{\mathbb{C}{\mathbb{P}^{1}}}(V\otimes_{\mathbb{C}}{\cal O}(1))\simeq
V\otimes_{\mathbb{C}}\mathbb{C}^{2}$, and the projection maps
$P_{t,t^{\prime}}$ act on $V\otimes_{\mathbb{C}}\mathbb{C}^{2}$ only through
the second component.
The space ${\operatorname{Sec}}_{0}(M)$ is the main example of an $SL(2)$-web
manifold we consider in this paper.
## 3 Trisymplectic structures on vector spaces
### 3.1 Trisymplectic structures and $\operatorname{Mat}(2)$-action
This section is dedicated to the study of the following linear-algebraic
objects, which will be the basic ingredients of the new geometric structures
introduced later.
###### Definition 3.1.
Let ${\boldsymbol{\boldsymbol{\Omega}}}$ be a 3-dimensional space of complex
linear 2-forms on a complex vector space $V$. Assume that
(i)
${\boldsymbol{\boldsymbol{\Omega}}}$ contains a non-degenerate form;
(ii)
For each non-zero degenerate $\Omega\in{\boldsymbol{\boldsymbol{\Omega}}}$,
one has $\operatorname{\sf rk}\Omega=\frac{1}{2}\dim V$.
Then ${\boldsymbol{\boldsymbol{\Omega}}}$ is called a trisymplectic structure
on $V$, and $(V,{\boldsymbol{\boldsymbol{\Omega}}})$ a trisymplectic space.
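A concrete model (our own illustration, a standard piece of linear algebra rather than a construction from the paper) is $V=\mathbb{C}^{2}\otimes\mathbb{C}^{2}$ with the forms $\omega\otimes q$, where $\omega$ is the symplectic form on the first factor and $q$ runs over symmetric forms on the second. Such a form is degenerate exactly when $q$ has rank one, and then its rank is $2=\frac{1}{2}\dim V$:

```python
import numpy as np

# Forms omega (x) q on V = C^2 (x) C^2: omega symplectic on the first
# factor, q symmetric on the second; (x) is realized by np.kron.
omega = np.array([[0, 1], [-1, 0]], dtype=complex)
qs = [np.array(q, dtype=complex) for q in
      ([[1, 0], [0, 0]], [[0, 1], [1, 0]], [[0, 0], [0, 1]])]
basis = [np.kron(omega, q) for q in qs]     # spans a 3-dimensional space

# A rank-one q (here q = x^2, and also q = (x + y)^2) gives a degenerate
# 2-form of rank 2 = (1/2) dim V, as required by Definition 3.1 (ii).
assert np.linalg.matrix_rank(basis[0]) == 2
assert np.linalg.matrix_rank(basis[0] + basis[1] + basis[2]) == 2

# A generic combination is non-degenerate, as required by 3.1 (i).
rng = np.random.default_rng(0)
coef = rng.normal(size=3) + 1j * rng.normal(size=3)
generic = sum(c * B for c, B in zip(coef, basis))
assert np.linalg.matrix_rank(generic) == 4
```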
###### Remark 3.2.
If $V$ is not a complex, but a real vector space, this notion defines either a
quaternionic Hermitian structure, or a structure known as hypersymplectic,
associated with an action of the split quaternions; cf. [Ar].
###### Lemma 3.3.
Let $(V,{\boldsymbol{\boldsymbol{\Omega}}})$ be a trisymplectic space,
and $\Omega_{1},\Omega_{2}\in{\boldsymbol{\boldsymbol{\Omega}}}$ two non-zero,
degenerate forms which are not proportional. Then
$\operatorname{Ann}(\Omega_{1})\cap\operatorname{Ann}(\Omega_{2})=0$.
Proof: Indeed, if these two spaces intersect in a subspace $C\subset V$,
strictly contained in $\operatorname{Ann}(\Omega_{1})$, some linear
combination of $\Omega_{1}$ and $\Omega_{2}$ would have annihilator $C$, which
is impossible, because $0<\dim C<\frac{1}{2}\dim V$. If
$\operatorname{Ann}(\Omega_{1})=\operatorname{Ann}(\Omega_{2})$, we could
consider $\Omega_{1},\Omega_{2}$ as non-degenerate forms
$\Omega_{1}{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.},\Omega_{2}{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.}$
on $W:=V/\operatorname{Ann}(\Omega_{2})$, which are obviously not
proportional. We interpret
$\Omega_{i}{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.}$ as a bijective map
from $W$ to $W^{*}$. Let $w$ be an eigenvector of an operator
$\Omega_{1}{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.}\circ\left(\Omega_{2}{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.}\right)^{-1}\in\operatorname{End}(W)$,
and $\lambda$ its eigenvalue. Then $\Omega_{1}(w,x)=\lambda\Omega_{2}(w,x)$,
for each $x\in W$, hence $w$ lies in the annihilator of
$\Omega_{1}{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.}-\lambda\Omega_{2}{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.}$.
Then $\Omega_{1}-\lambda\Omega_{2}$ has an annihilator strictly larger than
$\operatorname{Ann}(\Omega_{2})$, which is impossible, unless
$\Omega_{1}=\lambda\Omega_{2}$.
Given two non-proportional, degenerate forms
$\Omega_{1},\Omega_{2}\in{\boldsymbol{\boldsymbol{\Omega}}}$, one has that
$V=\operatorname{Ann}(\Omega_{1})\oplus\operatorname{Ann}(\Omega_{2})$ by the
previous Lemma. Thus one can consider projection operators
$\Pi_{\Omega_{1},\Omega_{2}}$ of $V$ onto $\operatorname{Ann}(\Omega_{1})$
along $\operatorname{Ann}(\Omega_{2})$.
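In the model $V=\mathbb{C}^{2}\otimes\mathbb{C}^{2}$ of forms $\omega\otimes q$ (again our own toy illustration, not from the paper), the decomposition $V=\operatorname{Ann}(\Omega_{1})\oplus\operatorname{Ann}(\Omega_{2})$ can be seen directly:

```python
import numpy as np

# Two non-proportional degenerate forms omega (x) x^2 and omega (x) y^2
# on V = C^2 (x) C^2; their annihilators are complementary.
omega = np.array([[0, 1], [-1, 0]], dtype=complex)
q1 = np.array([[1, 0], [0, 0]], dtype=complex)    # q = x^2
q2 = np.array([[0, 0], [0, 1]], dtype=complex)    # q = y^2
O1, O2 = np.kron(omega, q1), np.kron(omega, q2)

def ann(O, tol=1e-10):
    """Basis (as columns) of Ann(O) = {v : O(v, .) = 0}, via the SVD."""
    _, s, vh = np.linalg.svd(O)
    return vh[np.sum(s > tol):].conj().T

A1, A2 = ann(O1), ann(O2)
assert A1.shape[1] == 2 and A2.shape[1] == 2      # each has dim (1/2) dim V
# The union of the two bases spans V: Ann(O1) (+) Ann(O2) = V.
assert np.linalg.matrix_rank(np.hstack([A1, A2])) == 4
```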
###### Proposition 3.4.
Let $(V,{\boldsymbol{\boldsymbol{\Omega}}})$ be a trisymplectic vector
space, and let $H\subset\operatorname{End}(V)$ be the subspace generated by
projections $\Pi_{\Omega_{1},\Omega_{2}}$ for all pairs of non-proportional,
degenerate forms $\Omega_{1},\Omega_{2}\in{\boldsymbol{\boldsymbol{\Omega}}}$.
Then
(A)
$H$ is a subalgebra of $\operatorname{End}(V)$, isomorphic to the matrix
algebra $\operatorname{Mat}(2)$.
(B)
Let ${\mathfrak{g}}\subset\operatorname{End}(V)$ be the Lie algebra generated
by the commutators $[H,H]$, so that ${\mathfrak{g}}\cong\mathfrak{sl}(2)$.
Then the space
${\boldsymbol{\boldsymbol{\Omega}}}\subset\Lambda^{2}(V)$ is
${\mathfrak{g}}$-invariant, under the natural action of the Lie algebra
${\mathfrak{g}}$ on $\Lambda^{2}V$.
(C)
There exists a non-degenerate, ${\mathfrak{g}}$-invariant quadratic form $Q$
on ${\boldsymbol{\boldsymbol{\Omega}}}$, unique up to a constant, such that
$\Omega\in{\boldsymbol{\boldsymbol{\Omega}}}$ is degenerate if and only if
$Q(\Omega,\Omega)=0$.
###### Proof.
We begin by establishing item (A); the proof is divided into three steps.
Step 1: We prove that the space $H\subset\operatorname{End}(V)$ is an algebra,
and satisfies $\dim H\leq 4.$
Let $\Omega_{1},\Omega_{2}\in{\boldsymbol{\boldsymbol{\Omega}}}$ be two forms
which are not proportional, and assume $\Omega_{2}$ is non-degenerate.
Consider an operator $\phi_{\Omega_{1},\Omega_{2}}\in\operatorname{End}(V)$,
$\phi_{\Omega_{1},\Omega_{2}}:=\Omega_{1}\circ\Omega_{2}^{-1}$, where
$\Omega_{1},\Omega_{2}$ are understood as operators from $V$ to $V^{*}$. As in
the proof of Lemma 3.3, consider an eigenvector $v$ of
$\phi_{\Omega_{1},\Omega_{2}}$, with the eigenvalue $\lambda$. Then
$\Omega_{1}(v,x)=\lambda\Omega_{2}(v,x)$, for each $x\in V$, hence $v$ lies in
the annihilator of $\Omega:=\Omega_{1}-\lambda\Omega_{2}$. Since $\Omega_{i}$
are non-proportional, $\Omega$ is non-zero, hence $\operatorname{\sf
rk}\Omega=\frac{1}{2}\dim V$. This implies that each eigenspace of
$\phi_{\Omega_{1},\Omega_{2}}$ has dimension $\frac{1}{2}\dim V$. Choosing
another eigenvalue $\lambda^{\prime}$ and repeating this procedure, we obtain
a $2$-form $\Omega^{\prime}:=\Omega_{1}-\lambda^{\prime}\Omega_{2}$, also
degenerate. Let $S$, $S^{\prime}$ be annihilators of $\Omega$,
$\Omega^{\prime}$, and $\Pi_{S,S^{\prime}},\Pi_{S^{\prime},S}$ be the
projection of $V$ onto $S$ or $S^{\prime}$ along $S^{\prime}$ or $S$,
respectively. It follows that
$\phi_{\Omega_{1},\Omega_{2}}=\lambda\Pi_{S^{\prime},S}+\lambda^{\prime}\Pi_{S,S^{\prime}},$
(3.1)
and $\phi_{\Omega_{1},\Omega_{2}}$ can be expressed in an appropriate basis by
the block-diagonal matrix
$\phi_{\Omega_{1},\Omega_{2}}=\begin{pmatrix}\lambda&&&&&\\ &\ddots&&&&\\ &&\lambda&&&\\ &&&\lambda^{\prime}&&\\ &&&&\ddots&\\ &&&&&\lambda^{\prime}\end{pmatrix}.$ (3.2)
From (3.1) it is clear that the space $H$ is generated by all
$\phi_{\Omega_{1},\Omega_{2}}$. It is also clear that when $\Omega_{1}$ is
also non-degenerate, the operator $\phi_{\Omega_{2},\Omega_{1}}$ can be
expressed as a linear combination of $\phi_{\Omega_{1},\Omega_{2}}$ and
$\phi_{\Omega_{1},\Omega_{1}}={\mathbf{1}}_{V}$.
Since non-degenerate forms constitute a dense open subset of
${\boldsymbol{\boldsymbol{\Omega}}}$, one can choose a basis
$\Omega_{1},\Omega_{2},\Omega_{3}$ consisting of non-degenerate forms. Since
$\phi_{\Omega_{i},\Omega}$ is expressed as a linear combination of
$\phi_{\Omega,\Omega_{i}}$ and ${\mathbf{1}}_{V}$, and
$\phi_{\Omega,\Omega_{i}}$ is linear in $\Omega$, the vector space $H$ is
generated by $\phi_{\Omega_{i},\Omega_{j}},i<j$, and ${\mathbf{1}}_{V}$.
Therefore, $H$ is at most 4-dimensional. From (3.1) it is clear that for any
non-degenerate $\Omega_{1}$, $\Omega_{2}$, the operator
$\phi_{\Omega_{1},\Omega_{2}}$ can be expressed through
$\phi_{\Omega_{2},\Omega_{1}}=\phi_{\Omega_{1},\Omega_{2}}^{-1}$ and
${\mathbf{1}}$:
$\phi_{\Omega_{1},\Omega_{2}}=a\phi_{\Omega_{2},\Omega_{1}}+b{\mathbf{1}}.$
(3.3)
Since
$\phi_{\Omega_{i},\Omega_{j}}\circ\phi_{\Omega_{j},\Omega_{k}}=\phi_{\Omega_{i},\Omega_{k}},$
(3.4)
the space $H$ is a subalgebra in $\operatorname{End}(V)$ (to multiply some of
$\phi_{\Omega_{i},\Omega_{j}}$ and
$\phi_{\Omega_{i^{\prime}},\Omega_{j^{\prime}}}$, you would have to reverse
the order when necessary, using (3.3), and then apply (3.4)).
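The two-eigenvalue structure (3.1) behind Step 1 is easy to observe numerically. The sketch below is our own toy model on $V=\mathbb{C}^{2}\otimes\mathbb{C}^{2}$ (forms $\omega\otimes q_{i}$, with an arbitrary choice of non-degenerate $q_{i}$): the operator $\phi_{\Omega_{1},\Omega_{2}}$ has exactly two eigenvalues, each of multiplicity $\frac{1}{2}\dim V$.

```python
import numpy as np

# phi_{Omega_1, Omega_2} = Omega_1 o Omega_2^{-1} for two non-degenerate
# forms omega (x) q_i on V = C^2 (x) C^2 (q_i symmetric, invertible).
omega = np.array([[0, 1], [-1, 0]], dtype=complex)
q1 = np.array([[2, 1], [1, 1]], dtype=complex)
q2 = np.array([[1, 0], [0, -1]], dtype=complex)
O1, O2 = np.kron(omega, q1), np.kron(omega, q2)

phi = O1 @ np.linalg.inv(O2)
evals = np.sort_complex(np.linalg.eigvals(phi))

# Two distinct eigenvalues, each of multiplicity 2 = (1/2) dim V,
# so phi = lambda Pi' + lambda' Pi as in (3.1).
assert np.allclose(evals[0], evals[1])
assert np.allclose(evals[2], evals[3])
assert not np.allclose(evals[1], evals[2])
```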
Step 2: We prove that any element of $H$ has the form
$\phi_{\Omega,\Omega^{\prime}}+c{\mathbf{1}}_{V}$, for some
$\Omega,\Omega^{\prime}\in{\boldsymbol{\boldsymbol{\Omega}}}$. Indeed, as we
have shown, a general element of $H$ has the form
$h=a\phi_{\Omega_{1},\Omega_{2}}+b\phi_{\Omega_{1},\Omega_{3}}+c\phi_{\Omega_{2},\Omega_{3}}+d{\mathbf{1}}_{V},$
(3.5)
where $\Omega_{1},\Omega_{2},\Omega_{3}$ is a basis of non-degenerate forms
for ${\boldsymbol{\boldsymbol{\Omega}}}$. Since $\phi$ is linear in the first
argument, this gives
$h=a\phi_{\Omega_{1},\Omega_{2}}+\phi_{b\Omega_{1}+c\Omega_{2},\Omega_{3}}+d{\mathbf{1}}_{V}.$
(3.6)
If the form $b\Omega_{1}+c\Omega_{2}$ is non-degenerate, we use the reversal
as indicated in (3.3), obtaining
$\phi_{b\Omega_{1}+c\Omega_{2},\Omega_{3}}=\lambda\phi_{\Omega_{3},b\Omega_{1}+c\Omega_{2}}+\lambda^{\prime}{\mathbf{1}}_{V},$
and write, similarly,
$a\phi_{\Omega_{1},\Omega_{2}}=\mu\phi_{\Omega_{1},b\Omega_{1}+c\Omega_{2}}+\mu^{\prime}{\mathbf{1}}_{V};$
then, adding the last two formulae and using linearity of $\phi$ in the first
argument, we obtain
$h=\phi_{\mu\Omega_{1}+\lambda\Omega_{3},\;b\Omega_{1}+c\Omega_{2}}+(\lambda^{\prime}+\mu^{\prime}+d){\mathbf{1}}_{V}.$
We denote the term $b\Omega_{1}+c\Omega_{2}$ in (3.6) as
$\Omega(h,\Omega_{1},\Omega_{2},\Omega_{3})$, considering it as a function of
$h$ and the basis $\Omega_{i}$. If $b\Omega_{1}+c\Omega_{2}$ is degenerate, we
have to make a different choice of the basis
$\Omega_{1},\Omega_{2},\Omega_{3}$, in such a way that
$\Omega(h,\Omega_{1},\Omega_{2},\Omega_{3})$ becomes non-degenerate. This is
done as follows.
If we replace $\Omega_{1}$ by
$\Omega^{\prime}_{1}=\Omega_{1}+\epsilon\Omega_{3}$, in the expression (3.5)
we get
$h=a\phi_{\Omega_{1}^{\prime},\Omega_{2}}+b\phi_{\Omega_{1}^{\prime},\Omega_{3}}+c\phi_{\Omega_{2},\Omega_{3}}+\epsilon\phi_{\Omega_{3},\Omega_{2}}+(d+\epsilon){\mathbf{1}}_{V}.$
Let
$\phi_{\Omega_{3},\Omega_{2}}=\lambda\phi_{\Omega_{2},\Omega_{3}}+\alpha{\mathbf{1}}_{V}$,
as in (3.3). Then
$\Omega(h,\Omega_{1}^{\prime},\Omega_{2},\Omega_{3})=b\Omega^{\prime}_{1}+(c-\epsilon\lambda)\Omega_{2}=b\Omega_{1}+b\epsilon\Omega_{3}+(c-\epsilon\lambda)\Omega_{2}.$
The difference of these two terms is expressed as
$\Omega(h,\Omega_{1}^{\prime},\Omega_{2},\Omega_{3})-\Omega(h,\Omega_{1},\Omega_{2},\Omega_{3})=b\epsilon\Omega_{3}-\epsilon\lambda\Omega_{2}.$
If $\Omega(h,\Omega_{1}^{\prime},\Omega_{2},\Omega_{3})$ remains degenerate
for all $\epsilon$, then $b\epsilon\Omega_{3}-\epsilon\lambda\Omega_{2}$ is
proportional to $b\Omega_{1}+c\Omega_{2}$, which is impossible, because
$\Omega_{i}$ are linearly independent. Therefore, $\Omega_{i}$ can be chosen
in such a way that $\Omega(h,\Omega_{1},\Omega_{2},\Omega_{3})$ is non-
degenerate, and for such a basis, $h$ is expressed as above.
Step 3: We prove that the algebra $H$ is isomorphic to
$\operatorname{Mat}(2)$. Consider the form
$B(h_{1},h_{2}):=\operatorname{Tr}(h_{1}h_{2})$ on $H$. From Step 2 and (3.1)
it follows that any element $h$ of $H$ can be written as
$h=\lambda\Pi_{S^{\prime},S}+\lambda^{\prime}\Pi_{S,S^{\prime}},$ (3.7)
where $\Pi_{S,S^{\prime}}$ are projection operators. Then
$\operatorname{Tr}h=B(h,{\mathbf{1}}_{V})=\frac{1}{2}\dim
V\,(\lambda+\lambda^{\prime}).$
This is non-zero unless $\lambda=-\lambda^{\prime}$, and in the latter case
$B(h,h)=\lambda^{2}\dim V\neq 0$ (unless $h$ vanishes). Therefore, the form
$B$ is non-degenerate. Since the Lie algebra $(H,[\cdot,\cdot])$ admits a non-
degenerate invariant quadratic form, it is reductive. Since $\dim H\leq 4$,
and it has a non-trivial center generated by ${\mathbf{1}}_{V}$, it follows
from the classification of reductive algebras that either
$(H,[\cdot,\cdot])\cong\mathfrak{sl}(2)\oplus\mathbb{C}\cdot{\mathbf{1}}_{V}$
or $H$ is commutative. In the first case, $H$ is obviously isomorphic to
$\operatorname{Mat}(2)$. Therefore, to prove that
$H\cong\operatorname{Mat}(2)$ it suffices to show that $H$ is not commutative.
If $H$ were commutative, there would exist a basis of $V$ in which all
elements of $H$ are upper triangular. However, from (3.2) it would follow that
in this case $\dim H=2$, hence $\dim{\boldsymbol{\boldsymbol{\Omega}}}\leq 2$,
which contradicts our hypothesis. We have therefore proved Proposition 3.4 (A).
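Proposition 3.4 (A) can also be observed in coordinates. In the toy model $V=\mathbb{C}^{2}\otimes\mathbb{C}^{2}$ (our own illustration, with an ad hoc basis of non-degenerate forms), the operators $\phi_{\Omega_{i},\Omega_{j}}$ together with ${\mathbf{1}}_{V}$ span a 4-dimensional non-commutative algebra, as a copy of $\operatorname{Mat}(2)$ should:

```python
import numpy as np
from itertools import combinations

# Non-degenerate forms omega (x) q_i on V = C^2 (x) C^2, with
# q = x^2 + y^2, x^2 - y^2, 2xy (all invertible symmetric forms).
omega = np.array([[0, 1], [-1, 0]], dtype=complex)
qs = [np.array(q, dtype=complex) for q in
      ([[1, 0], [0, 1]], [[1, 0], [0, -1]], [[0, 1], [1, 0]])]
forms = [np.kron(omega, q) for q in qs]

ops = [np.eye(4, dtype=complex)]                   # 1_V
for Oi, Oj in combinations(forms, 2):
    ops.append(Oi @ np.linalg.inv(Oj))             # phi_{Omega_i, Omega_j}

# dim H = 4, matching H = Mat(2) ...
M = np.stack([op.ravel() for op in ops])
assert np.linalg.matrix_rank(M) == 4
# ... and H is not commutative.
assert not np.allclose(ops[1] @ ops[2], ops[2] @ ops[1])
```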
We now consider the second part of the Proposition. By definition, the algebra
$H$ is generated as a linear space by idempotents, that is, projection
operators. Consider an idempotent $\Pi\in H$. To prove that the Lie algebra
${\mathfrak{g}}=[H,H]$ preserves ${\boldsymbol{\boldsymbol{\Omega}}}$, it
would suffice to show that for each
$\Omega\in{\boldsymbol{\boldsymbol{\Omega}}}$, the form
$\Pi(\Omega):=\Omega(\Pi(\cdot),\cdot)+\Omega(\cdot,\Pi(\cdot))$
belongs to ${\boldsymbol{\boldsymbol{\Omega}}}$. Since $\Pi(\Omega)$ vanishes
on $\ker\Pi$ and is equal to $\Omega$ on ${\rm Im}~{}\Pi$, one has also
$\Pi(\Omega)=2\Omega(\Pi(\cdot),\Pi(\cdot)).$ (3.8)
Let $\phi_{\Omega_{1},\Omega_{2}}$ be an operator satisfying
$\Pi=\lambda\phi_{\Omega_{1},\Omega_{2}}+\lambda^{\prime}{\mathbf{1}}_{V}$
(the existence of such an operator was shown in Step 2). Then
$\Pi=\lambda\phi_{\frac{\lambda^{\prime}}{\lambda}\Omega_{2}+\Omega_{1},\Omega_{2}}$.
Denote by $\Omega^{\prime}$ the 2-form
$\frac{\lambda^{\prime}}{\lambda}\Omega_{2}+\Omega_{1}$. Clearly,
$\operatorname{Ann}(\Omega^{\prime})=\ker\Pi$. To prove that
$\Pi(\Omega)\in{\boldsymbol{\boldsymbol{\Omega}}}$ it would suffice to show
that $\Pi(\Omega)$ is proportional to $\Omega^{\prime}$.
Since any linear combination of $\Omega$ and $\Omega^{\prime}$ has rank 0,
$\dim V$ or $\frac{1}{2}\dim V$, one has
$\Omega{\left|{}_{{\phantom{|}\\!\\!}_{{\rm
Im}~{}\Pi}}\right.}=\alpha\Omega^{\prime}{\left|{}_{{\phantom{|}\\!\\!}_{{\rm
Im}~{}\Pi}}\right.}$, for some $\alpha\in\mathbb{C}$ (see the proof of Lemma
3.3 for the first use of this argument). Therefore, $\Pi(\Omega)$ is equal to
$\alpha\Omega^{\prime}$ on ${\rm Im}~{}\Pi$. Also, both of these forms vanish
on $\ker\Pi$. Therefore, (3.8) implies that
$\Pi(\Omega)=\frac{\alpha}{2}\Omega^{\prime}$. We proved Proposition 3.4 (B).
To prove item (C), consider the function
$\det:\;\operatorname{Mat}(2){\>\longrightarrow\>}\mathbb{C}$, and let $B$ be
the corresponding quadratic polynomial on $H\cong\operatorname{Mat}(2)$. Since
the isomorphism $H\cong\operatorname{Mat}(2)$ is unique up to a twist with
$SL(2)$, the function $B$ is well defined.
Let $\Omega\in{\boldsymbol{\boldsymbol{\Omega}}}$ be a non-degenerate
symplectic form, and
$Q_{\Omega}:\;{\boldsymbol{\boldsymbol{\Omega}}}{\>\longrightarrow\>}\mathbb{C}$
a function mapping $\Omega^{\prime}$ to $B(\phi_{\Omega^{\prime},\Omega})$.
Since $\phi$ is linear in the first argument, $Q_{\Omega}$ is a quadratic
polynomial function, and its zero-set $Z$ coincides with the set of degenerate
forms in ${\boldsymbol{\boldsymbol{\Omega}}}$. Since
${\boldsymbol{\boldsymbol{\Omega}}}$ is generated by degenerate forms (see
Proposition 3.4, Step 1), the set $Z$ can be either a union of two planes or
the zero set of a non-degenerate quadratic polynomial. However, since $V$ is equipped
with an action of $SL(2)$ preserving ${\boldsymbol{\boldsymbol{\Omega}}}$
(Proposition 3.4 (B)), $Z$ cannot be a union of two planes. Therefore, there
exists a unique (up to a constant) non-degenerate $SL(2)$-invariant quadratic
form on ${\boldsymbol{\boldsymbol{\Omega}}}$ vanishing on all degenerate
forms.
We proved Proposition 3.4.
In a similar way, we obtain the following useful corollary.
###### Claim 3.5.
Let $(V,{\boldsymbol{\boldsymbol{\Omega}}})$ be a trisymplectic space, and
$W\subset V$ a complex subspace. Then
${\boldsymbol{\boldsymbol{\Omega}}}{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.}$
is a trisymplectic space if and only if the following two assumptions hold.
(i)
The space $W$ is $H$-invariant, where $H\cong\operatorname{Mat}(2)$ is the
subalgebra of $\operatorname{End}(V)$ constructed in Proposition 3.4.
(ii)
A general 2-form $\Omega\in{\boldsymbol{\boldsymbol{\Omega}}}$ is non-
degenerate on $W$.
Proof: Let $Z\subset H$ be the set of idempotents in $H$. Consider the
standard action of ${\mathfrak{g}}\cong\mathfrak{sl}(2)$ on $V$ constructed in
Proposition 3.4. Clearly, $V$ is a direct sum of several 2-dimensional
irreducible representations of $\mathfrak{sl}(2)$.
It is easy to see that for every $\Pi\in Z$ there exists a Cartan subalgebra
$\mathfrak{h}\subset{\mathfrak{g}}$ such that $\Pi$ is a projection of $V$
onto one of two weight components of the weight decomposition associated with
$\mathfrak{h}$. If $W\subset V$ is an $H$-submodule, then
$\Pi{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.}$ is a projection to a weight
component $W_{0}\subset W$ of dimension $\frac{1}{2}\dim W$. From (3.1) it is
also clear that for any degenerate form
$\Omega\in{\boldsymbol{\boldsymbol{\Omega}}}$, an annihilator of a restriction
$\Omega{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.}$ is equal to the weight
component $W_{0}$, for an appropriate choice of Cartan subalgebra. Therefore,
$\dim\left(\operatorname{Ann}\Omega{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.}\right)=\frac{1}{2}\dim
W.$
Similarly, (3.1) implies that the restriction of a non-degenerate form is again
non-degenerate. We obtain that the restriction
${\boldsymbol{\boldsymbol{\Omega}}}{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.}$
to an $H$-submodule is always a trisymplectic structure on $W$.
To obtain the converse statement, take two non-degenerate, non-collinear forms
$\Omega_{1},\Omega_{2}\in{\boldsymbol{\boldsymbol{\Omega}}}$, and notice that
there exist precisely two distinct numbers $t=\lambda,\lambda^{\prime}$ for
which $\Omega_{1}+t\Omega_{2}$ is degenerate (see Proposition 3.4, Step 1).
Let $S,S^{\prime}$ be the corresponding annihilator spaces. As follows from
Proposition 3.4, Step 2, $H$ is generated, as a linear space, by the
projection operators $\Pi_{S,S^{\prime}}$, projecting $V$ to $S$ along
$S^{\prime}$. For any $W\subset V$ such that the restriction
${\boldsymbol{\boldsymbol{\Omega}}}{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.}$
is trisymplectic, one has $W=(S\cap W)\oplus(S^{\prime}\cap W)$, hence
$\Pi_{S,S^{\prime}}$ preserves $W$. Therefore, $W$ is an $H$-submodule.
###### Definition 3.6.
Let $(V,{\boldsymbol{\boldsymbol{\Omega}}})$ be a trisymplectic space, and
$W\subset V$ a vector subspace. Consider the action of
$H\simeq\operatorname{Mat}(2)$ on $V$ induced by the trisymplectic structure.
A subspace $W\subset V$ is called non-degenerate if the subspace $H\cdot
W\subset V$ is trisymplectic.
###### Remark 3.7.
By Claim 3.5, $W$ is non-degenerate if and only if the restriction of $\Omega$
to $H\cdot W$ is non-degenerate for some
$\Omega\in{\boldsymbol{\boldsymbol{\Omega}}}$.
### 3.2 Trisymplectic structures and invariant
quadratic forms on vector spaces with $\operatorname{Mat}(2)$-action
Let $V$ be a complex vector space with a standard action of the matrix algebra
$\operatorname{Mat}(2)$, i.e. $V\cong V_{0}\otimes\mathbb{C}^{2}$ and
$\operatorname{Mat}(2)$ acts only through the second factor. An easy way to
obtain a trisymplectic structure is to use a non-degenerate, invariant
quadratic form on $V$.
Consider the natural $SL(2)$-action on $V$ induced by $\operatorname{Mat}(2)$,
and extend it multiplicatively to all tensor powers of $V$. Let
$g\in\operatorname{Sym}^{2}_{\mathbb{C}}(V)$ be an $SL(2)$-invariant, non-
degenerate quadratic form on $V$, and let $\\{I,J,K\\}$ be a quaternionic
basis in $\operatorname{Mat}(2)$, i.e. $\\{{\mathbf{1}}_{V},I,J,K\\}$ is a
basis for $\operatorname{Mat}(2)$ and $I^{2}=J^{2}=K^{2}=IJK=-1$. Then
$g(x,Iy)=g(Ix,I^{2}y)=-g(Ix,y)$
hence the form $\Omega_{I}(\cdot,\cdot):=g(\cdot,I\cdot)$ is a symplectic
form, obviously non-degenerate; similarly, the forms
$\Omega_{J}(\cdot,\cdot):=g(\cdot,J\cdot)$ and
$\Omega_{K}(\cdot,\cdot):=g(\cdot,K\cdot)$ have the same properties. It turns
out that this construction gives a trisymplectic structure, and all
trisymplectic structures can be obtained in this way.
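As a sanity check, this construction can be verified numerically in the smallest case. The following sketch is a hypothetical model, not taken from the text: we choose $V_{0}=\mathbb{C}^{2}$, $g=\omega\otimes\omega$ for the standard symplectic form $\omega$ on $\mathbb{C}^{2}$, and one particular quaternionic basis; it confirms the quaternionic relations and the antisymmetry of $\Omega_{I},\Omega_{J},\Omega_{K}$.

```python
import numpy as np

# A concrete quaternionic basis of Mat(2, C) (one hypothetical choice;
# any SL(2)-conjugate basis works equally well):
I = np.array([[1j, 0], [0, -1j]])
J = np.array([[0, 1], [-1, 0]], dtype=complex)
K = I @ J  # equals [[0, 1j], [1j, 0]]

one = np.eye(2, dtype=complex)
for A in (I, J, K):
    assert np.allclose(A @ A, -one)        # I^2 = J^2 = K^2 = -1
assert np.allclose(I @ J @ K, -one)        # IJK = -1

# Model space V = V0 (x) C^2 with dim V0 = 2; Mat(2) acts on the second factor.
om = np.array([[0, 1], [-1, 0]], dtype=complex)   # symplectic form on C^2
G = np.kron(om, om)        # g = Omega_{V0} (x) Omega_{C^2}: symmetric
assert np.allclose(G, G.T)

def op(A):                 # action of A in Mat(2) on V through the second factor
    return np.kron(np.eye(2, dtype=complex), A)

# Omega_A(x, y) := g(x, A y) has matrix G @ op(A); each one is antisymmetric.
for A in (I, J, K):
    Om = G @ op(A)
    assert np.allclose(Om, -Om.T)
```

The same model is reused in the verification sketches below.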
###### Theorem 3.8.
Let $V$ be a vector space equipped with a standard action of the matrix
algebra
$\operatorname{Mat}(2)\stackrel{{\scriptstyle\rho}}{{{\>\longrightarrow\>}}}\operatorname{End}(V)$,
and $\\{I,J,K\\}$ a quaternionic basis in $\operatorname{Mat}(2)$. Consider
the corresponding action of $SL(2)$ on the tensor powers of $V$. Then
(i)
Given a non-degenerate, $SL(2)$-invariant quadratic form
$g\in\operatorname{Sym}^{2}(V)$, consider the space
${\boldsymbol{\boldsymbol{\Omega}}}\subset\Lambda^{2}V$ generated by the
symplectic forms $\Omega_{I},\Omega_{J},\Omega_{K}$ defined as above,
$\Omega_{I}(\cdot,\cdot):=g(\cdot,I\cdot),\ \
\Omega_{J}(\cdot,\cdot):=g(\cdot,J\cdot),\ \
\Omega_{K}(\cdot,\cdot):=g(\cdot,K\cdot).$ (3.9)
Then ${\boldsymbol{\boldsymbol{\Omega}}}$ is a trisymplectic structure on $V$,
with the operators $\Omega_{K}^{-1}\circ\Omega_{J}$ and
$\Omega_{K}^{-1}\circ\Omega_{I}$ generating the algebra
$H:={\rm Im}~{}(\rho)\cong\operatorname{Mat}(2)\subset\operatorname{End}(V)$,
as in Proposition 3.4.
(ii)
Conversely, for each trisymplectic structure
${\boldsymbol{\boldsymbol{\Omega}}}$ inducing the action of
$H\cong\operatorname{Mat}(2)$ on $V$ given by $\rho$, there exists a unique
(up to a constant) $SL(2)$-invariant non-degenerate quadratic form $g$
inducing ${\boldsymbol{\boldsymbol{\Omega}}}$ as in (3.9).
Proof: First, consider the 3-dimensional subspace of $\Lambda^{2}V$ generated
by $\Omega_{I},\Omega_{J},\Omega_{K}$. Regard $\Omega_{I}$ as an operator from
$V$ to $V^{*}$, $x\mapsto\Omega_{I}(x,\cdot)$, and similarly for $\Omega_{J}$
and $\Omega_{K}$; let
$h:=\Omega_{K}^{-1}\circ\Omega_{J}\in\operatorname{End}(V)$. Then
$h(x)=\Omega_{K}^{-1}(-g(Jx,\cdot))=-KJx=Ix,$
hence $h=I$. Similarly, one concludes that $\Omega_{K}^{-1}\circ\Omega_{I}=-J$,
hence $\Omega_{K}^{-1}\circ\Omega_{J}$ and $\Omega_{K}^{-1}\circ\Omega_{I}$
generate $H$ as an algebra.
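These compositions can be checked in the toy model introduced above (all choices hypothetical). With these conventions the second composition comes out as $-J$, which together with $I$ still generates the algebra $H$.

```python
import numpy as np

# Same hypothetical toy model: V = C^2 (x) C^2, g = om (x) om.
I = np.array([[1j, 0], [0, -1j]])
J = np.array([[0, 1], [-1, 0]], dtype=complex)
K = I @ J
om = np.array([[0, 1], [-1, 0]], dtype=complex)
G = np.kron(om, om)
op = lambda A: np.kron(np.eye(2, dtype=complex), A)
Omega = lambda A: G @ op(A)      # matrix of Omega_A(x, y) = g(x, A y)

# As maps V -> V*, the forms act by (minus) their transposed matrices; the
# two transposes cancel in a composition, so we compose the matrices directly.
h1 = np.linalg.inv(Omega(K)) @ Omega(J)
h2 = np.linalg.inv(Omega(K)) @ Omega(I)
assert np.allclose(h1, op(I))    # Omega_K^{-1} o Omega_J = I
assert np.allclose(h2, -op(J))   # Omega_K^{-1} o Omega_I = -J
```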
To complete the proof of the first claim of the Theorem, it remains for us to
show that ${\boldsymbol{\boldsymbol{\Omega}}}$ is a trisymplectic structure.
For this it would suffice to show that any non-zero, degenerate form
$\Omega\in{\boldsymbol{\boldsymbol{\Omega}}}$ has rank $\frac{1}{2}\dim V$.
Consider $V$ as a tensor product $V=V_{0}\otimes\mathbb{C}^{2}$, with
$\operatorname{Mat}(2)$ acting on the second factor. Choose a basis
$\\{x,y\\}$ in $\mathbb{C}^{2}$, so that $V=V_{0}\otimes x\oplus V_{0}\otimes
y$. From $SL(2)$-invariance it is clear that
$g(v_{0}\otimes\zeta,v_{0}^{\prime}\otimes\zeta)=g(v_{0}\otimes\xi,v_{0}^{\prime}\otimes\xi)$
for any non-zero $\zeta,\xi\in\mathbb{C}^{2}$; taking $\xi=\lambda\zeta$ shows
that this quantity is multiplied by $\lambda^{2}$, hence it vanishes.
Therefore, $V_{0}\otimes x\subset V$ and
$V_{0}\otimes y\subset V$ are isotropic subspaces, dual to each other. Denote
by $\Omega_{V_{0}}$ the corresponding bilinear form on $V_{0}$:
$\Omega_{V_{0}}(v,v^{\prime}):=g(v\otimes x,v^{\prime}\otimes y).$
Since the group $SL(2)$ acts transitively on the set of all
$\zeta,\xi\in\mathbb{C}^{2}$ satisfying $\zeta\wedge\xi=x\wedge y$, we obtain
$\Omega_{V_{0}}(v,v^{\prime})=g(v\otimes x,v^{\prime}\otimes y)=-g(v\otimes
y,v^{\prime}\otimes x)=-\Omega_{V_{0}}(v^{\prime},v).$
Therefore, $\Omega_{V_{0}}$ is skew-symmetric. Conversely, $g$ can be
expressed through $\Omega_{V_{0}}$, as follows. Given
$x^{\prime},y^{\prime}\in\mathbb{C}^{2}$ such that $x^{\prime}\wedge
y^{\prime}\neq 0$, and $v\otimes x^{\prime},w\otimes y^{\prime}\in V$, we find
$h\in SL(2)$ such that $h(x^{\prime})=\lambda x$ and $h(y^{\prime})=\lambda y$,
with $\lambda^{2}=\frac{x^{\prime}\wedge y^{\prime}}{x\wedge y}$. Since $g$ is
$SL(2)$-invariant,
one has
$g(v\otimes x^{\prime},w\otimes y^{\prime})=\lambda^{2}g(v\otimes x,w\otimes
y).$
Therefore, for an appropriate symplectic form $\Omega_{\mathbb{C}^{2}}$ on
$\mathbb{C}^{2}$, one has
$g(v\otimes x^{\prime},w\otimes
y^{\prime})=\Omega_{V_{0}}(v,w)\cdot\Omega_{\mathbb{C}^{2}}(x^{\prime},y^{\prime}).$
(3.10)
This gives us a description of the group $\operatorname{\sf
St}(H,g)\subset\operatorname{End}(V)$ which fixes the algebra
$H\subset\operatorname{End}(V)$ and $g$. Indeed, from (3.10), we obtain that
$\operatorname{\sf St}(H,g)\cong\operatorname{Sp}(V_{0},\Omega_{V_{0}})$
acting on $V=V_{0}\otimes\mathbb{C}^{2}$ in a standard way, i.e. trivially on
the second factor.
Since all elements of ${\boldsymbol{\boldsymbol{\Omega}}}$ are by construction
fixed by $\operatorname{\sf
St}(H,g)\cong\operatorname{Sp}(V_{0},\Omega_{V_{0}})$, for any
$\Omega\in{\boldsymbol{\boldsymbol{\Omega}}}$, the annihilator of $\Omega$ is
$\operatorname{Sp}(V_{0},\Omega_{V_{0}})$-invariant. However, $V\cong
V_{0}\oplus V_{0}$ is isomorphic to a sum of two copies of the fundamental
representation of $\operatorname{Sp}(V_{0},\Omega_{V_{0}})$, hence any
$\operatorname{Sp}(V_{0},\Omega_{V_{0}})$-invariant space has dimension
$0,\frac{1}{2}\dim V$, or $\dim V$. We have proved Theorem 3.8 (i).
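In the hypothetical toy model above, the rank dichotomy of this argument can be observed directly: a combination $a\Omega_{I}+b\Omega_{J}+c\Omega_{K}$ with $a^{2}+b^{2}+c^{2}=\det(aI+bJ+cK)=0$ has rank $\frac{1}{2}\dim V$, while a generic combination is non-degenerate.

```python
import numpy as np

I = np.array([[1j, 0], [0, -1j]])
J = np.array([[0, 1], [-1, 0]], dtype=complex)
K = I @ J
om = np.array([[0, 1], [-1, 0]], dtype=complex)
G = np.kron(om, om)
op = lambda A: np.kron(np.eye(2, dtype=complex), A)

# det(aI + bJ + cK) = a^2 + b^2 + c^2, so (a, b, c) = (1, i, 0) is degenerate:
Om_deg = G @ op(1 * I + 1j * J + 0 * K)
assert np.linalg.matrix_rank(Om_deg) == 2      # = (1/2) dim V

# while a combination off the quadric is non-degenerate:
Om_gen = G @ op(1 * I + 2 * J + 3 * K)         # 1 + 4 + 9 != 0
assert np.linalg.matrix_rank(Om_gen) == 4      # = dim V
```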
The proof of the second part of the Theorem is divided into several steps.
Step 1. Let $I\in\operatorname{Mat}(2)$ be such that
$I^{2}=-{\mathbf{1}}_{V}$. Consider the action
$\rho_{I}:\;U(1){\>\longrightarrow\>}\operatorname{End}(V)$ generated by
$t{\>\longrightarrow\>}\cos t{\mathbf{1}}_{V}+\sin t\rho(I).$ As shown in
Proposition 3.4 (B), ${\boldsymbol{\boldsymbol{\Omega}}}$ is an
$SL(2)$-subrepresentation of $\Lambda^{2}V$. This representation is by
construction irreducible. Since it is 3-dimensional, it is isomorphic to the
adjoint representation of $SL(2)$; let
$\phi:\;\mathfrak{sl}(2){\>\longrightarrow\>}{\boldsymbol{\boldsymbol{\Omega}}}$
be an isomorphism. Therefore, there exists a 2-form
$\Omega_{I}\in{\boldsymbol{\boldsymbol{\Omega}}}$ fixed by the action of
$\rho_{I}$, necessarily unique up to a constant multiplier. Write
$g_{I}(x,y):=-\Omega_{I}(x,Iy)$. Then
$g_{I}(y,x)=-\Omega_{I}(y,Ix)=\Omega_{I}(Ix,y)=\Omega_{I}(I^{2}x,Iy)=-\Omega_{I}(x,Iy)=g_{I}(x,y),$
hence $g_{I}$ is symmetric, i.e.
$g_{I}\in\operatorname{Sym}^{2}_{\mathbb{C}}(V)$.
Step 2. Now let $\\{I,J,K\\}$ be the quaternionic basis for
$\operatorname{Mat}(2)$. We prove that the symmetric tensor $g_{I}$
constructed in Step 1 is fixed by the subgroup $\\{\pm 1,\pm I,\pm J,\pm
K\\}\subset SL(2)\subset\operatorname{Mat}(2)$, for an appropriate choice of
$\Omega_{I}\in{\boldsymbol{\boldsymbol{\Omega}}}$.
Using the $SL(2)$-invariant isomorphism
$\phi:\;\mathfrak{sl}(2){\>\longrightarrow\>}{\boldsymbol{\boldsymbol{\Omega}}}$
constructed in Step 1, and the identification of $\mathfrak{sl}(2)$ with the
subspace of $\operatorname{Mat}(2)$ generated by $I$, $J$ and $K$, we fix a
choice of $\Omega_{I}$ by requiring that $\phi(I)=\Omega_{I}$. Then, $J$ and
$K$, considered as elements of $SL(2)$, act on $\Omega_{I}$ by $-1$:
$\Omega_{I}(J\cdot,J\cdot)=-\Omega_{I}(\cdot,\cdot),\ \
\Omega_{I}(K\cdot,K\cdot)=-\Omega_{I}(\cdot,\cdot).$
This gives
$g_{I}(J\cdot,J\cdot)=-\Omega_{I}(J\cdot,IJ\cdot)=\Omega_{I}(J\cdot,JI\cdot)=-\Omega_{I}(\cdot,I\cdot)=g_{I}(\cdot,\cdot).$
We have shown that $J$, considered as an element of $SL(2)$, fixes $g_{I}$.
The same argument applied to $K$ implies that $K$ also fixes $g_{I}$. We have
shown that $g_{I}$ is fixed by the Klein subgroup ${\mathfrak{K}}:=\\{\pm
1,\pm I,\pm J,\pm K\\}\subset SL(2)$.
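Steps 1 and 2 can again be confirmed in the hypothetical toy model: $g_{I}$ is symmetric, $J$ and $K$ send $\Omega_{I}$ to $-\Omega_{I}$, and both fix $g_{I}$.

```python
import numpy as np

I = np.array([[1j, 0], [0, -1j]])
J = np.array([[0, 1], [-1, 0]], dtype=complex)
K = I @ J
om = np.array([[0, 1], [-1, 0]], dtype=complex)
G = np.kron(om, om)
op = lambda A: np.kron(np.eye(2, dtype=complex), A)

OmI = G @ op(I)                    # Omega_I(x, y) = g(x, I y)
gI = -OmI @ op(I)                  # g_I(x, y) := -Omega_I(x, I y)

assert np.allclose(op(I).T @ OmI @ op(I), OmI)   # Omega_I is rho_I-invariant
assert np.allclose(gI, gI.T)                     # g_I is symmetric (Step 1)
for A in (J, K):
    assert np.allclose(op(A).T @ OmI @ op(A), -OmI)  # J, K act on Omega_I by -1
    assert np.allclose(op(A).T @ gI @ op(A), gI)     # ... but fix g_I (Step 2)
```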
Step 3. We prove that $g_{I}$ is $SL(2)$-invariant.
Consider $\operatorname{Sym}^{2}V$ as a representation of $SL(2)$. Since $V$
is a direct sum of weight 1 representations, the Clebsch-Gordan theorem implies
that $\operatorname{Sym}^{2}V$ is a sum of several weight 2 and trivial
representations. However, no element of a weight 2 representation can be
${\mathfrak{K}}$-invariant. Indeed, a weight 2 representation $W_{2}$ is
isomorphic to the adjoint representation, that is, the complex vector space
generated by the imaginary quaternions: $W_{2}:=\langle
I,J,K\rangle\subset\operatorname{Mat}(2)$. Clearly, no non-zero linear
combination of $I,J,K$ can be ${\mathfrak{K}}$-invariant. Since $g_{I}$ is
${\mathfrak{K}}$-invariant, this implies that $g_{I}$ lies in the
$SL(2)$-invariant part of $\operatorname{Sym}^{2}_{\mathbb{C}}V$.
Step 4. We prove that $g_{I}$ is proportional to $g_{I^{\prime}}$, for any
choice of quaternionic triple
$I^{\prime},J^{\prime},K^{\prime}\in\operatorname{Mat}(2)$. The ambiguity here
is due to the ambiguity in the choice of $\Omega_{I}$ in the centralizer of
$\rho_{I}$. The form $\Omega_{I}$ is defined up to a constant multiplier,
because this centralizer is $1$-dimensional.
The group $SL(2)$ acts transitively on the set of quaternionic triples.
Consider $h\in SL(2)$ which maps $I,J,K$ to
$I^{\prime},J^{\prime},K^{\prime}\in\operatorname{Mat}(2)$. Then $h(g_{I})$ is
proportional to $g_{I^{\prime}}$.
Step 5. To finish the proof of Theorem 3.8 (ii), it remains to show that the
$SL(2)$-invariant quadratic form $g$ defining the trisymplectic structure
${\boldsymbol{\boldsymbol{\Omega}}}$ is unique, up to a constant. Indeed, let
$g$ be such a form; then $g=\Omega(\cdot,I\cdot)$, for some
$\Omega\in{\boldsymbol{\boldsymbol{\Omega}}}$ and $I\in\operatorname{Mat}(2)$,
satisfying $I^{2}=-1$. Since $g$ is $SL(2)$-invariant, the form $\Omega$ is
$\rho_{I}$-invariant, hence $g$ is proportional to the form $g_{I}$
constructed above. We proved Theorem 3.8.
## 4 $SL(2)$-webs and trisymplectic structures
In this section we introduce the notion of a trisymplectic structure on
manifolds, study its reduction to quotients, and explain how it is related
to holomorphic $SL(2)$-webs.
The trisymplectic structures and trisymplectic reduction were previously
considered in the context of framed instanton bundles by Hauzer and Langer ([HL,
Sections 7.1 and 7.2]). However, their approach is significantly different
from ours, because they do not consider the associated $SL(2)$-web structures.
### 4.1 Trisymplectic structures on manifolds
###### Definition 4.1.
A trisymplectic structure on a complex manifold $M$ is a 3-dimensional space
${\boldsymbol{\boldsymbol{\Omega}}}\subset\Omega^{2}M$ of closed holomorphic
$2$-forms such that at any $x\in M$, the evaluation
${\boldsymbol{\boldsymbol{\Omega}}}(x)$ gives a trisymplectic structure on the
tangent space $T_{x}M$. A complex manifold equipped with a trisymplectic
structure is called a trisymplectic manifold.
Clearly, trisymplectic manifolds must have even complex dimension. Notice also
that Theorem 3.8 implies the equivalence between the Definition above and
Definition 1.1.
A similar notion is called a _hypersymplectic structure_ by Hauzer and Langer
in [HL, Definition 7.1]. A complex manifold $(X,g)$ equipped with a non-
degenerate holomorphic symmetric form is called hypersymplectic in [HL] if
there are three complex structures $I$, $J$ and $K$ satisfying quaternionic
relations and $g(Iv,Iw)=g(Jv,Jw)=g(Kv,Kw)=g(v,w)$. Clearly, one can then
define three nondegenerate symplectic forms $\omega_{1}(v,w)=g(Iv,w)$,
$\omega_{2}(v,w)=g(Jv,w)$ and $\omega_{3}(v,w)=g(Kv,w)$ which generate a
$3$-dimensional subspace of holomorphic $2$-forms
${\boldsymbol{\boldsymbol{\Omega}}}\subset\Omega^{2}M$. We require, in
addition, that every nonzero, degenerate linear combination of $\omega_{1}$,
$\omega_{2}$ and $\omega_{3}$ has rank $\dim X/2$.
We prefer, however, to use the term trisymplectic to avoid confusion with the
hypersymplectic structures known in differential geometry (see [AD, DS]),
where a hypersymplectic structure is a $3$-dimensional space $W$ of
differential $2$-forms on a real manifold which satisfy the same rank
assumptions as in Definition 4.1, and, in addition, contain a non-trivial
degenerate $2$-form (for complex-linear 2-forms, this last assumption is
automatic).
###### Definition 4.2.
Let $\eta$ be a $(p,0)$-form on a complex manifold $M$. Consider the set
$\operatorname{Null}_{\eta}$ of all (1,0)-vectors $v\in T^{1,0}M$ satisfying
$\eta\hskip 2.0pt\raisebox{1.0pt}{\text{$\lrcorner$}}\hskip 2.0ptv=0$, where
$\lrcorner$ denotes the contraction. Then $\operatorname{Null}_{\eta}$ is
called the null-space, or the annihilator, of $\eta$.
###### Lemma 4.3.
Let $\eta$ be a closed $(p,0)$-form for which $\operatorname{Null}_{\eta}$ is
a sub-bundle in $T^{1,0}(M)$. Then $\operatorname{Null}_{\eta}$ is holomorphic
and involutive, that is, satisfies
$[\operatorname{Null}_{\eta},\operatorname{Null}_{\eta}]\subset\operatorname{Null}_{\eta}.$
Proof: The form $\eta$ is closed and hence holomorphic, therefore
$\operatorname{Null}_{\eta}$ is a holomorphic bundle. To prove that
$\operatorname{Null}_{\eta}$ is involutive, we use Cartan's formula,
expressing the de Rham differential in terms of commutators and Lie derivatives.
Let $Y,Z\in\operatorname{Null}_{\eta}$ and
$X_{1},\ldots,X_{p-1}\in T^{1,0}(M)$. Since all contractions of $\eta$ with $Y$
and $Z$ vanish, Cartan's formula gives
$0=d\eta(Y,Z,X_{1},\ldots,X_{p-1})=-\eta([Y,Z],X_{1},\ldots,X_{p-1})$. This
implies that $[Y,Z]$ lies in $\operatorname{Null}_{\eta}$.
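A minimal worked example of Lemma 4.3 (an assumed example, not from the text): on $\mathbb{C}^{4}$ the 2-form $\eta=dz_{1}\wedge d(z_{2}+z_{3}z_{4})$ is closed, its null-space is the rank-2 sub-bundle spanned by $Y_{1}=\partial_{3}-z_{4}\partial_{2}$ and $Y_{2}=\partial_{4}-z_{3}\partial_{2}$, and this distribution is involutive.

```python
import sympy as sp

zs = sp.symbols('z1 z2 z3 z4')
z1, z2, z3, z4 = zs
Y1 = [0, -z4, 1, 0]            # components in the frame (d/dz1, ..., d/dz4)
Y2 = [0, -z3, 0, 1]

# Both fields annihilate the two factors dz1 and d(z2 + z3*z4) of eta,
# hence lie in Null_eta:
df1 = [1, 0, 0, 0]             # dz1
df2 = [0, 1, z4, z3]           # d(z2 + z3*z4)
for Y in (Y1, Y2):
    assert sp.simplify(sum(a * b for a, b in zip(df1, Y))) == 0
    assert sp.simplify(sum(a * b for a, b in zip(df2, Y))) == 0

# The bracket [Y1, Y2] stays in the null-space (here it vanishes identically),
# as Lemma 4.3 predicts:
bracket = [sum(Y1[j] * sp.diff(Y2[i], zs[j]) - Y2[j] * sp.diff(Y1[i], zs[j])
               for j in range(4)) for i in range(4)]
assert all(sp.simplify(c) == 0 for c in bracket)
```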
Lemma 4.3 can be used to construct holomorphic $SL(2)$-webs on manifolds, as
follows.
###### Theorem 4.4.
Let $M$ be a complex manifold, and
${\boldsymbol{\boldsymbol{\Omega}}}\subset\Lambda^{2,0}(M)$ a 3-dimensional
space of closed holomorphic (2,0)-forms. Assume that a generic form in
${\boldsymbol{\boldsymbol{\Omega}}}$ is non-degenerate, and for each
degenerate form $\Omega\in{\boldsymbol{\boldsymbol{\Omega}}}$, the null-space
$\operatorname{Null}_{\Omega}$ is a sub-bundle of $TM$ of dimension
$\frac{1}{2}\dim M$. Then there is a holomorphic $SL(2)$-web $(M,S_{t})$,
$t\in\mathbb{C}{\mathbb{P}^{1}}$ on $M$ such that each sub-bundle $S_{t}$ is a
null-space of a certain $\Omega_{t}\in{\boldsymbol{\boldsymbol{\Omega}}}$.
Proof: Theorem 4.4 follows immediately from Proposition 3.4 and Lemma 4.3.
Indeed, at any point $x\in M$, the 3-dimensional space
${\boldsymbol{\boldsymbol{\Omega}}}(x)\subset\Lambda^{2,0}(T_{x}M)$ satisfies
assumptions of Proposition 3.4, hence induces an action of the matrix algebra
$H\cong\operatorname{Mat}(2)$ on $T_{x}M$. Denote by
$Z\subset{\boldsymbol{\boldsymbol{\Omega}}}$ the set of degenerate forms. From
Proposition 3.4 (C) we obtain that the projectivization
${\mathbb{P}}Z\subset{\mathbb{P}}{\boldsymbol{\boldsymbol{\Omega}}}$ is a non-
singular quadric, isomorphic to $\mathbb{C}{\mathbb{P}^{1}}$. For each
$t\in{\mathbb{P}}Z$, the corresponding null-space $S_{t}\subset TM$ is a sub-
bundle of dimension $\frac{1}{2}\dim M$, and for distinct $t$, the bundles
$S_{t}$ are obviously transversal. Also, Lemma 4.3 implies that the bundles
$S_{t}$ are involutive. Finally, the projection operators associated to
$S_{t},S_{t}^{\prime}$ generate a subalgebra isomorphic to
$\operatorname{Mat}(2)$, as follows from Proposition 3.4. We have shown that
$S_{t},t\in{\mathbb{P}}Z\cong\mathbb{C}{\mathbb{P}^{1}}$ is indeed an
$SL(2)$-web.
In particular, every trisymplectic manifold carries an induced holomorphic
$SL(2)$-web.
###### Definition 4.5.
Let $(M,S_{t})$, $t\in\mathbb{C}{\mathbb{P}^{1}}$ be a complex manifold
equipped with a holomorphic $SL(2)$-web. Assume that there is a 3-dimensional
space ${\boldsymbol{\boldsymbol{\Omega}}}\subset\Lambda^{2,0}(M)$ of closed
holomorphic 2-forms such that for each $t\in\mathbb{C}{\mathbb{P}^{1}}$ there
exists $\Omega_{t}\in{\boldsymbol{\boldsymbol{\Omega}}}$ with
$S_{t}=\operatorname{Null}_{\Omega_{t}}$. Then
${\boldsymbol{\boldsymbol{\Omega}}}$ is called a trisymplectic structure
generating the $SL(2)$-web $S_{t},t\in\mathbb{C}{\mathbb{P}^{1}}$.
### 4.2 Chern connection on $SL(2)$-webs
and trisymplectic structures
The following theorem is proven in the same way as one proves that the Kähler
forms on a hyperkähler manifold are preserved by the Obata connection. Indeed,
a trisymplectic structure is a complexification of a hyperkähler structure,
and the Chern connection corresponds to a complexification of the Obata
connection on a hyperkähler manifold.
###### Theorem 4.6.
Let ${\boldsymbol{\boldsymbol{\Omega}}}$ be a trisymplectic structure
generating an $SL(2)$-web on a complex manifold $M$. Denote by $\nabla$ the
corresponding Chern connection. Then $\nabla\Omega=0$, for each
$\Omega\in{\boldsymbol{\boldsymbol{\Omega}}}$.
Proof: Let $(M,{\boldsymbol{\boldsymbol{\Omega}}})$ be a trisymplectic
manifold, and
$\rho:\;\mathfrak{sl}(2){\>\longrightarrow\>}\operatorname{End}(\Lambda^{*}M)$
the corresponding multiplicative action of $\mathfrak{sl}(2)$ associated to
${\mathfrak{g}}\cong\mathfrak{sl}(2)\subset\operatorname{End}(TM)$ constructed
in Proposition 3.4. By Proposition 3.4 (B),
${\boldsymbol{\boldsymbol{\Omega}}}$ is an irreducible
$\mathfrak{sl}(2)$-module. Choose a Cartan subalgebra in $\mathfrak{sl}(2)$,
and let $\Omega^{i}(M)=\bigoplus_{p+q=i}\Omega^{p,q}(M)$ be the multiplicative
weight decomposition associated with this Cartan subalgebra, with
$\Omega^{i}(M):=\Lambda^{i,0}(M)$. We write the corresponding weight
decomposition of ${\boldsymbol{\boldsymbol{\Omega}}}$ as
${\boldsymbol{\boldsymbol{\Omega}}}={\boldsymbol{\boldsymbol{\Omega}}}^{2,0}\oplus{\boldsymbol{\boldsymbol{\Omega}}}^{1,1}\oplus{\boldsymbol{\boldsymbol{\Omega}}}^{0,2}.$
Clearly
$\Omega^{i}(M)=\bigoplus_{p+q=i}\Omega^{p,0}(M)\otimes\Omega^{0,q}(M),$ (4.1)
since $\Omega^{p,0}(M)\otimes\Omega^{0,q}(M)=\Omega^{p,q}(M)$.
Consider the Chern connection as an operator
$\Omega^{i}(M)\stackrel{{\scriptstyle\nabla}}{{{\>\longrightarrow\>}}}\Omega^{i}(M)\otimes\Omega^{1}(M)$
(this makes sense, because $\nabla$ is a holomorphic connection), and let
$\Omega^{p,q}(M)\stackrel{{\scriptstyle\nabla^{1,0}}}{{{\>\longrightarrow\>}}}\Omega^{p,q}(M)\otimes\Omega^{1,0}(M),\
\
\Omega^{p,q}(M)\stackrel{{\scriptstyle\nabla^{0,1}}}{{{\>\longrightarrow\>}}}\Omega^{p,q}(M)\otimes\Omega^{0,1}(M)$
be its weight components. Since $\nabla$ is torsion-free, one has
$\partial\eta=\operatorname{Alt}(\nabla\eta),$ (4.2)
where $\partial$ is the holomorphic de Rham differential, and
$\operatorname{Alt}:\;\Omega^{i}(M)\otimes\Omega^{1}(M){\>\longrightarrow\>}\Omega^{i+1}(M)$
the exterior multiplication. Denote by $\Omega_{0,2}$, $\Omega_{2,0}$
generators of the 1-dimensional spaces
${\boldsymbol{\boldsymbol{\Omega}}}^{2,0},{\boldsymbol{\boldsymbol{\Omega}}}^{0,2}\subset{\boldsymbol{\boldsymbol{\Omega}}}$.
Since $\partial\Omega_{2,0}=0$, and the multiplication map
$\Omega^{0,1}(M)\otimes\Omega^{2,0}(M){\>\longrightarrow\>}\Omega^{3}(M)$ is
injective by (4.1), (4.2) implies that $\nabla^{0,1}(\Omega_{2,0})=0$.
Similarly, $\nabla^{1,0}(\Omega_{0,2})=0$. However, since
${\boldsymbol{\boldsymbol{\Omega}}}$ is irreducible as a representation of
$\mathfrak{sl}(2)$, there exists an expression of the form
$\Omega_{2,0}=g(\Omega_{0,2})$, where $g\in U_{\mathfrak{g}}$ is a polynomial
in ${\mathfrak{g}}$. Since the Chern connection $\nabla$ commutes with $g$,
this implies that
$0=g(\nabla^{1,0}\Omega_{0,2})=\nabla^{1,0}(g\Omega_{0,2})=\nabla^{1,0}\Omega_{2,0}.$
We have proved that both weight components of $\nabla\Omega_{2,0}$ vanish,
thus $\nabla\Omega_{2,0}=0$. Acting on $\Omega_{2,0}$ by $\mathfrak{sl}(2)$
again, we obtain that $\nabla\Omega=0$ for all
$\Omega\in{\boldsymbol{\boldsymbol{\Omega}}}$.
### 4.3 Trisymplectic reduction
###### Definition 4.7.
Let $G$ be a compact Lie group acting on a complex manifold equipped with a
trisymplectic structure ${\boldsymbol{\boldsymbol{\Omega}}}$ generating an
$SL(2)$-web. Assume that $G$ preserves ${\boldsymbol{\boldsymbol{\Omega}}}$. A
trisymplectic moment map
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}:\;M{\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes_{\mathbb{R}}{\boldsymbol{\boldsymbol{\Omega}}}^{*}$
takes vectors
$\Omega\in{\boldsymbol{\boldsymbol{\Omega}}},g\in{\mathfrak{g}}=\operatorname{Lie}(G)$
and maps them to a holomorphic function $f\in{\cal O}_{M}$, such that
$df=\Omega\hskip 2.0pt\raisebox{1.0pt}{\text{$\lrcorner$}}\hskip 2.0ptg$,
where $\Omega\hskip 2.0pt\raisebox{1.0pt}{\text{$\lrcorner$}}\hskip 2.0ptg$
denotes the contraction of $\Omega$ and the vector field $g$. A moment map is
called equivariant if it is equivariant with respect to the coadjoint action of
$G$ on ${\mathfrak{g}}^{*}$. Further on, we shall always assume that all moment
maps we consider are equivariant.
Since $d\Omega=0$, and $\operatorname{Lie}_{g}\Omega=0$, Cartan’s formula
gives $0=\operatorname{Lie}_{g}(\Omega)=d(\Omega\hskip
2.0pt\raisebox{1.0pt}{\text{$\lrcorner$}}\hskip 2.0ptg)$, hence the
contraction $\Omega\hskip 2.0pt\raisebox{1.0pt}{\text{$\lrcorner$}}\hskip
2.0ptg$ is closed. Therefore, the existence of the moment map is equivalent to
the exactness of this closed 1-form for each
$\Omega\in{\boldsymbol{\boldsymbol{\Omega}}},g\in{\mathfrak{g}}=\operatorname{Lie}(G)$.
In particular, the existence of a moment map is assured whenever $M$ is simply
connected. The existence of an equivariant moment map is less immediate, and
depends on certain cohomological properties of $G$ (see e.g. [HKLR]).
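A toy illustration of the defining equation $df=\Omega\hskip 2.0pt\raisebox{1.0pt}{\text{$\lrcorner$}}\hskip 2.0ptg$ for a single symplectic form (an assumed example, not from the text): on $M=\mathbb{C}^{2}$ with $\Omega=dz\wedge dw$ and the action $t\cdot(z,w)=(tz,t^{-1}w)$, the generating vector field is $X=z\,\partial_{z}-w\,\partial_{w}$, and $f=zw$ is a moment function.

```python
import sympy as sp

z, w = sp.symbols('z w')
Xz, Xw = z, -w                 # components of the generating field X

# Omega _| X = Xz dw - Xw dz, i.e. coefficients (dz, dw) = (-Xw, Xz) = (w, z)
contraction = (-Xw, Xz)

f = z * w                      # candidate moment function: df = w dz + z dw
assert (sp.diff(f, z), sp.diff(f, w)) == contraction
```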
###### Definition 4.8.
Let $(M,{\boldsymbol{\boldsymbol{\Omega}}},S_{t})$ be a trisymplectic
structure on a complex manifold $M$. Assume that $M$ is equipped with an
action of a compact Lie group $G$ preserving
${\boldsymbol{\boldsymbol{\Omega}}}$, and an equivariant trisymplectic moment
map
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}:\;M{\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes_{\mathbb{R}}{\boldsymbol{\boldsymbol{\Omega}}}^{*}.$
Consider a $G$-invariant vector
$c\in{\mathfrak{g}}^{*}\otimes_{\mathbb{R}}{\boldsymbol{\boldsymbol{\Omega}}}^{*}$
(usually, one sets $c=0$), and let
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}^{-1}(c)$ be the corresponding
level set of the moment map. Consider the action of the corresponding complex
Lie group $G_{\mathbb{C}}$ on
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}^{-1}(c)$. Assume that it is
proper and free. Then the quotient
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}^{-1}(c)/G_{\mathbb{C}}$ is a
smooth manifold called the trisymplectic quotient of
$(M,{\boldsymbol{\boldsymbol{\Omega}}},S_{t})$, denoted by
$M{/\\!\\!/\\!\\!/\\!\\!/}G$.
As we shall see, the trisymplectic quotient is related to the usual
hyperkähler quotient in the same way as the hyperkähler quotient (denoted by
${/\\!\\!/\\!\\!/}$) is related to the symplectic quotient, denoted by
${/\\!\\!/}$. In heuristic terms, the hyperkähler quotient can be considered
as a “complexification” of a symplectic quotient; similarly, the trisymplectic
quotient is a “complexification” of a hyperkähler quotient.
The non-degeneracy condition of Theorem 4.9 below is necessary for the
trisymplectic reduction process, in the same way as some non-degeneracy is
needed if one tries to perform symplectic reduction on a pseudo-Kähler
manifold. On a Kähler (or a hyperkähler) manifold it is automatic because the
metric is positive definite, but otherwise it is easy to obtain
counterexamples (even in the simplest cases, such as $S^{1}$-action on
$\mathbb{C}^{2}$ with an appropriate pseudo-Kähler metric).
###### Theorem 4.9.
Let $(M,{\boldsymbol{\boldsymbol{\Omega}}})$ be a trisymplectic manifold.
Assume that $M$ is equipped with an action of a compact Lie group $G$
preserving ${\boldsymbol{\boldsymbol{\Omega}}}$ and a trisymplectic moment map
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}:\;M{\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes_{\mathbb{R}}{\boldsymbol{\boldsymbol{\Omega}}}^{*}$.
Assume, moreover, that the image of ${\mathfrak{g}}=\operatorname{Lie}(G)$ in
$TM$ is non-degenerate at any point (in the sense of Definition 3.6). Then the
trisymplectic quotient
$M{/\\!\\!/\\!\\!/\\!\\!/}G:={\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}^{-1}(0)/G_{\mathbb{C}}$
is naturally equipped with a trisymplectic structure.111In the statement of
Theorem 4.9 we implicitly assume that the quotient
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}^{-1}(c)/G_{\mathbb{C}}$ is well-
defined, which is not always the case.
For a real version of this theorem, please see [DS].
The proof of Theorem 4.9 takes the rest of this section. First, we shall use
the following easy definition and an observation.
###### Definition 4.10.
Let $B\subset TM$ be an involutive sub-bundle in a tangent bundle to a smooth
manifold $M$. A form $\eta\in\Lambda^{i}M$ is called basic with respect to $B$
if for any $X\in B$, one has $\eta\hskip
2.0pt\raisebox{1.0pt}{\text{$\lrcorner$}}\hskip 2.0ptX=0$ and
$\operatorname{Lie}_{X}\eta=0$.
The following claim is clear.
###### Claim 4.11.
Let $B\subset TM$ be an involutive sub-bundle in a tangent bundle to a smooth
manifold $M$. Consider the projection
$M\stackrel{{\scriptstyle\pi}}{{{\>\longrightarrow\>}}}M^{\prime}$ onto its
leaf space, which is assumed to be Hausdorff. Let $\eta\in\Lambda^{i}M$ be a
basic form on $M$. Then $\eta=\pi^{*}\eta^{\prime}$, for an appropriate form
$\eta^{\prime}$ on $M^{\prime}$.
Return to the proof of Theorem 4.9. Let $I,J,K$ be a quaternionic basis in
$\operatorname{Mat}(2)$, $\Omega_{I}\in{\boldsymbol{\boldsymbol{\Omega}}}$ a
$\rho_{I}$-invariant form chosen as in Theorem 3.8 (ii), and
$g:=\Omega_{I}(\cdot,I\cdot)$ the corresponding non-degenerate, complex linear
symmetric form on $M$. By its construction, $g$ is holomorphic, and by Theorem
3.8 (ii), $SL(2)$-invariant. Let
$N:={\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}^{-1}(c)\subset M$ be a level
set of the moment map. Choose a point
$m\in N$, and let ${\mathfrak{g}}_{m}\subset T_{m}M$ be the image of
${\mathfrak{g}}=\operatorname{Lie}G$ in $T_{m}M$. Then, for each
$v\in{\mathfrak{g}}_{m}$, one has
$d\mu_{I}(v,\cdot)=\Omega_{I}(v,\cdot),$ (4.3)
where $\mu_{I}:M\to{\mathfrak{g}}^{*}$ is the holomorphic moment map
associated with the symplectic form $\Omega_{I}$.
On the other hand, $\Omega_{I}(v,\cdot)=-g(Iv,\cdot)$. Therefore,
$T_{m}N\subset T_{m}M$ is an orthogonal complement (with respect to $g$) to
the space $\langle
I{\mathfrak{g}}_{m},J{\mathfrak{g}}_{m},K{\mathfrak{g}}_{m}\rangle$ generated
by $I({\mathfrak{g}}_{m}),J({\mathfrak{g}}_{m}),K({\mathfrak{g}}_{m})$:
$T_{m}N=\langle
I{\mathfrak{g}}_{m},J{\mathfrak{g}}_{m},K{\mathfrak{g}}_{m}\rangle^{\bot}_{g}.$
(4.4)
By (4.3), for any $v\in{\mathfrak{g}}_{m}$, and $w\in T_{m}N$, one has
$\Omega_{I}(v,w)=0$. Also, $G$ preserves all forms from
${\boldsymbol{\boldsymbol{\Omega}}}$, hence
$\operatorname{Lie}_{v}\Omega_{I}=0$. Therefore, $\Omega_{I}$ is basic with
respect to the distribution $V\subset TN$ generated by the image of the Lie
algebra map ${\mathfrak{g}}{\>\longrightarrow\>}TN$.
Consider the quotient map
$N\stackrel{{\scriptstyle\pi}}{{{\>\longrightarrow\>}}}N/G_{\mathbb{C}}=M^{\prime}$.
To prove that $M^{\prime}$ is a trisymplectic manifold, we use Claim 4.11,
obtaining a 3-dimensional space of holomorphic 2-forms
${\boldsymbol{\boldsymbol{\Omega}}}^{\prime}\subset\Lambda^{2,0}(M^{\prime})$,
with
${\boldsymbol{\boldsymbol{\Omega}}}{\left|{}_{{\phantom{|}\\!\\!}_{N}}\right.}=\pi^{*}{\boldsymbol{\boldsymbol{\Omega}}}^{\prime}$.
To check that ${\boldsymbol{\boldsymbol{\Omega}}}^{\prime}$ is a trisymplectic
structure, it remains only to establish the rank conditions.
Let $W\subset T_{m}N$ be a subspace complementary to
${\mathfrak{g}}_{m}\subset T_{m}N$. Clearly, for any
$\Omega\in{\boldsymbol{\boldsymbol{\Omega}}}$, the rank of the corresponding
form $\Omega^{\prime}\in{\boldsymbol{\boldsymbol{\Omega}}}^{\prime}$ at the
point $m^{\prime}=\pi(m)$ is equal to the rank of
$\Omega{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.}$.
Let $W_{1}\subset T_{m}M$ be a subspace obtained as
$H\cdot{\mathfrak{g}}_{m}$, where
$H\cong\operatorname{Mat}(2)\subset\operatorname{End}(T_{m}M)$ is the standard
action of the matrix algebra defined as in Subsection 3.1. By the non-
degeneracy assumption of Theorem 4.9, the restriction
$g{\left|{}_{{\phantom{|}\\!\\!}_{W_{1}}}\right.}$ is non-degenerate, hence
the orthogonal complement $W_{1}^{\bot}$ satisfies $T_{m}M=W_{1}\oplus
W_{1}^{\bot}$. From (4.4) we obtain $W_{1}^{\bot}\subset T_{m}N$, with
$W_{1}^{\bot}\oplus{\mathfrak{g}}_{m}=T_{m}N$. Therefore, $W:=W_{1}^{\bot}$ is
complementary to ${\mathfrak{g}}_{m}$ in $T_{m}N$. The space
$(W,{\boldsymbol{\boldsymbol{\Omega}}}{\left|{}_{{\phantom{|}\\!\\!}_{W}}\right.})$
is trisymplectic, as follows from Claim 3.5. Therefore, the forms
${\boldsymbol{\boldsymbol{\Omega}}}^{\prime}\subset\Lambda^{2,0}(M^{\prime})$
define a trisymplectic structure on $M^{\prime}$. We have proved Theorem 4.9.
## 5 Trihyperkähler reduction
### 5.1 Hyperkähler reduction
Let us start by recalling some well-known definitions.
###### Definition 5.1.
Let $G$ be a compact Lie group acting on a hyperkähler manifold $M$ by
hyperkähler isometries. A hyperkähler moment map is a smooth map
$\mu:M\to{\mathfrak{g}}\otimes\mathbb{R}^{3}$ such that:
(1)
$\mu$ is $G$-equivariant, i.e. $\mu(g\cdot m)={\rm Ad}_{g^{-1}}^{*}\mu(m)$;
(2)
$\langle d\mu_{i}(v),\xi\rangle=\omega_{i}(\xi^{*},v)$, for every $v\in TM$,
$\xi\in{\mathfrak{g}}$ and $i=1,2,3$, where $\mu_{i}$ denotes one of the $3$
components of $\mu$, $\omega_{i}$ is one of the Kähler forms associated with
the hyperkähler structure, and $\xi^{*}$ is the vector field generated by
$\xi$.
###### Definition 5.2.
Let $\xi_{i}\in{\mathfrak{g}}^{*}$ ($i=1,2,3$) be such that ${\rm
Ad}_{g}^{*}\xi_{i}=\xi_{i}$, so that $G$ acts on
$\mu^{-1}(\xi_{1},\xi_{2},\xi_{3})$; suppose that this action is free. The
quotient manifold $M{/\\!\\!/\\!\\!/}G:=\mu^{-1}(\xi_{1},\xi_{2},\xi_{3})/G$
is called the hyperkähler quotient of $M$.
###### Theorem 5.3.
Let $M$ be a hyperkähler manifold, and $G$ a compact Lie group acting on $M$
by hyperkähler automorphisms, and admitting a hyperkähler moment map. Then the
hyperkähler quotient $M{/\\!\\!/\\!\\!/}G$ is equipped with a natural
hyperkähler structure.
Proof: See [HKLR], [Nak, Theorem 3.35].
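For orientation, here is a classical example, not taken from this paper: the hyperkähler quotient of flat $\mathbb{H}^{n}$ by the diagonal circle action. The formulas below are standard facts about this action.

```latex
% Flat M = H^n = C^{2n}, coordinates (z_i, w_i), with the diagonal
% U(1)-action (z_i, w_i) -> (e^{it} z_i, e^{-it} w_i).
% Hyperkahler moment map, split into real and holomorphic parts:
\mu_{\mathbb{R}}(z,w)=\frac{1}{2}\sum_{i=1}^{n}\bigl(|z_{i}|^{2}-|w_{i}|^{2}\bigr),
\qquad
\mu_{\mathbb{C}}(z,w)=\sum_{i=1}^{n}z_{i}w_{i}.
% At a level (c,0,0) with c>0 the action on the level set is free, and
\mathbb{H}^{n}{/\!\!/\!\!/}U(1)=\mu^{-1}(c,0,0)/U(1)
\;\cong\;T^{*}\mathbb{C}\mathbb{P}^{n-1},
% carrying the Calabi hyperkahler metric.
```

This is precisely the situation covered by Theorem 5.11 below: $M=\mathbb{H}^{n}$ is flat, $G=U(1)$ is compact, and the quotient is smooth.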
### 5.2 Trisymplectic reduction on the space of twistor sections
Let $M$ be a hyperkähler manifold, $L\in\mathbb{C}{\mathbb{P}^{1}}$ an induced
complex structure and $\operatorname{\sf
ev}_{L}:\;{\operatorname{Sec}}(M){\>\longrightarrow\>}(M,L)$ the corresponding
evaluation map, mapping a section
$s:\;\mathbb{C}{\mathbb{P}^{1}}{\>\longrightarrow\>}\operatorname{Tw}(M)$ to
$s(L)\in(M,L)\subset\operatorname{Tw}(M)$. Consider the holomorphic form
$\Omega_{L}\in\Lambda^{2,0}(M,L)$ constructed from a hyperkähler structure as
in (2.2). Denote by ${\boldsymbol{\boldsymbol{\Omega}}}$ the space of
holomorphic forms on ${\operatorname{Sec}}(M)$ generated by $\operatorname{\sf
ev}_{L}^{*}(\Omega_{L})$ for all $L\in\mathbb{C}{\mathbb{P}^{1}}$.
###### Claim 5.4.
${\boldsymbol{\boldsymbol{\Omega}}}$ is a trisymplectic structure on the space
${\operatorname{Sec}}_{0}(M)$ of regular sections, generating the standard
$SL(2)$-web.
Proof: Consider the bundle ${\cal O}(2)$ on $\mathbb{C}{\mathbb{P}^{1}}$, and
let $\pi^{*}{\cal O}(2)$ be its lift to the twistor space
$\operatorname{Tw}(M)\stackrel{{\scriptstyle\pi}}{{{\>\longrightarrow\>}}}\mathbb{C}{\mathbb{P}^{1}}$.
Denote by $\Omega_{\pi}^{2}\operatorname{Tw}(M)$ the sheaf of fiberwise
2-forms on $\operatorname{Tw}(M)$. The bundle
$\Omega_{\pi}^{2}\operatorname{Tw}(M)$ can be obtained as a quotient
$\Omega_{\pi}^{2}\operatorname{Tw}(M):=\frac{\Omega^{2}\operatorname{Tw}(M)}{\pi^{*}\Omega^{1}\mathbb{C}{\mathbb{P}^{1}}\wedge\Omega^{1}\operatorname{Tw}(M)}.$
It is well known (see e.g. [HKLR]) that the fiberwise symplectic structure
depends on $t\in\mathbb{C}{\mathbb{P}^{1}}$ holomorphically, and, moreover,
$\operatorname{Tw}(M)$ is equipped with a holomorphic 2-form
$\Omega_{tw}\in\pi^{*}{\cal O}(2)\otimes\Omega_{\pi}^{2}\operatorname{Tw}(M)$
inducing the usual holomorphic symplectic forms on the fibers, see [HKLR,
Theorem 3.3(iii)].
Given $S\in{\operatorname{Sec}}(M)$, the tangent space
$T_{S}{\operatorname{Sec}}(M)$ is identified with the space of global sections
of the bundle
$T_{\pi}\operatorname{Tw}(M){\left|{}_{{\phantom{|}\\!\\!}_{S}}\right.}$.
Therefore, any vertical 2-form
$\Omega_{1}\in\Omega_{\pi}^{2}\operatorname{Tw}(M)\otimes\pi^{*}{\cal O}(i)$
defines a holomorphic 2-form on ${\operatorname{Sec}}_{0}(M)$ with values in
the space of global sections $\Gamma(\mathbb{C}{\mathbb{P}^{1}},{\cal O}(i))$.
Denote by $A$ the space $\Gamma(\mathbb{C}{\mathbb{P}^{1}},{\cal O}(2))$. A
fiberwise holomorphic ${\cal O}(2)$-valued 2-form gives a 2-form on $NS$, for
each $S\in{\operatorname{Sec}}(M)$, with values in $A$. Therefore, for each
$\alpha\in A^{*}$, one obtains a 2-form $\Omega_{tw}(\alpha)$ on
${\operatorname{Sec}}(M)$ as explained above. Let
${\boldsymbol{\boldsymbol{\Omega}}}$ be a 3-dimensional space generated by
$\Omega_{tw}(\alpha)$ for all $\alpha\in A^{*}$.
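The dimension count behind this construction is the standard one for ${\cal O}(2)$ on $\mathbb{C}{\mathbb{P}^{1}}$: in homogeneous coordinates $[z:w]$,

```latex
A=\Gamma(\mathbb{C}{\mathbb{P}^{1}},{\cal O}(2))
 =\langle z^{2},\,zw,\,w^{2}\rangle,
\qquad
\dim_{\mathbb{C}}A=\dim_{\mathbb{C}}A^{*}=3,
```

so the space generated by the forms $\Omega_{tw}(\alpha)$, $\alpha\in A^{*}$, is indeed (at most) 3-dimensional.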
Consider a map $\epsilon_{L}:\;A{\>\longrightarrow\>}{\cal
O}(2){\left|{}_{{\phantom{|}\\!\\!}_{L}}\right.}\cong\mathbb{C}$ evaluating
$\gamma\in\Gamma({\cal O}(2))$ at a point $L\in\mathbb{C}{\mathbb{P}^{1}}$. By
definition, the 2-form $\Omega_{tw}(\epsilon_{L})$ is proportional to
$\operatorname{\sf ev}^{*}_{L}\Omega_{L}$. Therefore,
${\boldsymbol{\boldsymbol{\Omega}}}$ contains $\operatorname{\sf
ev}^{*}_{L}\Omega_{L}$ for all $L\in\mathbb{C}{\mathbb{P}^{1}}$.
Counting parameters, we obtain that any element $x\in A^{*}$ is a sum of two
evaluation maps: $x=a\epsilon_{L_{1}}+b\epsilon_{L_{2}}$. When $a,b\neq 0$ and
$L_{1},L_{2}$ are distinct, the corresponding 2-form $a\operatorname{\sf
ev}^{*}_{L_{1}}\Omega_{L_{1}}+b\operatorname{\sf
ev}^{*}_{L_{2}}\Omega_{L_{2}}$ is clearly non-degenerate on
${\operatorname{Sec}}_{0}(M)$. Indeed, the map
$\operatorname{\sf ev}_{L_{1}}\times\operatorname{\sf ev}_{L_{2}}:\;{\operatorname{Sec}}_{0}(M){\>\longrightarrow\>}(M,L_{1})\times(M,L_{2})$
is étale, and any linear combination $a\Omega_{L_{1}}+b\Omega_{L_{2}}$ with
non-zero $a,b$ is non-degenerate on $(M,L_{1})\times(M,L_{2})$. When $a$ or
$b$ vanishes, the corresponding form (if non-zero) is proportional to
$\operatorname{\sf ev}^{*}_{L_{i}}\Omega_{L_{i}}$, hence its rank is $\dim
M=\frac{1}{2}\dim{\operatorname{Sec}}(M)$.
We have shown that ${\boldsymbol{\boldsymbol{\Omega}}}$ is a trisymplectic
structure. Clearly, the annihilators of $\operatorname{\sf
ev}_{L}^{*}(\Omega_{L})$ form the standard 3-web on ${\operatorname{Sec}}(M)$.
Therefore, the trisymplectic structure ${\boldsymbol{\boldsymbol{\Omega}}}$
generates the standard $SL(2)$-web, described in Section 2.3 above.
Now let $G$ be a compact Lie group acting on $M$ by hyperkähler isometries;
assume that the hyperkähler moment map for the action of $G$ on $M$ exists.
Let ${\operatorname{Sec}}_{0}(M)$ be the space of regular twistor sections,
considered with the induced $SL(2)$-web and trisymplectic structure. The
previous Claim immediately implies the following Proposition.
###### Proposition 5.5.
Given any $L\in\mathbb{C}{\mathbb{P}^{1}}$, let
$\mu_{L}:\;(M,L){\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes_{\mathbb{R}}\mathbb{C}$
denote the corresponding holomorphic moment map, and consider the composition
${\boldsymbol{\boldsymbol{\mu}}}_{L}:=\mu_{L}\circ\operatorname{\sf
ev}_{L}:\;{\operatorname{Sec}}(M){\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes_{\mathbb{R}}\mathbb{C}.$
Then
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}:={\boldsymbol{\boldsymbol{\mu}}}_{I}\oplus{\boldsymbol{\boldsymbol{\mu}}}_{J}\oplus{\boldsymbol{\boldsymbol{\mu}}}_{K}:\;{\operatorname{Sec}}_{0}(M){\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes_{\mathbb{R}}\mathbb{C}^{3}$
is a trisymplectic moment map on ${\operatorname{Sec}}_{0}(M)$, for an
appropriate identification
$\mathbb{C}^{3}\cong{\boldsymbol{\boldsymbol{\Omega}}}$.
Proof: Clearly, ${\boldsymbol{\boldsymbol{\mu}}}_{L}$ is a moment map for the
action of $G$ on ${\operatorname{Sec}}_{0}(M)$ associated with a degenerate
holomorphic 2-form $\operatorname{\sf ev}_{L}^{*}(\Omega_{L})$. Indeed, for
any $g\in{\mathfrak{g}}=\operatorname{Lie}(G)$, one has
$d\mu_{L}(g)=\Omega_{L}\,\lrcorner\,g$, because $\mu_{L}$ is a moment map for
$G$ acting on $(M,L)$. Then
$d{\boldsymbol{\boldsymbol{\mu}}}_{L}(g)=(\operatorname{\sf
ev}_{L}^{*}\Omega_{L})\,\lrcorner\,g$.
However, by Claim 5.4, ${\boldsymbol{\boldsymbol{\Omega}}}=\operatorname{\sf
ev}_{I}^{*}\Omega_{I}\oplus\operatorname{\sf
ev}_{J}^{*}\Omega_{J}\oplus\operatorname{\sf ev}_{K}^{*}\Omega_{K}$, hence the
moment map ${\boldsymbol{\boldsymbol{\mu}}}$ associated with
${\boldsymbol{\boldsymbol{\Omega}}}$ is expressed as an appropriate linear
combination of
${\boldsymbol{\boldsymbol{\mu}}}_{I},{\boldsymbol{\boldsymbol{\mu}}}_{J},{\boldsymbol{\boldsymbol{\mu}}}_{K}$.
### 5.3 Trihyperkähler reduction on the space of twistor sections
Let $\operatorname{Tw}(M)=M\times\mathbb{C}{\mathbb{P}^{1}}$ be the twistor
space of the hyperkähler manifold $M$, considered as a Riemannian manifold
with its product metric. We normalize the Fubini-Study metric on the second
component of $\operatorname{Tw}(M)=M\times\mathbb{C}{\mathbb{P}^{1}}$ in such
a way that the total Riemannian volume
$\int_{\mathbb{C}{\mathbb{P}^{1}}}\operatorname{Vol}_{\mathbb{C}{\mathbb{P}^{1}}}$
equals $1$.
###### Claim 5.6.
Let $\phi$ be the area function
${\operatorname{Sec}}(M)\stackrel{{\scriptstyle\phi}}{{{\>\longrightarrow\>}}}\mathbb{R}^{>0}$
mapping a curve $S\in{\operatorname{Sec}}(M)$ to its Riemannian volume
$\int_{S}\operatorname{Vol}_{S}$. Then $\phi$ is a Kähler potential, that is,
$dd^{c}\phi$ is a Kähler form on ${\operatorname{Sec}}(M)$, where $d^{c}$ is
the usual twisted differential, $d^{c}:=-IdI$.
Proof: See [KV, Proposition 8.15].
Claim 5.6 leads to the following Proposition.
###### Proposition 5.7.
Assume that $G$ is a compact Lie group acting on $M$ by hyperkähler
automorphisms, and admitting a hyperkähler moment map. Consider the
corresponding action of $G$ on ${\operatorname{Sec}}_{0}(M)$, and let
$\omega_{\operatorname{Sec}}=dd^{c}\phi$ be the Kähler form on
${\operatorname{Sec}}_{0}(M)$ constructed in Claim 5.6. Then the corresponding
moment map can be written as
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}(x):=\operatorname{Av}\limits_{L\in\mathbb{C}{\mathbb{P}^{1}}}\mu^{\mathbb{R}}_{L}(x),$
where $\operatorname{Av}\limits_{L\in\mathbb{C}{\mathbb{P}^{1}}}$ denotes the
operation of taking average over $\mathbb{C}{\mathbb{P}^{1}}$, and
$\mu^{\mathbb{R}}_{L}:\;(M,L){\>\longrightarrow\>}{\mathfrak{g}}^{*}$ is the
Kähler moment map associated with an action of $G$ on $(M,L)$.
Proof: Let $(X,I,\omega)$ be a Kähler manifold, $\phi$ a Kähler potential on
$X$, and $G$ a real Lie group preserving $\phi$ and acting on $X$
holomorphically. Then an equivariant moment map can be written as
$\mu(g)=-\operatorname{Lie}_{I(g)}\phi,$ (5.1)
where $g\in\operatorname{Lie}(G)$ is an element of the Lie algebra. Indeed,
$\omega=dd^{c}\phi$, hence
$\operatorname{Lie}_{I(g)}\phi=d\phi\,\lrcorner\,(I(g))=(d^{c}\phi)\,\lrcorner\,g,$
where $\lrcorner$ denotes a contraction of a differential form with a vector
field, and
$d\operatorname{Lie}_{I(g)}\phi=d((d^{c}\phi)\,\lrcorner\,g)=\operatorname{Lie}_{g}(d^{c}\phi)-(dd^{c}\phi)\,\lrcorner\,g=-\omega\,\lrcorner\,g$
by Cartan’s formula. Applying this argument to $X={\operatorname{Sec}}(M)$ and
$\phi={\operatorname{Area}}(S)$, we obtain that
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}(S)(g)$ is a Lie derivative of
$\phi$ along $I(g)$.
To prove that ${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}(g)$ is equal to an
average of the moment maps $\mu_{L}^{\mathbb{R}}(g)$, we notice that (as
follows from [KV], (8.12) and Lemma 4.4), for any fiberwise tangent vectors
$x,y\in T_{\pi}\operatorname{Tw}(M)$, one has
$dd^{c}\phi(x,Iy)=\int_{S}(x,y)_{H}\operatorname{Vol}_{\mathbb{C}{\mathbb{P}^{1}}},$
where $\operatorname{Vol}_{\mathbb{C}{\mathbb{P}^{1}}}$ is the appropriately
normalized volume form, and $(\cdot,\cdot)_{H}$ the standard Riemannian metric
on $\operatorname{Tw}(M)=M\times S^{2}$. Taking $g=y$, we obtain
$d({\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}g)(x)=\int_{S}(x,g)_{H}\operatorname{Vol}_{\mathbb{C}{\mathbb{P}^{1}}}=\int_{S}d(\mu_{L}^{\mathbb{R}}g)(x)\operatorname{Vol}_{\mathbb{C}{\mathbb{P}^{1}}}.$
The last formula is a derivative of an average of $d\mu^{\mathbb{R}}_{L}(g)$
over $L\in\mathbb{C}{\mathbb{P}^{1}}$.
From Proposition 5.7 it is apparent that a trisymplectic quotient of the
space ${\operatorname{Sec}}_{0}(M)$ can be obtained using the symplectic
reduction associated with the real moment map
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}$. This procedure is called the
trihyperkähler reduction of ${\operatorname{Sec}}_{0}(M)$.
###### Definition 5.8.
The map
${\boldsymbol{\boldsymbol{\mu}}}:={\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}\oplus{\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}:\;{\operatorname{Sec}}_{0}(M){\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes\mathbb{R}^{7}$,
where ${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}$ is the trisymplectic
moment map constructed in Proposition 5.5 and
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}$ is the Kähler moment map
constructed in Proposition 5.7, is called the trihyperkähler moment map on
${\operatorname{Sec}}_{0}(M)$.
###### Definition 5.9.
Let $c\in{\mathfrak{g}}^{*}\otimes\mathbb{R}^{7}$ be a $G$-invariant vector.
Consider the space ${\operatorname{Sec}}_{0}(M)$ of the regular twistor
sections. Then the quotient
${\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G:={\boldsymbol{\boldsymbol{\mu}}}^{-1}(c)/G$
is called the trihyperkähler reduction of ${\operatorname{Sec}}_{0}(M)$.
###### Remark 5.10.
Note that the trihyperkähler reduction
${\boldsymbol{\boldsymbol{\mu}}}^{-1}(c)/G$ of ${\operatorname{Sec}}_{0}(M)$
coincides with the trisymplectic quotient
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}^{-1}(c)/G_{\mathbb{C}}$,
provided this last quotient is well-defined, i.e. all $G_{\mathbb{C}}$-orbits
within ${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}^{-1}(c)$ are GIT-stable
with respect to a suitable linearization of the action. Indeed,
$(\mu_{\mathbb{C}}\oplus\mu_{\mathbb{R}})^{-1}(c)/G$ is precisely the space of
stable $G_{\mathbb{C}}$-orbits in $\mu_{\mathbb{C}}^{-1}(c)$.
It follows from Theorem 4.9 that
${\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G$ is equipped with an
$SL(2)$-web generated by a natural trisymplectic structure
${\boldsymbol{\boldsymbol{\Omega}}}$, provided that the image of
${\mathfrak{g}}=\operatorname{Lie}(G)$ in $TM$ is non-degenerate at any point,
in the sense of Definition 3.6.
We are finally ready to state the main result of this paper.
###### Theorem 5.11.
Let $M$ be a flat hyperkähler manifold, and $G$ a compact Lie group acting on
$M$ by hyperkähler automorphisms. Suppose that the hyperkähler moment map
exists, and the hyperkähler quotient $M{/\\!\\!/\\!\\!/}G$ is smooth. Then
there exists an open embedding
${\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G\stackrel{{\scriptstyle\Psi}}{{{\>\longrightarrow\>}}}{\operatorname{Sec}}_{0}(M{/\\!\\!/\\!\\!/}G)$,
which is compatible with the trisymplectic structures on
${\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G$ and
${\operatorname{Sec}}_{0}(M{/\\!\\!/\\!\\!/}G)$.
In particular, it follows that if $M$ is a flat hyperkähler manifold, then the
trihyperkähler reduction of ${\operatorname{Sec}}_{0}(M)$ is a smooth
trisymplectic manifold whose dimension is twice that of the hyperkähler
quotient of $M$.
The flatness condition is mostly a technical one, but it will suffice for our
main goal, which is a description of the moduli space of instanton bundles on
$\mathbb{C}{\mathbb{P}^{3}}$ (see Section 8 below).
We do believe that the conclusions of Theorem 5.11 should hold without such a
condition. The crucial point is Proposition 6.1 below, for which we could not
find a proof without assuming flatness; other parts of our proof, which will
be completed at the end of Subsection 7.2, do work without it.
## 6 Moment map on twistor sections
In this Section, we let $M$ be a _flat_ hyperkähler manifold. More precisely,
let $M$ be an open subset of a quaternionic vector space $V$, equipped with a
flat metric; completeness of the metric is not relevant. Thus
$\operatorname{Tw}(M)$ is isomorphic to the corresponding open subset of
$\operatorname{Tw}(V)=V\otimes{\cal O}_{\mathbb{C}\mathbb{P}^{1}}(1)$, and
${\operatorname{Sec}}_{0}(M)={\operatorname{Sec}}(M)$ is the open subset of
${\operatorname{Sec}}(V)=V\otimes_{\mathbb{C}}\Gamma({\cal
O}_{\mathbb{C}\mathbb{P}^{1}}(1))\simeq V\otimes_{\mathbb{R}}\mathbb{C}^{2}$
consisting of those sections of $V\otimes{\cal
O}_{\mathbb{C}\mathbb{P}^{1}}(1)$ that take values in $M\subset V$.
More precisely, let $[z:w]$ be a choice of homogeneous coordinates on
$\mathbb{C}{\mathbb{P}^{1}}$, so that $\Gamma({\cal
O}_{\mathbb{C}\mathbb{P}^{1}}(1))\simeq\mathbb{C}z\oplus\mathbb{C}w$. A
section $\sigma\in{\operatorname{Sec}}_{0}(M)$ is then of the form
$\sigma(z,w)=zX_{1}+wX_{2}$, with $X_{1},X_{2}\in V$, such that
$\sigma(z,w)\in M$ for every $[z:w]\in\mathbb{C}{\mathbb{P}^{1}}$.
Let $G$ be a compact Lie group acting on $M$ by hyperkähler automorphisms,
with $\mu:M\to{\mathfrak{g}}^{*}\otimes\langle I,J,K\rangle$ being the
corresponding hyperkähler moment map; let $\mu_{I}^{\mathbb{R}}$,
$\mu_{J}^{\mathbb{R}}$, $\mu_{K}^{\mathbb{R}}$ denote its components. By
definition, these components are the real moment maps associated with the
symplectic forms $\omega_{I},\omega_{J},\omega_{K}$, respectively. Given a
complex structure $L=aI+bJ+cK$, $a^{2}+b^{2}+c^{2}=1$, we denote by
$\mu_{L}^{\mathbb{R}}$ the corresponding real moment map,
$\mu_{L}^{\mathbb{R}}=a\mu_{I}^{\mathbb{R}}+b\mu_{J}^{\mathbb{R}}+c\mu_{K}^{\mathbb{R}}.$
(6.1)
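Recall also the classical relation between the real and holomorphic moment maps of a hyperkähler action (a standard fact, consistent with the forms of (2.2)): for the complex structure $I$,

```latex
\mu_{I}^{\mathbb{C}}=\mu_{J}^{\mathbb{R}}+\sqrt{-1}\,\mu_{K}^{\mathbb{R}},
\qquad
\Omega_{I}=\omega_{J}+\sqrt{-1}\,\omega_{K},
```

and similarly for any $L=aI+bJ+cK$, with $(I,J,K)$ replaced by an orthonormal triple of complex structures starting with $L$.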
The components $\mu_{I},\mu_{J},\mu_{K}$ of the hyperkähler moment map can be
regarded as real-valued quadratic polynomials on $V$. The corresponding
complexified quadratic polynomials
${\boldsymbol{\boldsymbol{\mu}}}_{I},{\boldsymbol{\boldsymbol{\mu}}}_{J},{\boldsymbol{\boldsymbol{\mu}}}_{K}$
generate the trisymplectic moment map for ${\operatorname{Sec}}_{0}(M)$.
Consider the decomposition
$V\otimes_{\mathbb{R}}\mathbb{C}=V^{1,0}_{I}\oplus V^{0,1}_{I}$, where
$I\in\operatorname{End}V$ acts on $V^{1,0}_{I}\subset
V\otimes_{\mathbb{R}}\mathbb{C}$ as ${\sqrt{-1}}$ and on $V^{0,1}_{I}$ as
$-{\sqrt{-1}}$. We may regard the trisymplectic moment map
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}:{\operatorname{Sec}}_{0}(M){\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes\mathbb{C}^{3}$
as a quadratic form $Q$ on ${\operatorname{Sec}}_{0}(M)\simeq
V^{1,0}_{I}\oplus V^{0,1}_{I}$, and express it as a sum of three components,
$Q^{2,0}:\;V^{1,0}_{I}\otimes
V^{1,0}_{I}{\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes\mathbb{C},\qquad
Q^{1,1}:\;V^{1,0}_{I}\otimes
V^{0,1}_{I}{\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes\mathbb{C},\qquad
Q^{0,2}:\;V^{0,1}_{I}\otimes
V^{0,1}_{I}{\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes\mathbb{C}.$
For each $L\in\mathbb{C}{\mathbb{P}^{1}}$, let $\mu_{L}^{\mathbb{R}}$ be the
real moment map, depending on $L\in\mathbb{C}{\mathbb{P}^{1}}$ as in (6.1),
and consider the evaluation map
${\operatorname{Sec}}_{0}(M)\stackrel{{\scriptstyle\operatorname{\sf
ev}_{L}}}{{{\>\longrightarrow\>}}}(M,L)$ (see Claim 5.4 for definition). Let
also ${\boldsymbol{\boldsymbol{\mu}}}_{L}^{\mathbb{R}}:=\operatorname{\sf
ev}_{L}^{*}\mu_{L}^{\mathbb{R}}$ be the pullback of $\mu_{L}^{\mathbb{R}}$ to
${\operatorname{Sec}}_{0}(M)$.
From Proposition 5.5, the following description of the moment maps on
${\operatorname{Sec}}(M)$ can be obtained. This result will be used later on
in the proof of Theorem 5.11.
###### Proposition 6.1.
Let $G$ be a real Lie group acting on a flat hyperkähler manifold $M$ by
hyperkähler isometries,
${\operatorname{Sec}}(M)\stackrel{{\scriptstyle{\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}}}{{{\>\longrightarrow\>}}}{\mathfrak{g}}^{*}\otimes\mathbb{C}^{3}$
the corresponding trisymplectic moment map. We consider the real moment map
$\mu_{L}^{\mathbb{R}}$ as a ${\mathfrak{g}}^{*}$-valued function on
$\operatorname{Tw}(M)=M\times\mathbb{C}{\mathbb{P}^{1}}$. Let
$S\in{\operatorname{Sec}}(M)$ be a point which satisfies
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}(S)=0$. Then
$\mu^{\mathbb{R}}_{L}{\left|{}_{{\phantom{|}\\!\\!}_{S}}\right.}$ is constant.
###### Proof.
We must show that for each $S\in{\operatorname{Sec}}(M)$ satisfying
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}(S)=0$, one has
$\frac{d}{dL}{\boldsymbol{\boldsymbol{\mu}}}^{\mathbb{R}}_{L}(S)=0$.
We express $S\in V\otimes_{\mathbb{R}}\mathbb{C}$ as
$S=s^{1,0}_{L}+s^{0,1}_{L}$, with $s^{1,0}_{L}\in V^{1,0}_{L}$ and
$s^{0,1}_{L}\in V^{0,1}_{L}$. Then
${\boldsymbol{\boldsymbol{\mu}}}^{\mathbb{R}}_{L}(S)=Q^{1,1}_{L}(s_{L}^{1,0},\overline{s_{L}^{1,0}})$
(6.2)
where $Q^{1,1}_{L}$ denotes the (1,1)-component of
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}$ taken with respect to $L$. This
is clear, because $Q^{1,1}$ is obtained by complexifying $\mu_{L}^{\mathbb{R}}$
(the $L$-invariant part of the hyperkähler moment map).
For ease of differentiation, we rewrite (6.2) as
${\boldsymbol{\boldsymbol{\mu}}}^{\mathbb{R}}_{L}(S)=Q(s_{L}^{1,0},\overline{s_{L}^{1,0}})=\operatorname{Re}(Q(s_{L}^{1,0},s_{L}^{1,0})).$
This is possible, because $s_{L}^{1,0}\in V_{L}^{1,0}$ and
$\overline{s_{L}^{1,0}}\in V_{L}^{0,1}$, hence $Q^{1,1}_{L}$ is the only
component of $Q$ which is non-trivial on
$(s_{L}^{1,0},\overline{s_{L}^{1,0}})$. Then
$\frac{d}{dL}{\boldsymbol{\boldsymbol{\mu}}}^{\mathbb{R}}_{L}(S){\left|{}_{{\phantom{|}\\!\\!}_{L=I}}\right.}=\operatorname{Re}\left[Q\left(s_{I}^{1,0},\frac{ds_{L}^{1,0}}{dL}{\left|{}_{{\phantom{|}\\!\\!}_{L=I}}\right.}\right)\right].$
(6.3)
However,
$\frac{ds_{L}^{1,0}}{dL}{\left|{}_{{\phantom{|}\\!\\!}_{L=I}}\right.}$ is
clearly proportional to $s_{I}^{0,1}$ (the coefficient of proportionality
depends on the choice of parametrization on $\mathbb{C}{\mathbb{P}^{1}}\ni
L$), hence (6.3) gives
$\frac{d}{dL}{\boldsymbol{\boldsymbol{\mu}}}^{\mathbb{R}}_{L}(S){\left|{}_{{\phantom{|}\\!\\!}_{L=I}}\right.}=\lambda\operatorname{Re}\left[Q(s_{I}^{1,0},s_{I}^{0,1})\right]$
and this quantity vanishes, because
$Q(s_{L}^{1,0},s_{L}^{0,1})=Q^{1,1}(S)=\mu^{\mathbb{C}}_{L}(S)=0$ by assumption.
## 7 Trisymplectic reduction and hyperkähler reduction
### 7.1 The tautological map
$\tau:\;{\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G{\>\longrightarrow\>}{\operatorname{Sec}}(M{/\\!\\!/\\!\\!/}G)$
Let $M$ be a hyperkähler manifold, and $G$ a compact Lie group acting on $M$
by hyperkähler isometries, and admitting a hyperkähler moment map. A point in
${\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G$ is represented by a
section $S\in{\operatorname{Sec}}_{0}(M)$ which satisfies
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}(S)=0$ and
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}(S)=0$. The first condition, by
Proposition 5.5, implies that for each $L\in\mathbb{C}{\mathbb{P}^{1}}$, the
corresponding point $S(L)\in(M,L)$ belongs to the zero set of the holomorphic
symplectic map
$\mu_{L}^{\mathbb{C}}:\;(M,L){\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes_{\mathbb{R}}\mathbb{C}$.
Using the evaluation map defined in Claim 5.4, this is written as
$\mu_{L}^{\mathbb{C}}(\operatorname{\sf ev}_{L}(S))=0.$
By Proposition 6.1, the real moment map $\mu^{\mathbb{R}}_{L}$ is constant on
$S$:
$\mu_{L}^{\mathbb{R}}(\operatorname{\sf ev}_{L}(S))=\mathrm{const}.$
By Proposition 5.7, the real part of the trihyperkähler moment map
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}(S)$ is an average of
$\mu_{L}^{\mathbb{R}}(\operatorname{\sf ev}_{L}(S))$ taken over all
$L\in\mathbb{C}{\mathbb{P}^{1}}$. Therefore,
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}(S)=0$ implies
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}(S)=0\Leftrightarrow\mu_{L}^{\mathbb{R}}(\operatorname{\sf
ev}_{L}(S))=0\ \ \forall L\in\mathbb{C}{\mathbb{P}^{1}}.$
We obtain that for each $S\in{\operatorname{Sec}}_{0}(M)$ which satisfies
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}(S)=0,{\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}(S)=0$,
and each $L\in\mathbb{C}{\mathbb{P}^{1}}$, one has
$\mu_{L}^{\mathbb{C}}(x)=0,\mu_{L}^{\mathbb{R}}(x)=0,$ (7.1)
where $x=\operatorname{\sf ev}_{L}(S)$. A point $x\in(M,L)$ satisfying (7.1)
belongs to the zero set of the hyperkähler moment map
$\mu:\;M{\>\longrightarrow\>}{\mathfrak{g}}^{*}\otimes\mathbb{R}^{3}$. Taking
a quotient over $G$, we obtain a map
$S/G:\;\mathbb{C}{\mathbb{P}^{1}}{\>\longrightarrow\>}\operatorname{Tw}(M{/\\!\\!/\\!\\!/}G)$,
because $M{/\\!\\!/\\!\\!/}G$ is a quotient of $\mu^{-1}(0)$ by $G$. This
gives a map
$\tau:\;{\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G{\>\longrightarrow\>}{\operatorname{Sec}}(M{/\\!\\!/\\!\\!/}G)$
which is called a tautological map. Note that
${\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G$ has a trisymplectic
structure by Theorem 4.9, outside of the set of its degenerate points (in the
sense of Definition 3.6), and ${\operatorname{Sec}}_{0}(M{/\\!\\!/\\!\\!/}G)$
is a trisymplectic manifold by Proposition 5.5.
###### Proposition 7.1.
Let $M$ be a flat hyperkähler manifold, and $G$ a compact Lie group acting on
$M$ by hyperkähler isometries, and admitting a hyperkähler moment map.
Consider the tautological map
$\tau:\;{\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G{\>\longrightarrow\>}{\operatorname{Sec}}(M{/\\!\\!/\\!\\!/}G)$
(7.2)
defined above. Then
$\tau({\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G)$ belongs to the
set ${\operatorname{Sec}}_{0}(M{/\\!\\!/\\!\\!/}G)$ of regular twistor
sections in $\operatorname{Tw}(M{/\\!\\!/\\!\\!/}G)$. Moreover, the image of
${\mathfrak{g}}$ is non-degenerate, in the sense of Definition 3.6, and $\tau$
is a local diffeomorphism, compatible with the trisymplectic structure.
Proof. Step 0: We prove that for all points
$S\in{\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}^{-1}(0)$, the image of
${\mathfrak{g}}$ is non-degenerate, in the sense of Definition 3.6. This is
the only step of the proof where the flatness assumption is used. We have to
show that the image ${\mathfrak{g}}_{S}$ of ${\mathfrak{g}}$ in
$T_{S}{\operatorname{Sec}}(M)$ is non-degenerate for all
$S\in{\boldsymbol{\boldsymbol{\mu}}}^{-1}_{\mathbb{C}}(0)$. This is equivalent
to
$T_{S}{\boldsymbol{\boldsymbol{\mu}}}^{-1}_{\mathbb{C}}(0)\cap\operatorname{Mat}(2){\mathfrak{g}}_{S}={\mathfrak{g}}_{S}.$
(7.3)
Indeed, ${\mathfrak{g}}_{S}$ is non-degenerate if and only if the quotient
$T_{S}{\operatorname{Sec}}(M)/\operatorname{Mat}(2){\mathfrak{g}}_{S}$ is
trisymplectic (Claim 3.5). By (4.4),
$T_{S}{\boldsymbol{\boldsymbol{\mu}}}^{-1}_{\mathbb{C}}(0)$ is an orthogonal
complement of $I{\mathfrak{g}}_{S}+J{\mathfrak{g}}_{S}+K{\mathfrak{g}}_{S}$
with respect to the holomorphic Riemannian form $B$ associated with the
trisymplectic structure, where $I,J,K$ is some quaternionic basis in
$\operatorname{Mat}(2)$. If ${\mathfrak{g}}_{S}$ is non-degenerate, the
orthogonal complement of $\operatorname{Mat}(2){\mathfrak{g}}_{S}$ is
isomorphic to
$T_{S}{\boldsymbol{\boldsymbol{\mu}}}^{-1}_{\mathbb{C}}(0)/{\mathfrak{g}}_{S}$,
which gives (7.3). Conversely, if (7.3) holds, the orthogonal complement of
$I{\mathfrak{g}}_{S}+J{\mathfrak{g}}_{S}+K{\mathfrak{g}}_{S}$ does not
intersect $I{\mathfrak{g}}_{S}+J{\mathfrak{g}}_{S}+K{\mathfrak{g}}_{S}$, hence
the restriction of $B$ to
$I{\mathfrak{g}}_{S}+J{\mathfrak{g}}_{S}+K{\mathfrak{g}}_{S}$ is non-
degenerate. Therefore, non-degeneracy of ${\mathfrak{g}}_{S}$ is implied by
Remark 3.7.
Now, let $S\in{\operatorname{Sec}}_{0}(M)$ be a twistor section which
satisfies
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}(S)={\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}(S)=0$.
By Proposition 6.1, for each $L\in\mathbb{C}P^{1}$, the corresponding point
$(L,S_{L})$ of $S$ satisfies $\mu_{hk}(S_{L})=0$, where
$\mu_{hk}:\;M{\>\longrightarrow\>}\mathbb{R}^{3}\otimes{\mathfrak{g}}^{*}$
denotes the hyperkähler moment map. Let
$g\in\operatorname{Mat}(2){\mathfrak{g}}_{S}\cap
T_{S}{\boldsymbol{\boldsymbol{\mu}}}^{-1}_{\mathbb{C}}(0)$ be a vector
obtained as a linear combination $\sum H_{i}g_{i}$, with
$g_{i}\in{\mathfrak{g}}_{S}$ and $H_{i}\in\operatorname{Mat}(2,\mathbb{C})$.
At each point $(L,S_{L})\in S$, $g$ is evaluated to a linear combination $\sum
H_{i}^{L}g_{i}^{L}$ with quaternionic coefficients, tangent to
$\mu_{hk}^{-1}(0)$. However, a quaternionic linear combination of this form
can be tangent to $\mu_{hk}^{-1}(0)$ only if all $H_{i}^{L}$ are real, because
for each hyperkähler manifold $Z$ one has a decomposition
$T_{x}(\mu_{hk}^{-1}(0))\oplus I{\mathfrak{g}}\oplus J{\mathfrak{g}}\oplus
K{\mathfrak{g}}=T_{x}Z$. We have proved that any
$g\in\operatorname{Mat}(2){\mathfrak{g}}_{S}\cap
T_{S}{\boldsymbol{\boldsymbol{\mu}}}^{-1}_{\mathbb{C}}(0)$ belongs to the
image of ${\mathfrak{g}}$ at each point $(L,S_{L})\in S$. This proves (7.3),
hence, non-degeneracy of ${\mathfrak{g}}_{S}$.
Step 1: We prove that the image
$\tau({\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G)$ belongs to
${\operatorname{Sec}}_{0}(M{/\\!\\!/\\!\\!/}G)\subset{\operatorname{Sec}}(M{/\\!\\!/\\!\\!/}G)$.
Given $S\in{\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G$, consider its
image $\tau(S)$ as a curve in $\operatorname{Tw}(M{/\\!\\!/\\!\\!/}G)$, and
let $N(\tau(S))$ be its normal bundle. Denote by
$\tilde{S}\in{\operatorname{Sec}}_{0}(M)$ the twistor section which satisfies
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}(\tilde{S})=0,{\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}(\tilde{S})=0$
and gives $S$ after taking a quotient. Then
$N(\tau(S)){\left|{}_{{\phantom{|}\\!\\!}_{L}}\right.}=\frac{T_{\operatorname{\sf
ev}_{L}(\tilde{S})}(M,L)}{\langle{\mathfrak{g}}+I{\mathfrak{g}}+J{\mathfrak{g}}+K{\mathfrak{g}}\rangle}$
(7.4)
where $\operatorname{\sf
ev}_{L}:\;{\operatorname{Sec}}(M){\>\longrightarrow\>}(M,L)$ is the standard
evaluation map.
A bundle $B\cong\bigoplus_{2n}{\cal O}(1)$ can be constructed from a
quaternionic vector space $W$ as follows. For any
$L\in\mathbb{C}{\mathbb{P}^{1}}$, considered as a quaternion satisfying
$L^{2}=-1$, one takes the complex vector space $(W,L)$ as a fiber of $B$ at
$L$. Denote this bundle as $B(W)$. Now, (7.4) gives
$N(\tau(S))=\frac{N(\tilde{S})}{B(\langle{\mathfrak{g}}+I{\mathfrak{g}}+J{\mathfrak{g}}+K{\mathfrak{g}}\rangle)}$
giving a quotient of $\bigoplus_{2i}{\cal O}(1)$ by $\bigoplus_{2j}{\cal
O}(1)$, which is again a direct sum of copies of ${\cal O}(1)$. Therefore,
$\tau(S)$ is regular.
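The last assertion uses only classical facts about vector bundles on $\mathbb{C}{\mathbb{P}^{1}}$; here is a sketch of the standard splitting argument.

```latex
% Exact sequence of the quotient bundle on CP^1:
0\longrightarrow{\cal O}(1)^{\oplus a}
 \longrightarrow{\cal O}(1)^{\oplus b}
 \longrightarrow Q\longrightarrow 0,
\qquad
Q\cong\bigoplus_{k=1}^{b-a}{\cal O}(d_{k})
% by the Birkhoff-Grothendieck theorem.  A surjection O(1)^b -> O(d_k) is
% nonzero on some summand, and Hom(O(1),O(d)) = H^0(O(d-1)) = 0 for d < 1,
% so each d_k >= 1; since deg Q = b - a = rk Q, every d_k equals 1.
```

Hence any quotient of a direct sum of copies of ${\cal O}(1)$ by a subbundle of the same type is again a direct sum of copies of ${\cal O}(1)$.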
Step 2: The tautological map
$\tau:\;{\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G{\>\longrightarrow\>}{\operatorname{Sec}}(M{/\\!\\!/\\!\\!/}G)$
is a local diffeomorphism. This follows from the implicit function theorem.
Indeed, let $S\in{\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G$ be a
point associated with $\tilde{S}\in{\operatorname{Sec}}_{0}(M)$, satisfying
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}(\tilde{S})=0,{\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}(\tilde{S})=0$
as in Step 1. Then the differential of $\tau$ is a map
$d\tau:\;\frac{\Gamma(N\tilde{S})}{\operatorname{Mat}(2,\mathbb{C})\cdot{\mathfrak{g}}}{\>\longrightarrow\>}\Gamma(N\tau(S)),$
(7.5)
where ${\mathfrak{g}}=\operatorname{Lie}(G)\subset T\operatorname{Tw}(M)$. Let
$N_{g}\tilde{S}$ be a sub-bundle of $N\tilde{S}$ spanned by the image of
$\langle{\mathfrak{g}}+I{\mathfrak{g}}+J{\mathfrak{g}}+K{\mathfrak{g}}\rangle$.
By Step 1, $N_{g}\tilde{S}\cong{\cal O}(1)^{k}$, and, indeed, the subspace of
$\Gamma(N\tilde{S})$ generated by
$\operatorname{Mat}(2,\mathbb{C})\cdot{\mathfrak{g}}$ coincides with
$\Gamma(N_{g}\tilde{S})$. Similarly,
$\Gamma(N\tau(S))\cong\Gamma(N\tilde{S}/N_{g}\tilde{S})$. We have shown that
the map (7.5) is equivalent to
$\frac{\Gamma(N\tilde{S})}{\Gamma(N_{g}\tilde{S})}{\>\longrightarrow\>}\Gamma(N\tilde{S}/N_{g}\tilde{S}).$
By step 1, the bundles $N\tilde{S}$ and $N_{g}\tilde{S}$ are sums of several
copies of ${\cal O}(1)$, hence this map is an isomorphism.
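The last implication deserves one line of justification (our addition; it is the standard vanishing argument): taking global sections in the short exact sequence $0\to N_{g}\tilde{S}\to N\tilde{S}\to N\tilde{S}/N_{g}\tilde{S}\to 0$ yields

```latex
0 \longrightarrow \Gamma(N_{g}\tilde{S}) \longrightarrow \Gamma(N\tilde{S})
  \longrightarrow \Gamma(N\tilde{S}/N_{g}\tilde{S})
  \longrightarrow H^{1}(\mathbb{C}{\mathbb{P}^{1}}, N_{g}\tilde{S}) = 0,
```

where the last term vanishes because $N_{g}\tilde{S}\cong{\cal O}(1)^{k}$ and $H^{1}(\mathbb{C}{\mathbb{P}^{1}},{\cal O}(1))=0$; hence $\Gamma(N\tilde{S})/\Gamma(N_{g}\tilde{S})\cong\Gamma(N\tilde{S}/N_{g}\tilde{S})$.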
Step 3: We prove that $\tau$ is compatible with the trisymplectic structure.
The trisymplectic structure on ${\operatorname{Sec}}(M{/\\!\\!/\\!\\!/}G)$ is
induced by a triple of holomorphic symplectic forms $\langle\operatorname{\sf
ev}_{I}^{*}(\Omega_{I}),\operatorname{\sf
ev}_{J}^{*}(\Omega_{J}),\operatorname{\sf ev}_{K}^{*}(\Omega_{K})\rangle$
(Claim 5.4). From the construction in Theorem 4.9 it is apparent that the same
triple generates the trisymplectic structure on
${\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G$. Therefore, $\tau$ is
compatible with the trisymplectic structure. This proves Proposition 7.1.
### 7.2 Trihyperkähler reduction and homogeneous bundles on
$\mathbb{C}{\mathbb{P}^{1}}$
Let $M$ be a hyperkähler manifold, and $G$ a compact Lie group acting on $M$
by hyperkähler isometries, and admitting a hyperkähler moment map. Consider
the set $Z\subset\operatorname{Tw}(M)$ consisting of all points
$(m,L)\in\operatorname{Tw}(M)$ such that the corresponding holomorphic moment
map vanishes on $m$: $\mu_{L}^{\mathbb{C}}(m)=0$. By construction, $Z$ is a
complex subvariety of $\operatorname{Tw}(M)$. Let $G_{\mathbb{C}}$ be a
complexification of $G$, acting on $\operatorname{Tw}(M)$ in a natural way, and
$G_{\mathbb{C}}\cdot(m,L)$ its orbit. This orbit is called stable if
$G_{\mathbb{C}}\cdot m\subset(M,L)$ intersects the zero set of the real moment
map,
$G_{\mathbb{C}}\cdot
m\cap\left(\mu_{L}^{\mathbb{R}}\right)^{-1}(0)\neq\emptyset.$
As follows from the standard results about Kähler reduction, the union
$Z_{0}\subset Z$ of stable orbits is open in $Z$, and the quotient
$Z_{0}/G_{\mathbb{C}}$ is isomorphic, as a complex manifold, to
$\operatorname{Tw}(M{/\\!\\!/\\!\\!/}G)$. Consider the corresponding quotient
map,
$P:\;Z_{0}{\>\longrightarrow\>}Z_{0}/G_{\mathbb{C}}=\operatorname{Tw}(M{/\\!\\!/\\!\\!/}G).$
(7.6)
For any twistor section $S\in{\operatorname{Sec}}(M{/\\!\\!/\\!\\!/}G)$,
consider its preimage $P^{-1}(S)$. Clearly, $P^{-1}(S)$ is a holomorphic
homogeneous bundle over $S\cong\mathbb{C}{\mathbb{P}^{1}}$. We denote
this bundle by $P_{S}$.
###### Proposition 7.2.
Let $M$ be a flat hyperkähler manifold, and $G$ a compact Lie group acting on
$M$ by hyperkähler isometries, and admitting a hyperkähler moment map.
Consider the tautological map
$\tau:\;{\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G{\>\longrightarrow\>}{\operatorname{Sec}}_{0}(M{/\\!\\!/\\!\\!/}G)$
constructed in Proposition 7.1. Given a twistor section
$S\in{\operatorname{Sec}}_{0}(M{/\\!\\!/\\!\\!/}G)$, let $P_{S}$ be a
holomorphic homogeneous bundle constructed above. Then
(i)
The point $S$ lies in ${\rm Im}~{}\tau$ if and only if the bundle $P_{S}$
admits a holomorphic section (this is equivalent to $P_{S}$ being trivial).
(ii)
The map
$\tau:\;{\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G{\>\longrightarrow\>}{\operatorname{Sec}}_{0}(M{/\\!\\!/\\!\\!/}G)$
is an open embedding.
Proof: A holomorphic section $S_{1}$ of $P_{S}$ can be understood as a point
in ${\operatorname{Sec}}(M)$. Since $S_{1}$ lies in the union of all stable
orbits, denoted earlier as $Z_{0}\subset Z\subset\operatorname{Tw}(M)$, the
real moment map $\mu_{L}^{\mathbb{R}}$ is constant on $S_{1}$ (Proposition
6.1). By definition of $Z_{0}$, for each $(z,L)\in Z_{0}$, there exists $g\in
G_{\mathbb{C}}$ such that $\mu_{L}^{\mathbb{R}}(gz)=0$.
Therefore, ${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}(gS_{1})=0$ for
appropriate $g\in G_{\mathbb{C}}$. This gives $\tau(S_{2})=S$, where
$S_{2}\in{\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G$ is a point
corresponding to $gS_{1}$. Conversely, consider a point
$S_{2}\in{\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G$, such that
$\tau(S_{2})=S$, and let $S_{1}\in{\operatorname{Sec}}_{0}(M)$ be the corresponding twistor
section. Then $S_{1}$ gives a section of $P_{S}$. We proved Proposition 7.2
(i).
To prove Proposition 7.2 (ii), it suffices to show the following. Take
$S\in{\operatorname{Sec}}_{0}(M{/\\!\\!/\\!\\!/}G)$, and let
$S_{1},S_{2}\in{\operatorname{Sec}}(M)$ be twistor sections which lie in
$Z_{0}$ and satisfy ${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{R}}(S_{i})=0$.
Then there exists $g\in G$ such that $g(S_{1})=S_{2}$. Indeed, $\tau^{-1}(S)$
is the set of all such $S_{i}$ considered up to an action of $G$.
Let $P_{S}\stackrel{{\scriptstyle P}}{{{\>\longrightarrow\>}}}S$ be the
homogeneous bundle constructed above, and ${\cal P}$ its fiber, which is a
complex manifold with transitive action of $G_{\mathbb{C}}$. Using $S_{1}$, we
trivialize $P_{S}={\cal P}\times S$ in such a way that $S_{1}=\\{p\\}\times S$
for some $p\in{\cal P}$. Then $S_{2}$ is a graph of a holomorphic map
$\mathbb{C}{\mathbb{P}^{1}}\stackrel{{\scriptstyle\phi}}{{{\>\longrightarrow\>}}}{\cal
P}$; to prove Proposition 7.2 (ii) it remains to show that $\phi$ is constant.
Since all points of $\left(\mu_{L}^{\mathbb{R}}\right)^{-1}(0)$ lie on the
same orbit of $G$, the image $\phi(\mathbb{C}{\mathbb{P}^{1}})$ belongs to
$G_{p}:=G\cdot\\{p\\}\subset{\cal P}$. However, $G_{p}$ is a totally real
subvariety in ${\cal P}=G_{\mathbb{C}}/\operatorname{\sf St}(p)$. Indeed,
$G_{p}$ is fixed by a complex involution which exchanges the complex structure
on $G_{\mathbb{C}}$ with its opposite. Therefore, all complex subvarieties of
$G_{p}$ are 0-dimensional, and
$\phi:\;\mathbb{C}{\mathbb{P}^{1}}{\>\longrightarrow\>}G_{p}\subset{\cal P}$
is constant. This finishes the proof of Proposition 7.2.
The proof of Theorem 5.11 follows. Indeed, by Proposition 7.1, the
tautological map
$\tau:\;{\operatorname{Sec}}_{0}(M){/\\!\\!/\\!\\!/\\!\\!/}G{\>\longrightarrow\>}{\operatorname{Sec}}_{0}(M{/\\!\\!/\\!\\!/}G)$
is a local diffeomorphism compatible with the trisymplectic structures, and by
Proposition 7.2 it is injective.
In particular, we have:
###### Corollary 7.3.
Let $M$ be a flat hyperkähler manifold equipped with the action of a compact
Lie group by hyperkähler isometries and admitting a hyperkähler moment map.
Then the trihyperkähler reduction of ${\operatorname{Sec}}_{0}(M)$ is a smooth
trisymplectic manifold of dimension $2\dim M$.
## 8 Case study: moduli spaces of instantons
In this Section, we give an application of the previous geometric
constructions to the study of the moduli space of framed instanton bundles on
$\mathbb{C}{\mathbb{P}^{3}}$. Our goal is to establish the smoothness of the
moduli space of such objects, and show how that proves the smoothness of the
moduli space of mathematical instanton bundles on
$\mathbb{C}{\mathbb{P}^{3}}$. That partially settles the long standing
conjecture in algebraic geometry mentioned at the Introduction: moduli space
of mathematical instanton bundles on $\mathbb{C}{\mathbb{P}^{3}}$ of charge
$c$ is a smooth manifold of dimension $8c-3$, c. f. [CTT, Conjecture 1.2].
### 8.1 Moduli space of framed instantons on $\mathbb{R}^{4}$
We begin by recalling the celebrated ADHM construction of instantons, which
gives a description of the moduli space of framed instantons on
$\mathbb{R}^{4}$ in terms of a finite-dimensional hyperkähler quotient.
Let $V$ and $W$ be complex vector spaces of dimension $c$ and $r$,
respectively, and set
$\mathbf{B}=\mathbf{B}(r,c):={\rm End}(V)\oplus{\rm End}(V)\oplus{\rm
Hom}(W,V)\oplus{\rm Hom}(V,W).$
A point of $\mathbf{B}$ is a quadruple $X=(A,B,I,J)$ with $A,B\in{\rm
End}(V)$, $I\in{\rm Hom}(W,V)$ and $J\in{\rm Hom}(V,W)$; it is said to be
1. (i)
stable if there is no subspace $S\subsetneqq V$ with $A(S),B(S),I(W)\subset
S$;
2. (ii)
costable if there is no subspace $0\neq S\subset V$ with $A(S),B(S)\subset
S\subset\ker J$;
3. (iii)
regular if it is both stable and costable.
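As a side illustration (our own sketch, not part of the paper's argument), stability is a finitely checkable condition: it holds precisely when the columns of $I$ generate $V$ under repeated application of $A$ and $B$, and costability is the dual condition, testable on transposed data. A minimal numerical test, with hypothetical helper names:

```python
import numpy as np

def is_stable(A, B, I, tol=1e-10):
    """No proper subspace S of V with A(S), B(S), I(W) all contained in S.
    Equivalently: the columns of I generate V under repeated application of A, B."""
    U = I.astype(complex)
    rank = np.linalg.matrix_rank(U, tol=tol)
    while True:
        U = np.hstack([U, A @ U, B @ U])       # close up under A and B
        new_rank = np.linalg.matrix_rank(U, tol=tol)
        if new_rank == rank:
            break
        rank = new_rank
    return rank == A.shape[0]                  # did we reach all of V?

def is_costable(A, B, J, tol=1e-10):
    # A nonzero (A, B)-invariant subspace inside ker(J) exists
    # iff the transposed data (A^T, B^T, J^T) fails the stability test.
    return is_stable(A.T, B.T, J.T, tol=tol)

def is_regular(A, B, I, J):
    return is_stable(A, B, I) and is_costable(A, B, J)
```

For instance, with $c=r=1$ the data $A=B=0$, $I=J=1$ is regular, while replacing $I$ by $0$ destroys stability (the subspace $S=0$ then violates the definition).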
Let $\mathbf{B}^{\rm reg}$ denote the (open) subset of regular data. The group
$G=U(V)$ acts on $\mathbf{B}^{\rm reg}$ in the following way:
$g\cdot(A,B,I,J):=(gAg^{-1},gBg^{-1},gI,Jg^{-1}).$ (8.1)
It is not difficult to see that this action is free. The hyperkähler moment
map $\mu:\mathbf{B}^{\rm reg}\to{\mathfrak{u}}(V)^{*}\otimes\mathbb{R}^{3}$
can then be written in the following manner. Using the decomposition
$\mathbb{R}^{3}\simeq\mathbb{C}\oplus\mathbb{R}$ (as real vector spaces), we
decompose $\mu=(\mu_{\mathbb{C}},\mu_{\mathbb{R}})$ with $\mu_{\mathbb{C}}$
and $\mu_{\mathbb{R}}$ given by
$\mu_{\mathbb{C}}(A,B,I,J)=[A,B]+IJ~{}~{}{\rm and}$ (8.2)
$\mu_{\mathbb{R}}(A,B,I,J)=[A,A^{\dagger}]+[B,B^{\dagger}]+II^{\dagger}-J^{\dagger}J.$
(8.3)
The first component $\mu_{\mathbb{C}}$ is the holomorphic moment map
$\mathbf{B}\to{\mathfrak{g}}{\mathfrak{l}}(V)^{*}\otimes_{\mathbb{R}}\mathbb{C}$
corresponding to the natural complex structure on $\mathbf{B}$.
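As a sanity check (our own sketch, not from the paper), the formulas (8.2)-(8.3) can be implemented directly, and one can verify numerically that both components transform by conjugation under the action (8.1), as a moment map for $U(V)$ should:

```python
import numpy as np

def mu_C(A, B, I, J):                      # holomorphic moment map, (8.2)
    return A @ B - B @ A + I @ J

def mu_R(A, B, I, J):                      # real moment map, (8.3)
    dag = lambda M: M.conj().T
    return (A @ dag(A) - dag(A) @ A + B @ dag(B) - dag(B) @ B
            + I @ dag(I) - dag(J) @ J)

rng = np.random.default_rng(0)
c, r = 3, 2
rand = lambda *s: rng.normal(size=s) + 1j * rng.normal(size=s)
A, B, I, J = rand(c, c), rand(c, c), rand(c, r), rand(r, c)
g, _ = np.linalg.qr(rand(c, c))            # a random unitary matrix in U(V)

gA, gB = g @ A @ g.conj().T, g @ B @ g.conj().T   # the action (8.1), g^{-1} = g^dagger
gI, gJ = g @ I, J @ g.conj().T

# equivariance: mu(g.X) = g mu(X) g^{-1} for both components
assert np.allclose(mu_C(gA, gB, gI, gJ), g @ mu_C(A, B, I, J) @ g.conj().T)
assert np.allclose(mu_R(gA, gB, gI, gJ), g @ mu_R(A, B, I, J) @ g.conj().T)
```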
The so-called _ADHM construction_ , named after Atiyah, Drinfeld, Hitchin and
Manin [ADHM], provides a bijection between the hyperkähler quotient ${\cal
M}(r,c):=\mathbf{B}^{\rm reg}(r,c){/\\!\\!/\\!\\!/}U(V)$ and framed instantons
on the Euclidean 4-dimensional space $\mathbb{R}^{4}$; see [D] or [Nak,
Theorem 3.48], and the references therein for details.
Let us now consider the trisymplectic reduction of
${\operatorname{Sec}}_{0}(\mathbf{B}^{\rm reg})$. As noted in the first few
paragraphs of Section 6,
${\operatorname{Sec}}_{0}(\mathbf{B})={\operatorname{Sec}}(\mathbf{B})\simeq\mathbf{B}\otimes\Gamma({\cal
O}_{\mathbb{C}\mathbb{P}^{1}}(1))$, and
${\operatorname{Sec}}_{0}(\mathbf{B}^{\rm reg})$ is the (open) subset of
${\operatorname{Sec}}_{0}(\mathbf{B})$ consisting of those sections $\sigma$
such that $\sigma(p)$ is regular for every $p\in\mathbb{C}{\mathbb{P}^{1}}$.
###### Definition 8.1.
A section $\sigma\in\mathbf{B}\otimes\Gamma({\cal
O}_{\mathbb{C}\mathbb{P}^{1}}(1))$ is _globally regular_ if
$\sigma(p)\in\mathbf{B}$ is regular for every $p\in\mathbb{C}{\mathbb{P}^{1}}$
(cf. [FJ, p. 2916-7], where such sections are called _$\mathbb{C}$ -regular_).
To be more precise, let $[z:w]$ be homogeneous coordinates on
$\mathbb{C}{\mathbb{P}^{1}}$; this choice leads to the identifications
$\Gamma({\cal
O}_{\mathbb{C}\mathbb{P}^{1}}(1))\simeq\mathbb{C}z\oplus\mathbb{C}w\simeq\mathbb{C}^{2}~{}~{}{\rm
and}~{}~{}\Gamma({\cal
O}_{\mathbb{C}\mathbb{P}^{1}}(2))\simeq\mathbb{C}z^{2}\oplus\mathbb{C}w^{2}\oplus\mathbb{C}zw\simeq\mathbb{C}^{3}.$
(8.4)
It follows that
${\operatorname{Sec}}_{0}(\mathbf{B})\simeq\mathbf{B}\oplus\mathbf{B}$, so a
point $\widetilde{X}\in{\operatorname{Sec}}_{0}(\mathbf{B})$ can be regarded as a
pair $(X_{1},X_{2})$ of ADHM data; $\widetilde{X}$ is globally regular (i.e.
$\widetilde{X}\in{\operatorname{Sec}}_{0}(\mathbf{B}^{\rm reg})$) if every
linear combination $zX_{1}+wX_{2}$, $[z:w]\in\mathbb{C}{\mathbb{P}^{1}}$, is regular.
The action (8.1) of $GL(V)$ (hence also of $U(V)$) on $\mathbf{B}^{\rm reg}$
extends to ${\operatorname{Sec}}_{0}(\mathbf{B}^{\rm reg})$ by acting
trivially on the $\Gamma({\cal O}_{\mathbb{C}\mathbb{P}^{1}}(1))$ factor, i.e.
$g\cdot(X_{1},X_{2})=(g\cdot X_{1},g\cdot X_{2})$.
Using the identification $\mathbb{C}^{3}\simeq\Gamma({\cal
O}_{\mathbb{C}\mathbb{P}^{1}}(2))$ above, it follows that the trisymplectic
moment map
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}:{\operatorname{Sec}}_{0}(\mathbf{B}^{\rm
reg})\to{\mathfrak{u}}(V)^{*}\otimes_{\mathbb{R}}\Gamma({\cal
O}_{\mathbb{C}\mathbb{P}^{1}}(2))$ (constructed in Proposition 5.5) satisfies
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}(\sigma)(p)=\mu_{\mathbb{C}}(\sigma(p))$
for $\sigma\in{\operatorname{Sec}}_{0}(\mathbf{B}^{\rm reg})$ and
$p\in\mathbb{C}{\mathbb{P}^{1}}$.
More precisely, let $X_{1}=(A_{1},B_{1},I_{1},J_{1})$ and
$X_{2}=(A_{2},B_{2},I_{2},J_{2})$; consider the section
$\sigma(z,w)=zX_{1}+wX_{2}\in{\operatorname{Sec}}_{0}(\mathbf{B}^{\rm reg})$.
The identity
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}(\sigma)(p)=\mu_{\mathbb{C}}(\sigma(p))$
means that ${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}(\sigma)=0$ iff
$\mu_{\mathbb{C}}(zX_{1}+wX_{2})=0$ for every
$[z:w]\in\mathbb{C}{\mathbb{P}^{1}}$. Note that
$\mu_{\mathbb{C}}(zX_{1}+wX_{2})=0\Leftrightarrow\left\\{\begin{array}[]{l}~{}[A_{1},B_{1}]+I_{1}J_{1}=0\\\
~{}[A_{2},B_{2}]+I_{2}J_{2}=0\\\
~{}[A_{1},B_{2}]+[A_{2},B_{1}]+I_{1}J_{2}+I_{2}J_{1}=0\end{array}\right.$
(8.5)
The three equations on the right hand side of equation (8.5) are known as the
_1-dimensional ADHM equations_ ; they were first considered by Donaldson in
[D] (cf. equations (a-c) in [D, p. 456]) and further studied in [FJ] (cf.
equations (7-9) in [FJ, p. 2917]) and generalized in [J2, equation (3)].
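Concretely, (8.5) is just the statement that $\mu_{\mathbb{C}}(zX_{1}+wX_{2})$ is quadratic in $(z,w)$, and its three coefficients must vanish separately. A numerical check of this expansion on random (non-solution) data, as our own sketch:

```python
import numpy as np

def mu_C(A, B, I, J):          # holomorphic moment map, equation (8.2)
    return A @ B - B @ A + I @ J

rng = np.random.default_rng(1)
c, r = 3, 2
rand = lambda *s: rng.normal(size=s) + 1j * rng.normal(size=s)
X1 = [rand(c, c), rand(c, c), rand(c, r), rand(r, c)]
X2 = [rand(c, c), rand(c, c), rand(c, r), rand(r, c)]
A1, B1, I1, J1 = X1
A2, B2, I2, J2 = X2

E1 = mu_C(A1, B1, I1, J1)                        # coefficient of z^2
E2 = mu_C(A2, B2, I2, J2)                        # coefficient of w^2
E3 = (A1 @ B2 - B2 @ A1 + A2 @ B1 - B1 @ A2      # coefficient of zw
      + I1 @ J2 + I2 @ J1)

z, w = 0.7 + 0.2j, -1.1 + 0.5j                   # an arbitrary sample point
lhs = mu_C(*(z * M1 + w * M2 for M1, M2 in zip(X1, X2)))
assert np.allclose(lhs, z**2 * E1 + w**2 * E2 + z * w * E3)
```

Thus $\mu_{\mathbb{C}}(zX_{1}+wX_{2})=0$ for all $[z:w]$ exactly when $E_{1}=E_{2}=E_{3}=0$, which are the three equations of (8.5).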
One can show that globally regular solutions of the 1-dimensional ADHM
equations are GIT-stable with respect to the $GL(V)$-action, see [HL, Section
3] and [HJV, Section 2.3]. Therefore, according to Remark 5.10, the
trihyperkähler quotient ${\operatorname{Sec}}_{0}(\mathbf{B}^{\rm
reg}){/\\!\\!/\\!\\!/\\!\\!/}U(V)$ is well defined and coincides with
${\boldsymbol{\boldsymbol{\mu}}}_{\mathbb{C}}^{-1}(0)/GL(V)$.
### 8.2 Moduli space of framed instanton bundles on
$\mathbb{C}{\mathbb{P}^{3}}$
Recall that an instanton bundle on $\mathbb{C}{\mathbb{P}^{3}}$ is a locally
free coherent sheaf $E$ on $\mathbb{C}{\mathbb{P}^{3}}$ satisfying the
following conditions
* •
$c_{1}(E)=0$;
* •
$H^{0}(E(-1))=H^{1}(E(-2))=H^{2}(E(-2))=H^{3}(E(-3))=0$.
The integer $c:=c_{2}(E)$ is called the charge of $E$. One can show that if
$E$ is an instanton bundle on $\mathbb{C}{\mathbb{P}^{3}}$, then $c_{3}(E)=0$.
Moreover, a locally free coherent sheaf $E$ on $\mathbb{C}{\mathbb{P}^{3}}$ is
said to be _of trivial splitting type_ if there is a line
$\ell\subset\mathbb{C}{\mathbb{P}^{3}}$ such that the restriction $E|_{\ell}$
is trivial, i.e. $E|_{\ell}\simeq{\cal O}_{\ell}^{\oplus{\rm rk}E}$. A
framing on $E$ at the line $\ell$ is the choice of an isomorphism
$\phi:E|_{\ell}\to{\cal O}_{\ell}^{\oplus{\rm rk}E}$. A framed bundle (at
$\ell$) is a pair $(E,\phi)$ consisting of a locally free coherent sheaf $E$
of trivial splitting type and a framing $\phi$ at $\ell$. Two framed bundles
$(E,\phi)$ and $(E^{\prime},\phi^{\prime})$ are isomorphic if there exists a
bundle isomorphism $\Psi:E\to E^{\prime}$ such that
$\phi=\phi^{\prime}\circ(\Psi|_{\ell})$.
The following linear algebraic description of framed instanton bundles on
$\mathbb{C}{\mathbb{P}^{3}}$ was first established in [FJ, Main Theorem], and
further generalized in [J2, Theorem 3.1].
###### Theorem 8.2.
There exists a 1-1 correspondence between equivalence classes of globally
regular solutions of the $1$-dimensional ADHM equations and isomorphism
classes of instanton bundles on $\mathbb{C}\mathbb{P}^{3}$ framed at a fixed
line $\ell$, where $\dim W={\rm rk}(E)\geq 2$ and $\dim V=c_{2}(E)\geq 1$.
###### Corollary 8.3.
The moduli space $\mathcal{F}_{\ell}(r,c)$ of rank $r$, charge $c$ instanton
bundles on $\mathbb{C}\mathbb{P}^{3}$ framed at a fixed line $\ell$ is
naturally identified with the trihyperkähler reduction
${\operatorname{Sec}}_{0}(\mathbf{B}^{\rm
reg}(r,c)){/\\!\\!/\\!\\!/\\!\\!/}U(V)$.
###### Proof.
By Theorem 8.2, the space $\mathcal{F}_{\ell}(r,c)$ is identified with
the space of globally regular solutions of the 1-dimensional ADHM equations. In
Subsection 8.1, we identified this space with
${\operatorname{Sec}}_{0}(\mathbf{B}^{\rm
reg}(r,c)){/\\!\\!/\\!\\!/\\!\\!/}U(V)$.
We are finally in a position to use Theorem 5.11 to obtain the second main
result of this paper.
###### Theorem 8.4.
The moduli space $\mathcal{F}_{\ell}(r,c)$ of rank $r$, charge $c$ instanton
bundles on $\mathbb{C}\mathbb{P}^{3}$ framed at a fixed line $\ell$, is a
smooth trisymplectic manifold of complex dimension $4rc$.
###### Proof.
The moduli space ${\cal M}(r,c):=\mathbf{B}^{\rm
reg}(r,c){/\\!\\!/\\!\\!/}U(V)$ of framed instantons of rank $r$ and charge
$c$ is known to be a smooth, connected, hyperkähler manifold of complex
dimension $2rc$; it follows that ${\operatorname{Sec}}_{0}({\cal M}(r,c))$ is
a smooth, trisymplectic manifold of complex dimension $4rc$. From [JV, Thm
3.8], we also know that the standard map
$\mathcal{F}_{\ell}(r,c){\>\longrightarrow\>}{\operatorname{Sec}}({\cal
M}(r,c))$ (8.6)
is an isomorphism (without the regularity condition, which would imply
smoothness). From Corollary 8.3 it follows that $\mathcal{F}_{\ell}(r,c)$ is a
trihyperkähler reduction of ${\operatorname{Sec}}_{0}(\mathbf{B}^{\rm
reg}(r,c))$. From its construction it is clear that the map (8.6) coincides
with the map
$\mathcal{F}_{\ell}(r,c)={\operatorname{Sec}}_{0}(\mathbf{B}^{\rm
reg}(r,c)){/\\!\\!/\\!\\!/\\!\\!/}U(V){\>\longrightarrow\>}{\operatorname{Sec}}(\mathbf{B}^{\rm
reg}(r,c){/\\!\\!/\\!\\!/}U(V))={\operatorname{Sec}}({\cal M}(r,c))$ (8.7)
constructed in Theorem 5.11. From this theorem it follows that (8.7) is in
fact an open embedding into ${\operatorname{Sec}}_{0}({\cal M}(r,c))$. Since
(8.6) is an isomorphism,
${\operatorname{Sec}}_{0}({\cal M}(r,c))={\operatorname{Sec}}({\cal M}(r,c)).$
This latter space is smooth, which proves smoothness of
$\mathcal{F}_{\ell}(r,c)$.
###### Remark 8.5.
Notice that Theorem 5.11 in itself only shows that the space
$\mathcal{F}_{\ell}(r,c)$, which is a trihyperkähler reduction of
${\operatorname{Sec}}_{0}(\mathbf{B}^{\rm reg}(r,c))$, embeds as an open subset into
${\operatorname{Sec}}_{0}({\cal M}(r,c))$. This already proves that
$\mathcal{F}_{\ell}(r,c)$ is smooth, but to prove that this map is an
isomorphism, we use [JV, Thm 3.8].
### 8.3 Moduli space of rank $\mathbf{2}$ instanton bundles on
$\mathbb{C}{\mathbb{P}^{3}}$
Let us now focus on the case of rank $2$ instanton bundles, which is rather
special. Recall that a mathematical instanton bundle on
$\mathbb{C}{\mathbb{P}^{3}}$ is a rank $2$ stable bundle
$E\to\mathbb{C}{\mathbb{P}^{3}}$ with $c_{1}(E)=0$ and $H^{1}(E(-2))=0$.
###### Proposition 8.6.
Rank $2$ instanton bundles on $\mathbb{C}{\mathbb{P}^{3}}$ are precisely
mathematical instanton bundles.
###### Proof.
If $E$ is a mathematical instanton bundle, then $H^{0}(E(-1))=0$ by stability.
Since $\Lambda^{2}E={\cal O}_{\mathbb{C}\mathbb{P}^{3}}$, there is a (unique
up to a scalar) symplectic isomorphism between $E$ and its dual $E^{*}$; one
can then use Serre duality to show that $H^{2}(E(-2))=H^{3}(E(-3))=0$, thus
$E$ is a rank $2$ instanton bundle.
Conversely, every instanton bundle can be presented as the cohomology of a
linear monad on $\mathbb{C}{\mathbb{P}^{3}}$ [J1, Theorem 3]. It is then a
classical fact that if $E$ is a rank $2$ bundle obtained as the cohomology of
a linear monad on $\mathbb{C}{\mathbb{P}^{3}}$, then $E$ is stable. Hence
every rank $2$ instanton bundle is a mathematical instanton bundle.
Let $\mathcal{I}(c)$ denote the moduli space of mathematical instanton bundles
and $\mathcal{I}_{\ell}(c)$ the open subset of $\mathcal{I}(c)$ consisting of
instanton bundles restricting trivially to a fixed line
$\ell\subset\mathbb{C}{\mathbb{P}^{3}}$.
Let also $\mathcal{G}(c)$ denote the moduli space of S-equivalence classes of
semistable torsion-free sheaves $E$ of rank $2$ on ${\mathbb{P}^{3}}$ with
$c_{1}(E)=0$, $c_{2}(E)=c$ and $c_{3}(E)=0$; it is a projective variety.
$\mathcal{I}(c)$ can be regarded as the open subset of $\mathcal{G}(c)$
consisting of those locally free sheaves satisfying $H^{1}(E(-2))=0$.
For any fixed line $\ell\subset\mathbb{C}{\mathbb{P}^{3}}$, $\mathcal{I}(c)$
is contained in $\overline{\mathcal{I}_{\ell}(c)}$, where the closure is taken
within $\mathcal{G}(c)$. Thus $\mathcal{I}(c)$ is irreducible if and only if
there is a line $\ell$ such that $\mathcal{I}_{\ell}(c)$ is irreducible.
Using a theorem due to Grauert and Mülich, we can conclude that every
mathematical instanton bundle must restrict trivially to some line
$\ell\subset\mathbb{C}\mathbb{P}^{3}$ (see [JV, Lemma 3.12]). Therefore,
$\mathcal{I}(c)$ is covered by open subsets of the form
$\mathcal{I}_{\ell}(c)$, but it is not contained in any single such subset, since
for any nontrivial bundle over $\mathbb{C}{\mathbb{P}^{3}}$ there must exist a
line $\ell^{\prime}$ such that the restricted sheaf $E|_{\ell^{\prime}}$ is
nontrivial. Thus $\mathcal{I}(c)$ and $\mathcal{I}_{\ell}(c)$ must have the
same dimension, and one is nonsingular if and only if the other is as well.
We are now ready to prove the smoothness of the moduli space of mathematical
instanton bundles on $\mathbb{C}{\mathbb{P}^{3}}$.
###### Theorem 8.7.
The moduli space $\mathcal{I}(c)$ of mathematical instanton bundles on
$\mathbb{C}{\mathbb{P}^{3}}$ of charge $c$ is a smooth complex manifold of
dimension $8c-3$.
###### Proof.
The forgetful map $\mathcal{F}_{\ell}(2,c)\to\mathcal{I}_{\ell}(c)$ that takes
the pair $(E,\phi)$ simply to $E$ has as fibers the set of all possible
framings at $\ell$ (up to equivalence). Since $E|_{\ell}\simeq
W\otimes\mathcal{O}_{\ell}$ [FJ, Proposition 13], a choice of framing
corresponds to a choice of basis for the $2$-dimensional space $W$, so all
fibers of the forgetful map are isomorphic to $SL(W)$. Since
$\mathcal{F}_{\ell}(2,c)$ is smooth of dimension $8c$, we conclude that
$\mathcal{I}_{\ell}(c)$ is also smooth and its dimension is $8c-3$. The
theorem follows from our previous discussion.
The irreducibility of $\mathcal{I}(c)$ for arbitrary $c$ remains an open
problem; it is only known to hold for $c$ odd or $c=2,4$. Clearly, if
$\mathcal{F}_{\ell}(2,c)$ is connected, then it must be irreducible, from which
one concludes that $\mathcal{I}_{\ell}(c)$, and hence $\mathcal{I}(c)$, are
irreducible. Since $\mathcal{F}_{\ell}(2,c)$ is a quotient of the set of globally
regular solutions of the 1-dimensional ADHM equations, it is actually enough
to prove that the latter is connected.
It is also worth mentioning a recent preprint of Markushevich and Tikhomirov
[MT], in which the authors prove that $\mathcal{I}(c)$ is rational whenever
it is irreducible. Thus, one concludes immediately that
$\mathcal{F}_{\ell}(2,c)$ is also rational whenever it is irreducible.
## References
* [AD] A. Andrada, I. G. Dotti, Double products and hypersymplectic structures on $\mathbb{R}^{4n}$. Comm. Math. Phys. 262 (2006), 1–16.
* [ADHM] M. F. Atiyah, V. G. Drinfel’d, N. J. Hitchin, Yu. I. Manin, Construction of Instantons. Phys. Lett. A 65 (1978), 185–187.
* [Ar] V. I. Arnold, The Lagrangian Grassmannian of a quaternion hypersymplectic space. Funct. Anal. Appl. 35 (2001), 61–63
* [A] M. F. Atiyah, Complex analytic connections in fibre bundles. Trans. Amer. Math. Soc. 85 (1957), 181–207.
* [B1] W. Barth, Some properties of stable rank-$2$ vector bundles on ${\bf P}_{n}$. Math. Ann. 226 (1977), 125–150.
* [B2] W. Barth, Irreducibility of the space of mathematical instanton bundles with rank $2$ and $c_{2}=4$. Math. Ann. 258 (1981/82), 81–106.
* [Bea] A. Beauville, Variétés kählériennes dont la première classe de Chern est nulle. J. Diff. Geom. 18 (1983), 755–782.
* [Bes] A. Besse, Einstein Manifolds. Springer-Verlag, New York (1987).
* [C] S. S. Chern, Eine Invariantentheorie der Dreigewebe aus r-dimensionalen Mannigfaltigkeiten im $R_{2r}$, Abhandl. Math. Semin. Univ. Hamburg, 1936.
* [CTT] I. Coandă, A. S. Tikhomirov, G. Trautmann, Irreducibility and smoothness of the moduli space of mathematical 5-instantons over ${\mathbb{P}^{3}}$. Internat. J. Math. 14 (2003), 1–45.
* [DS] A. Dancer, A. Swann, Hypersymplectic manifolds. In: Recent developments in pseudo-Riemannian geometry, 97–111, ESI Lect. Math. Phys., Eur. Math. Soc., Zürich, 2008.
* [D] S. Donaldson, Instantons and Geometric Invariant Theory. Commun. Math. Phys. 93 (1984), 453–460.
* [ES] G. Ellingsrud and S.A. Stromme, Stable rank $2$ vector bundles on ${\mathbb{P}^{3}}$ with $c_{1}=0$ and $c_{2}=3$. Math. Ann. 255 (1981), 123–135.
* [F1] B. Feix, Hyperkähler metrics on cotangent bundles, J. Reine Angew. Math. 532 (2001), 33–46.
* [FJ] I. B. Frenkel, M. Jardim, Complex ADHM equations, and sheaves on ${\mathbb{P}^{3}}$. J. Algebra 319 (2008), 2913–2937.
* [GM] H. Grauert, G. Mülich, Vektorbündel vom Rang 2 über dem $n$-dimensionalen komplex-projektiven Raum. Manuscripta Math. 16 (1975), 75–100.
* [H] R. Hartshorne, Stable vector bundles of rank $2$ on ${\mathbb{P}^{3}}$. Math. Ann. 238 (1978), 229–280.
* [HL] M. Hauzer, A. Langer, Moduli spaces of framed perverse instantons on ${\mathbb{P}^{3}}$. Glasgow Math. J. 53 (2011), 51–96.
* [HJV] A. A. Henni, M. Jardim, R. V. Martins, ADHM construction of perverse instanton sheaves. Preprint arXiv:1201.5657.
* [HKLR] N. J. Hitchin, A. Karlhede, U. Lindström, M. Roček, Hyperkähler metrics and supersymmetry. Comm. Math. Phys. 108, (1987) 535–589.
* [J1] M. Jardim, Instanton sheaves on complex projective spaces. Collec. Math. 57 (2006), 69–91.
* [J2] M. Jardim, Atiyah–Drinfeld–Hitchin–Manin construction of framed instanton sheaves. C. R. Acad. Sci. Paris, Ser. I 346 (2008), 427–430.
* [JV] M. Jardim, M. Verbitsky, Moduli spaces of framed instanton bundles on $\mathbb{C}{\mathbb{P}^{3}}$ and twistor sections of moduli spaces of instantons on $\mathbb{R}^{4}$. Adv. Math. 227 (2011),1526–1538.
* [K] D. Kaledin, Hyperkähler structures on total spaces of holomorphic cotangent bundles, in: D. Kaledin, M. Verbitsky (Eds.), Hyperkähler manifolds. International Press, Boston, 2001.
* [KV] D. Kaledin, M. Verbitsky, Non-Hermitian Yang-Mills connections. Selecta Math. (N.S.) 4 (1998), 279–320.
* [MT] D. Markushevich, A. S. Tikhomirov, Rationality of instanton moduli. Preprint arXiv:1012.4132.
* [M] M. Maruyama, The Theorem of Grauert–Mülich–Spindler. Math. Ann. 255 (1981), 317–333.
* [Nak] H. Nakajima, Lectures on Hilbert schemes of points on surfaces. Providence: American Mathematical Society, 1999
* [LeP] J. Le Potier, Sur l’espace de modules des fibrés de Yang et Mills. In: Mathematics and physics, 65–137. Progr. Math. 37, Birkhäuser Boston, 1983.
* [Sal] S. Salamon, Quaternionic Kähler manifolds. Inv. Math. 67 (1982), 143–171.
* [T] A. S. Tikhomirov. Moduli of mathematical instanton vector bundles with odd $c_{2}$ on projective space. Preprint arXiv:1101.3016.
* [V1] M. Verbitsky, Hyperholomorphic bundles over a hyperkähler manifold. Journ. of Alg. Geom., 5 (1996), 633-669.
* [V2] M. Verbitsky, Hypercomplex Varieties. Comm. Anal. Geom. 7 (1999), no. 2, 355–396.
# Zeckendorf family identities generalized
Darij Grinberg
(22 March 2011 (version 7, brief version, arXiv))
###### Abstract
In [1], Philip Matchett Wood and Doron Zeilberger have constructed identities
for the Fibonacci numbers $f_{n}$ of the form
$\displaystyle 1f_{n}$ $\displaystyle=f_{n}\text{ for all }n\geq 1;$
$\displaystyle 2f_{n}$ $\displaystyle=f_{n-2}+f_{n+1}\text{ for all }n\geq 3;$
$\displaystyle 3f_{n}$ $\displaystyle=f_{n-2}+f_{n+2}\text{ for all }n\geq 3;$
$\displaystyle 4f_{n}$ $\displaystyle=f_{n-2}+f_{n}+f_{n+2}\text{ for all
}n\geq 3;$ etc.; $\displaystyle kf_{n}$ $\displaystyle=\sum_{s\in
S_{k}}f_{n+s}\text{ for all }n>\max\left\\{-s\mid s\in S_{k}\right\\}\text{,}$
where $S_{k}$ is a fixed "holey" set of integers ("holey" means that no two
elements of this set are consecutive integers) depending only on $k$. (The
condition $n>\max\left\\{-s\mid s\in S_{k}\right\\}$ is only to make sure that
all addends $f_{n+s}$ are well-defined. If the Fibonacci sequence is properly
continued to the negative, this condition drops out.)
In this note we prove a generalization of these identities: For any family
$\left(a_{1},a_{2},...,a_{p}\right)$ of integers, there exists one and only
one finite holey set $S$ of integers such that every $n$ (high enough to make
the Fibonacci numbers in the equation below well-defined) satisfies
$f_{n+a_{1}}+f_{n+a_{2}}+...+f_{n+a_{p}}=\sum\limits_{s\in S}f_{n+s}.$
The proof uses the Fibonacci-approximating properties of the golden ratio. It
would be interesting to find a purely combinatorial proof.
This is a brief version of the note, optimized for readability. A more
detailed account of the proof can be found in [2].
The purpose of this note is to establish a generalization of the so-called
Zeckendorf family identities which were discussed in [1]. First, some
definitions:
> Definitions. 1) A subset $S$ of $\mathbb{Z}$ is called holey if it satisfies
> $\left(s+1\notin S\text{ for every }s\in S\right)$.
>
> 2) Let $\left(f_{1},f_{2},f_{3},...\right)$ be the Fibonacci sequence
> (defined by $f_{1}=f_{2}=1$ and the recurrence relation
> $\left(f_{n}=f_{n-1}+f_{n-2}\text{ for all }n\in\mathbb{N}\text{ satisfying
> }n\geq 3\right)$).
Our main theorem is:
> Theorem 1 (generalized Zeckendorf family identities). Let $T$ be a finite
> set, and $a_{t}$ be an integer for every $t\in T$.
>
> Then, there exists one and only one finite holey subset $S$ of $\mathbb{Z}$
> such that
>
> $\left(\sum\limits_{t\in T}f_{n+a_{t}}=\sum\limits_{s\in S}f_{n+s}\text{ for
> every }n\in\mathbb{Z}\text{ which satisfies }n>\max\left(\left\\{-a_{t}\mid
> t\in T\right\\}\cup\left\\{-s\mid s\in S\right\\}\right)\right).$
Remarks.
1) The Zeckendorf family identities from [1] are the result of applying
Theorem 1 to the case when $T$ has $k$ elements and all $a_{t}$ equal $0$. The
first seven of these identities are
$\displaystyle 1f_{n}$ $\displaystyle=f_{n}\text{ for all }n\geq 1;$
$\displaystyle 2f_{n}$ $\displaystyle=f_{n-2}+f_{n+1}\text{ for all }n\geq 3;$
$\displaystyle 3f_{n}$ $\displaystyle=f_{n-2}+f_{n+2}\text{ for all }n\geq 3;$
$\displaystyle 4f_{n}$ $\displaystyle=f_{n-2}+f_{n}+f_{n+2}\text{ for all }n\geq 3;$
$\displaystyle 5f_{n}$ $\displaystyle=f_{n-4}+f_{n-1}+f_{n+3}\text{ for all }n\geq 5;$
$\displaystyle 6f_{n}$ $\displaystyle=f_{n-4}+f_{n+1}+f_{n+3}\text{ for all }n\geq 5;$
$\displaystyle 7f_{n}$ $\displaystyle=f_{n-4}+f_{n+4}\text{ for all }n\geq 5.$
2) The condition $n>\max\left(\left\\{-a_{t}\mid t\in
T\right\\}\cup\left\\{-s\mid s\in S\right\\}\right)$ in Theorem 1 is just a
technical condition made in order to ensure that the Fibonacci numbers
$f_{n+a_{t}}$ for all $t\in T$ and $f_{n+s}$ for all $s\in S$ are well-
defined. (If we defined the Fibonacci numbers $f_{n}$ for integers $n\leq
0$ by extending the recurrence relation $f_{n}=f_{n-1}+f_{n-2}$ "to the left",
then we could drop this condition.)
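These identities are easily checked by machine; a throwaway verification (our addition, using $f_{1}=f_{2}=1$):

```python
def fib(n):
    """Return f_n with f_1 = f_2 = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

# the first seven Zeckendorf family identities, for a range of valid n
for n in range(5, 40):
    assert 2 * fib(n) == fib(n - 2) + fib(n + 1)
    assert 3 * fib(n) == fib(n - 2) + fib(n + 2)
    assert 4 * fib(n) == fib(n - 2) + fib(n) + fib(n + 2)
    assert 5 * fib(n) == fib(n - 4) + fib(n - 1) + fib(n + 3)
    assert 6 * fib(n) == fib(n - 4) + fib(n + 1) + fib(n + 3)
    assert 7 * fib(n) == fib(n - 4) + fib(n + 4)
```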
The following is my proof of Theorem 1. It does not even try to be
combinatorial - it is pretty much the opposite. Technically, it is completely
elementary and does not resort to any theorems from analysis; but the method
used (choosing a "large enough" $N$ to make an estimate work) is an analytic
one.
First, some lemmas and notations:
We denote by $\mathbb{N}$ the set $\left\\{0,1,2,...\right\\}$ (and not the
set $\left\\{1,2,3,...\right\\}$, like some other authors do). Also, we denote
by $\mathbb{N}_{2}$ the set
$\left\\{2,3,4,...\right\\}=\mathbb{N}\setminus\left\\{0,1\right\\}$.
Also, let $\phi=\dfrac{1+\sqrt{5}}{2}$. We notice that $\phi\approx 1.618...$
and that $\phi^{2}=\phi+1$.
First, some basic (and known) lemmas on the Fibonacci sequence:
> Lemma 2. If $S$ is a finite holey subset of $\mathbb{N}_{2}$, then
> $\sum\limits_{t\in S}f_{t}<f_{\max S+1}$.
Proof. This is rather clear either by a telescoping sum argument (write the
set $S$ in the form $\left\\{s_{1},s_{2},...,s_{k}\right\\}$ with
$s_{1}<s_{2}<...<s_{k}$, notice that
$\sum\limits_{t\in
S}f_{t}=\sum\limits_{i=1}^{k}f_{s_{i}}=\sum\limits_{i=1}^{k}\left(f_{s_{i}+1}-f_{s_{i}-1}\right)=\sum\limits_{i=1}^{k-1}\left(f_{s_{i}+1}-f_{s_{i+1}-1}\right)+\underbrace{f_{s_{k}+1}}_{=f_{\max
S+1}}-\underbrace{f_{s_{1}-1}}_{>0},$
and use $s_{i}+1\leq s_{i+1}-1$ since the set $S$ is holey) or by induction
over $\max S$ (use $f_{\max S+1}=f_{\max S}+f_{\max S-1}$ here).
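Lemma 2 can likewise be spot-checked numerically (an illustration only, not a substitute for the proof): for random holey subsets $S$ of $\left\{2,3,...,20\right\}$, the sum $\sum_{t\in S}f_{t}$ stays strictly below $f_{\max S+1}$.

```python
import random

def fib(n):
    a, b = 1, 1            # f_1, f_2
    for _ in range(n - 1):
        a, b = b, a + b
    return a

random.seed(0)
for _ in range(200):
    S, t = [], 2
    while t <= 20:
        if random.random() < 0.5:
            S.append(t)
            t += 2         # skip the successor, so S stays holey
        else:
            t += 1
    if S:
        assert sum(fib(t) for t in S) < fib(max(S) + 1)
```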
> Lemma 3 (existence part of the Zeckendorf theorem). For every nonnegative
> integer $n$, there exists a finite holey subset $T$ of $\mathbb{N}_{2}$ such
> that $n=\sum\limits_{t\in T}f_{t}$.
Proof. Induction over $n$. The main idea here is to let $t_{1}$ be the maximal
$\tau\in\mathbb{N}_{2}$ satisfying $f_{\tau}\leq n$, and apply the induction
hypothesis to $n-f_{t_{1}}$ instead of $n$. The details are left to the reader
(and can be found in [2]).
> Lemma 4 (uniqueness part of the Zeckendorf theorem). Let $n\in\mathbb{N}$,
> and let $T$ and $T^{\prime}$ be two finite holey subsets of $\mathbb{N}_{2}$
> such that $n=\sum\limits_{t\in T}f_{t}$ and $n=\sum\limits_{t\in
> T^{\prime}}f_{t}$. Then, $T=T^{\prime}$.
Proof. Induction over $n$. Use Lemma 2 to show that $\max T<\max T^{\prime}+1$
and $\max T^{\prime}<\max T+1$, resulting in $\max T=\max T^{\prime}$. Hence,
the sets $T$ and $T^{\prime}$ have an element in common, and we can reduce the
situation to one with a smaller $n$ by removing this common element from both
sets.
Lemmas 3 and 4 together yield:
> Theorem 5 (Zeckendorf theorem). For every nonnegative integer $n$, there
> exists one and only one finite holey subset $T$ of $\mathbb{N}_{2}$ such
> that $n=\sum\limits_{t\in T}f_{t}$.
>
> We will denote this set $T$ by $Z_{n}$. Thus, $n=\sum\limits_{t\in
> Z_{n}}f_{t}$.
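The greedy construction behind Lemma 3 (always subtract the largest Fibonacci number that still fits) makes $Z_{n}$ easy to compute. The following C++ sketch, with function names of our own choosing (not from this paper), returns the index set $Z_{n}$:

```cpp
#include <vector>

// Greedy Zeckendorf decomposition (the idea behind Lemma 3):
// repeatedly subtract the largest Fibonacci number f_tau (tau >= 2)
// that still fits into n. The resulting index set is automatically
// holey, since after choosing f_i the remainder is < f_{i-1}.
std::vector<int> zeckendorf(long long n) {
    // fib[i] = f_i with f_0 = 0, f_1 = 1
    std::vector<long long> fib = {0, 1};
    while (fib.back() <= n)
        fib.push_back(fib[fib.size() - 1] + fib[fib.size() - 2]);
    std::vector<int> T;
    for (int i = (int)fib.size() - 1; i >= 2 && n > 0; --i)
        if (fib[i] <= n) { T.push_back(i); n -= fib[i]; }
    return T;  // indices in decreasing order
}
```

For instance, $100=f_{11}+f_{6}+f_{4}=89+8+3$, so $Z_{100}=\left\\{4,6,11\right\\}$.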
Now for something completely trivial:
> Theorem 6. For every $n\in\mathbb{N}_{2}$, we have $\left|f_{n+1}-\phi
> f_{n}\right|=\left(\phi-1\right)^{n}$.
Proof. Binet’s formula yields
$f_{n}=\dfrac{\phi^{n}-\left(-1\right)^{n}\phi^{-n}}{\sqrt{5}}$; hence
$f_{n+1}-\phi
f_{n}=\dfrac{\left(-1\right)^{n}\phi^{-n}\left(\phi+\phi^{-1}\right)}{\sqrt{5}}=\left(-1\right)^{n}\phi^{-n}$,
since $\phi+\phi^{-1}=\sqrt{5}$. As $\phi^{-1}=\phi-1$, taking absolute values
completes the computation.
Yet another lemma:
> Theorem 7. If $S$ is a finite holey subset of $\mathbb{N}_{2}$, then
> $\sum\limits_{s\in S}\left(\phi-1\right)^{s}\leq\phi-1$.
Proof of Theorem 7. Since $S$ is a holey subset of $\mathbb{N}_{2}$, the
smallest element of $S$ is at least $2$, the second smallest element of $S$ is
at least $4$ (since it is larger than the smallest element by at least $2$),
the third smallest element of $S$ is at least $6$ (since it is larger than the
second smallest element by at least $2$), and so on. Since the map
$\mathbb{N}\rightarrow\mathbb{R}$, $s\mapsto\left(\phi-1\right)^{s}$ is
monotonically decreasing (as $0<\phi-1<1$), we thus have
$\sum_{s\in
S}\left(\phi-1\right)^{s}\leq\sum_{s\in\left\\{2,4,6,...\right\\}}\left(\phi-1\right)^{s}=\sum_{t\in\left\\{1,2,3,...\right\\}}\left(\phi-1\right)^{2t}=\phi-1$
(by the formula for the sum of the geometric series, along with some
computations). This proves Theorem 7.
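For completeness, the “some computations” boil down to $\left(\phi-1\right)^{2}=\phi^{2}-2\phi+1=2-\phi$ (using $\phi^{2}=\phi+1$) and the geometric series:

```latex
\sum_{t\in\left\{1,2,3,...\right\}}\left(\phi-1\right)^{2t}
  =\frac{\left(\phi-1\right)^{2}}{1-\left(\phi-1\right)^{2}}
  =\frac{\left(\phi-1\right)^{2}}{\phi-1}
  =\phi-1,
```

since $1-\left(\phi-1\right)^{2}=1-\left(2-\phi\right)=\phi-1$.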
Let us now come to the proof of Theorem 1. First, we formulate the existence
part of this theorem:
> Theorem 8 (existence part of the generalized Zeckendorf family identities).
> Let $T$ be a finite set, and $a_{t}$ be an integer for every $t\in T$.
>
> Then, there exists a finite holey subset $S$ of $\mathbb{Z}$ such that
>
> $\left(\sum\limits_{t\in T}f_{n+a_{t}}=\sum\limits_{s\in S}f_{n+s}\text{ for
> every }n\in\mathbb{Z}\text{ which satisfies }n>\max\left(\left\\{-a_{t}\mid
> t\in T\right\\}\cup\left\\{-s\mid s\in S\right\\}\right)\right).$
Before we start proving this, we need a new notation:
> Definition. Let $K$ be a subset of $\mathbb{Z}$, and $a\in\mathbb{Z}$. Then,
> $K+a$ will denote the subset $\left\\{k+a\ \mid\ k\in K\right\\}$ of
> $\mathbb{Z}$.
Clearly, $\left(K+a\right)+b=K+\left(a+b\right)$ for any two integers $a$ and
$b$. Also, $K+0=K$. Finally, if $K$ is a holey subset of $\mathbb{Z}$, and if
$a\in\mathbb{Z}$, then $K+a$ is holey as well.
Proof of Theorem 8. Choose a high enough integer $N$. What exactly “high
enough” means will become clear later; for the moment, we only require
$N\in\mathbb{N}_{2}$ and $N>\max\left\\{-a_{t}\mid t\in T\right\\}$. We might
later want $N$ to be even higher, however.
Let $\nu=\sum\limits_{t\in T}f_{N+a_{t}}$. Then, Lemma 3 yields
$\nu=\sum\limits_{t\in Z_{\nu}}f_{t}$ for a finite holey subset $Z_{\nu}$ of
$\mathbb{N}_{2}$. Let $S=\left\\{t-N\ \mid\ t\in Z_{\nu}\right\\}$. Then,
$S=Z_{\nu}+\left(-N\right)$ is a finite holey subset of $\mathbb{Z}$, and
$\nu=\sum\limits_{t\in Z_{\nu}}f_{t}$ becomes $\nu=\sum\limits_{s\in
S}f_{N+s}$. So now we know that $\sum\limits_{t\in
T}f_{N+a_{t}}=\sum\limits_{s\in S}f_{N+s}$ (because both sides of this
equation equal $\nu$).
So, we have chosen a high $N$ and found a finite holey subset $S$ of
$\mathbb{Z}$ which satisfies $\sum\limits_{t\in
T}f_{N+a_{t}}=\sum\limits_{s\in S}f_{N+s}$. But Theorem 8 is not proven yet:
Theorem 8 requires a single finite holey subset $S$ of $\mathbb{Z}$ which
works for every $n$, while at the moment we cannot be sure that different
$N$’s would not produce different sets $S$. And, in fact, different $N$’s can
produce different sets $S$, but (fortunately!) only if the $N$’s are too
small. If we take $N$ high enough, the set $S$ that we obtained turns out to
be universal, i.e., it satisfies
$\sum\limits_{t\in T}f_{n+a_{t}}=\sum\limits_{s\in S}f_{n+s}\ \ \ \ \ \ \ \ \
\ \text{for every }n\in\mathbb{Z}\text{ which satisfies
}n>\max\left(\left\\{-a_{t}\mid t\in T\right\\}\cup\left\\{-s\mid s\in
S\right\\}\right).$ (1)
We are now going to prove this.
In order to prove (1), we need two assertions:
Assertion 1: If some $n\in\mathbb{Z}$ satisfies $n\geq N$ and
$\sum\limits_{t\in T}f_{n+a_{t}}=\sum\limits_{s\in S}f_{n+s}$, then
$\sum\limits_{t\in T}f_{\left(n+1\right)+a_{t}}=\sum\limits_{s\in
S}f_{\left(n+1\right)+s}$.
Assertion 2: If some $n\in\mathbb{Z}$ satisfies $\sum\limits_{t\in
T}f_{n+a_{t}}=\sum\limits_{s\in S}f_{n+s}$ and $\sum\limits_{t\in
T}f_{\left(n+1\right)+a_{t}}=\sum\limits_{s\in S}f_{\left(n+1\right)+s}$, then
$\sum\limits_{t\in T}f_{\left(n-1\right)+a_{t}}=\sum\limits_{s\in
S}f_{\left(n-1\right)+s}$ (if $n-1>\max\left(\left\\{-a_{t}\mid t\in
T\right\\}\cup\left\\{-s\mid s\in S\right\\}\right)$).
Obviously, Assertion 1 yields (by induction) that $\sum\limits_{t\in
T}f_{n+a_{t}}=\sum\limits_{s\in S}f_{n+s}$ for every $n\geq N$, and Assertion
2 then finishes off the remaining $n$’s (by backwards induction, or, to be
more precise, by an induction step from $n+1$ and $n$ to $n-1$). Thus, once
both Assertions 1 and 2 are proven, (1) will follow and thus Theorem 8 will be
proven.
Assertion 2 is almost trivial (just notice that
$\sum\limits_{t\in T}f_{\left(n-1\right)+a_{t}}=\sum\limits_{t\in
T}\underbrace{f_{n+a_{t}-1}}_{=f_{n+a_{t}+1}-f_{n+a_{t}}}=\sum\limits_{t\in
T}f_{n+a_{t}+1}-\sum\limits_{t\in T}f_{n+a_{t}}=\sum\limits_{t\in
T}f_{\left(n+1\right)+a_{t}}-\sum\limits_{t\in T}f_{n+a_{t}}$
and
$\sum\limits_{s\in S}f_{\left(n-1\right)+s}=\sum\limits_{s\in
S}\underbrace{f_{n+s-1}}_{=f_{n+s+1}-f_{n+s}}=\sum\limits_{s\in
S}f_{n+s+1}-\sum\limits_{s\in S}f_{n+s}=\sum\limits_{s\in
S}f_{\left(n+1\right)+s}-\sum\limits_{s\in S}f_{n+s}$
), so it only remains to prove Assertion 1.
So let us prove Assertion 1. Here we are going to use that $N$ is high enough
(because otherwise, Assertion 1 wouldn’t hold). We have $\sum\limits_{t\in
T}f_{n+a_{t}}=\sum\limits_{s\in S}f_{n+s}$ by assumption, so that
$\sum\limits_{t\in T}f_{n+a_{t}}-\sum\limits_{s\in S}f_{n+s}=0$. Thus,
$\displaystyle\sum\limits_{t\in T}f_{\left(n+1\right)+a_{t}}-\sum\limits_{s\in
S}f_{\left(n+1\right)+s}$ $\displaystyle=\sum\limits_{t\in
T}f_{\left(n+1\right)+a_{t}}-\sum\limits_{s\in
S}f_{\left(n+1\right)+s}-\phi\left(\sum\limits_{t\in
T}f_{n+a_{t}}-\sum\limits_{s\in S}f_{n+s}\right)$
$\displaystyle=\sum\limits_{t\in T}\left(f_{\left(n+1\right)+a_{t}}-\phi
f_{n+a_{t}}\right)-\sum\limits_{s\in S}\left(f_{\left(n+1\right)+s}-\phi
f_{n+s}\right)$ $\displaystyle=\sum\limits_{t\in T}\left(f_{n+a_{t}+1}-\phi
f_{n+a_{t}}\right)-\sum\limits_{s\in S}\left(f_{n+s+1}-\phi f_{n+s}\right),$
so that
$\displaystyle\left|\sum\limits_{t\in
T}f_{\left(n+1\right)+a_{t}}-\sum\limits_{s\in
S}f_{\left(n+1\right)+s}\right|=\left|\sum\limits_{t\in
T}\left(f_{n+a_{t}+1}-\phi f_{n+a_{t}}\right)-\sum\limits_{s\in
S}\left(f_{n+s+1}-\phi f_{n+s}\right)\right|$
$\displaystyle\leq\sum\limits_{t\in T}\left|f_{n+a_{t}+1}-\phi
f_{n+a_{t}}\right|+\sum\limits_{s\in S}\left|f_{n+s+1}-\phi f_{n+s}\right|\ \
\ \ \ \ \ \ \ \ \left(\text{by the triangle inequality}\right)$
$\displaystyle\leq\sum\limits_{t\in
T}\left(\phi-1\right)^{n+a_{t}}+\sum\limits_{s\in S}\left(\phi-1\right)^{n+s}\
\ \ \ \ \ \ \ \ \ \left(\text{by Theorem 6}\right)$
$\displaystyle\leq\sum\limits_{t\in
T}\left(\phi-1\right)^{N+a_{t}}+\sum\limits_{s\in S}\left(\phi-1\right)^{N+s}\
\ \ \ \ \ \ \ \ \ \left(\begin{array}[c]{c}\text{since
}\left(\phi-1\right)^{n+a_{t}}\leq\left(\phi-1\right)^{N+a_{t}}\text{ and}\\\
\left(\phi-1\right)^{n+s}\leq\left(\phi-1\right)^{N+s}\text{, because}\\\
n\geq N\text{ and }0\leq\phi-1\leq 1\end{array}\right)$ (5)
$\displaystyle=\sum\limits_{t\in
T}\left(\phi-1\right)^{N+a_{t}}+\sum\limits_{t\in
Z_{\nu}}\left(\phi-1\right)^{t}\ \ \ \ \ \ \ \ \ \ \left(\text{since
}S=\left\\{t-N\ \mid\ t\in Z_{\nu}\right\\}\right)$
$\displaystyle=\sum\limits_{t\in
T}\left(\phi-1\right)^{N+a_{t}}+\sum\limits_{s\in
Z_{\nu}}\left(\phi-1\right)^{s}=\left(\phi-1\right)^{N}\sum\limits_{t\in
T}\left(\phi-1\right)^{a_{t}}+\sum\limits_{s\in
Z_{\nu}}\left(\phi-1\right)^{s}$
$\displaystyle\leq\left(\phi-1\right)^{N}\sum\limits_{t\in
T}\left(\phi-1\right)^{a_{t}}+\left(\phi-1\right)$ (6)
(since $\sum\limits_{s\in Z_{\nu}}\left(\phi-1\right)^{s}\leq\phi-1$ by
Theorem 7, because $Z_{\nu}$ is a holey subset of $\mathbb{N}_{2}$).
Now, $\sum\limits_{t\in T}\left(\phi-1\right)^{a_{t}}$ is a constant, while
$\left(\phi-1\right)^{N}\rightarrow 0$ for $N\rightarrow\infty$. Hence, we can
make the product $\left(\phi-1\right)^{N}\sum\limits_{t\in
T}\left(\phi-1\right)^{a_{t}}$ arbitrarily close to $0$ if we choose $N$ high
enough. Since $\phi-1<1$, we have
$\left(\phi-1\right)^{N}\sum\limits_{t\in
T}\left(\phi-1\right)^{a_{t}}+\left(\phi-1\right)<1$ (7)
if $\left(\phi-1\right)^{N}\sum\limits_{t\in T}\left(\phi-1\right)^{a_{t}}$ is
sufficiently close to $0$, which we can enforce by taking a high enough $N$.
This is exactly the point where we require $N$ to be high enough.
So let us take $N$ high enough so that (7) holds. Combined with (6), it then
yields
$\left|\sum\limits_{t\in T}f_{\left(n+1\right)+a_{t}}-\sum\limits_{s\in
S}f_{\left(n+1\right)+s}\right|<1,$
which leads to $\left|\sum\limits_{t\in
T}f_{\left(n+1\right)+a_{t}}-\sum\limits_{s\in
S}f_{\left(n+1\right)+s}\right|=0$ (since $\left|\sum\limits_{t\in
T}f_{\left(n+1\right)+a_{t}}-\sum\limits_{s\in
S}f_{\left(n+1\right)+s}\right|$ is a nonnegative integer). In other words,
$\sum\limits_{t\in T}f_{\left(n+1\right)+a_{t}}=\sum\limits_{s\in
S}f_{\left(n+1\right)+s}$. This completes the proof of Assertion 1, and, with
it, the proof of Theorem 8.
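The construction in this proof is effective and easy to experiment with. The following C++ sketch (function names ours, not from this paper) picks a large $N$, greedily Zeckendorf-decomposes $\nu=\sum_{t\in T}f_{N+a_{t}}$, and shifts the indices by $-N$ to obtain $S$:

```cpp
#include <vector>
#include <algorithm>

// Build the Fibonacci numbers f_0..f_upTo (f_0 = 0, f_1 = 1).
static std::vector<long long> fibs(int upTo) {
    std::vector<long long> f(upTo + 1);
    f[0] = 0; f[1] = 1;
    for (int i = 2; i <= upTo; ++i) f[i] = f[i - 1] + f[i - 2];
    return f;
}

// Mirror the proof of Theorem 8: nu = sum_t f_{N+a_t}, then a greedy
// Zeckendorf decomposition of nu, then shift all indices by -N.
std::vector<int> zeckendorf_shift(const std::vector<int>& a, int N) {
    std::vector<long long> f = fibs(N + 20);  // plenty for small examples
    long long nu = 0;
    for (int at : a) nu += f[N + at];
    std::vector<int> S;
    for (int i = (int)f.size() - 1; i >= 2 && nu > 0; --i)
        if (f[i] <= nu) { S.push_back(i - N); nu -= f[i]; }
    std::sort(S.begin(), S.end());
    return S;
}
```

With the shifts $a=\left(-1,0,1\right)$ and $N=30$, this yields $S=\left\\{-1,2\right\\}$, i.e. the identity $f_{n-1}+f_{n}+f_{n+1}=f_{n-1}+f_{n+2}$, which indeed holds since $f_{n}+f_{n+1}=f_{n+2}$.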
All that remains now is the (rather trivial) uniqueness part of Theorem 1:
> Lemma 9 (uniqueness part of the generalized Zeckendorf family identities).
> Let $T$ be a finite set, and $a_{t}$ be an integer for every $t\in T$.
>
> Let $S$ be a finite holey subset of $\mathbb{Z}$ such that
>
> $\left(\sum\limits_{t\in T}f_{n+a_{t}}=\sum\limits_{s\in S}f_{n+s}\text{ for
> every }n\in\mathbb{Z}\text{ which satisfies }n>\max\left(\left\\{-a_{t}\mid
> t\in T\right\\}\cup\left\\{-s\mid s\in S\right\\}\right)\right).$
>
> Let $S^{\prime}$ be a finite holey subset of $\mathbb{Z}$ such that
>
> $\left(\sum\limits_{t\in T}f_{n+a_{t}}=\sum\limits_{s\in
> S^{\prime}}f_{n+s}\text{ for every }n\in\mathbb{Z}\text{ which satisfies
> }n>\max\left(\left\\{-a_{t}\mid t\in T\right\\}\cup\left\\{-s\mid s\in
> S^{\prime}\right\\}\right)\right).$
>
> Then, $S=S^{\prime}$.
Proof of Lemma 9. Let
$n=\max\left(\left\\{-a_{t}\mid t\in T\right\\}\cup\left\\{-s\mid s\in
S\right\\}\cup\left\\{-s\mid s\in S^{\prime}\right\\}\right)+2.$ (8)
Then, $n$ satisfies $n>\max\left(\left\\{-a_{t}\mid t\in
T\right\\}\cup\left\\{-s\mid s\in S\right\\}\right)$, so that
$\displaystyle\sum\limits_{t\in T}f_{n+a_{t}}$
$\displaystyle=\sum\limits_{s\in S}f_{n+s}\ \ \ \ \ \ \ \ \ \ \left(\text{by
the condition of Lemma 9}\right)$ $\displaystyle=\sum\limits_{t\in S+n}f_{t}\
\ \ \ \ \ \ \ \ \ \left(\begin{array}[c]{c}\text{here, we substituted }t\text{
for }n+s\text{, since the map}\\\ S\rightarrow S+n,\ s\mapsto n+s\text{ is a
bijection}\end{array}\right).$
Similarly, $\sum\limits_{t\in T}f_{n+a_{t}}=\sum\limits_{t\in
S^{\prime}+n}f_{t}$. Hence, $\sum\limits_{t\in S+n}f_{t}=\sum\limits_{t\in
S^{\prime}+n}f_{t}$. Since the sets $S+n$ and $S^{\prime}+n$ are both holey
(since so are $S$ and $S^{\prime}$) and finite (since so are $S$ and
$S^{\prime}$), and are subsets of $\mathbb{N}_{2}$ (here is where we use (8)),
we can now conclude from Lemma 4 that $S+n=S^{\prime}+n$, so that
$S=S^{\prime}$, proving Lemma 9.
Now, Theorem 1 is clear, since the existence follows from Theorem 8 and the
uniqueness from Lemma 9.
References
[1] Philip Matchett Wood, Doron Zeilberger, A translation method for finding
combinatorial bijections, to appear in Annals of Combinatorics.
http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/trans-method.html
[2] Darij Grinberg, Zeckendorf family identities generalized, version 7, 18
March 2011 *long version*.
http://www.cip.ifi.lmu.de/~grinberg/zeckendorfLONG.pdf
|
arxiv-papers
| 2011-03-23T13:21:45 |
2024-09-04T02:49:17.895688
|
{
"license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/",
"authors": "Darij Grinberg",
"submitter": "Darij Grinberg",
"url": "https://arxiv.org/abs/1103.4507"
}
|
1103.4521
|
# An implementation of range trees with fractional cascading in C++
Vissarion Fisikopoulos
###### Abstract
Range trees are multidimensional binary trees which are used to perform
d-dimensional orthogonal range searching. In this technical report we study
the implementation issues of range trees with fractional cascading, named
layered range trees. We also document our implementation of range trees with
fractional cascading in C++ using STL and generic programming techniques.
## 1 Introduction
This project is an implementation of range trees with fractional cascading,
called layered range trees, in C++ using STL and generic programming
techniques. Range trees are multidimensional binary trees which are used to
perform $d$-dimensional orthogonal range searching. Range trees were
discovered independently by several people, including Bentley [1], who also
discovered kd-trees, and Lueker, who introduced the technique of fractional
cascading for range trees [7]. An introduction to orthogonal range searching,
range trees and fractional cascading can be found in [6, 9]. In [2] there is a
presentation of a project on efficient implementations of range trees in 2-3
dimensions, including layered ones, together with some experimental results.
## 2 Complexity issues
Range trees answer a $d$-dimensional range query in time $O(\log^{d}n+k)$,
where $n$ is the number of points and $k$ is the number of reported points.
The construction time and the space consumed by the tree are
$O(n\log^{d-1}n)$. Using fractional cascading we save a $\log n$ factor in the
last level of the tree, and the resulting query time becomes
$O(\log^{d-1}n+k)$. Intuitively, fractional cascading performs one binary
search instead of two in the last level. The optimal solution to the
orthogonal range search problem is due to Chazelle [5, 4], who proposed a
structure with query time $O(\log^{c}n+k)$ and $O(n(\log n/\log\log
n)^{d-1})$ space consumption, where $c$ is a constant.
## 3 Range trees in CGAL
Although the CGAL library [3] provides some classes for range trees, there is
room for optimization in that package [10]. First, there is no recursive
construction of a $d$-dimensional range tree: the only way to construct a
range tree of dimension $d$ is to build a tree of dimension 1, then make it
the associated tree of a new tree of dimension 2, then build a tree of
dimension 3 with this tree as its associated tree, and so on until the whole
$d$-dimensional tree is constructed.
In addition, the package uses virtual functions, which increase the running
time, and it provides no fractional cascading.
The proposed approach uses nested templates for the representation of the
$d$-dimensional range tree. The dimension of the tree must be a constant,
fixed at compile time. In the last level a fractional cascading structure is
constructed.
For example, a 4-dimensional range tree of size $n$ with a different kind of
data at each layer is given by the following nested template definition.
Layered_range_tree<DataClass,
    Layered_range_tree<DataClass,
        Last_range_tree<DataClass> > > tree(n);
Note that for each layer $i<d-1$ the same class Layered_range_tree is used.
The last two layers, in which the fractional cascading is implemented, use the
Last_range_tree class. The DataClass has the definitions of each layer’s own
data along with the comparison operators.
## 4 Software implementation
The project was implemented in the C++ language using the STL library [11].
In short, the design combines object-oriented and generic programming styles.
### Representation.
The trees are represented as STL vectors. Tree traversals are implemented
using index arithmetic: node $i$'s parent is $\lfloor(i-1)/2\rfloor$, and the
left and right children of $i$ are $2i+1$ and $2i+2$ respectively. This method
is optimal for a full, static binary tree; in our case the tree is always
static, and fullness is ensured as follows. In order to have a full binary
tree we replicate the last (largest in the first dimension) point; in the
worst case half of the tree is unused, with no effect on the time complexity
(the replicated nodes are never visited). In this project we are interested in
the static case of range trees, but the design is sufficient for a dynamic
implementation, in which the tree nodes must also carry some extra pointers.
On the other hand, dynamization of the fractional cascading structure is not
trivial [8].
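Assuming 0-based indexing (one common convention for vector-backed trees; the report does not spell out its convention), the index arithmetic can be sketched as:

```cpp
// Implicit binary tree layout in a 0-indexed vector:
// the children of node i are 2i+1 and 2i+2, hence its parent is (i-1)/2.
inline int parent(int i)      { return (i - 1) / 2; }
inline int left_child(int i)  { return 2 * i + 1; }
inline int right_child(int i) { return 2 * i + 2; }
```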
### Construction.
For the construction of the tree we sort the input data with respect to the
first coordinate and build the main tree recursively (top-down) in linear
time. For the associated trees we do not have to sort the input data again:
we build them bottom-up. Every node merges the sorted lists of its children in
linear time, starting from the leaves, which are trivially sorted. Note that
this is essentially the same algorithm as merge sort.
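The bottom-up step can be sketched as follows (helper name ours): each internal node's associated list is obtained by merging the already-sorted lists of its two children, exactly as in the merge step of merge sort.

```cpp
#include <vector>
#include <algorithm>
#include <iterator>

// Merge the sorted associated lists of two children into the parent's
// list in linear time, as in merge sort.
std::vector<int> merge_children(const std::vector<int>& l,
                                const std::vector<int>& r) {
    std::vector<int> out;
    out.reserve(l.size() + r.size());
    std::merge(l.begin(), l.end(), r.begin(), r.end(),
               std::back_inserter(out));
    return out;
}
```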
### Memory consumption.
The asymptotic space complexity stated above already shows that a range tree
needs a lot of memory; indeed, memory is the only constraint on the number of
dimensions of the data. Moreover, it follows from the asymptotic complexity
that with fixed memory there is a trade-off between the number of points and
the number of dimensions. See
http://users.uoa.gr/$\sim$vfisikop/compgeom/Layered_Range_trees/examples/Layered_Range_tree_10.cpp
for an example of a range tree over 10-dimensional data.
## 5 Future work
The most important opportunity for improvement is the case where the tree is
not full: the solution proposed in the previous section seems very suboptimal
and is not implemented yet. Another point is that the implementation does not
yet handle the case where two points have equal coordinates.
## 6 Acknowledgements
This project started as a course project in the graduate course on
Computational Geometry at the National and Kapodistrian University of Athens,
2008-2009, under the supervision of Professor Ioannis Z. Emiris. The C++ code
can be found at:
http://cgi.di.uoa.gr/$\sim$vfisikop/compgeom/Layered_Range_trees
## References
* [1] J. L. Bentley. Multidimensional binary search trees used for associative searching. Commun. ACM, 18(9):509–517, 1975.
* [2] R. Berinde. Efficient implementations of range trees. 2007\.
* [3] C. E. Board. CGAL User and Reference Manual, 3.4 edition, 2008.
* [4] B. Chazelle. Lower bounds for orthogonal range searching: part i. the reporting case. J. ACM, 37(2):200–212, 1990.
* [5] B. Chazelle. Lower bounds for orthogonal range searching: part ii. the arithmetic model. J. ACM, 37(3):439–463, 1990.
* [6] M. de Berg, M. van Kreveld, M. Overmars, and O. Schwarzkopf. Computational Geometry: Algorithms and Applications. Springer-Verlag, second edition, 2000.
* [7] G. S. Lueker. A data structure for orthogonal range queries. In SFCS ’78: Proceedings of the 19th Annual Symposium on Foundations of Computer Science (sfcs 1978), pages 28–34, Washington, DC, USA, 1978. IEEE Computer Society.
* [8] K. Mehlhorn and S. Näher. Dynamic fractional cascading. Algorithmica, 5:215–241, 1990.
* [9] D. M. Mount. Lecture notes: Cmsc 754 computational geometry. lecture 18: Orthogonal range trees. pages 102–104, 2007.
* [10] G. Neyer. Algorithms, complexity, and software engineering in computational geometry. PhD Thesis, 2000.
* [11] B. Stroustrup. The C++ Programming Language (Special 3rd Edition). Addison-Wesley Professional, February 2000.
|
arxiv-papers
| 2011-03-23T13:57:18 |
2024-09-04T02:49:17.899637
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Vissarion Fisikopoulos",
"submitter": "Vissarion Fisikopoulos",
"url": "https://arxiv.org/abs/1103.4521"
}
|
1103.4577
|
# Logical, Metric, and Algorithmic
Characterisations of Probabilistic Bisimulation
Yuxin Deng1 Wenjie Du2
1 Shanghai Jiao Tong University, China
2 Shanghai Normal University, China
###### Abstract
Many behavioural equivalences or preorders for probabilistic processes involve
a lifting operation that turns a relation on states into a relation on
distributions of states. We show that several existing proposals for lifting
relations can be reconciled to be different presentations of essentially the
same lifting operation. More interestingly, this lifting operation nicely
corresponds to the Kantorovich metric, a fundamental concept used in
mathematics to lift a metric on states to a metric on distributions of states,
not to mention that the lifting operation is also related to the maximum flow
problem in optimisation theory.
The lifting operation yields a neat notion of probabilistic bisimulation, for
which we provide logical, metric, and algorithmic characterisations.
Specifically, we extend the Hennessy-Milner logic and the modal mu-calculus
with a new modality, resulting in an adequate and an expressive logic for
probabilistic bisimilarity, respectively. The correspondence of the lifting
operation and the Kantorovich metric leads to a natural characterisation of
bisimulations as pseudometrics which are post-fixed points of a monotone
function. We also present an “on the fly” algorithm to check if two states in
a finitary system are related by probabilistic bisimilarity, exploiting the
close relationship between the lifting operation and the maximum flow problem.
## 1 Introduction
In the last three decades a wealth of behavioural equivalences have been
proposed in concurrency theory. Among them, _bisimilarity_ [43, 48] is
probably the most studied one as it admits a suitable semantics, an elegant
co-inductive proof technique, as well as efficient decision algorithms.
In recent years, probabilistic constructs have been proven useful for giving
quantitative specifications of system behaviour. The first papers on
probabilistic concurrency theory [25, 5, 38] proceed by _replacing_
nondeterministic with probabilistic constructs. The reconciliation of
nondeterministic and probabilistic constructs starts with [27] and has
received a lot of attention in the literature [67, 54, 40, 53, 29, 41, 3, 32,
44, 6, 57, 42, 14, 15, 13, 12].
We shall also work in a framework that features the co-existence of
probability and nondeterminism. More specifically, we deal with _probabilistic
labelled transition systems (pLTSs)_ [14] which are an extension of the usual
labelled transition systems (LTSs) so that a step of transition is in the form
$s\xrightarrow{a}\Delta$, meaning that state $s$ can perform action $a$ and
evolve into a distribution $\Delta$ over some successor states. In this
setting state $s$ is related to state $t$ by a relation
$\mathrel{{\mathcal{R}}}$, say probabilistic simulation, written
$s\mathrel{{\mathcal{R}}}t$, if for each transition $s\xrightarrow{a}\Delta$
from $s$ there exists a transition $t\xrightarrow{a}\Theta$ from $t$ such that
$\Theta$ can somehow simulate the behaviour of $\Delta$ according to
$\mathrel{{\mathcal{R}}}$. To formalise the mimicking of $\Delta$ by $\Theta$,
we have to _lift_ $\mathrel{{\mathcal{R}}}$ to be a relation
$\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}$ between distributions over
states and require $\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$.
Various approaches of lifting relations have appeared in the literature; see
e.g. [37, 54, 14, 8, 12]. We will show that although those approaches appear
different, they can be reconciled. Essentially, there is only one lifting
operation, which has been presented in different forms. Moreover, we argue
that the lifting operation is interesting in itself. This is justified by its
intrinsic connection with some fundamental concepts in mathematics, notably
_the Kantorovich metric_ [34]. For example, it turns out that our lifting of
binary relations from states to distributions nicely corresponds to the
lifting of metrics from states to distributions by using the Kantorovich
metric. In addition, the lifting operation is closely related to _the maximum
flow problem_ in optimisation theory, as observed by Baier et al. [2].
A good scientific concept is often elegant, even when seen from many different
perspectives. Bisimulation is one such concept in traditional concurrency
theory, as it can be characterised in a great many ways: via fixed point
theory, modal logics, game theory, coalgebras, etc. We believe that
probabilistic bisimulation is likewise such a concept in probabilistic
concurrency theory. As evidence, we provide in this paper three
characterisations, from the perspectives of modal logics, metrics, and
decision algorithms.
1. 1.
Our logical characterisation of probabilistic bisimulation consists of two
aspects: _adequacy_ and _expressivity_ [50]. A logic ${\cal L}$ is adequate
when two states are bisimilar if and only if they satisfy exactly the same set
of formulae in ${\cal L}$. The logic is expressive when each state $s$ has a
characteristic formula $\varphi_{s}$ in ${\cal L}$ such that $t$ is bisimilar
to $s$ if and only if $t$ satisfies $\varphi_{s}$. We will introduce a
probabilistic choice modality to capture the behaviour of distributions.
Intuitively, distribution $\Delta$ satisfies the formula $\bigoplus_{i\in
I}p_{i}\cdot\varphi_{i}$ if there is a decomposition of $\Delta$ into a convex
combination of distributions, $\Delta=\sum_{i\in I}p_{i}\cdot\Delta_{i}$,
such that each $\Delta_{i}$ conforms to the property specified by
$\varphi_{i}$.
When the new modality is added to the Hennessy-Milner logic [28] we obtain an
adequate logic for probabilistic bisimilarity; when it is added to the modal
mu-calculus [36] we obtain an expressive logic.
2. 2.
By metric characterisation of probabilistic bisimulation, we mean to give a
pseudometric such that two states are bisimilar if and only if their distance
is $0$ when measured by the pseudometric. More specifically, we show that
bisimulations correspond to pseudometrics which are post-fixed points of a
monotone function, and in particular bisimilarity corresponds to a
pseudometric which is the greatest fixed point of the monotone function.
3. 3.
As to the algorithmic characterisation, we propose an “on the fly” algorithm
that checks if two states are related by probabilistic bisimilarity. The
schema of the algorithm is to approximate probabilistic bisimilarity by
iteratively accumulating information about state pairs $(s,t)$ where $s$ and
$t$ are not bisimilar. In each iteration we dynamically construct a relation
$\mathrel{{\mathcal{R}}}$ as an approximant. We then verify whether every
transition from one state can be matched by a transition from the other state
with the resulting distributions related by the lifted relation
$\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}$; the latter check amounts to
solving the maximum flow problem of an appropriately constructed network,
taking advantage of the close relationship between our lifting operation and
the maximum flow problem mentioned above.
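To make the reduction concrete, here is a hypothetical C++ sketch (our own helper names, integer-scaled probabilities, and a simple Edmonds-Karp max flow; not the authors' implementation). Given $\Delta$ and $\Theta$ as integer weights over a common denominator $D$, and the pairs related by $\mathrel{{\mathcal{R}}}$, the lifted relation holds iff the maximum flow of the associated network equals $D$:

```cpp
#include <vector>
#include <queue>
#include <algorithm>

// Minimal Edmonds-Karp max flow on an adjacency-matrix network.
struct MaxFlow {
    int n;
    std::vector<std::vector<int>> cap;
    explicit MaxFlow(int n) : n(n), cap(n, std::vector<int>(n, 0)) {}
    void add_edge(int u, int v, int c) { cap[u][v] += c; }
    int run(int s, int t) {
        int flow = 0;
        while (true) {
            std::vector<int> par(n, -1);
            par[s] = s;
            std::queue<int> q;
            q.push(s);
            while (!q.empty() && par[t] == -1) {  // BFS for an augmenting path
                int u = q.front(); q.pop();
                for (int v = 0; v < n; ++v)
                    if (par[v] == -1 && cap[u][v] > 0) { par[v] = u; q.push(v); }
            }
            if (par[t] == -1) return flow;  // no augmenting path left
            int aug = 1 << 30;
            for (int v = t; v != s; v = par[v]) aug = std::min(aug, cap[par[v]][v]);
            for (int v = t; v != s; v = par[v]) { cap[par[v]][v] -= aug; cap[v][par[v]] += aug; }
            flow += aug;
        }
    }
};

// Does Delta R† Theta hold?  Delta and Theta are integer weights with the
// same total D; rel[i][j] says whether Delta-state i is related to
// Theta-state j.  Network: source -> i (cap Delta[i]), i -> j (cap D if
// related), j -> sink (cap Theta[j]); the lifting holds iff maxflow == D.
bool lifted(const std::vector<int>& Delta, const std::vector<int>& Theta,
            const std::vector<std::vector<bool>>& rel) {
    int m = Delta.size(), k = Theta.size();
    int src = m + k, snk = m + k + 1;
    MaxFlow g(m + k + 2);
    int D = 0;
    for (int i = 0; i < m; ++i) { g.add_edge(src, i, Delta[i]); D += Delta[i]; }
    for (int j = 0; j < k; ++j) g.add_edge(m + j, snk, Theta[j]);
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < k; ++j)
            if (rel[i][j]) g.add_edge(i, m + j, D);  // effectively unbounded
    return g.run(src, snk) == D;
}
```

For example, with $\Delta=\Theta$ the uniform distribution on two states and $\mathrel{{\mathcal{R}}}$ the identity, the check succeeds; removing a needed pair from $\mathrel{{\mathcal{R}}}$ makes it fail.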
##### Related work
Probabilistic bisimulation was first introduced by Larsen and Skou [37]. Later
on, it was investigated in a great many probabilistic models. An adequate
logic for probabilistic bisimulation in a setting similar to our pLTSs has
been studied in [33, 49]. It is also based on a probabilistic extension of
the Hennessy-Milner logic. The main difference from our logic in Section 5.1
is the introduction of the operator $[\mathord{\cdot}]_{p}$. Intuitively, a
distribution $\Delta$ satisfies the formula $[\varphi]_{p}$ when the set of
states satisfying $\varphi$ is measured by $\Delta$ with probability at least
$p$. So the formula $[\varphi]_{p}$ can be expressed by our logic in terms of
the probabilistic choice $\bigoplus_{i\in I}p_{i}\mathord{\cdot}\varphi_{i}$
by setting $I=\\{1,2\\}$, $p_{1}=p$, $p_{2}=1-p$, $\varphi_{1}=\varphi$, and
$\varphi_{2}=true$. When restricted to deterministic pLTSs (i.e., for each
state and for each action, there exists at most one outgoing transition from
the state), probabilistic bisimulations can be characterised by simpler forms
of logics, as observed in [37, 16, 49].
An expressive logic for nonprobabilistic bisimulation has been proposed in
[55]. In this paper we partially extend the results of [55] to a probabilistic
setting that admits both probabilistic and nondeterministic choice. We present
a probabilistic extension of the modal mu-calculus [36], where a formula is
interpreted as the set of states satisfying it. This is in contrast to the
probabilistic semantics of the mu-calculus as studied in [29, 41, 42] where
formulae denote lower bounds of probabilistic evidence of properties, and the
semantics of the generalised probabilistic logic of [6] where a mu-calculus
formula is interpreted as a set of deterministic trees that satisfy it.
The Kantorovich metric has been used by van Breugel _et al._ for defining
behavioural pseudometrics on fully probabilistic systems [61, 64, 60] and
reactive probabilistic systems [62, 63, 58, 59]; and by Desharnais et al. for
labelled Markov chains [17, 19] and labelled concurrent Markov chains [18];
and later on by Ferns _et al._ for Markov decision processes [23, 24]; and by
Deng _et al._ for action-labelled quantitative transition systems [7]. One
exception is [20], which proposes a pseudometric for labelled Markov chains
without using the Kantorovich metric. Instead, it is based on a notion of
$\epsilon$-bisimulation, which relaxes the definition of probabilistic
bisimulation by allowing small perturbation of probabilities. In this paper we
are mainly interested in the correspondence of our lifting operation to the
Kantorovich metric. The metric characterisation of probabilistic bisimulation
in Section 6 is merely a direct consequence of this correspondence.
Decision algorithms for probabilistic bisimilarity and similarity have been
considered by Baier et al. in [2] and Zhang et al. in [68]. Their algorithms
are global in the sense that the whole state space has to be fully generated
in advance. In contrast, “on the fly” algorithms are local in the sense that
the state space is generated dynamically, which often makes it more efficient
to determine that one state fails to be related to another. Our algorithm in
Section 7 is inspired by [2] because we also reduce the problem of checking if
two distributions are related by a lifted relation to the maximum flow problem
of a suitable network. We generalise the local algorithm of checking
nonprobabilistic bisimilarity [22, 39] to the probabilistic setting.
This paper provides a relatively comprehensive account of probabilistic
bisimulation. Some of the results or their variants were mentioned previously
in [7, 9, 10, 11]. Here they are presented in a uniform way and equipped with
detailed proofs.
##### Outline of the paper
The paper proceeds by recalling a way of lifting binary relations from states
to distributions, and showing its coincidence with a few other ways in Section
2. The lifting operation is justified in Section 3 in terms of its
correspondence to the Kantorovich metric and the maximum flow problem. In
Section 4 we define probabilistic bisimulation and show how it can be
approximated by a sequence of inductively defined relations. In Section 5 we
introduce a probabilistic choice modality, then extend the Hennessy-Milner
logic and the modal mu-calculus so as to obtain two
logics that are adequate and expressive, respectively. In Section 6 we
characterise probabilistic bisimulations as pseudometrics. In Section 7 we
exploit the correspondence of our lifting operation to the maximum flow
problem, and present a polynomial time decision algorithm. Finally, Section 8
concludes the paper.
## 2 Lifting relations
In the probabilistic setting, formal systems are usually modelled as
distributions over states, so comparing two systems involves comparing two
distributions. We therefore need a way of lifting relations on states to
relations on distributions. This is used, for example, to define
probabilistic bisimulation, as we shall see in Section 4. A few approaches to
lifting relations have appeared in the literature. We adopt the one from
[12] and show that it coincides with two other approaches.
We first fix some notation. A (discrete) probability distribution over a set
$S$ is a mapping $\Delta:S\rightarrow[0,1]$ with $\sum_{s\in S}\Delta(s)=1$.
The _support_ of $\Delta$ is given by $\lceil{\Delta}\rceil:=\\{\,s\in
S\,\mid\,\Delta(s)>0\,\\}$. In this paper we only consider finite state
systems, so it suffices to use distributions with finite support; let
$\mathop{\mbox{$\mathcal{D}$}}({S})$, ranged over by $\Delta,\Theta$, denote
the collection of all such distributions over $S$. We use $\overline{s}$ to
denote the point distribution, satisfying $\overline{s}(t)=1$ if $t=s$, and
$0$ otherwise. If $p_{i}\geq 0$ and $\Delta_{i}$ is a distribution for each
$i$ in some finite index set $I$, then $\sum_{i\in I}p_{i}\cdot\Delta_{i}$ is
given by
$(\sum_{i\in I}p_{i}\cdot\Delta_{i})(s)~{}~{}~{}=~{}~{}~{}\sum_{i\in
I}p_{i}\cdot\Delta_{i}(s)$
If $\sum_{i\in I}p_{i}=1$ then this is easily seen to be a distribution in
$\mathop{\mbox{$\mathcal{D}$}}({S})$. Finally, the _product_ of two
probability distributions $\Delta,\Theta$ over $S,T$ is the distribution
$\Delta\times\Theta$ over $S\times T$ defined by
$(\Delta\times\Theta)(s,t):=\Delta(s)\cdot\Theta(t)$.
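To make the notation concrete, here is a minimal sketch in Python (the representation and all names are ours, purely for illustration) that realises point distributions, convex combinations and products as dictionaries from states to probabilities:

```python
# Illustrative encoding of finite-support distributions as dicts
# mapping states to probabilities (names are ours, not the paper's).

def point(s):
    """The point distribution s-bar: all mass on the single state s."""
    return {s: 1.0}

def combine(weighted):
    """Convex combination sum_i p_i * Delta_i, given (p_i, Delta_i) pairs."""
    result = {}
    for p, delta in weighted:
        for s, q in delta.items():
            result[s] = result.get(s, 0.0) + p * q
    return result

def product(delta, theta):
    """Product distribution (Delta x Theta)(s, t) = Delta(s) * Theta(t)."""
    return {(s, t): p * q for s, p in delta.items() for t, q in theta.items()}
```

If the weights $p_{i}$ sum to one, `combine` again yields a distribution, mirroring the remark above.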
###### Definition 2.1
Let $S$ and $T$ be two sets and let
$\mathord{\mathrel{{\mathcal{R}}}}\subseteq S\mathop{\times}T$ be a relation. Then
$\mathord{\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}}\subseteq\mathop{\mbox{$\mathcal{D}$}}({S})\mathop{\times}\mathop{\mbox{$\mathcal{D}$}}({T})$
is the smallest relation that satisfies:
1. 1.
$s\mathrel{{\mathcal{R}}}t$ implies
$\overline{s}\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\overline{t}$
2. 2.
$\Delta_{i}\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta_{i}$ implies
$(\sum_{i\in
I}p_{i}\cdot\Delta_{i})\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}(\sum_{i\in
I}p_{i}\cdot\Theta_{i})$, where $I$ is a finite index set and $\sum_{i\in
I}{p_{i}}=1$.
The lifting construction satisfies the following useful property, whose proof
is straightforward and thus omitted.
###### Proposition 2.2
Suppose $\mathord{\mathrel{{\mathcal{R}}}}\subseteq S\times S$ and $\sum_{i\in
I}p_{i}=1$. If $(\sum_{i\in
I}{p_{i}\cdot\Delta_{i}})\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$
then $\Theta=\sum_{i\in I}{p_{i}\cdot\Theta_{i}}$ for some set of
distributions $\Theta_{i}$ such that
$\Delta_{i}\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta_{i}$.
$\sqcap$$\sqcup$
We now look at alternative presentations of Definition 2.1. The proposition
below is immediate.
###### Proposition 2.3
Let $\Delta$ and $\Theta$ be distributions over $S$ and $T$, respectively, and
$\mathrel{{\mathcal{R}}}\subseteq S\times T$. Then
$\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$ if and only if
$\Delta,\Theta$ can be decomposed as follows:
1. 1.
$\Delta=\sum_{i\in I}{p_{i}\cdot\overline{s_{i}}}$, where $I$ is a finite
index set and $\sum_{i\in I}{p_{i}}=1$
2. 2.
For each $i\in I$ there is a state $t_{i}$ such that
$s_{i}\mathrel{{\mathcal{R}}}t_{i}$
3. 3.
$\Theta=\sum_{i\in I}{p_{i}\cdot\overline{t_{i}}}$. $\sqcap$$\sqcup$
An important point here is that in the decomposition of $\Delta$ into
$\sum_{i\in I}{p_{i}\cdot\overline{s_{i}}}$, the states $s_{i}$ are _not
necessarily distinct_ : that is, the decomposition is not in general unique.
Thus when establishing the relationship between $\Delta$ and $\Theta$, a given
state $s$ in $\Delta$ may play a number of different roles.
From Definition 2.1, the next two properties follow. In fact, they are
sometimes used in the literature as definitions of lifting relations instead
of being properties (see e.g. [54, 37]).
###### Theorem 2.4
1. 1.
Let $\Delta$ and $\Theta$ be distributions over $S$ and $T$, respectively.
Then $\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$ if and only if
there exists a weight function $w:S\times T\rightarrow[0,1]$ such that
1. (a)
$\forall s\in S:\sum_{t\in T}w(s,t)=\Delta(s)$
2. (b)
$\forall t\in T:\sum_{s\in S}w(s,t)=\Theta(t)$
3. (c)
$\forall(s,t)\in S\times T:w(s,t)>0\Rightarrow s\mathrel{{\mathcal{R}}}t$.
2. 2.
Let $\Delta,\Theta$ be distributions over $S$ and let
$\mathrel{{\mathcal{R}}}$ be an equivalence relation. Then
$\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$ if and only if
$\Delta(C)=\Theta(C)$ for each equivalence class $C\in S/{\cal R}$, where
$\Delta(C)$ stands for the accumulation probability $\sum_{s\in C}\Delta(s)$.
* Proof:
1. 1.
($\Rightarrow$) Suppose
$\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$. By Proposition 2.3,
we can decompose $\Delta$ and $\Theta$ such that $\Delta=\sum_{i\in
I}p_{i}\cdot\overline{s_{i}}$, $\Theta=\sum_{i\in
I}p_{i}\cdot\overline{t_{i}}$, and $s_{i}\mathrel{{\mathcal{R}}}t_{i}$ for all
$i\in I$. We define the weight function $w$ by letting
$w(s,t)=\sum\\{{p_{i}\mid s_{i}=s,t_{i}=t,i\in I}\\}$ for any $s\in S,t\in T$.
This weight function can be checked to meet our requirements.
1. (a)
For any $s\in S$, it holds that
$\begin{array}[]{rcl}\sum_{t\in T}w(s,t)&=&\sum_{t\in T}\sum\\{{p_{i}\mid
s_{i}=s,t_{i}=t,i\in I}\\}\\\ &=&\sum\\{{p_{i}\mid s_{i}=s,i\in I}\\}\\\
&=&\Delta(s)\end{array}$
2. (b)
Similarly, we have $\sum_{s\in S}w(s,t)=\Theta(t)$.
3. (c)
For any $s\in S,t\in T$, if $w(s,t)>0$ then there is some $i\in I$ such that
$p_{i}>0$, $s_{i}=s$, and $t_{i}=t$. It follows from
$s_{i}\mathrel{{\mathcal{R}}}t_{i}$ that $s\mathrel{{\mathcal{R}}}t$.
($\Leftarrow$) Suppose there is a weight function $w$ satisfying the three
conditions in the hypothesis. We construct the index set $I=\\{{(s,t)\mid
w(s,t)>0,s\in S,t\in T}\\}$ and probabilities $p_{(s,t)}=w(s,t)$ for each
$(s,t)\in I$.
1. (a)
It holds that $\Delta=\sum_{(s,t)\in I}p_{(s,t)}\cdot\overline{s}$ because,
for any $s\in S$,
$\begin{array}[]{rcl}(\sum_{(s,t)\in
I}p_{(s,t)}\cdot\overline{s})(s)&=&\sum_{(s,t)\in I}w(s,t)\\\
&=&\sum\\{{w(s,t)\mid w(s,t)>0,t\in T}\\}\\\ &=&\sum\\{{w(s,t)\mid t\in
T}\\}\\\ &=&\Delta(s)\end{array}$
2. (b)
Similarly, we have $\Theta=\sum_{(s,t)\in I}w(s,t)\cdot\overline{t}$.
3. (c)
For each $(s,t)\in I$, we have $w(s,t)>0$, which implies
$s\mathrel{{\mathcal{R}}}t$.
Hence, the above decompositions of $\Delta$ and $\Theta$ meet the requirement
of the lifting $\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$.
2. 2.
($\Rightarrow$) Suppose
$\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$. By Proposition 2.3,
we can decompose $\Delta$ and $\Theta$ such that $\Delta=\sum_{i\in
I}p_{i}\cdot\overline{s_{i}}$, $\Theta=\sum_{i\in
I}p_{i}\cdot\overline{t_{i}}$, and $s_{i}\mathrel{{\mathcal{R}}}t_{i}$ for all
$i\in I$. For any equivalence class $C\in S/{\cal R}$, we have that
$\begin{array}[]{rcl}\Delta(C)=\sum_{s\in C}\Delta(s)&=&\sum_{s\in
C}\sum\\{{p_{i}\mid i\in I,s_{i}=s}\\}\\\ &=&\sum\\{{p_{i}\mid i\in I,s_{i}\in
C}\\}\\\ &=&\sum\\{{p_{i}\mid i\in I,t_{i}\in C}\\}\\\
&=&\Theta(C)\end{array}$
where the equality in the third line is justified by the fact that $s_{i}\in
C$ iff $t_{i}\in C$ since $s_{i}\mathrel{{\mathcal{R}}}t_{i}$ and $C\in
S/{\cal R}$.
($\Leftarrow$) Suppose, for each equivalence class $C\in S/{\cal R}$, it holds
that $\Delta(C)=\Theta(C)$. We construct the index set $I=\\{{(s,t)\mid
s\mathrel{{\mathcal{R}}}t\mbox{ and }s,t\in S}\\}$ and probabilities
$p_{(s,t)}=\frac{\Delta(s)\Theta(t)}{\Delta([s]_{\mathrel{{\mathcal{R}}}})}$
for each $(s,t)\in I$, where $[s]_{\mathrel{{\mathcal{R}}}}$ stands for the
equivalence class that contains $s$.
1. (a)
It holds that $\Delta=\sum_{(s,t)\in I}p_{(s,t)}\cdot\overline{s}$ because,
for any $s^{\prime}\in S$,
$\begin{array}[]{rcl}(\sum_{(s,t)\in
I}p_{(s,t)}\cdot\overline{s})(s^{\prime})&=&\sum_{(s^{\prime},t)\in
I}p_{(s^{\prime},t)}\\\
&=&\sum\\{{\frac{\Delta(s^{\prime})\Theta(t)}{\Delta([s^{\prime}]_{\mathrel{{\mathcal{R}}}})}\mid
s^{\prime}\mathrel{{\mathcal{R}}}t,\ t\in S}\\}\\\
&=&\sum\\{{\frac{\Delta(s^{\prime})\Theta(t)}{\Delta([s^{\prime}]_{\mathrel{{\mathcal{R}}}})}\mid
t\in[s^{\prime}]_{\mathrel{{\mathcal{R}}}}}\\}\\\
&=&\frac{\Delta(s^{\prime})}{\Delta([s^{\prime}]_{\mathrel{{\mathcal{R}}}})}\sum\\{{\Theta(t)\mid
t\in[s^{\prime}]_{\mathrel{{\mathcal{R}}}}}\\}\\\
&=&\frac{\Delta(s^{\prime})}{\Delta([s^{\prime}]_{\mathrel{{\mathcal{R}}}})}\Theta([s^{\prime}]_{\mathrel{{\mathcal{R}}}})\\\
&=&\frac{\Delta(s^{\prime})}{\Delta([s^{\prime}]_{\mathrel{{\mathcal{R}}}})}\Delta([s^{\prime}]_{\mathrel{{\mathcal{R}}}})\\\
&=&\Delta(s^{\prime})\end{array}$
2. (b)
Similarly, we have $\Theta=\sum_{(s,t)\in I}p_{(s,t)}\cdot\overline{t}$.
3. (c)
For each $(s,t)\in I$, we have $s\mathrel{{\mathcal{R}}}t$.
Hence, the above decompositions of $\Delta$ and $\Theta$ meet the requirement
of the lifting $\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$.
$\Box$
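For an equivalence relation, part 2 of Theorem 2.4 yields a particularly simple test for the lifted relation: compare the accumulated probabilities on each equivalence class. A minimal sketch in Python (illustrative names only, not from the paper):

```python
# Sketch of the test in Theorem 2.4(2): for an equivalence relation R,
# Delta R† Theta iff Delta(C) = Theta(C) for every class C of S/R.
# Distributions are dicts; the partition S/R is a list of frozensets.

def mass(delta, block):
    """Accumulated probability Delta(C) = sum of Delta(s) over s in C."""
    return sum(p for s, p in delta.items() if s in block)

def lifted_equivalent(delta, theta, partition, eps=1e-9):
    """Check Delta R† Theta via equal mass on every class of S/R."""
    return all(abs(mass(delta, c) - mass(theta, c)) < eps for c in partition)

# Example: R identifies s1 with s2 but keeps s3 apart.
partition = [frozenset({'s1', 's2'}), frozenset({'s3'})]
```

Note that this shortcut is only sound for equivalence relations; for an arbitrary relation one falls back on the weight-function characterisation of part 1.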
## 3 Justifying the lifting operation
In our opinion, the lifting operation given in Definition 2.1 is not only
concise but also on the right track. This is justified by its intrinsic
connection with some fundamental concepts in mathematics, notably the
Kantorovich metric.
### 3.1 Justification by the Kantorovich metric
We begin with some historical notes. The _transportation problem_ has played
an important role in linear programming due to its general formulation and
methods of solution. The original transportation problem, formulated by
the French mathematician G. Monge in 1781 [45], consists of finding an optimal
way of shovelling a pile of sand into a hole of the same volume. In the 1940s,
the Russian mathematician and economist L.V. Kantorovich, who was awarded a
Nobel prize in economics in 1975 for the theory of optimal allocation of
resources, gave a relaxed formulation of the problem and proposed a
variational principle for solving it [34]. Unfortunately, Kantorovich’s work
went unrecognised for a long time. What later became known as the
_Kantorovich metric_ has appeared in the literature under different names,
because it has been rediscovered several times from different perspectives.
Many metrics known in measure theory, ergodic theory,
functional analysis, statistics, etc. are special cases of the general
definition of the Kantorovich metric [65]. The elegance of the formulation,
the fundamental character of the optimality criterion, as well as the wealth
of applications, which keep arising, place the Kantorovich metric in a
prominent position among the mathematical works of the 20th century. In
addition, this formulation can be computed in polynomial time [47], which is
an appealing feature for its use in solving applied problems. For example,
it is widely used to solve a variety of problems in business and economics,
such as market distribution, plant location, and scheduling. In recent years
the metric has attracted the attention of computer scientists [9]: it has
been used in various areas of computer science such as probabilistic
concurrency, image retrieval, data mining, and bioinformatics.
Roughly speaking, the Kantorovich metric provides a way of measuring the
distance between two distributions. Of course, this requires first a notion of
distance between the basic elements that are aggregated into the
distributions, which is often referred to as the _ground distance_. In other
words, the Kantorovich metric defines a “lifted” distance between two
distributions of mass in a space that is itself endowed with a ground
distance. There are a host of metrics available in the literature (see e.g.
[26]) to quantify the distance between probability measures; see [52] for a
comprehensive review of metrics in the space of probability measures. The
Kantorovich metric has an elegant formulation and a natural interpretation in
terms of the transportation problem.
We now recall the mathematical definition of the Kantorovich metric. Let
$(X,m)$ be a separable metric space. (This condition will be used by Theorem
3.4 below.)
###### Definition 3.1
Given any two Borel probability measures $\Delta$ and $\Theta$ on $X$, the
_Kantorovich distance_ between $\Delta$ and $\Theta$ is defined by
$K(\Delta,\Theta)={\rm sup}\left\\{\left|\int fd\Delta-\int
fd\Theta\right|:||f||\leq 1\right\\}$
where $||\cdot||$ is the _Lipschitz semi-norm_ defined by $||f||={\rm
sup}_{x\not=y}\frac{|f(x)-f(y)|}{m(x,y)}$ for a function
$f:X\rightarrow{\mathbb{R}}$ with ${\mathbb{R}}$ being the set of all real
numbers.
The Kantorovich metric has an alternative characterisation. We denote by
$\textbf{P}(X)$ the set of all Borel probability measures $\Delta$ on $X$
such that $\int_{X}m(x,z)\,d\Delta(x)<\infty$ for all $z\in X$. We write
$M(\Delta,\Theta)$ for the set of
all Borel probability measures on the product space $X\times X$ with marginal
measures $\Delta$ and $\Theta$, i.e. if $\Gamma\in M(\Delta,\Theta)$ then
$\int_{y\in X}d\Gamma(x,y)=d\Delta(x)$ and $\int_{x\in
X}d\Gamma(x,y)=d\Theta(y)$ hold.
###### Definition 3.2
For $\Delta,\Theta\in\textbf{P}(X)$, we define the metric $L$ as follows:
$L(\Delta,\Theta)={\rm inf}\left\\{\int m(x,y)d\Gamma(x,y):\Gamma\in
M(\Delta,\Theta)\right\\}.$
###### Lemma 3.3
If $(X,m)$ is a separable metric space then $K$ and $L$ are metrics on
$\textbf{P}(X)$. $\sqcap$$\sqcup$
The famous Kantorovich-Rubinstein duality theorem gives a dual representation
of $K$ in terms of $L$.
###### Theorem 3.4
[Kantorovich-Rubinstein [35]] If $(X,m)$ is a separable metric space then for
any two distributions $\Delta,\Theta\in\textbf{P}(X)$ we have
$K(\Delta,\Theta)=L(\Delta,\Theta)$. $\sqcap$$\sqcup$
In view of the above theorem, many papers in the literature directly take
Definition 3.2 as the definition of the Kantorovich metric. Here we keep the
original definition, but it is helpful to understand $K$ by using $L$.
Intuitively, a probability measure $\Gamma\in M(\Delta,\Theta)$ can be
understood as a _transportation_ from one unit mass distribution $\Delta$ to
another unit mass distribution $\Theta$. If the distance $m(x,y)$ represents
the cost of moving one unit of mass from location $x$ to location $y$ then the
Kantorovich distance gives the optimal total cost of transporting the mass of
$\Delta$ to $\Theta$. We refer the reader to [66] for an excellent exposition
on the Kantorovich metric and the duality theorem.
Many problems in computer science only involve finite state spaces, so
discrete distributions with finite supports are sometimes more interesting
than continuous distributions. For two discrete distributions $\Delta$ and
$\Theta$ with finite supports $\\{x_{1},...,x_{n}\\}$ and
$\\{y_{1},...,y_{l}\\}$, respectively, minimizing the total cost of a
discretised version of the transportation problem reduces to the following
linear programming problem:
$\begin{array}[]{ll}\mbox{minimize}&\sum_{i=1}^{n}\sum_{j=1}^{l}\Gamma(x_{i},y_{j})m(x_{i},y_{j})\\\
\mbox{subject to}&\bullet\ \forall 1\leq i\leq
n:\sum_{j=1}^{l}\Gamma(x_{i},y_{j})=\Delta(x_{i})\\\ &\bullet\ \forall 1\leq
j\leq l:\sum_{i=1}^{n}\Gamma(x_{i},y_{j})=\Theta(y_{j})\\\ &\bullet\ \forall
1\leq i\leq n,1\leq j\leq l:\Gamma(x_{i},y_{j})\geq 0.\end{array}$ (1)
Since (1) is a special case of the discrete mass transportation problem,
well-known polynomial time algorithms such as [47] can be employed to solve
it, which is an attractive feature for computer scientists.
Recall that a pseudometric is a function that yields a non-negative real
number for each pair of elements and satisfies the following: $m(s,s)=0$,
$m(s,t)=m(t,s)$, and $m(s,t)\leq m(s,u)+m(u,t)$, for any $s,t,u\in S$. We say a
pseudometric $m$ is $1$-bounded if $m(s,t)\leq 1$ for any $s$ and $t$. Let
$\Delta$ and $\Theta$ be distributions over a finite set $S$ of states. In
[61] a $1$-bounded pseudometric $m$ on $S$ is lifted to be a $1$-bounded
pseudometric $\hat{m}$ on $\mathop{\mbox{$\mathcal{D}$}}({S})$ by setting the
distance $\hat{m}(\Delta,\Theta)$ to be the value of the following linear
programming problem:
$\begin{array}[]{ll}\mbox{maximize}&\sum_{s\in S}(\Delta(s)-\Theta(s))x_{s}\\\
\mbox{subject to}&\bullet\ \forall s,t\in S:x_{s}-x_{t}\leq m(s,t)\\\
&\bullet\ \forall s\in S:0\leq x_{s}\leq 1.\end{array}$ (2)
This problem can be dualised and then simplified to yield the following
problem:
$\begin{array}[]{ll}\mbox{minimize}&\sum_{s,t\in S}y_{st}m(s,t)\\\
\mbox{subject to}&\bullet\ \forall s\in S:\sum_{t\in S}y_{st}=\Delta(s)\\\
&\bullet\ \forall t\in S:\sum_{s\in S}y_{st}=\Theta(t)\\\ &\bullet\ \forall
s,t\in S:y_{st}\geq 0.\end{array}$ (3)
Now (3) is in exactly the same form as (1).
This way of lifting pseudometrics via the Kantorovich metric as given in (3)
has an interesting connection with the lifting of binary relations given in
Definition 2.1.
###### Theorem 3.5
Let $R$ be a binary relation and $m$ a pseudometric on a state space $S$
satisfying
$s\ R\ t\quad\mbox{iff}\quad m(s,t)=0$ (4)
for any $s,t\in S$. Then it holds that
$\Delta\ \mathrel{R^{\dagger}}\ \Theta\quad\mbox{ iff
}\quad\hat{m}(\Delta,\Theta)=0$
for any distributions $\Delta,\Theta\in\mathop{\mbox{$\mathcal{D}$}}({S})$.
* Proof:
Suppose $\Delta\ \mathrel{R^{\dagger}}\ \Theta$. From Theorem 2.4(1) we know
there is a weight function $w$ such that
1. 1.
$\forall s\in S:\sum_{t\in S}w(s,t)=\Delta(s)$
2. 2.
$\forall t\in S:\sum_{s\in S}w(s,t)=\Theta(t)$
3. 3.
$\forall s,t\in S:w(s,t)>0\Rightarrow s\ R\ t$.
By substituting $w(s,t)$ for $y_{st}$ in (3), the three constraints there are
satisfied. For any $s,t\in S$ we distinguish two cases:
1. 1.
either $w(s,t)=0$
2. 2.
or $w(s,t)>0$. In this case we have $s\ R\ t$, which implies $m(s,t)=0$ by
(4).
Therefore, we always have $w(s,t)m(s,t)=0$ for any $s,t\in S$. Consequently,
$\sum_{s,t\in S}w(s,t)m(s,t)=0$ and the optimal value of the problem in (3)
must be $0$, i.e. $\hat{m}(\Delta,\Theta)=0$, and the optimal solution is
determined by $w$.
The above reasoning can be reversed to show that the optimal solution of (3)
determines a weight function, thus $\hat{m}(\Delta,\Theta)=0$ implies $\Delta\
\mathrel{R^{\dagger}}\ \Theta$. $\Box$
The above property will be used in Section 6 to give a metric characterisation
of probabilistic bisimulation (cf. Theorem 6.9).
### 3.2 Justification by network flow
The lifting operation discussed in Section 2 is also related to the maximum
flow problem in optimisation theory. This was already observed by Baier et al.
in [2].
We briefly recall the basic definitions of networks. More details can be found
in e.g. [21]. A _network_ is a tuple ${\cal N}=(N,E,\bot,\top,c)$ where
$(N,E)$ is a finite directed graph (i.e. $N$ is a set of nodes and $E\subseteq
N\times N$ is a set of edges) with two special nodes $\bot$ (the _source_) and
$\top$ (the _sink_) and a _capacity_ $c$, i.e. a function that assigns to
each edge $(v,w)\in E$ a non-negative number $c(v,w)$. A _flow function_ $f$
for ${\cal N}$ is a function that assigns to each edge $e$ a real number $f(e)$
such that
* •
$0\leq f(e)\leq c(e)$ for all edges $e$.
* •
Let $\it in(v)$ be the set of incoming edges to node $v$ and $\it out(v)$ the
set of outgoing edges from node $v$. Then, for each node $v\in
N\backslash\\{{\bot,\top}\\}$,
$\sum_{e\in\it in(v)}f(e)~{}=\sum_{e\in\it out(v)}f(e).$
The _flow_ $F(f)$ of $f$ is given by
$F(f)~{}=\sum_{e\in\it out(\bot)}f(e)-\sum_{e\in\it in(\bot)}f(e).$
The _maximum flow_ in ${\cal N}$ is the supremum of the flows $F(f)$, where
$f$ ranges over the flow functions in ${\cal N}$.
We will see that the question whether
$\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$ can be reduced to a
maximum flow problem in a suitably chosen network. Suppose
$\mathrel{{\mathcal{R}}}\subseteq S\times S$ and
$\Delta,\Theta\in\mathop{\mbox{$\mathcal{D}$}}({S})$. Let
$S^{\prime}=\\{{s^{\prime}\mid s\in S}\\}$ where the $s^{\prime}$ are pairwise
distinct fresh states, i.e. $s^{\prime}\not\in S$ for all $s\in S$. We create two
states $\bot$ and $\top$ not contained in $S\cup S^{\prime}$ with
$\bot\not=\top$. We associate with the pair $(\Delta,\Theta)$ the following
network ${\cal N}(\Delta,\Theta,\mathrel{{\mathcal{R}}})$.
* •
The nodes are $N=S\cup S^{\prime}\cup\\{{\bot,\top}\\}$.
* •
The edges are
$E=\\{{(s,t^{\prime})\mid(s,t)\in\mathrel{{\mathcal{R}}}}\\}\cup\\{{(\bot,s)\mid
s\in S}\\}\cup\\{{(s^{\prime},\top)\mid s\in S}\\}$.
* •
The capacity $c$ is defined by $c(\bot,s)=\Delta(s)$,
$c(t^{\prime},\top)=\Theta(t)$ and $c(s,t^{\prime})=1$ for all $s,t\in S$.
The next lemma appeared as Lemma 5.1 in [2].
###### Lemma 3.6
Let $S$ be a finite set, $\Delta,\Theta\in\mathop{\mbox{$\mathcal{D}$}}({S})$
and $\mathrel{{\mathcal{R}}}\subseteq S\times S$. The following statements are
equivalent.
1. 1.
There exists a weight function $w$ for $(\Delta,\Theta)$ with respect to
$\mathrel{{\mathcal{R}}}$.
2. 2.
The maximum flow in ${\cal N}(\Delta,\Theta,\mathrel{{\mathcal{R}}})$ is $1$.
$\sqcap$$\sqcup$
Since the lifting operation given in Definition 2.1 can also be stated in
terms of weight functions, we obtain the following characterisation using
network flow.
###### Theorem 3.7
Let $S$ be a finite set, $\Delta,\Theta\in\mathop{\mbox{$\mathcal{D}$}}({S})$
and $\mathrel{{\mathcal{R}}}\subseteq S\times S$. Then
$\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$ if and only if the
maximum flow in ${\cal N}(\Delta,\Theta,\mathrel{{\mathcal{R}}})$ is $1$.
* Proof:
Combining Theorem 2.4(1) and Lemma 3.6. $\Box$
The above property will play an important role in Section 7 to give an “on the
fly” algorithm for checking probabilistic bisimilarity.
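Theorem 3.7 can be turned directly into executable form: build the network ${\cal N}(\Delta,\Theta,\mathrel{{\mathcal{R}}})$ and compute its maximum flow. The sketch below (Python, with a textbook Edmonds-Karp routine; all names are ours and purely illustrative) checks the lifting this way:

```python
from collections import deque

def max_flow(cap, source, sink):
    """Edmonds-Karp maximum flow on a residual-capacity dict {(u, v): c}."""
    res = dict(cap)
    flow = 0.0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for (a, b), c in res.items():
                if a == u and c > 1e-12 and b not in parent:
                    parent[b] = u
                    queue.append(b)
        if sink not in parent:
            return flow
        # Recover the path, find its bottleneck, and push flow along it.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[e] for e in path)
        for a, b in path:
            res[(a, b)] -= bottleneck
            res[(b, a)] = res.get((b, a), 0.0) + bottleneck
        flow += bottleneck

def lifted(delta, theta, rel):
    """Delta R† Theta iff max flow in N(Delta, Theta, R) is 1 (Theorem 3.7)."""
    cap = {(('L', s), ('R', t)): 1.0 for (s, t) in rel}
    cap.update({('src', ('L', s)): p for s, p in delta.items()})
    cap.update({(('R', t), 'snk'): p for t, p in theta.items()})
    return abs(max_flow(cap, 'src', 'snk') - 1.0) < 1e-9
```

Here left and right copies of a state are tagged `('L', s)` and `('R', t)` to play the roles of $s$ and $t^{\prime}$ in the construction above.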
## 4 Probabilistic bisimulation
With the lifting operation at hand, we can define a probabilistic version of
bisimulation. We start with a probabilistic generalisation of labelled
transition systems (LTSs).
###### Definition 4.1
A _probabilistic labelled transition system_ (pLTS)111Essentially the same
model has appeared in the literature under different names such as _NP-
systems_ [30], _probabilistic processes_ [31], _simple probabilistic automata_
[53], _probabilistic transition systems_ [32] etc. Furthermore, there are
strong structural similarities with _Markov Decision Processes_ [51, 15]. is a
triple
$\langle S,\mathsf{Act},\rightarrow\rangle$, where
1. 1.
$S$ is a set of states;
2. 2.
$\mathsf{Act}$ is a set of actions;
3. 3.
$\rightarrow\;\;\subseteq\;\;S\times\mathsf{Act}\times\mathop{\mbox{$\mathcal{D}$}}({S})$
is the transition relation.
As with LTSs, we usually write $s\stackrel{a}{\longrightarrow}\Delta$ in place
of $(s,a,\Delta)\in\;\rightarrow$. A pLTS is _finitely branching_ if for each
state $s\in S$ the set
$\\{{\langle\alpha,\Delta\rangle\mid s\stackrel{\alpha}{\longrightarrow}\Delta,\alpha\in\mathsf{Act},\Delta\in\mathop{\mbox{$\mathcal{D}$}}({S})}\\}$
is finite; if moreover $S$ is finite, then the pLTS is _finitary_.
In a pLTS, one step of a transition leaves a single state but may end up in a
set of states, each of which is reached with a certain probability. An LTS
may be viewed as a degenerate pLTS, one in which only point distributions are
used.
Let $s$ and $t$ be two states in a pLTS. We say that $t$ can simulate the
behaviour of $s$ if, whenever $s$ can exhibit action $a$ and lead to a
distribution $\Delta$, then $t$ can also perform $a$ and lead to a
distribution, say $\Theta$, which can mimic $\Delta$ in successor states. We
are interested in a relation between two states, but it is expressed by
invoking a relation between two distributions. To formalise the mimicking of
one distribution by another, we make use of the lifting operation
investigated in Section 2.
###### Definition 4.2
A relation $\mathrel{{\mathcal{R}}}\subseteq S\times S$ is a probabilistic
simulation if $s\ \mathrel{{\mathcal{R}}}\ t$ implies
* •
if $s\stackrel{a}{\longrightarrow}\Delta$ then there exists some $\Theta$
such that $t\stackrel{a}{\longrightarrow}\Theta$ and
$\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$.
If both $\mathrel{{\mathcal{R}}}$ and $\mathrel{{\mathcal{R}}}^{-1}$ are
probabilistic simulations, then $\mathrel{{\mathcal{R}}}$ is a probabilistic
bisimulation. The largest probabilistic bisimulation, denoted by $\sim$, is
called _probabilistic bisimilarity_.
As in the nonprobabilistic setting, probabilistic bisimilarity can be
approximated by a family of inductively defined relations.
###### Definition 4.3
Let $S$ be the state set of a pLTS. We define:
* •
$\sim_{0}:=S\times S$
* •
$s\sim_{n+1}t$, for $n\geq 0$, if
1. 1.
whenever $s\stackrel{a}{\longrightarrow}\Delta$, there exists some $\Theta$
such that $t\stackrel{a}{\longrightarrow}\Theta$ and
$\Delta\mathrel{\sim_{n}^{\dagger}}\Theta$;
2. 2.
whenever $t\stackrel{a}{\longrightarrow}\Theta$, there exists some $\Delta$
such that $s\stackrel{a}{\longrightarrow}\Delta$ and
$\Delta\mathrel{\sim_{n}^{\dagger}}\Theta$.
* •
$\sim_{\omega}:=\bigcap_{n\geq 0}\sim_{n}$
In general, $\sim$ is a finer relation than $\sim_{\omega}$, and the
inclusion may be strict. However, the two relations coincide on finitely
branching pLTSs.
###### Proposition 4.4
On finitely branching pLTSs, $\sim_{\omega}$ coincides with $\sim$.
* Proof:
It is trivial to show by induction that $s\sim t$ implies $s\sim_{n}t$ for all
$n\geq 0$, thus $s\sim_{\omega}t$.
Now we show that $\sim_{\omega}$ is a bisimulation. Suppose $s\sim_{\omega}t$
and $s\stackrel{a}{\longrightarrow}\Delta$. We have to show that there is
some $\Theta$ with $t\stackrel{a}{\longrightarrow}\Theta$ and
$\Delta\mathrel{\sim_{\omega}^{\dagger}}\Theta$. Consider the set
$T:=\\{\Theta\mid t\stackrel{a}{\longrightarrow}\Theta\wedge\Delta\not\mathrel{\sim_{\omega}^{\dagger}}\Theta\\}.$
For each $\Theta\in T$, we have
$\Delta\not\mathrel{\sim_{\omega}^{\dagger}}\Theta$, which means that there is
some $n_{\Theta}>0$ with
$\Delta\not\mathrel{\sim_{n_{\Theta}}^{\dagger}}\Theta$. Since $t$ is finitely
branching, $T$ is a finite set. Let $N=\max\\{n_{\Theta}\mid\Theta\in T\\}$. It
holds that $\Delta\not\mathrel{\sim_{N}^{\dagger}}\Theta$ for all $\Theta\in
T$, since by a straightforward induction on $m$ we can show that $s\sim_{n}t$
implies $s\sim_{m}t$ for all $m,n\geq 0$ with $n>m$. By the assumption
$s\sim_{\omega}t$ we know that $s\sim_{N+1}t$. It follows that there is some
$\Theta$ with $t\stackrel{a}{\longrightarrow}\Theta$ and
$\Delta\mathrel{\sim_{N}^{\dagger}}\Theta$, so $\Theta\not\in T$ and hence
$\Delta\mathrel{\sim_{\omega}^{\dagger}}\Theta$.
By symmetry we also have that if $t\stackrel{a}{\longrightarrow}\Theta$ then
there is some $\Delta$ with $s\stackrel{a}{\longrightarrow}\Delta$ and
$\Delta\mathrel{\sim_{\omega}^{\dagger}}\Theta$.
$\Box$
Proposition 4.4 has appeared in [1]; here we have given a simpler proof.
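On a finitary pLTS, Definition 4.3 together with Theorem 2.4(2) suggests a simple global procedure: iterate partition refinement, since each approximant $\sim_{n}$ is an equivalence relation and two distributions are related by its lifting iff they assign equal mass to every class. A sketch in Python (illustrative code of ours; the “on the fly” algorithm of Section 7 instead generates the state space lazily):

```python
def bisimilarity(states, trans):
    """
    Compute probabilistic bisimilarity on a finitary pLTS by iterating
    the approximants of Definition 4.3.  trans maps a state to a list of
    (action, distribution) pairs; distributions are dicts from states to
    probabilities.  Two distributions are related by the lifted ~n iff
    they give equal mass to every block of the current partition.
    """
    def signature(s, partition):
        # For each outgoing transition, record the action together with
        # the distribution's mass on each block of the current partition.
        return frozenset(
            (a, tuple(round(sum(p for t, p in d.items() if t in block), 9)
                      for block in partition))
            for a, d in trans.get(s, []))

    partition = [set(states)]              # ~0 relates all states
    while True:
        sigs = {s: signature(s, partition) for s in states}
        refined = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(sigs[s], set()).add(s)
            refined.extend(groups.values())
        if len(refined) == len(partition):
            return refined                 # fixed point reached: ~ itself
        partition = refined

# A small pLTS: s and t both move by action a to u, which is terminal.
trans = {'s': [('a', {'u': 1.0})], 't': [('a', {'u': 1.0})], 'u': []}
```

By Proposition 4.4, on a finitary pLTS the fixed point of this refinement coincides with $\sim$.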
## 5 Logical characterisation
Let ${\cal L}$ be a logic. We use the notation ${\cal L}(s)$ to stand for the
set of formulae that state $s$ satisfies. This induces an equivalence relation
on states: $s\ =^{\cal L}\ t$ iff ${\cal L}(s)={\cal L}(t)$. Thus, two states
are equivalent when they satisfy exactly the same set of formulae.
In this section we consider two kinds of logical characterisations of
probabilistic bisimilarity.
###### Definition 5.1
[Adequacy and expressivity]
1. 1.
${\cal L}$ is _adequate_ w.r.t. $\sim$ if for any states $s$ and $t$,
$s=^{\cal L}t~{}~{}\mbox{iff}~{}~{}s\sim t.$
2. 2.
${\cal L}$ is _expressive_ w.r.t. $\sim$ if for each state $s$ there exists a
_characteristic formula_ $\varphi_{s}\in{\cal L}$ such that, for any states
$s$ and $t$,
$t\models\varphi_{s}~{}~{}\mbox{iff}~{}~{}s\sim t.$
We will propose a probabilistic extension of the Hennessy-Milner logic,
showing its adequacy, and then a probabilistic extension of the modal mu-
calculus, showing its expressivity.
### 5.1 An adequate logic
We extend the Hennessy-Milner logic by adding a probabilistic choice modality
to express the behaviour of distributions.
###### Definition 5.2
The class ${\cal L}$ of modal formulae over $\mathsf{Act}$, ranged over by
$\varphi$, is defined by the following grammar:
$\begin{array}[]{rcl}\varphi&:=&\top\mid\varphi_{1}\wedge\varphi_{2}\mid\langle
a\rangle\psi\mid\neg\varphi\\\ \psi&:=&\bigoplus_{i\in
I}p_{i}\cdot\varphi_{i}\end{array}$
We call $\varphi$ a _state formula_ and $\psi$ a _distribution formula_. Note
that a distribution formula $\psi$ only appears as the continuation of a
diamond modality $\langle a\rangle\psi$. We sometimes use the finite
conjunction $\bigwedge_{i\in I}\varphi_{i}$ as syntactic sugar.
The _satisfaction relation_ $\models\subseteq S\times{\cal L}$ is defined by
* •
$s\models\top$ for all $s\in S$.
* •
$s\models\varphi_{1}\wedge\varphi_{2}$ if $s\models\varphi_{i}$ for $i=1,2$.
* •
$s\models\langle a\rangle\psi$ if $s\xrightarrow{a}\Delta$ and
$\Delta\models\psi$ for some $\Delta\in\mathop{\mbox{$\mathcal{D}$}}({S})$.
* •
$s\models\neg\varphi$ if it is not the case that $s\models\varphi$.
* •
$\Delta\models\bigoplus_{i\in I}p_{i}\cdot\varphi_{i}$ if there are
$\Delta_{i}\in\mathop{\mbox{$\mathcal{D}$}}({S})$ for each $i\in I$ such that
$\Delta=\sum_{i\in I}p_{i}\cdot\Delta_{i}$ and $t\models\varphi_{i}$ for all
$i\in I$ and $t\in\lceil{\Delta_{i}}\rceil$.
With a slight abuse of notation, we write $\Delta\models\psi$ above to mean
that $\Delta$ satisfies the distribution formula $\psi$. The introduction of
distribution formulae distinguishes ${\cal L}$ from other probabilistic modal
logics, e.g. [33, 49].
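To make the satisfaction relation concrete, the following Python sketch checks state formulae against a toy pLTS. The `steps` encoding and all state names are hypothetical, and the check of the $\bigoplus$ clause is deliberately incomplete: it only searches decompositions that assign each support state's full mass to a single index $i$, which is sound but can miss decompositions that split one state's mass across several $\varphi_{i}$.

```python
from fractions import Fraction
from itertools import product

# A tiny hypothetical pLTS: steps[s] is a list of (action, distribution)
# pairs, where a distribution maps states to Fraction probabilities.
steps = {
    "s": [("a", {"s1": Fraction(1, 2), "s2": Fraction(1, 2)})],
    "t": [("a", {"t1": Fraction(1)})],
    "s1": [("b", {"s1": Fraction(1)})],
    "s2": [("c", {"s2": Fraction(1)})],
    "t1": [("b", {"t1": Fraction(1)})],
}

# Formulae: ("top",), ("and", f, g), ("neg", f), ("dia", a, psi)
# with psi = ("oplus", [(p1, f1), (p2, f2), ...]).

def sat(s, phi):
    tag = phi[0]
    if tag == "top":
        return True
    if tag == "and":
        return sat(s, phi[1]) and sat(s, phi[2])
    if tag == "neg":
        return not sat(s, phi[1])
    if tag == "dia":
        _, a, psi = phi
        return any(act == a and sat_dist(delta, psi)
                   for act, delta in steps.get(s, []))
    raise ValueError(tag)

def sat_dist(delta, psi):
    # Incomplete check of the oplus clause: only decompositions that send
    # each support state's full mass to a single index are considered.
    _, parts = psi
    support = list(delta)
    for choice in product(range(len(parts)), repeat=len(support)):
        mass = [Fraction(0)] * len(parts)
        ok = True
        for t, i in zip(support, choice):
            if not sat(t, parts[i][1]):
                ok = False
                break
            mass[i] += delta[t]
        if ok and all(m == p for m, (p, _) in zip(mass, parts)):
            return True
    return False

# s can do an a-step to a half/half distribution whose halves satisfy
# <b>top and <c>top respectively; t cannot.
half = Fraction(1, 2)
phi = ("dia", "a", ("oplus", [
    (half, ("dia", "b", ("oplus", [(Fraction(1), ("top",))]))),
    (half, ("dia", "c", ("oplus", [(Fraction(1), ("top",))]))),
]))
print(sat("s", phi), sat("t", phi))
```

Running it distinguishes $s$, which can split its $a$-derivative as required, from $t$, which cannot.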
It turns out that ${\cal L}$ is adequate w.r.t. probabilistic bisimilarity.
###### Theorem 5.3
[Adequacy] Let $s$ and $t$ be any two states in a finitely branching pLTS.
Then $s\sim t$ if and only if $s=^{\cal L}t$.
* Proof:
($\Rightarrow$) Suppose $s\sim t$. We show that
$s\models\varphi\Leftrightarrow t\models\varphi$ by structural induction on
$\varphi$.
* –
Let $s\models\top$. Then clearly $t\models\top$.
* –
Let $s\models\varphi_{1}\wedge\varphi_{2}$. Then $s\models\varphi_{i}$ for
$i=1,2$. So by induction $t\models\varphi_{i}$, and we have
$t\models\varphi_{1}\wedge\varphi_{2}$. By symmetry we also have
$t\models\varphi_{1}\wedge\varphi_{2}$ implies
$s\models\varphi_{1}\wedge\varphi_{2}$.
* –
Let $s\models\neg\varphi$. So $s\not\models\varphi$, and by induction we have
$t\not\models\varphi$. Thus $t\models\neg\varphi$. By symmetry we also have
$t\not\models\varphi$ implies $s\not\models\varphi$.
* –
Let $s\models\langle a\rangle\bigoplus_{i\in I}p_{i}\cdot\varphi_{i}$. Then
$s\xrightarrow{a}\Delta$ and $\Delta\models\bigoplus_{i\in
I}p_{i}\cdot\varphi_{i}$ for some $\Delta$. So $\Delta=\sum_{i\in
I}p_{i}\cdot\Delta_{i}$ and for all $i\in I$ and
$s^{\prime}\in\lceil{\Delta_{i}}\rceil$ we have
$s^{\prime}\models\varphi_{i}$. Since $s\sim t$, there is some $\Theta$ with
$t\xrightarrow{a}\Theta$ and $\Delta\mathrel{\sim^{\dagger}}\Theta$. By
Proposition 2.2 we have that $\Theta=\sum_{i\in I}p_{i}\cdot\Theta_{i}$ and
$\Delta_{i}\mathrel{\sim^{\dagger}}\Theta_{i}$. It follows that for each
$t^{\prime}\in\lceil{\Theta_{i}}\rceil$ there is some
$s^{\prime}\in\lceil{\Delta_{i}}\rceil$ with $s^{\prime}\sim t^{\prime}$. So
by induction we have $t^{\prime}\models\varphi_{i}$ for all
$t^{\prime}\in\lceil{\Theta_{i}}\rceil$ with $i\in I$. Therefore, we have
$\Theta\models\bigoplus_{i\in I}p_{i}\cdot\varphi_{i}$. It follows that
$t\models\langle a\rangle\bigoplus_{i\in I}p_{i}\cdot\varphi_{i}$. By symmetry
we also have that $t\models\langle a\rangle\bigoplus_{i\in
I}p_{i}\cdot\varphi_{i}$ implies $s\models\langle a\rangle\bigoplus_{i\in
I}p_{i}\cdot\varphi_{i}$.
($\Leftarrow$) We show that the relation $=^{\cal L}$ is a probabilistic
bisimulation. Suppose $s=^{\cal L}t$ and $s\xrightarrow{a}\Delta$. We have to
show that there is some $\Theta$ with $t\xrightarrow{a}\Theta$ and
$\Delta\mathrel{(=^{\cal L})^{\dagger}}\Theta$. Consider the set
$T:=\\{\Theta\mid t\xrightarrow{a}\Theta\wedge\Theta=\sum_{s^{\prime}\in\lceil{\Delta}\rceil}\Delta(s^{\prime})\cdot\Theta_{s^{\prime}}\wedge\exists
s^{\prime}\in\lceil{\Delta}\rceil,\exists
t^{\prime}\in\lceil{\Theta_{s^{\prime}}}\rceil:s^{\prime}\not=^{\cal
L}t^{\prime}\\}$
For each $\Theta\in T$, there must be some
$s^{\prime}_{\Theta}\in\lceil{\Delta}\rceil$ and
$t^{\prime}_{\Theta}\in\lceil{\Theta_{s^{\prime}_{\Theta}}}\rceil$ such that
(i) either there is a formula $\varphi_{\Theta}$ with
$s^{\prime}_{\Theta}\models\varphi_{\Theta}$ but
$t^{\prime}_{\Theta}\not\models\varphi_{\Theta}$, or (ii) there is a formula
$\varphi^{\prime}_{\Theta}$ with
$t^{\prime}_{\Theta}\models\varphi^{\prime}_{\Theta}$ but
$s^{\prime}_{\Theta}\not\models\varphi^{\prime}_{\Theta}$. In the latter case
we set $\varphi_{\Theta}=\neg\varphi^{\prime}_{\Theta}$, which reduces it to
the former case. So for each $s^{\prime}\in\lceil{\Delta}\rceil$ it holds that
$s^{\prime}\models\bigwedge_{\\{{\Theta\in T\mid
s^{\prime}_{\Theta}=s^{\prime}}\\}}\varphi_{\Theta}$ and for each $\Theta\in
T$ with $s^{\prime}_{\Theta}=s^{\prime}$ there is some
$t^{\prime}_{\Theta}\in\lceil{\Theta_{s^{\prime}}}\rceil$ with
$t^{\prime}_{\Theta}\not\models\bigwedge_{\\{{\Theta\in T\mid
s^{\prime}_{\Theta}=s^{\prime}}\\}}\varphi_{\Theta}$. Let
$\varphi:=\langle
a\rangle\bigoplus_{s^{\prime}\in\lceil{\Delta}\rceil}\Delta(s^{\prime})\cdot\bigwedge_{\\{{\Theta\in
T\mid s^{\prime}_{\Theta}=s^{\prime}}\\}}\varphi_{\Theta}.$
It is clear that $s\models\varphi$, hence $t\models\varphi$ by $s=^{\cal L}t$.
It follows that there must be a $\Theta^{\ast}$ with
$t\xrightarrow{a}\Theta^{\ast}$,
$\Theta^{\ast}=\sum_{s^{\prime}\in\lceil{\Delta}\rceil}\Delta(s^{\prime})\cdot\Theta^{\ast}_{s^{\prime}}$
and for each
$s^{\prime}\in\lceil{\Delta}\rceil,t^{\prime}\in\lceil{\Theta^{\ast}_{s^{\prime}}}\rceil$
we have $t^{\prime}\models\bigwedge_{\\{{\Theta\in T\mid
s^{\prime}_{\Theta}=s^{\prime}}\\}}\varphi_{\Theta}$. This means that
$\Theta^{\ast}\not\in T$ and hence for each
$s^{\prime}\in\lceil{\Delta}\rceil,t^{\prime}\in\lceil{\Theta^{\ast}_{s^{\prime}}}\rceil$
we have $s^{\prime}=^{\cal L}t^{\prime}$. It follows that
$\Delta\mathrel{(=^{\cal L})^{\dagger}}\Theta^{\ast}$. By symmetry all
transitions of $t$ can be matched by transitions of $s$. $\Box$
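The bisimilarity $\sim$ used throughout this section can be computed on a small finite pLTS by standard partition refinement: blocks are split until, for every state, each outgoing transition's per-action, per-block masses agree, matching the mass-based characterisation of the lifted relation (cf. Theorem 2.4(2)). The sketch below uses a hypothetical `steps` encoding; it is an illustration, not an algorithm taken from the text.

```python
from fractions import Fraction

# Hypothetical pLTS: steps[s] is a list of (action, distribution) pairs.
steps = {
    "s": [("a", {"s1": Fraction(1, 2), "s2": Fraction(1, 2)})],
    "t": [("a", {"t1": Fraction(1, 2), "t2": Fraction(1, 2)})],
    "s1": [], "s2": [], "t1": [], "t2": [],
}

def bisimilarity(steps):
    """Partition refinement: repeatedly split blocks by the signature
    of each state, i.e. the per-block mass of each outgoing
    (action, distribution) pair, until the partition is stable."""
    block = {s: 0 for s in steps}            # start with a single block
    while True:
        def signature(s):
            return frozenset(
                (a, tuple(sorted(
                    (b, sum(d for t, d in delta.items() if block[t] == b))
                    for b in set(block.values()))))
                for a, delta in steps[s])
        ids = {}
        new_block = {}
        for s in steps:
            key = (block[s], signature(s))   # refine the old partition
            ids.setdefault(key, len(ids))
            new_block[s] = ids[key]
        if new_block == block:
            return block
        block = new_block

b = bisimilarity(steps)
print(b["s"] == b["t"])   # s and t are bisimilar in this example
```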
### 5.2 An expressive logic
We now add the probabilistic choice modality introduced in Section 5.1 to the
modal mu-calculus, and show that the resulting probabilistic mu-calculus is
expressive w.r.t. probabilistic bisimilarity.
#### 5.2.1 Probabilistic modal mu-calculus
Let ${\it Var}$ be a countable set of variables. We define a set ${\cal
L}_{\mu}$ of modal formulae in positive normal form given by the following
grammar:
$\begin{array}[]{rcl}\varphi&:=&\top\mid\bot\mid\langle
a\rangle\psi\mid[a]\psi\mid\varphi_{1}\wedge\varphi_{2}\mid\varphi_{1}\vee\varphi_{2}\mid
X\mid\mu X.\varphi\mid\nu X.\varphi\\\ \psi&:=&\bigoplus_{i\in
I}p_{i}\cdot\varphi_{i}\end{array}$
where $a\in\mathsf{Act}$, $I$ is a finite index set and $\sum_{i\in
I}p_{i}=1$. As before, we write $\varphi$ for a state formula and $\psi$ for a
distribution formula. Sometimes we also use the finite conjunction
$\bigwedge_{i\in I}\varphi_{i}$ and disjunction $\bigvee_{i\in I}\varphi_{i}$.
As usual, we have $\bigwedge_{i\in\emptyset}\varphi_{i}=\top$ and
$\bigvee_{i\in\emptyset}\varphi_{i}=\bot$.
The two fixed point operators $\mu X$ and $\nu X$ bind the respective variable
$X$. We apply the usual terminology of free and bound variables in a formula
and write ${\it fv}(\varphi)$ for the set of free variables in $\varphi$.
We use environments, which bind free variables to sets of states, in order to
give semantics to formulae. We fix a finitary pLTS and let $S$ be its state
set. Let
${\sf Env}=\\{\,\rho\,\mid\,\rho:{\it Var}\rightarrow\mathcal{P}(S)\,\\}$
be the set of all environments, ranged over by $\rho$. For a set $V\subseteq
S$ and a variable $X\in{\it Var}$, we write $\rho[X\mapsto V]$ for the
environment that maps $X$ to $V$ and $Y$ to $\rho(Y)$ for all $Y\not=X$.
The semantics of a formula $\varphi$ can be given as the set of states
satisfying it. This entails a semantic functional $\mbox{\bbb[}\
\mbox{\bbb]}:{\cal L}_{\mu}\rightarrow{\sf Env}\rightarrow\mathcal{P}(S)$
defined inductively in Figure 1, where we also apply [ ] to distribution
formulae and $\mbox{\bbb[}\psi\mbox{\bbb]}$ is interpreted as the set of
distributions that satisfy $\psi$. As the meaning of a closed formula
$\varphi$ does not depend on the environment, we write
$\mbox{\bbb[}\varphi\mbox{\bbb]}$ for $\mbox{\bbb[}\varphi\mbox{\bbb]}_{\rho}$
where $\rho$ is an arbitrary environment.
$\begin{array}[]{rcl}\mbox{\bbb[}\top\mbox{\bbb]}_{\rho}&=&S\\\
\mbox{\bbb[}\bot\mbox{\bbb]}_{\rho}&=&\emptyset\\\
\mbox{\bbb[}\varphi_{1}\wedge\varphi_{2}\mbox{\bbb]}_{\rho}&=&\mbox{\bbb[}\varphi_{1}\mbox{\bbb]}_{\rho}\cap\mbox{\bbb[}\varphi_{2}\mbox{\bbb]}_{\rho}\\\
\mbox{\bbb[}\varphi_{1}\vee\varphi_{2}\mbox{\bbb]}_{\rho}&=&\mbox{\bbb[}\varphi_{1}\mbox{\bbb]}_{\rho}\cup\mbox{\bbb[}\varphi_{2}\mbox{\bbb]}_{\rho}\\\
\mbox{\bbb[}\langle a\rangle\psi\mbox{\bbb]}_{\rho}&=&\\{\,s\in
S\,\mid\,\exists\Delta:s\xrightarrow{a}\Delta\ \wedge\
\Delta\in\mbox{\bbb[}\psi\mbox{\bbb]}_{\rho}\,\\}\\\
\mbox{\bbb[}[a]\psi\mbox{\bbb]}_{\rho}&=&\\{\,s\in
S\,\mid\,\forall\Delta:s\xrightarrow{a}\Delta\ \Rightarrow\
\Delta\in\mbox{\bbb[}\psi\mbox{\bbb]}_{\rho}\,\\}\\\
\mbox{\bbb[}X\mbox{\bbb]}_{\rho}&=&\rho(X)\\\
\mbox{\bbb[}\mu X.\varphi\mbox{\bbb]}_{\rho}&=&\bigcap\\{\,V\subseteq
S\,\mid\,\mbox{\bbb[}\varphi\mbox{\bbb]}_{\rho[X\mapsto V]}\subseteq V\,\\}\\\
\mbox{\bbb[}\nu X.\varphi\mbox{\bbb]}_{\rho}&=&\bigcup\\{\,V\subseteq
S\,\mid\,\mbox{\bbb[}\varphi\mbox{\bbb]}_{\rho[X\mapsto V]}\supseteq V\,\\}\\\
\mbox{\bbb[}\bigoplus_{i\in
I}p_{i}\cdot\varphi_{i}\mbox{\bbb]}_{\rho}&=&\\{\,\Delta\in\mathop{\mbox{$\mathcal{D}$}}({S})\,\mid\,\Delta=\bigoplus_{i\in
I}p_{i}\cdot\Delta_{i}\ \wedge\ \forall i\in I,\forall
t\in\lceil{\Delta_{i}}\rceil:t\in\mbox{\bbb[}\varphi_{i}\mbox{\bbb]}_{\rho}\,\\}\end{array}$
Figure 1: Semantics of probabilistic modal mu-calculus
The semantics of the probabilistic modal mu-calculus (pMu) is the same as that
of the modal mu-calculus [36] except for the probabilistic choice modality,
which is satisfied by distributions. The characterisation of the least fixed
point formula $\mu X.\varphi$ and the greatest fixed point formula $\nu
X.\varphi$ follows from the well-known Knaster-Tarski fixed point theorem [56].
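On a finite state set, the Knaster-Tarski fixed points in Figure 1 can also be reached by simple iteration: $\mu X.\varphi$ by iterating the induced map from $\emptyset$ upward, and $\nu X.\varphi$ from $S$ downward. A minimal sketch, with a hypothetical reachability example as the monotone map:

```python
def lfp(f):
    """Least fixed point of a monotone f, iterated from the bottom element."""
    V = frozenset()
    while f(V) != V:
        V = f(V)
    return V

def gfp(f, S):
    """Greatest fixed point of a monotone f, iterated from the top element."""
    V = frozenset(S)
    while f(V) != V:
        V = f(V)
    return V

# Hypothetical example: successor relation on {0,...,4}; the map
# f(V) = {goal} ∪ pre(V) has as least fixed point the states that
# can reach the goal.
succ = {0: {1}, 1: {2}, 2: {2}, 3: {4}, 4: {3}}
S = set(succ)
goal = 2
f = lambda V: frozenset({goal} | {s for s in S if succ[s] & V})
print(sorted(lfp(f)))   # states that can reach state 2
```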
We shall consider (closed) equation systems of formulae of the form
$\begin{array}[]{rcl}E:X_{1}&=&\varphi_{1}\\\ &\vdots&\\\
X_{n}&=&\varphi_{n}\end{array}$
where $X_{1},...,X_{n}$ are mutually distinct variables and
$\varphi_{1},...,\varphi_{n}$ are formulae having at most $X_{1},...,X_{n}$ as
free variables. Here $E$ can be viewed as a function $E:{\it
Var}\rightarrow{\cal L}_{\mu}$ defined by $E(X_{i})=\varphi_{i}$ for
$i=1,...,n$ and $E(Y)=Y$ for other variables $Y\in{\it Var}$.
An environment $\rho$ is a solution of an equation system $E$ if $\forall
i:\rho(X_{i})=\mbox{\bbb[}\varphi_{i}\mbox{\bbb]}_{\rho}$. The existence of
solutions for an equation system can be seen from the following arguments. The
set ${\sf Env}$, which includes all candidates for solutions, together with
the partial order $\leq$ defined by
$\rho\leq\rho^{\prime}\ \mbox{\rm iff}\ \forall X\in{\it
Var}:\rho(X)\subseteq\rho^{\prime}(X)$
forms a complete lattice. The equation functional ${\cal E}:{\sf
Env}\rightarrow{\sf Env}$ given in the $\lambda$-calculus notation by
${\cal E}:=\lambda\rho.\lambda X.\mbox{\bbb[}E(X)\mbox{\bbb]}_{\rho}$
is monotonic. Thus, the Knaster-Tarski fixed point theorem guarantees the
existence of solutions; the largest one is
$\rho_{E}:=\bigsqcup\\{\,\rho\,\mid\,\rho\leq{\cal E}(\rho)\,\\}.$
#### 5.2.2 Characteristic equation systems
As studied in [55], the behaviour of a process can be characterised by an
equation system of modal formulae. Below we show that this idea also applies
in the probabilistic setting.
###### Definition 5.4
Given a finitary pLTS with states $S=\{s_{1},...,s_{n}\}$, its characteristic
equation system consists of one equation for each state:
$\begin{array}[]{rcl}E:X_{s_{1}}&=&\varphi_{s_{1}}\\\ &\vdots&\\\
X_{s_{n}}&=&\varphi_{s_{n}}\end{array}$
where
$\varphi_{s}:=(\bigwedge_{s\xrightarrow{a}\Delta}\langle a\rangle
X_{\Delta})\wedge(\bigwedge_{a\in\mathsf{Act}}[a]\bigvee_{s\xrightarrow{a}\Delta}X_{\Delta})$ (5)
with $X_{\Delta}:=\bigoplus_{s\in\lceil{\Delta}\rceil}\Delta(s)\cdot X_{s}$.
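As an illustration, the equations (5) can be generated mechanically from a pLTS. The sketch below prints one equation per state of a hypothetical three-state pLTS, writing `(+)` for $\oplus$, `/\` and `\/` for conjunction and disjunction, and `ff` for the empty disjunction $\bot$:

```python
from fractions import Fraction

# Hypothetical pLTS, same (action, distribution) encoding as before.
steps = {
    "s": [("a", {"u": Fraction(1, 2), "v": Fraction(1, 2)})],
    "u": [],
    "v": [("b", {"v": Fraction(1)})],
}
ACT = {"a", "b"}

def x_delta(delta):
    # X_Delta := (+)_{t in support} Delta(t).X_t
    return " (+) ".join(f"{delta[t]}.X_{t}" for t in sorted(delta))

def phi(s):
    # First conjunct of (5): one diamond per outgoing transition.
    diamonds = [f"<{a}>({x_delta(d)})" for a, d in steps[s]]
    # Second conjunct: one box per action over the disjunction of targets.
    boxes = []
    for a in sorted(ACT):
        targets = [f"({x_delta(d)})" for act, d in steps[s] if act == a]
        boxes.append(f"[{a}]" + (" \\/ ".join(targets) if targets else "ff"))
    return " /\\ ".join(diamonds + boxes)

for s in steps:
    print(f"X_{s} = {phi(s)}")
```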
###### Theorem 5.5
Suppose $E$ is a characteristic equation system. Then $s\sim t$ if and only if
$t\in\rho_{E}(X_{s})$.
* Proof:
($\Leftarrow$) Let
$\mathrel{{\mathcal{R}}}=\\{\,(s,t)\,\mid\,t\in\rho_{E}(X_{s})\,\\}$. We first
show that
$\Theta\in\mbox{\bbb[}X_{\Delta}\mbox{\bbb]}_{\rho_{E}}\ {\rm implies}\
\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta.$ (6)
Let $\Delta=\bigoplus_{i\in I}p_{i}\cdot\overline{s_{i}}$, then
$X_{\Delta}=\bigoplus_{i\in I}p_{i}\cdot X_{s_{i}}$. Suppose
$\Theta\in\mbox{\bbb[}X_{\Delta}\mbox{\bbb]}_{\rho_{E}}$. We have that
$\Theta=\bigoplus_{i\in I}p_{i}\cdot\Theta_{i}$ and, for all $i\in I$ and
$t^{\prime}\in\lceil{\Theta_{i}}\rceil$, that
$t^{\prime}\in\mbox{\bbb[}X_{s_{i}}\mbox{\bbb]}_{\rho_{E}}$, i.e.
$s_{i}\mathrel{{\mathcal{R}}}t^{\prime}$. It follows that
$\overline{s_{i}}\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta_{i}$ and
thus $\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$.
Now we show that $\mathrel{{\mathcal{R}}}$ is a bisimulation.
1. 1.
Suppose $s\mathrel{{\mathcal{R}}}t$ and $s\xrightarrow{a}\Delta$. Then
$t\in\rho_{E}(X_{s})=\mbox{\bbb[}\varphi_{s}\mbox{\bbb]}_{\rho_{E}}$. It
follows from (5) that $t\in\mbox{\bbb[}\langle a\rangle
X_{\Delta}\mbox{\bbb]}_{\rho_{E}}$. So there exists some $\Theta$ such that
$t\xrightarrow{a}\Theta$ and
$\Theta\in\mbox{\bbb[}X_{\Delta}\mbox{\bbb]}_{\rho_{E}}$. Now we apply (6).
2. 2.
Suppose $s\mathrel{{\mathcal{R}}}t$ and $t\xrightarrow{a}\Theta$. Then
$t\in\rho_{E}(X_{s})=\mbox{\bbb[}\varphi_{s}\mbox{\bbb]}_{\rho_{E}}$. It
follows from (5) that
$t\in\mbox{\bbb[}[a]\bigvee_{s\xrightarrow{a}\Delta}X_{\Delta}\mbox{\bbb]}_{\rho_{E}}$.
Notice that it must be the case that $s$ can enable action $a$: otherwise
$t\in\mbox{\bbb[}[a]\bot\mbox{\bbb]}_{\rho_{E}}$ and thus $t$ cannot enable
$a$ either, in contradiction with the assumption $t\xrightarrow{a}\Theta$.
Therefore,
$\Theta\in\mbox{\bbb[}\bigvee_{s\xrightarrow{a}\Delta}X_{\Delta}\mbox{\bbb]}_{\rho_{E}}$,
which implies $\Theta\in\mbox{\bbb[}X_{\Delta}\mbox{\bbb]}_{\rho_{E}}$ for
some $\Delta$ with $s\xrightarrow{a}\Delta$. Now we apply (6).
($\Rightarrow$) We define the environment $\rho_{\sim}$ by
$\rho_{\sim}(X_{s}):=\\{\,t\,\mid\,s\sim t\,\\}.$
It suffices to show that $\rho_{\sim}$ is a post-fixed point of ${\cal E}$,
i.e.
$\rho_{\sim}\leq{\cal E}(\rho_{\sim})$ (7)
because in that case we have $\rho_{\sim}\leq\rho_{E}$, thus $s\sim t$ implies
$t\in\rho_{\sim}(X_{s})$ which in turn implies $t\in\rho_{E}(X_{s})$.
We first show that
$\Delta\mathrel{\sim^{\dagger}}\Theta\ {\rm implies}\
\Theta\in\mbox{\bbb[}X_{\Delta}\mbox{\bbb]}_{\rho_{\sim}}.$ (8)
Suppose $\Delta\mathrel{\sim^{\dagger}}\Theta$. By Proposition 2.3 we have
that (i) $\Delta=\bigoplus_{i\in I}p_{i}\cdot\overline{s_{i}}$, (ii)
$\Theta=\bigoplus_{i\in I}p_{i}\cdot\overline{t_{i}}$, (iii) $s_{i}\sim t_{i}$
for all $i\in I$. We know from (iii) that
$t_{i}\in\mbox{\bbb[}X_{s_{i}}\mbox{\bbb]}_{\rho_{\sim}}$. Using (ii) we have
that $\Theta\in\mbox{\bbb[}\bigoplus_{i\in I}p_{i}\cdot
X_{s_{i}}\mbox{\bbb]}_{\rho_{\sim}}$. Using (i) we obtain
$\Theta\in\mbox{\bbb[}X_{\Delta}\mbox{\bbb]}_{\rho_{\sim}}$.
Now we are in a position to show (7). Suppose $t\in\rho_{\sim}(X_{s})$. We
must prove that $t\in\mbox{\bbb[}\varphi_{s}\mbox{\bbb]}_{\rho_{\sim}}$, i.e.
$t\in(\bigcap_{s\xrightarrow{a}\Delta}\mbox{\bbb[}\langle a\rangle
X_{\Delta}\mbox{\bbb]}_{\rho_{\sim}})\cap(\bigcap_{a\in\mathsf{Act}}\mbox{\bbb[}[a]\bigvee_{s\xrightarrow{a}\Delta}X_{\Delta}\mbox{\bbb]}_{\rho_{\sim}})$
by (5). This can be done by showing that $t$ belongs to each of the two parts
of this intersection.
1. 1.
In the first case, we assume that $s\xrightarrow{a}\Delta$. Since $s\sim t$,
there exists some $\Theta$ such that $t\xrightarrow{a}\Theta$ and
$\Delta\mathrel{\sim^{\dagger}}\Theta$. By (8), we get
$\Theta\in\mbox{\bbb[}X_{\Delta}\mbox{\bbb]}_{\rho_{\sim}}$. It follows
that $t\in\mbox{\bbb[}\langle a\rangle X_{\Delta}\mbox{\bbb]}_{\rho_{\sim}}$.
2. 2.
In the second case, we suppose $t\xrightarrow{a}\Theta$ for any action
$a\in\mathsf{Act}$ and distribution $\Theta$. Then by $s\sim t$ there exists
some $\Delta$ such that $s\xrightarrow{a}\Delta$ and
$\Delta\mathrel{\sim^{\dagger}}\Theta$. By (8), we get
$\Theta\in\mbox{\bbb[}X_{\Delta}\mbox{\bbb]}_{\rho_{\sim}}$. As a consequence,
$t\in\mbox{\bbb[}[a]\bigvee_{s\xrightarrow{a}\Delta}X_{\Delta}\mbox{\bbb]}_{\rho_{\sim}}$.
Since this holds for an arbitrary action $a$, our desired result follows.
$\Box$
#### 5.2.3 Characteristic formulae
So far we know how to construct the characteristic equation system for a
finitary pLTS. As introduced in [46], the three transformation rules in Figure
2 can be used to obtain from an equation system $E$ a formula whose
interpretation coincides with the interpretation of $X_{1}$ in the greatest
solution of $E$. The formula thus obtained from a characteristic equation
system is called a characteristic formula.
###### Theorem 5.6
Given a characteristic equation system $E$, there is a characteristic formula
$\varphi_{s}$ such that $\rho_{E}(X_{s})=\mbox{\bbb[}\varphi_{s}\mbox{\bbb]}$
for any state $s$. $\sqcap$$\sqcup$
The above theorem, together with the results in Section 5.2.2, gives rise to
the following corollary.
###### Corollary 5.7
For each state $s$ in a finitary pLTS, there is a characteristic formula
$\varphi_{s}$ such that $s\sim t$ iff
$t\in\mbox{\bbb[}\varphi_{s}\mbox{\bbb]}$. $\sqcap$$\sqcup$
1. 1.
Rule 1: $E\rightarrow F$
2. 2.
Rule 2: $E\rightarrow G$
3. 3.
Rule 3: $E\rightarrow H$ if $X_{n}\not\in{\it
fv}(\varphi_{1},...,\varphi_{n})$
$\begin{array}[]{rclrclrclrcl}E:X_{1}&=&\varphi_{1}&F:X_{1}&=&\varphi_{1}&G:X_{1}&=&\varphi_{1}[\varphi_{n}/X_{n}]&H:X_{1}&=&\varphi_{1}\\\
&\vdots&&&\vdots&&&\vdots&&&\vdots&\\\
X_{n-1}&=&\varphi_{n-1}&X_{n-1}&=&\varphi_{n-1}&X_{n-1}&=&\varphi_{n-1}[\varphi_{n}/X_{n}]&X_{n-1}&=&\varphi_{n-1}\\\
X_{n}&=&\varphi_{n}&X_{n}&=&\nu
X_{n}.\varphi_{n}&X_{n}&=&\varphi_{n}&&&\end{array}$
Figure 2: Transformation rules
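The transformation rules can be applied mechanically, folding the system from the last equation to the first. The sketch below does this with naive textual substitution over hypothetical formula strings: Rule 1 closes the last equation with $\nu$, Rule 2 substitutes it into the remaining ones, and Rule 3 is implicit since the eliminated variable no longer occurs.

```python
# Hypothetical equation system as (variable, formula-string) pairs, where
# formulae mention variables by name. Substitution is plain text replacement,
# so variable names must not be prefixes of one another.
def fold(equations):
    eqs = list(equations)
    while len(eqs) > 1:
        x_n, phi_n = eqs.pop()
        closed = f"nu {x_n}.({phi_n})"          # Rule 1: close the equation
        eqs = [(x, phi.replace(x_n, closed))    # Rule 2: substitute it
               for x, phi in eqs]               # Rule 3: X_n is now gone
    x_1, phi_1 = eqs[0]
    return f"nu {x_1}.({phi_1})"

system = [("X1", "<a>(1.X2)"), ("X2", "[a]ff /\\ [b]ff")]
print(fold(system))
```

The result is a single closed formula whose interpretation coincides with that of `X1` in the greatest solution of the system.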
## 6 Metric characterisation
In the definition of probabilistic bisimulation, probabilities are treated as
labels since they are matched only when they are identical. One may argue that
this does not yield a robust relation: processes that differ by a very small
probability, for instance, would be considered just as different as processes
that perform completely different actions. This is particularly relevant to
the many applications where a specification can be given as a perfect, but
impractical, process, and another, practical process is considered acceptable
if it differs from the specification only with negligible probability.
To find a more flexible way to differentiate processes, researchers in this
area have borrowed from mathematics the notion of a metric (for simplicity, in
this section we use the term metric to denote both metrics and pseudometrics;
all the results are based on pseudometrics). A metric is a function that
associates a distance with each pair of elements. Whereas topologists use
metrics as a tool to study continuity and convergence, we will use them to
provide a measure of the difference between two processes that are not quite
bisimilar.
Since different processes may behave the same, they will be given distance
zero in our metric semantics. So we are more interested in pseudometrics than
metrics.
In the rest of this section, we fix a finite state pLTS
$(S,\mathsf{Act},\rightarrow)$ and provide the set of pseudometrics on $S$
with the following partial order.
###### Definition 6.1
The relation $\preceq$ for the set ${\cal M}$ of $1$-bounded pseudometrics on
$S$ is defined by
$m_{1}\preceq m_{2}\ {\rm if}\ \forall s,t:m_{1}(s,t)\geq m_{2}(s,t).$
Here we reverse the ordering with the purpose of characterising bisimilarity
as a greatest fixed point (cf. Corollary 6.10).
###### Lemma 6.2
$({\cal M},\preceq)$ is a complete lattice.
* Proof:
The top element is given by $\forall s,t:\top(s,t)=0$; the bottom element is
given by $\bot(s,t)=1$ if $s\not=t$, $0$ otherwise. Greatest lower bounds are
given by $(\bigsqcap X)(s,t)=\sup\\{m(s,t)\mid m\in X\\}$ for any
$X\subseteq{\cal M}$. Finally, least upper bounds are given by $\bigsqcup
X=\bigsqcap\ \\{m\in{\cal M}\mid\forall m^{\prime}\in X:m^{\prime}\preceq
m\\}$. $\Box$
###### Definition 6.3
$m\in{\cal M}$ is a state-metric if, for all $\epsilon\in[0,1)$,
$m(s,t)\leq\epsilon$ implies:
* •
if $s\xrightarrow{a}\Delta$ then there exists some $\Delta^{\prime}$ such that
$t\xrightarrow{a}\Delta^{\prime}$ and
$\hat{m}(\Delta,\Delta^{\prime})\leq\epsilon$
where the lifted metric $\hat{m}$ was defined in (2) via the Kantorovich
metric. Note that the defining condition of a state-metric is in fact
symmetric: from $m(s,t)\leq\epsilon$ we have $m(t,s)\leq\epsilon$, which
implies
* •
if $t\xrightarrow{a}\Delta^{\prime}$ then there exists some $\Delta$ such that
$s\xrightarrow{a}\Delta$ and $\hat{m}(\Delta^{\prime},\Delta)\leq\epsilon$.
In the above definition, we do not allow $\epsilon$ to be $1$ because we use
$1$ to represent the distance between any two incomparable states, including
the case where one state may perform a transition and the other may not.
The greatest state-metric is defined as
$m_{\it max}=\bigsqcup\\{m\in{\cal M}\mid m\mbox{ is a state-metric}\\}.$
It turns out that state-metrics correspond to bisimulations and the greatest
state-metric corresponds to bisimilarity. To make the analogy closer, in what
follows we will characterize $m_{\it max}$ as a fixed point of a suitable
monotone function on ${\cal M}$. First we recall the definition of Hausdorff
distance.
###### Definition 6.4
Given a $1$-bounded metric $d$ on $Z$, the Hausdorff distance between two
subsets $X,Y$ of $Z$ is defined as follows:
$H_{d}(X,Y)=\max\\{\sup_{x\in X}\inf_{y\in Y}d(x,y),\sup_{y\in Y}\inf_{x\in
X}d(y,x)\\}$
where $\inf\ \emptyset=1$ and $\sup\ \emptyset=0$.
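Definition 6.4 translates directly into code; the `default` arguments of `min` and `max` implement the conventions $\inf\emptyset=1$ and $\sup\emptyset=0$:

```python
# Hausdorff distance between finite subsets X, Y of a 1-bounded metric
# space (d, Z), following Definition 6.4.
def hausdorff(d, X, Y):
    def directed(A, B):
        # sup over a in A of inf over b in B of d(a, b),
        # with inf {} = 1 and sup {} = 0.
        return max((min((d(a, b) for b in B), default=1) for a in A),
                   default=0)
    return max(directed(X, Y), directed(Y, X))

# Discrete 1-bounded metric: distance 0 iff the elements are equal.
disc = lambda x, y: 0 if x == y else 1
print(hausdorff(disc, {1, 2}, {2, 3}))  # 1
print(hausdorff(disc, {1, 2}, {1, 2}))  # 0
print(hausdorff(disc, set(), {1}))      # 1 (inf over the empty set)
```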
Next we define a function $F$ on ${\cal M}$ by using the Hausdorff distance.
###### Definition 6.5
Let $der(s,a)=\\{\Delta\mid s\mathrel{\mathrel{\hbox{$\mathop{\hbox
to15.00002pt{\rightarrowfill}}\limits^{\hbox to15.00002pt{\hfil\hbox{\vrule
height=6.45831pt,depth=3.01385pt,width=0.0pt\hskip 2.5pt$\scriptstyle a$\hskip
2.5pt}\hfil}}$}}}\Delta\\}$. $F(m)$ is a pseudometric given by:
$F(m)(s,t)=\sup_{a\in\mathsf{Act}}\\{H_{\hat{m}}(der(s,a),der(t,a))\\}.$
Thus we have the following property.
###### Lemma 6.6
For all $\epsilon\in[0,1)$, $F(m)(s,t)\leq\epsilon$ if and only if:
* •
if $s\xrightarrow{a}\Delta$ then there exists some $\Delta^{\prime}$ such that
$t\xrightarrow{a}\Delta^{\prime}$ and
$\hat{m}(\Delta,\Delta^{\prime})\leq\epsilon$;
* •
if $t\xrightarrow{a}\Delta^{\prime}$ then there exists some $\Delta$ such that
$s\xrightarrow{a}\Delta$ and $\hat{m}(\Delta^{\prime},\Delta)\leq\epsilon$.
$\sqcap$$\sqcup$
The above lemma can be proved by directly checking the definition of $F$, as
can the next lemma.
###### Lemma 6.7
$m$ is a state-metric if and only if $m\preceq F(m)$. $\sqcap$$\sqcup$
Consequently we have the following characterisation:
$m_{\it max}=\bigsqcup\\{m\in{\cal M}\mid m\preceq F(m)\\}.$
###### Lemma 6.8
$F$ is monotone on ${\cal M}$. $\sqcap$$\sqcup$
Because of Lemmas 6.2 and 6.8, we can apply the Knaster-Tarski fixed point
theorem, which tells us that $m_{\it max}$ is the greatest fixed point of $F$.
Furthermore, by Lemma 6.7 we know that $m_{\it max}$ is indeed a state-metric,
and it is the greatest state-metric.
We now show the correspondence between state-metrics and bisimulations.
###### Theorem 6.9
Let $\mathrel{{\mathcal{R}}}$ be a binary relation and $m\in{\cal M}$ a
pseudometric on a finite state pLTS such that
$m(s,t)=\left\\{\begin{array}[]{ll}0&\mbox{if $s\mathrel{{\mathcal{R}}}t$}\\\
1&\mbox{otherwise.}\end{array}\right.$ (9)
Then $\mathrel{{\mathcal{R}}}$ is a probabilistic bisimulation if and only if
$m$ is a state-metric.
* Proof:
The result can be proved by using Theorem 3.5, which in turn relies on Theorem
2.4 (1). Below we give an alternative proof that uses Theorem 2.4 (2) instead.
Given two distributions $\Delta,\Delta^{\prime}$ over $S$, let us consider how
to compute $\hat{m}(\Delta,\Delta^{\prime})$ if $\mathrel{{\mathcal{R}}}$ is
an equivalence relation. Since $S$ is finite, we may assume that
$V_{1},...,V_{n}\in S/{\cal R}$ are all the equivalence classes of $S$ under
$\mathrel{{\mathcal{R}}}$. If $s,t\in V_{i}$ for some $i\in 1..n$, then
$m(s,t)=0$, which implies $x_{s}=x_{t}$ by the first constraint of (2). So for
each $i\in 1..n$ there exists some $x_{i}$ such that $x_{i}=x_{s}$ for all
$s\in V_{i}$. Thus, some summands of (2) can be grouped together and we obtain
the following linear program:
$\sum_{i\in 1..n}(\Delta(V_{i})-\Delta^{\prime}(V_{i}))x_{i}$ (10)
subject to the constraints $x_{i}-x_{j}\leq 1$ for all $i,j\in 1..n$ with
$i\not=j$.
Briefly speaking, if $\mathrel{{\mathcal{R}}}$ is an equivalence relation then
$\hat{m}(\Delta,\Delta^{\prime})$ is obtained by maximizing the linear program
(10).
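When $\mathrel{{\mathcal{R}}}$ is an equivalence relation, the maximum of (10) admits a closed form: the coefficients $\Delta(V_{i})-\Delta^{\prime}(V_{i})$ sum to zero and the variables obey $x_{i}-x_{j}\leq 1$, so the optimum is attained by setting $x_{i}=1$ on the positive coefficients and $x_{i}=0$ elsewhere, giving the sum of the positive coefficients (a total-variation distance over the classes). A minimal sketch; the helper name `lifted_distance` and the dictionary encoding of distributions are our own:

```python
def lifted_distance(delta, delta_prime, classes):
    """Value of the linear program (10) when R is an equivalence relation.

    delta, delta_prime: distributions as dicts mapping states to probabilities.
    classes: the equivalence classes V_1, ..., V_n, each a set of states.

    Since the coefficients c_i = Delta(V_i) - Delta'(V_i) sum to zero and the
    variables obey x_i - x_j <= 1, the maximum is attained at x_i = 1 on the
    positive coefficients and x_i = 0 elsewhere: the sum of the positive parts.
    """
    value = 0.0
    for V in classes:
        c = sum(delta.get(s, 0.0) for s in V) \
            - sum(delta_prime.get(s, 0.0) for s in V)
        if c > 0.0:
            value += c
    return value
```

In particular the value is $0$ exactly when $\Delta(V_{i})=\Delta^{\prime}(V_{i})$ for every class, which is the case analysed in both directions of the proof below.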
($\Rightarrow$) Suppose $\mathrel{{\mathcal{R}}}$ is a bisimulation and
$m(s,t)=0$. From the assumption in (9) we know that $\mathrel{{\mathcal{R}}}$
is an equivalence relation. By the definition of $m$ we have
$s\mathrel{{\mathcal{R}}}t$. If $s\xrightarrow{a}\Delta$ then
$t\xrightarrow{a}\Delta^{\prime}$ for some $\Delta^{\prime}$ such that
$\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Delta^{\prime}$. To show
that $m$ is a state-metric it suffices to prove $\hat{m}(\Delta,\Delta^{\prime})=0$.
We know from
$\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Delta^{\prime}$ and Theorem
2.4 (2) that $\Delta(V_{i})=\Delta^{\prime}(V_{i})$, for each $i\in 1..n$. It
follows that (10) is maximized to be $0$, thus
$\hat{m}(\Delta,\Delta^{\prime})=0$.
($\Leftarrow$) Suppose $m$ is a state-metric and has the relation in (9).
Notice that $\mathrel{{\mathcal{R}}}$ is an equivalence relation. We show that
it is a bisimulation. Suppose $s\mathrel{{\mathcal{R}}}t$, which means
$m(s,t)=0$. If $s\xrightarrow{a}\Delta$ then
$t\xrightarrow{a}\Delta^{\prime}$ for some $\Delta^{\prime}$ such that
$\hat{m}(\Delta,\Delta^{\prime})=0$. To ensure that
$\hat{m}(\Delta,\Delta^{\prime})=0$, in (10) the following two conditions must
be satisfied.
1. 1.
No coefficient is positive. Otherwise, if
$\Delta(V_{i})-\Delta^{\prime}(V_{i})>0$ then (10) would be maximized to a
value not less than $(\Delta(V_{i})-\Delta^{\prime}(V_{i}))$, which is greater
than $0$.
2. 2.
It is not the case that at least one coefficient is negative while all the
other coefficients are non-positive. Otherwise, by summing up all the
coefficients, we would get
$\Delta(S)-\Delta^{\prime}(S)<0$
which contradicts the assumption that $\Delta$ and $\Delta^{\prime}$ are
distributions over $S$.
Therefore the only possibility is that all coefficients in (10) are $0$, i.e.,
$\Delta(V_{i})=\Delta^{\prime}(V_{i})$ for any equivalence class $V_{i}\in
S/\mathrel{{\mathcal{R}}}$. It follows from Theorem 2.4 (2) that
$\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Delta^{\prime}$. So we have
shown that $\mathrel{{\mathcal{R}}}$ is indeed a bisimulation. $\Box$
###### Corollary 6.10
Let $s$ and $t$ be two states in a finite state pLTS. Then $s\sim t$ if and
only if $m_{\it max}(s,t)=0$.
* Proof:
($\Rightarrow$) Since $\sim$ is a bisimulation, by Theorem 6.9 there exists
some state-metric $m$ such that $s\sim t$ iff $m(s,t)=0$. By the definition of
$m_{\it max}$ we have $m\preceq m_{\it max}$. Therefore $m_{\it max}(s,t)\leq
m(s,t)=0$.
($\Leftarrow$) From $m_{\it max}$ we construct a pseudometric $m$ as follows.
$m(s,t)=\left\\{\begin{array}[]{ll}0&\mbox{if $m_{\it max}(s,t)=0$}\\\
1&\mbox{otherwise.}\end{array}\right.$
Since $m_{\it max}$ is a state-metric, it is easy to see that $m$ is also a
state-metric. Now we construct a binary relation $\mathrel{{\mathcal{R}}}$
such that $\forall s,s^{\prime}:s\mathrel{{\mathcal{R}}}s^{\prime}$ iff
$m(s,s^{\prime})=0$. It follows from Theorem 6.9 that
$\mathrel{{\mathcal{R}}}$ is a bisimulation. If $m_{\it max}(s,t)=0$, then
$m(s,t)=0$ and thus $s\mathrel{{\mathcal{R}}}t$. Therefore we have the
required result $s\sim t$ because $\sim$ is the largest bisimulation. $\Box$
## 7 Algorithmic characterisation
In this section we propose an “on the fly” algorithm for checking if two
states in a finitary pLTS are bisimilar.
An important ingredient of the algorithm is to check if two distributions are
related by a lifted relation. Fortunately, Theorem 3.7 already provides us with a
method for deciding whether
$\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$, for two given
distributions $\Delta,\Theta$ and a relation $\mathrel{{\mathcal{R}}}$. We
construct the network ${\cal N}(\Delta,\Theta,\mathrel{{\mathcal{R}}})$ and
compute the maximum flow with well-known methods, as sketched in Algorithm 1.
Algorithm 1 Check$(\Delta,\Theta,\mathrel{{\mathcal{R}}})$
_Input_: A nonempty finite set $S$, distributions
$\Delta,\Theta\in\mathop{\mbox{$\mathcal{D}$}}({S})$ and a relation
$\mathrel{{\mathcal{R}}}\subseteq S\times S$
_Output_: If $\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$ then
“yes” else “no”
_Method_:
Construct the network ${\cal N}(\Delta,\Theta,\mathrel{{\mathcal{R}}})$
Compute the maximum flow $F$ in ${\cal N}(\Delta,\Theta,\mathrel{{\mathcal{R}}})$
If $F<1$ then return “no” else “yes”.
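The steps of Algorithm 1 can be sketched as follows. This is an illustration only: the node encoding and the plain Edmonds–Karp max-flow routine are our own choices, and the $O(n^{3}/\log n)$ bound quoted below relies on the faster algorithm of [4], not on this sketch.

```python
from collections import deque

def check(delta, theta, rel, eps=1e-9):
    """Sketch of Check(Delta, Theta, R): build the network N(Delta, Theta, R)
    and test whether the maximum flow reaches 1.

    Source -> s with capacity Delta(s); t -> sink with capacity Theta(t);
    each pair (s, t) in R gives an edge s -> t whose capacity acts as infinity.
    """
    SRC, SNK = ('src',), ('snk',)
    cap = {}

    def add(u, v, c):
        # add c units of capacity on u -> v and ensure the residual edge exists
        cap.setdefault(u, {})[v] = cap.get(u, {}).get(v, 0.0) + c
        cap.setdefault(v, {}).setdefault(u, 0.0)

    for s, p in delta.items():
        add(SRC, ('l', s), p)
    for t, q in theta.items():
        add(('r', t), SNK, q)
    for (s, t) in rel:
        if s in delta and t in theta:
            add(('l', s), ('r', t), 2.0)  # any capacity >= 1 is effectively infinite

    flow = 0.0
    while True:
        # BFS for a shortest augmenting path (Edmonds-Karp)
        parent = {SRC: None}
        queue = deque([SRC])
        while queue and SNK not in parent:
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > eps and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if SNK not in parent:
            break
        # bottleneck along the path, then push flow
        path, v = [], SNK
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= b
            cap[v][u] += b
        flow += b
    return flow >= 1.0 - eps
```

For instance, a point distribution on $s$ is related to a point distribution on $t$ exactly when $(s,t)\in\mathrel{{\mathcal{R}}}$, and the flow value drops below $1$ as soon as some mass of $\Delta$ cannot be routed to $\Theta$ through $\mathrel{{\mathcal{R}}}$.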
As shown in [4], computing the maximum flow in a network can be done in time
$O(n^{3}/\log n)$ and space $O(n^{2})$, where $n$ is the number of nodes in
the network. So we immediately have the following result.
###### Lemma 7.1
The test whether $\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$ can
be done in time $O(n^{3}/\log n)$ and space $O(n^{2})$. $\sqcap$$\sqcup$
Algorithm 2 Bisim$(s,t)$
Bisim$(s,t)=\\{$
$NotBisim:=\\{{}\\}$
fun Bis$(s,t)$={
$Visited:=\\{{}\\}$
$Assumed:=\\{{}\\}$
Match$(s,t)$}
} handle $WrongAssumption\Rightarrow\textbf{Bis}(s,t)$
return Bis$(s,t)$
Match$(s,t)=$
$Visited:=Visited\cup\\{{(s,t)}\\}$
$b=\bigwedge_{a\in A}\textbf{MatchAction}(s,t,a)$
if $b=false$ then
$NotBisim:=NotBisim\cup\\{{(s,t)}\\}$
if $(s,t)\in Assumed$ then
raise $WrongAssumption$
end if
end if
return $b$
MatchAction$(s,t,a)=$
for all $s\xrightarrow{a}\Delta_{i}$ do
for all $t\xrightarrow{a}\Theta_{j}$ do
$b_{ij}=\textbf{MatchDistribution}(\Delta_{i},\Theta_{j})$
end for
end for
return
$(\bigwedge_{i}(\bigvee_{j}b_{ij}))\underline{\wedge(\bigwedge_{j}(\bigvee_{i}b_{ij}))}$
MatchDistribution$(\Delta,\Theta)=$
Assume $\lceil{\Delta}\rceil=\\{{s_{1},...,s_{n}}\\}$ and
$\lceil{\Theta}\rceil=\\{{t_{1},...,t_{m}}\\}$
$\mathrel{{\mathcal{R}}}:=\\{{(s_{i},t_{j})\mid\textbf{Close}(s_{i},t_{j})=true}\\}$
return Check$(\Delta,\Theta,\mathrel{{\mathcal{R}}})$
Close$(s,t)=$
if $(s,t)\in NotBisim$ then
return $false$
else if $(s,t)\in Visited$ then
$Assumed:=Assumed\cup\\{{(s,t)}\\}$
return $true$
else
return Match$(s,t)$
end if
We now present a bisimilarity-checking algorithm by adapting the algorithm
proposed in [39] for value-passing processes, which in turn was inspired by
[22].
The main procedure in the algorithm is Bisim$(s,t)$. It starts with the
initial state pair $(s,t)$, trying to find the smallest bisimulation relation
containing the pair by matching transitions from each pair of states it
reaches. It uses three auxiliary data structures:
* •
$NotBisim$ collects all state pairs that have already been detected as not
bisimilar.
* •
$Visited$ collects all state pairs that have already been visited.
* •
$Assumed$ collects all state pairs that have already been visited and assumed
to be bisimilar.
The core procedure, Match, is called from function Bis inside the main
procedure Bisim. Whenever a new pair of states is encountered it is inserted
into $Visited$. If two states fail to match each other’s transitions then they
are not bisimilar and the pair is added to $NotBisim$. If the current state
pair has been visited before, we check whether it is in $NotBisim$. If this is
the case, we return $false$. Otherwise, a loop has been detected and we make
the assumption that the two states are bisimilar, by inserting the pair into
$Assumed$, and return $true$. Later on, if the two states turn out not to be
bisimilar once the search of the loop is finished, then the assumption was
wrong, so we first add the pair to $NotBisim$ and then raise the exception
$WrongAssumption$, which forces the function Bis to run again, with the new
information that the two states in this pair are not bisimilar. In this case,
the size of $NotBisim$ has increased by at least one. Hence, Bis can only
be called finitely many times. Therefore, the procedure Bisim$(s,t)$ will
terminate. If it returns $true$, then the set $(Visited-NotBisim)$ constitutes
a bisimulation relation containing the pair $(s,t)$.
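The interplay of $Visited$, $Assumed$, $NotBisim$ and the $WrongAssumption$ restart can be captured in a small skeleton. This is a sketch of the control flow only: the transition-matching logic is abstracted into a caller-supplied `match_action` function (our own device), whereas in Algorithm 2 it is the conjunction of MatchAction calls backed by Check.

```python
class WrongAssumption(Exception):
    """Raised when a pair assumed bisimilar turns out not to be."""

def bisim(s, t, match_action):
    """Skeleton of Bisim(s, t) from Algorithm 2.

    match_action(s, t, close) must compare the transitions of s and t,
    calling close() on successor pairs, and return a boolean.  Only the
    Visited/Assumed/NotBisim bookkeeping and the restart discipline are
    modelled here.
    """
    not_bisim = set()          # persists across restarts

    def bis():
        visited, assumed = set(), set()

        def match(s, t):
            visited.add((s, t))
            b = match_action(s, t, close)
            if not b:
                not_bisim.add((s, t))
                if (s, t) in assumed:
                    # a wrong assumption was used somewhere: restart Bis
                    raise WrongAssumption
            return b

        def close(s, t):
            if (s, t) in not_bisim:
                return False
            if (s, t) in visited:
                assumed.add((s, t))    # loop detected: assume bisimilar
                return True
            return match(s, t)

        return match(s, t)

    while True:
        try:
            return bis()
        except WrongAssumption:
            pass  # NotBisim grew by at least one pair, so this terminates
```

Each restart carries over the enlarged $NotBisim$, which is exactly why the loop terminates: there are at most $n^{2}$ state pairs to rule out.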
The main difference from the algorithm of checking non-probabilistic
bisimilarity in [39] is the introduction of the procedure
MatchDistribution$(\Delta,\Theta)$, where we approximate $\sim$ by a binary
relation $\mathrel{{\mathcal{R}}}$ which is coarser than $\sim$ in general,
and we check the validity of
$\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$. If
$\Delta\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}\Theta$ does not hold, then
$\Delta\mathrel{\sim^{\dagger}}\Theta$ does not hold either, and
MatchDistribution$(\Delta,\Theta)$ returns false correctly. Otherwise, the two
distributions $\Delta$ and $\Theta$ are considered equivalent with respect to
$\mathrel{{\mathcal{R}}}$ and we move on to match other pairs of
distributions. The correctness of the algorithm is stated in the following
theorem.
###### Theorem 7.2
Given two states $s_{0}$ and $t_{0}$ in a finitary pLTS, the function
Bisim$(s_{0},t_{0})$ terminates, and it returns _true_ if and only if
$s_{0}\sim t_{0}$.
* Proof:
Let $\textbf{Bis}_{i}$ stand for the $i$-th execution of the function Bis. Let
$Assumed_{i}$ and $NotBisim_{i}$ be the sets $Assumed$ and $NotBisim$ at the
end of $\textbf{Bis}_{i}$. When $\textbf{Bis}_{i}$ finishes, either a
$WrongAssumption$ has been raised, in which case $Assumed_{i}\cap
NotBisim_{i}\not=\emptyset$, or no $WrongAssumption$ has been raised, in which
case the execution of the function Bisim is completed. From function Close we
know that $Assumed_{i}\cap NotBisim_{i-1}=\emptyset$. Together with the simple
fact $NotBisim_{i-1}\subseteq NotBisim_{i}$, this yields $NotBisim_{i-1}\subset
NotBisim_{i}$ whenever $\textbf{Bis}_{i}$ raises $WrongAssumption$. Since we
are considering finitary pLTSs, there is some $j$ such that
$NotBisim_{j-1}=NotBisim_{j}$; at that point all the non-bisimilar state pairs
reachable from $s_{0}$ and $t_{0}$ have been found and Bisim terminates.
For the correctness of the algorithm, we consider the relation
$\mathrel{{\mathcal{R}}}_{i}=Visited_{i}-NotBisim_{i}$, where $Visited_{i}$ is
the set $Visited$ at the end of $\textbf{Bis}_{i}$. Let $\textbf{Bis}_{k}$ be
the last execution of Bis. For each $i\leq k$, the relation
$\mathrel{{\mathcal{R}}}_{i}$ can be regarded as an approximation of $\sim$,
as far as the states appearing in $\mathrel{{\mathcal{R}}}_{i}$ are
concerned. Moreover,
$\mathrel{{\mathcal{R}}}_{i}$ is a coarser approximation because if two states
$s,t$ are re-visited but their relation is unknown, they are assumed to be
bisimilar. Therefore, if Bis${}_{k}(s_{0},t_{0})$ returns $false$, then
$s_{0}\not\sim t_{0}$. On the other hand, if Bis${}_{k}(s_{0},t_{0})$ returns
$true$, then $\mathrel{{\mathcal{R}}}_{k}$ constitutes a bisimulation relation
containing the pair $(s_{0},t_{0})$. This follows because
$\textbf{Match}(s_{0},t_{0})=true$ which basically means that whenever
$s\mathrel{{\mathcal{R}}}_{k}t$ and $s\xrightarrow{a}\Delta$ there exists
some transition $t\xrightarrow{a}\Theta$ such that
$\textbf{Check}(\Delta,\Theta,\mathrel{{\mathcal{R}}}_{k})=true$, i.e.
$\Delta\mathrel{\mathrel{{\mathcal{R}}}_{k}^{\dagger}}\Theta$. Indeed, this
rules out the possibility that $s_{0}\not\sim t_{0}$ as otherwise we would
have $s_{0}\not\sim_{\omega}t_{0}$ by Proposition 4.4, that is
$s_{0}\not\sim_{n}t_{0}$ for some $n>0$. The latter means that some transition
$s_{0}\xrightarrow{a}\Delta$ exists such that for all
$t_{0}\xrightarrow{a}\Theta$ we have
$\Delta\not\mathrel{\sim_{n-1}^{\dagger}}\Theta$, or symmetrically with the
roles of $s_{0}$ and $t_{0}$ exchanged, i.e. $\Delta$ and $\Theta$ can be
distinguished at level $n$, so a contradiction arises. $\Box$
Below we consider the time and space complexities of the algorithm.
###### Theorem 7.3
Let $s$ and $t$ be two states in a pLTS with $n$ states in total. The function
$\textbf{Bisim}(s,t)$ terminates in time $O(n^{7}/\log n)$ and space
$O(n^{2})$.
* Proof:
The number of state pairs is bounded by $n^{2}$. In the worst case, each
execution of the function $\textbf{Bis}(s,t)$ only yields one new pair of
states that are not bisimilar. The number of state pairs examined in the first
execution of $\textbf{Bis}(s,t)$ is at most $n^{2}$, in the second execution
at most $n^{2}-1$, and so on. Therefore, the total number of state pairs
examined is at most $n^{2}+(n^{2}-1)+\cdots+1=O(n^{4})$. When a
state pair $(s,t)$ is examined, each transition of $s$ is compared with all
transitions of $t$ labelled with the same action. Since the pLTS is finitely
branching, we may assume that each state has at most $c$ outgoing
transitions. Therefore, for each state pair, the number of comparisons of
transitions is bounded by $c^{2}$. Each comparison of two transitions calls
the function Check once, which requires time $O(n^{3}/\log n)$ by Lemma 7.1.
As a result, examining each state pair takes time $O(c^{2}n^{3}/\log n)$.
Finally,
the worst case time complexity of executing $\textbf{Bisim}(s,t)$ is
$O(n^{7}/\log n)$.
The space requirement of the algorithm is easily seen to be $O(n^{2})$, in
view of Lemma 7.1. $\Box$
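Assembling the counts in the proof above (with $c$ treated as a constant):

```latex
\underbrace{O(n^{4})}_{\text{state pairs examined}}
\times \underbrace{c^{2}}_{\text{comparisons per pair}}
\times \underbrace{O(n^{3}/\log n)}_{\text{one call of } \textbf{Check}}
\;=\; O(c^{2} n^{7}/\log n) \;=\; O(n^{7}/\log n).
```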
###### Remark 7.4
With mild modification, the above algorithm can be adapted to check
probabilistic similarity. We simply remove the underlined part in the function
MatchAction; the rest of the algorithm remains unchanged. Similar to the
analysis in Theorems 7.2 and 7.3, the new algorithm can be shown to correctly
check probabilistic similarity over finitary pLTSs; its worst case time and
space complexities are still $O(n^{7}/\log n)$ and $O(n^{2})$, respectively.
## 8 Conclusion
To define behavioural equivalences or preorders for probabilistic processes
often involves a lifting operation that turns a binary relation
$\mathrel{{\mathcal{R}}}$ on states into a relation
$\mathrel{\mathrel{{\mathcal{R}}}^{\dagger}}$ on distributions over states. We
have shown that several different proposals for lifting relations can be
reconciled. They are nothing more than different forms of essentially the same
lifting operation. More interestingly, we have discovered that this lifting
operation corresponds well to the Kantorovich metric, a fundamental concept
used in mathematics to lift a metric on states to a metric on distributions
over states, besides the fact that the lifting operation is related to the maximum
flow problem in optimisation theory.
The lifting operation leads to a neat notion of probabilistic bisimulation,
for which we have provided logical, metric, and algorithmic characterisations.
1. 1.
We have introduced a probabilistic choice modality to specify the behaviour of
distributions of states. Adding the new modality to the Hennessy-Milner logic
and the modal mu-calculus results in an adequate and an expressive logic
w.r.t. probabilistic bisimilarity, respectively.
2. 2.
Due to the correspondence of the lifting operation and the Kantorovich metric,
bisimulations can be naturally characterised as pseudometrics which are post-
fixed points of a monotone function, and bisimilarity as the greatest post-
fixed point of the function.
3. 3.
We have presented an “on the fly” algorithm to check if two states in a
finitary pLTS are bisimilar. The algorithm is based on the close relationship
between the lifting operation and the maximum flow problem.
In the belief that a good scientific concept is often elegant, even seen from
different perspectives, we consider the lifting operation and probabilistic
bisimulation as two concepts in probabilistic concurrency theory that are
formulated in the right way.
## References
* [1] C. Baier. On the algorithmic verification of probabilistic systems, 1998. Habilitation Thesis, Universität Mannheim.
* [2] C. Baier, B. Engelen, and M. E. Majster-Cederbaum. Deciding bisimilarity and similarity for probabilistic processes. Journal of Computer and System Sciences, 60(1):187–231, 2000.
* [3] E. Bandini and R. Segala. Axiomatizations for probabilistic bisimulation. In Proceedings of the 28th International Colloquium on Automata, Languages and Programming, volume 2076 of Lecture Notes in Computer Science, pages 370–381. Springer, 2001.
* [4] J. Cheriyan, T. Hagerup, and K. Mehlhorn. Can a maximum flow be computed in o(nm) time? In Proceedings of the 17th International Colloquium on Automata, Languages and Programming, volume 443 of Lecture Notes in Computer Science, pages 235–248. Springer, 1990.
* [5] I. Christoff. Testing equivalences and fully abstract models for probabilistic processes. In Proceedings of the 1st International Conference on Concurrency Theory, volume 458 of Lecture Notes in Computer Science, pages 126–140. Springer, 1990.
* [6] R. Cleaveland, S. P. Iyer, and M. Narasimha. Probabilistic temporal logics via the modal mu-calculus. Theoretical Computer Science, 342(2-3):316–350, 2005.
* [7] Y. Deng, T. Chothia, C. Palamidessi, and J. Pang. Metrics for action-labelled quantitative transition systems. Electronic Notes in Theoretical Computer Science, 153(2):79–96, 2006.
* [8] Y. Deng and W. Du. Probabilistic barbed congruence. Electronic Notes in Theoretical Computer Science, 190(3):185–203, 2007.
* [9] Y. Deng and W. Du. Kantorovich metric in computer science: A brief survey. Electronic Notes in Theoretical Computer Science, 353(3):73–82, 2009.
* [10] Y. Deng and W. Du. A local algorithm for checking probabilistic bisimilarity. In Proceedings of the 4th International Conference on Frontier of Computer Science and Technology, pages 401–407. IEEE Computer Society, 2009.
* [11] Y. Deng and R. van Glabbeek. Characterising probabilistic processes logically. In Proceedings of the 17th International Conference on Logic for Programming, Artificial Intelligence and Reasoning, volume 6397 of Lecture Notes in Computer Science, pages 278–293. Springer, 2010.
* [12] Y. Deng, R. van Glabbeek, M. Hennessy, and C. Morgan. Testing finitary probabilistic processes (extended abstract). In Proceedings of the 20th International Conference on Concurrency Theory, volume 5710 of Lecture Notes in Computer Science, pages 274–288. Springer, 2009.
* [13] Y. Deng, R. van Glabbeek, M. Hennessy, and C. C. Morgan. Characterising testing preorders for finite probabilistic processes. Logical Methods in Computer Science, 4(4):1–33, 2008.
* [14] Y. Deng, R. van Glabbeek, M. Hennessy, C. C. Morgan, and C. Zhang. Remarks on testing probabilistic processes. Electronic Notes in Theoretical Computer Science, 172:359–397, 2007.
* [15] Y. Deng, R. van Glabbeek, C. C. Morgan, and C. Zhang. Scalar outcomes suffice for finitary probabilistic testing. In Proceedings of the 16th European Symposium on Programming, volume 4421 of Lecture Notes in Computer Science, pages 363–378. Springer, 2007.
* [16] J. Desharnais, A. Edalat, and P. Panangaden. A logical characterization of bisimulation for labelled Markov processes. In Proceedings of the 13th Annual IEEE Symposium on Logic in Computer Science, pages 478–489. IEEE Computer Society Press, 1998.
* [17] J. Desharnais, R. Jagadeesan, V. Gupta, and P. Panangaden. Metrics for labeled Markov systems. In Proceedings of the 10th International Conference on Concurrency Theory, volume 1664 of Lecture Notes in Computer Science, pages 258–273. Springer-Verlag, 1999.
* [18] J. Desharnais, R. Jagadeesan, V. Gupta, and P. Panangaden. The metric analogue of weak bisimulation for probabilistic processes. In Proceedings of the 17th Annual IEEE Symposium on Logic in Computer Science, pages 413–422. IEEE Computer Society, 2002.
* [19] J. Desharnais, R. Jagadeesan, V. Gupta, and P. Panangaden. Metrics for labelled Markov processes. Theoretical Computer Science, 318(3):323–354, 2004.
* [20] J. Desharnais, F. Laviolette, and M. Tracol. Approximate analysis of probabilistic processes: Logic, simulation and games. In Proceedings of the 5th International Conference on the Quantitative Evaluation of Systems, pages 264–273. IEEE Computer Society, 2008.
* [21] S. Even. Graph Algorithms. Computer Science Press, 1979.
* [22] J.-C. Fernandez and L. Mounier. Verifying bisimulations “on the fly”. In Proceedings of the 3rd International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols, pages 95–110. North-Holland, 1990.
* [23] N. Ferns, P. Panangaden, and D. Precup. Metrics for finite Markov decision processes. In Proceedings of the 20th Conference in Uncertainty in Artificial Intelligence, pages 162–169. AUAI Press, 2004.
* [24] N. Ferns, P. Panangaden, and D. Precup. Metrics for Markov decision processes with infinite state spaces. In Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence, pages 201–208. AUAI Press, 2005.
* [25] A. Giacalone, C.-C. Jou, and S. A. Smolka. Algebraic reasoning for probabilistic concurrent systems. In Proceedings of IFIP TC2 Working Conference on Programming Concepts and Methods, 1990.
* [26] A. L. Gibbs and F. E. Su. On choosing and bounding probability metrics. International Statistical Review, 70(3):419–435, 2002.
* [27] H. Hansson and B. Jonsson. A calculus for communicating systems with time and probabilities. In Proceedings of IEEE Real-Time Systems Symposium, pages 278–287. IEEE Computer Society Press, 1990.
* [28] M. Hennessy and R. Milner. Algebraic laws for nondeterminism and concurrency. Journal of the ACM, 32(1):137–161, 1985.
* [29] M. Huth and M. Kwiatkowska. Quantitative analysis and model checking. In Proceedings of the 12th Annual IEEE Symposium on Logic in Computer Science, pages 111–122. IEEE Computer Society, 1997.
* [30] B. Jonsson, C. Ho-Stuart, and W. Yi. Testing and refinement for nondeterministic and probabilistic processes. In Proceedings of the 3rd International Symposium on Formal Techniques in Real-Time and Fault-Tolerant Systems, volume 863 of Lecture Notes in Computer Science, pages 418–430. Springer, 1994.
* [31] B. Jonsson and W. Yi. Compositional testing preorders for probabilistic processes. In Proceedings of the 10th Annual IEEE Symposium on Logic in Computer Science, pages 431–441. Computer Society Press, 1995.
* [32] B. Jonsson and W. Yi. Testing preorders for probabilistic processes can be characterized by simulations. Theoretical Computer Science, 282(1):33–51, 2002.
* [33] B. Jonsson, W. Yi, and K. G. Larsen. Probabilistic extensions of process algebras. In Handbook of Process Algebra, chapter 11, pages 685–710. Elsevier, 2001.
* [34] L. Kantorovich. On the transfer of masses (in Russian). Doklady Akademii Nauk, 37(2):227–229, 1942.
* [35] L. V. Kantorovich and G. S. Rubinshtein. On a space of totally additive functions. Vestn Lening. Univ., 13(7):52–59, 1958.
* [36] D. Kozen. Results on the propositional mu-calculus. Theoretical Computer Science, 27:333–354, 1983.
* [37] K. G. Larsen and A. Skou. Bisimulation through probabilistic testing. Information and Computation, 94(1):1–28, 1991.
* [38] K. G. Larsen and A. Skou. Compositional verification of probabilistic processes. In Proceedings of the 3rd International Conference on Concurrency Theory, volume 630 of Lecture Notes in Computer Science, pages 456–471. Springer, 1992.
* [39] H. Lin. “On-the-fly” instantiation of value-passing processes. In Proceedings of FORTE’98, volume 135 of IFIP Conference Proceedings, pages 215–230. Kluwer, 1998.
* [40] G. Lowe. Probabilistic and prioritized models of timed CSP. Theoretical Computer Science, 138:315–352, 1995.
* [41] A. McIver and C. Morgan. An expectation-based model for probabilistic temporal logic. Technical Report PRG-TR-13-97, Oxford University Computing Laboratory, 1997.
* [42] A. McIver and C. Morgan. Results on the quantitative mu-calculus. ACM Transactions on Computational Logic, 8(1), 2007.
* [43] R. Milner. Communication and Concurrency. Prentice Hall, 1989.
* [44] M. M. Mislove, J. Ouaknine, and J. Worrell. Axioms for probability and nondeterminism. Electronic Notes in Theoretical Computer Science, 96:7–28, 2004.
* [45] G. Monge. Mémoire sur la théorie des déblais et des remblais. Histoire de l’Academie des Science de Paris, page 666, 1781.
* [46] M. Müller-Olm. Derivation of characteristic formulae. Electronic Notes in Theoretical Computer Science, 18:159–170, 1998.
* [47] J. B. Orlin. A faster strongly polynomial minimum cost flow algorithm. In Proceedings of the 20th ACM Symposium on the Theory of Computing, pages 377–387. ACM, 1988.
* [48] D. Park. Concurrency and automata on infinite sequences. In Proceedings of the 5th GI Conference, volume 104 of Lecture Notes in Computer Science, pages 167–183. Springer, 1981.
* [49] A. Parma and R. Segala. Logical characterizations of bisimulations for discrete probabilistic systems. In Proceedings of the 10th International Conference on Foundations of Software Science and Computational Structures, volume 4423 of Lecture Notes in Computer Science, pages 287–301. Springer, 2007.
* [50] A. Pnueli. Linear and branching structures in the semantics and logics of reactive systems. In Proceedings of the 12th International Colloquium on Automata, Languages and Programming, volume 194 of Lecture Notes in Computer Science, pages 15–32. Springer, 1985.
* [51] M. L. Puterman. Markov Decision Processes. Wiley, 1994.
* [52] S. Rachev. Probability Metrics and the Stability of Stochastic Models. Wiley New York, 1991.
* [53] R. Segala. Modeling and verification of randomized distributed real-time systems. Technical Report MIT/LCS/TR-676, PhD thesis, MIT, Dept. of EECS, 1995.
* [54] R. Segala and N. Lynch. Probabilistic simulations for probabilistic processes. In Proceedings of the 5th International Conference on Concurrency Theory, volume 836 of Lecture Notes in Computer Science, pages 481–496. Springer, 1994.
* [55] B. Steffen and A. Ingólfsdóttir. Characteristic formulae for processes with divergence. Information and Computation, 110:149–163, 1994.
* [56] A. Tarski. A lattice-theoretical fixpoint theorem and its application. Pacific Journal of Mathematics, 5:285–309, 1955.
* [57] R. Tix, K. Keimel, and G. Plotkin. Semantic domains for combining probability and non-determinism. Electronic Notes in Theoretical Computer Science, 129:1–104, 2005.
* [58] F. van Breugel, C. Hermida, M. Makkai, and J. Worrell. An accessible approach to behavioural pseudometrics. In Proceedings of the 32nd International Colloquium on Automata, Languages and Programming, volume 3580 of Lecture Notes in Computer Science, pages 1018–1030. Springer, 2005.
* [59] F. van Breugel, C. Hermida, M. Makkai, and J. Worrell. Recursively defined metric spaces without contraction. Theoretical Computer Science, 380(1-2):143–163, 2007.
* [60] F. van Breugel, B. Sharma, and J. Worrell. Approximating a behavioural pseudometric without discount for probabilistic systems. In Proceedings of the 10th International Conference on Foundations of Software Science and Computational Structures, volume 4423 of Lecture Notes in Computer Science, pages 123–137. Springer, 2007.
* [61] F. van Breugel and J. Worrell. An algorithm for quantitative verification of probabilistic transition systems. In Proceedings of the 12th International Conference on Concurrency Theory, volume 2154 of Lecture Notes in Computer Science, pages 336–350. Springer, 2001.
* [62] F. van Breugel and J. Worrell. Towards quantitative verification of probabilistic transition systems. In Proceedings of the 28th International Colloquium on Automata, Languages and Programming, volume 2076 of Lecture Notes in Computer Science, pages 421–432. Springer, 2001.
* [63] F. van Breugel and J. Worrell. A behavioural pseudometric for probabilistic transition systems. Theoretical Computer Science, 331(1):115–142, 2005.
* [64] F. van Breugel and J. Worrell. Approximating and computing behavioural distances in probabilistic transition systems. Theoretical Computer Science, 360(1-3):373–385, 2006.
* [65] A. Vershik. Kantorovich metric: Initial history and little-known applications. Journal of Mathematical Sciences, 133(4):1410–1417, 2006.
* [66] C. Villani. Topics in Optimal Transportation, volume 58 of Graduate Studies in Mathematics. American Mathematical Society, 2003.
* [67] W. Yi and K. G. Larsen. Testing probabilistic and nondeterministic processes. In Proceedings of the IFIP TC6/WG6.1 12th International Symposium on Protocol Specification, Testing and Verification, volume C-8 of IFIP Transactions, pages 47–61. North-Holland, 1992.
* [68] L. Zhang, H. Hermanns, F. Eisenbrand, and D. N. Jansen. Flow faster: Efficient decision algorithms for probabilistic simulations. Logical Methods in Computer Science, 4(4:6), 2008.
1103.4623
|
# Some degenerations of $G_{2}$ and Calabi-Yau varieties
Michał Kapustka
###### Abstract.
We introduce a variety $\hat{G}_{2}$ parameterizing isotropic five-spaces of a
general degenerate four-form in a seven-dimensional vector space. It is in a
natural way a degeneration of the variety $G_{2}$, the adjoint variety of the
simple Lie group $\mathbb{G}_{2}$. It turns out that it is also the image of
$\mathbb{P}^{5}$ by a system of quadrics containing a twisted cubic.
Degenerations of this twisted cubic to three lines give rise to degenerations
of $G_{2}$ which are toric Gorenstein Fano fivefolds. We use these two
degenerations to construct geometric transitions between Calabi–Yau
threefolds. We prove moreover that every polarized K3 surface of Picard number
2, genus 10, and admitting a $g^{1}_{5}$ appears as a linear section of the
variety $\hat{G}_{2}$.
## 1\. Introduction
We shall denote by $G_{2}$ the adjoint variety of the complex simple Lie group
$\mathbb{G}_{2}$. In geometric terms this is a subvariety of the Grassmannian
$G(5,V)$ consisting of 5-spaces isotropic with respect to a chosen non-
degenerate 4-form $\omega$ on a 7-dimensional vector space $V$. In this
context the word non-degenerate stands for 4-forms contained in the open orbit
in $\bigwedge^{4}V$ of the natural action of $\operatorname{Gl}(7)$. It is
known (see [1]) that this open orbit is the complement of a hypersurface of
degree 7. The hypersurface is the closure of the set of 4-forms which can be
decomposed into the sum of 3 simple forms. The expected number of simple forms
needed to decompose a general 4-form is also 3, meaning that our case is
defective. In fact this is the only known example (together with the dual
$(k,n)=(3,7)$) with $3\leq k\leq n-3$ in which a general $k$-form in an
$n$-dimensional space cannot be decomposed into the sum of the expected number of
simple forms. A natural question comes to mind: what is the variety
$\hat{G}_{2}$ of 5-spaces isotropic with respect to a generic 4-form from the
hypersurface of degree 7? From the above point of view it is a variety which
is not expected to exist. We prove that the Plücker embedding of $\hat{G}_{2}$
is linearly isomorphic to the closure of the image of $\mathbb{P}^{5}$ by the
map defined by quadrics containing a fixed twisted cubic. We check also that
$\hat{G}_{2}$ is singular along a plane and appears as a flat deformation of
$G_{2}$.
Next, we study varieties obtained by degenerating the twisted cubic to a
reducible cubic. All of them appear to be flat deformations of $G_{2}$.
However only one of them appears to be a linear section of $G(5,V)$. It
corresponds to the variety of 5-spaces isotropic with respect to a 4-form from
the tangential variety to the Grassmannian $G(4,7)$. The two other
degenerations corresponding to configurations of lines give rise to toric
degenerations of $G_{2}$. The variety $G_{2}$ as a spherical variety is proved
in [7] to have such degenerations, but for $G_{2}$ it is not clear whether the
constructed degeneration is Fano. In the context of applications, mainly for
purposes of mirror symmetry of Calabi–Yau manifolds, it is important that
these degenerations lead to Gorenstein toric Fano varieties. Our two toric
varieties are both Gorenstein and Fano, they admit respectively 3 and 4
singular strata of codimension 3 and degree 1. Hence the varieties obtained by
intersecting these toric 5-folds with a quadric and a hyperplane have 6 and 8
nodes respectively. The small resolutions of these nodes are Calabi–Yau
threefolds which are complete intersections in smooth toric Fano 5-folds and
are connected by conifold transition to the Borcea Calabi–Yau threefolds of
degree 36, which are sections of $G_{2}$ by a hyperplane and a quadric, and
will be denoted $X_{36}$. This is the setting for the methods developed in [3]
to work and provide a partially conjectural construction of the mirror. Note that
in [5] the authors found a Gorenstein toric Fano fourfold whose hyperplane
section is a nodal Calabi–Yau threefold admitting a smoothing which has the
same Hodge numbers, degree, and degree of the second Chern class as $X_{36}$.
It follows by a theorem of Wall that it is diffeomorphic to it, and by
connectedness of the Hilbert scheme it is also a flat deformation of it. However,
a priori the two varieties can lie in different components of the Hilbert
scheme, hence do not give rise to a properly understood conifold transition.
In this case it is not clear what is the connection between the mirrors of
these varieties.
The geometric properties of $\hat{G}_{2}$ are also used in the paper for the
construction of another type of geometric transition: a pair of geometric
transitions joining $X_{36}$ and the complete intersection of a quadric and a
quartic in $\mathbb{P}^{5}$. The first is a conifold transition involving a
small contraction of two nodes; the second is a geometric transition involving a
primitive contraction of type III.
In the last section we consider a different application of the considered
constructions. We apply it to the study of polarized K3 surfaces of genus 10.
By the Mukai linear section theorem (see [14]) we know that a generic
polarized K3 surface of genus 10 appears as a complete linear section of
$G_{2}$. A classification of the non-general cases has been presented in [10].
The classification is however made using descriptions in scrolls, which is not
completely precise in a few special cases. We use our construction to clarify
one special case in this classification. This is the case of polarized K3
surfaces $(S,L)$ of genus 10 having a $g^{1}_{5}$ (i.e. a smooth
representative of $L$ admits a $g^{1}_{5}$). In particular we prove that a
smooth linear section of $G_{2}$ does not admit a $g^{1}_{5}$. Then, we prove
that each smooth two dimensional linear section of $\hat{G}_{2}$ has a
$g^{1}_{5}$ and that K3 surfaces appearing in this way form a component of the
moduli space of such surfaces. More precisely we get the following.
###### Proposition 1.1.
If $(S,L)$ is a polarized K3 surface of genus 10 such that $L$ admits exactly
one $g^{1}_{5}$, then $(S,L)$ is a proper linear section of one of the four
considered degenerations of $G_{2}$.
###### Proposition 1.2.
If $(S,L)$ is a polarized K3 surface of genus 10 such that $L$ admits a
$g^{1}_{5}$ induced by an elliptic curve, and $S$ has Picard number 2, then
$(S,L)$ is a proper linear section of $\hat{G}_{2}$.
The methods used throughout the paper are elementary and rely heavily on direct
computations in chosen coordinates, including the use of Macaulay2 and Magma.
## 2\. The variety $G_{2}$
In this section we recall a basic description of the variety $G_{2}$ using
equations.
###### Lemma 2.1.
The variety $G_{2}$ appears as a five dimensional section of the Grassmannian
$G(2,7)$ by seven hyperplanes (not a complete intersection). It parametrizes
the set of 2-forms $\\{\left[v_{1}\wedge v_{2}\right]\in G(2,V)\ |\
v_{1}\wedge v_{2}\wedge\omega=0\in\bigwedge^{6}V\\}$, where $V$ is a seven
dimensional vector space and $\omega$ a non-degenerate four-form on it.
By [1] we can choose $\omega=x_{1}\wedge x_{2}\wedge x_{3}\wedge
x_{7}+x_{4}\wedge x_{5}\wedge x_{6}\wedge x_{7}+x_{2}\wedge x_{3}\wedge
x_{5}\wedge x_{6}+x_{1}\wedge x_{3}\wedge x_{4}\wedge x_{6}+x_{1}\wedge
x_{2}\wedge x_{4}\wedge x_{5}$. The variety $G_{2}$ is then described in its
linear span $W\subset\mathbb{P}(\bigwedge^{2}V)$ with coordinates $(a,\dots,n)$
by $4\times 4$ Pfaffians of the matrix:
$\left(\begin{array}[]{ccccccc}0&-f&e&g&h&i&a\\\ f&0&-d&j&k&l&b\\\
-e&d&0&m&n&-g-k&c\\\ -g&-j&-m&0&c&-b&d\\\ -h&-k&-n&-c&0&a&e\\\
-i&-l&g+k&b&-a&0&f\\\ -a&-b&-c&-d&-e&-f&0\end{array}\right).$
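The claim about the linear span can be checked by brute force: wedging with $\omega$ is a linear map $\bigwedge^{2}V\to\bigwedge^{6}V$, and its kernel is exactly the 14-dimensional span $W$ with coordinates $(a,\dots,n)$. A small sketch of this check (not from the paper; plain Python with sympy, basis forms encoded as sorted index tuples):

```python
from itertools import combinations
import sympy as sp

def perm_sign(seq):
    """Sign of the permutation sorting a sequence of distinct integers."""
    sign = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def kernel_dim(omega):
    """Kernel dimension of w -> w ^ omega, from Lambda^2 V to Lambda^6 V, dim V = 7."""
    pairs = list(combinations(range(1, 8), 2))   # basis of Lambda^2 V, dim 21
    sixes = list(combinations(range(1, 8), 6))   # basis of Lambda^6 V, dim 7
    rows = []
    for S in sixes:
        row = []
        for p in pairs:
            c = 0
            for T, coef in omega.items():
                if not set(p) & set(T) and tuple(sorted(p + T)) == S:
                    c += coef * perm_sign(p + T)
            row.append(c)
        rows.append(row)
    return 21 - sp.Matrix(rows).rank()

# The non-degenerate form of Lemma 2.1 and the degenerate form of Section 3.
omega = {(1,2,3,7): 1, (4,5,6,7): 1, (2,3,5,6): 1, (1,3,4,6): 1, (1,2,4,5): 1}
omega0 = {(1,2,3,7): 1, (4,5,6,7): 1, (2,3,5,6): 1, (1,3,4,6): 1}

# The linear conditions w ^ omega = 0 cut out a 14-dimensional subspace of the
# 21-dimensional Lambda^2 V: the linear span W = P^13 with coordinates (a,...,n).
assert kernel_dim(omega) == 14
# The same holds for the degenerate form omega_0, so the linear span of the
# degenerate variety has the same dimension.
assert kernel_dim(omega0) == 14
```
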
## 3\. The variety $\hat{G}_{2}$
From [1] there is a hypersurface of degree 7 in $\bigwedge^{4}V$
parameterizing four-forms which may be written as a sum of three pure forms.
The generic element of this hypersurface corresponds to a generic degenerate
four-form $\omega_{0}$. After a suitable change of coordinates we may assume
(see [1]) that $\omega_{0}=x_{1}\wedge x_{2}\wedge x_{3}\wedge
x_{7}+x_{4}\wedge x_{5}\wedge x_{6}\wedge x_{7}+x_{2}\wedge x_{3}\wedge
x_{5}\wedge x_{6}+x_{1}\wedge x_{3}\wedge x_{4}\wedge x_{6}$. Let us consider
the variety $\hat{G}_{2}=\\{\left[v_{1}\wedge v_{2}\right]\in G(2,V)\ |\
v_{1}\wedge v_{2}\wedge\omega_{0}=0\in\bigwedge^{6}V\\}$. Analogously as in
the non-degenerate case it is described in its linear span by $4\times 4$
Pfaffians of a matrix of the form
$\left(\begin{array}[]{ccccccc}0&0&e&g&h&i&a\\\ 0&0&-d&j&-g&l&b\\\
-e&d&0&m&n&k&c\\\ -g&-j&-m&0&0&-b&d\\\ -h&g&-n&0&0&a&e\\\ -i&-l&-k&b&-a&0&f\\\
-a&-b&-c&-d&-e&-f&0\end{array}\right).$
Directly from the equations we observe that $\hat{G}_{2}$ contains a smooth
Fano fourfold $F$ described in the space $(b,c,d,f,j,l,m,k)$ by the $4\times
4$ Pfaffians of the matrix
$\left(\begin{array}[]{ccccc}0&-d&j&l&b\\\ d&0&m&k&c\\\ -j&-m&0&-b&d\\\
-l&-k&b&0&f\\\ -b&-c&-d&-f&0\end{array}\right).$
###### Remark 3.1.
In fact we see directly a second such Fano fourfold $F^{\prime}$ isomorphic to
$F$ and meeting $F$ in a plane. It is analogously the intersection of
$\hat{G}_{2}$ with the space $(e,h,i,a,n,c,k,f)$. We shall see that there is
in fact a one parameter family of such Fano fourfolds, any two of which
intersect in the plane $(c,k,f)$.
###### Observation 3.2.
The image of the projection of $\hat{G}_{2}$ from the plane spanned by
$(c,k,f)$ is a hyperplane section of $\mathbb{P}^{1}\times\mathbb{P}^{5}$.
###### Proof.
The projection maps $\hat{G}_{2}$ to $\mathbb{P}^{10}$ with coordinates
$(a,b,d,e,g,h,i,j,l,m,n)$. Observe that the equations of the projection involve
$2\times 2$ minors of the matrix
$\left(\begin{array}[c]{cccccc}e&g&h&i&a&-n\\\
-d&j&-g&l&b&m\end{array}\right),$
as these equations appear in the description of $\hat{G}_{2}$ and do not
involve $c,k,f$. It follows that the image is contained in a hyperplane
section $P$ of a $\mathbb{P}^{1}\times\mathbb{P}^{5}$. Next we check that the
map is an isomorphism over the open subset of $P$ given by $g=1$. ∎
###### Proposition 3.3.
The Hilbert scheme of projective 3-spaces contained in $\hat{G}_{2}$ is a
conic. Moreover the union of these 3-spaces is a divisor $D$ of degree $8$ in
$\hat{G}_{2}$.
###### Proof.
We start by proving the following lemmas.
###### Lemma 3.4.
Let a plane $P$ intersect $G(5,V)$ in four points which span $P$. Then $P\cap
G(5,V)$ is a conic parameterizing all five-spaces containing a common
three-space $W$.
###### Proof.
The proof follows from [15], as three points in $G(2,V)$ always lie in a
$G(2,A)$ for some subspace $A$ of dimension 6. ∎
###### Lemma 3.5.
A projective three-space $\Pi\subset G(2,7)$ is contained in $\hat{G}_{2}$ if and
only if there exists a vector $u$ in $V$ and a four-space $v_{1}\wedge
v_{2}\wedge v_{3}\wedge v_{4}\in G(4,7)$ such that $u\wedge\omega_{0}=u\wedge
v_{1}\wedge v_{2}\wedge v_{3}\wedge v_{4}$ and $\Pi$ is generated by $u\wedge
v_{1}$, $u\wedge v_{2}$, $u\wedge v_{3}$, $u\wedge v_{4}$.
###### Proof.
To prove the if part we observe that our conditions imply $u\wedge
v_{i}\wedge\omega_{0}=0$ for $i=1,\dots,4$. Let us pass to the proof of the
only if part. Observe first that any projective three-space contained in
$G(2,V)$ is spanned by four points of the form $u\wedge v_{1}$,$u\wedge
v_{2}$,$u\wedge v_{3}$,$u\wedge v_{4}$. By our assumption on $\omega_{0}$ the
form $u\wedge\omega_{0}\neq 0$, and it is killed by the vectors
$u,v_{1},\dots,v_{4}$, hence equals $u\wedge v_{1}\wedge v_{2}\wedge
v_{3}\wedge v_{4}$. ∎
Now, it follows from Lemma 3.5 that the set of projective three-spaces
contained in $\hat{G}_{2}$ is parametrized by those
$\left[v\right]\in\mathbb{P}(V)$ for which $v\wedge\omega_{0}\in G(5,7)$. The
form $\omega_{0}$ may be written as the sum of three simple forms
corresponding to three subspaces $P_{1}$, $P_{2}$, $P_{3}$ of dimension 4 in
$V$, each two meeting in a line and no three having a nontrivial intersection.
Hence the form $v\wedge\omega_{0}$ may be written as the sum of three simple
forms corresponding to three subspaces of dimension 5 each spanned by $v$ and
one of the spaces $P_{i}$. By Lemma 3.4 the sum of these three 5-forms may be
a simple form only if they all contain a common 3-space. But this may happen
only if $v$ lies in the space spanned by the lines $P_{i}\cap P_{j}$. Now it
is enough to see that the condition that $v\wedge\omega_{0}$ be simple
corresponds, in the chosen coordinates, to $((v\wedge\omega_{0})^{*})^{2}=0$,
and to perform a straightforward computation to see that it induces a quadratic
equation on the coefficients of $v\in\operatorname{span}\\{P_{1}\cap P_{2},\
P_{1}\cap P_{3},\ P_{2}\cap P_{3}\\}$.
In coordinates the constructed divisor is the intersection of $\hat{G}_{2}$
with $\\{g=h=j=0\\}$. The latter defines on $G(5,7)$ the set of lines
intersecting the distinguished plane. We compute its degree in Macaulay2. ∎
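The simplicity criterion invoked in the proof can be made concrete: a 5-form in a 7-dimensional space is simple if and only if its complementary-index dual 2-vector has vanishing square. The following sketch (not from the paper; plain Python, multivectors as dictionaries of sorted index tuples) implements the criterion and checks, for instance, that $v=x_{7}$ does not lie on the conic of the proposition:

```python
def perm_sign(seq):
    """Sign of the permutation sorting a sequence of distinct integers."""
    sign = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def wedge(f, g):
    """Wedge product of multivectors encoded as {sorted index tuple: coeff}."""
    out = {}
    for S, a in f.items():
        for T, b in g.items():
            if set(S) & set(T):
                continue
            U = tuple(sorted(S + T))
            out[U] = out.get(U, 0) + a * b * perm_sign(S + T)
    return {k: v for k, v in out.items() if v != 0}

def dual(f, n=7):
    """Complementary-index dual w.r.t. e_1 ^ ... ^ e_n."""
    out = {}
    for S, a in f.items():
        T = tuple(i for i in range(1, n + 1) if i not in S)
        out[T] = a * perm_sign(S + T)
    return out

def is_simple_5form(beta):
    """A 5-form is simple iff its dual 2-vector b satisfies b ^ b = 0."""
    b = dual(beta)
    return wedge(b, b) == {}

# Sanity checks of the criterion:
assert is_simple_5form({(1, 2, 3, 4, 5): 1})
assert not is_simple_5form(wedge({(1, 2, 3): 1}, {(4, 5): 1, (6, 7): 1}))

# The degenerate four-form omega_0 of Section 3.
omega0 = {(1,2,3,7): 1, (4,5,6,7): 1, (2,3,5,6): 1, (1,3,4,6): 1}

# v = e_7 does not lie on the conic: e_7 ^ omega_0 is a non-simple 5-form.
assert not is_simple_5form(wedge({(7,): 1}, omega0))
```
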
###### Remark 3.6.
From the above proof it follows that the form $\omega_{0}$ defines a conic
$Q$ in $\mathbb{P}(V)$ by $Q=\\{\left[v\right]\in\mathbb{P}(V)\ |\
v\wedge\omega_{0}\in G(5,7)\\}$. Observe that any secant line of this conic is
an element of $\hat{G}_{2}$. Indeed let $v_{1},v_{2}\in V$ be two vectors such
that $\left[v_{1}\right],\ \left[v_{2}\right]\in Q$. Then
$v_{i}\wedge\omega_{0}$ defines a 5-space $\Pi_{i}\subset V$ for $i=1,2$.
Consider now the product $v_{1}\wedge v_{2}\wedge\omega_{0}$. If it is not
zero it defines a hyperplane in $V$. It follows that
$\dim(\Pi_{1}\cap\Pi_{2})=4$ and $\omega_{0}$ can then be written in the form
$\omega_{0}=u_{1}\wedge u_{2}\wedge u_{3}\wedge u_{4}+v_{1}\wedge
$v_{2}\wedge\alpha$. According to [1] this decomposition corresponds to a
non-general degenerate form $\omega_{0}$, giving us a contradiction.
The proof implies also that each $\mathbb{P}^{3}$ contained in $\hat{G}_{2}$
is a $\mathbb{P}^{3}$ of lines passing through a chosen $v$ in the conic and
contained in the projective four-space corresponding to $v\wedge\omega_{0}$.
###### Remark 3.7.
For any three points $v_{1},v_{2},v_{3}$ lying on the distinguished conic $Q$,
there exists a decomposition of $\omega_{0}$ into the sum of 3 simple forms
$\alpha_{1},\alpha_{2},\alpha_{3}$ such that
$v_{1}\wedge(\omega_{0}-\alpha_{1})=v_{3}\wedge(\omega_{0}-\alpha_{2})=v_{3}\wedge(\omega_{0}-\alpha_{3})=0$.
In other words for any triple of points on the conic there is a decomposition
with corresponding 4-spaces $P_{1}$,$P_{2}$,$P_{3}$ such that
$(v_{1},v_{2},v_{3})=(P_{1}\cap P_{2},P_{1}\cap P_{3},P_{2}\cap P_{3})$.
###### Remark 3.8.
The four-form defining $\hat{G}_{2}$ has a five dimensional family of
presentations as the sum of three simple forms corresponding to three
subspaces $P_{1}$, $P_{2}$, $P_{3}$; however all these presentations induce
the same space $\operatorname{span}\\{P_{1}\cap P_{2},P_{1}\cap
P_{3},P_{2}\cap P_{3}\\}$. This space corresponds to the only projective plane
in $\hat{G}_{2}$ consisting of lines contained in a projective plane. All other
planes contained in $\hat{G}_{2}$ consist of lines passing through a point and
contained in a three-space.
###### Proposition 3.9.
The projection of $\hat{G}_{2}$ from $F$ is a birational map onto
$\mathbb{P}^{5}$ whose inverse is the map $\varphi$ defined by the system of
quadrics in $\mathbb{P}^{5}$ containing a common twisted cubic.
###### Proof.
Observe that the considered projection from $F$ decomposes into a projection
from the plane spanned by $c,k,f$ and the canonical projection from
$\mathbb{P}^{1}\times\mathbb{P}^{5}$ onto $\mathbb{P}^{5}$. The latter
restricted to $P$ is the blow down of $\mathbb{P}^{1}\times\mathbb{P}^{3}$. It
follows that the map is an isomorphism between the open set given by $g=1$ and
its image in $\mathbb{P}^{5}$. Let us write down explicitly the inverse map.
Let $(x,y,z,t,u,v)$ be a coordinate system in $\mathbb{P}^{5}$. Consider a
twisted cubic curve given by $u=0$, $v=0$ and the minors of the matrix
$\left(\begin{array}[c]{ccc}x&y&z\\\ t&x&y\end{array}\right).$
Let $L$ be the system of quadrics containing the twisted cubic. Choose the
coordinates $(a,\dots,n)$ of $H^{0}(L)$ in the following way:
$(a,\dots,n)=(uy,vy,yt-x^{2},-vx,ux,y^{2}-xz,uv,-u^{2},uz,v^{2},-xy+zt,vz,vt,ut).$
We easily check that the corresponding map is well defined and inverse to the
projection by writing down the matrix defining $\hat{G}_{2}$ with substituted
coordinates.
$\left(\begin{array}[c]{ccccccc}0&0&ux&uv&-u^{2}&uz&uy\\\
0&0&vx&v^{2}&-uv&vz&vy\\\ -ux&-vx&0&vt&-ut&-xy+zt&yt-x^{2}\\\
-uv&-v^{2}&-vt&0&0&-vy&-vx\\\ u^{2}&uv&ut&0&0&uy&ux\\\ -uz&-vz&xy-
zt&vy&-uy&0&y^{2}-xz\\\ -uy&-vy&-yt+x^{2}&vx&-ux&-y^{2}+xz&0\\\
\end{array}\right)$
∎
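The last claim can be verified mechanically: with the coordinates substituted as above, every $4\times 4$ Pfaffian of the displayed matrix vanishes identically. A quick sympy sketch of this check (not from the paper; entries transcribed from the displayed matrix):

```python
from itertools import combinations
import sympy as sp

u, v, x, y, z, t = sp.symbols('u v x y z t')

# Upper triangle of the antisymmetric 7x7 matrix displayed in the proof
# (0-based indices); omitted entries are zero.
upper = {
    (0, 2): u*x, (0, 3): u*v, (0, 4): -u**2, (0, 5): u*z, (0, 6): u*y,
    (1, 2): v*x, (1, 3): v**2, (1, 4): -u*v, (1, 5): v*z, (1, 6): v*y,
    (2, 3): v*t, (2, 4): -u*t, (2, 5): -x*y + z*t, (2, 6): y*t - x**2,
    (3, 5): -v*y, (3, 6): -v*x,
    (4, 5): u*y, (4, 6): u*x,
    (5, 6): y**2 - x*z,
}
M = sp.zeros(7, 7)
for (i, j), val in upper.items():
    M[i, j], M[j, i] = val, -val

def pfaffian4(A, rows):
    """Pfaffian of the principal 4x4 antisymmetric submatrix on `rows`."""
    i, j, k, l = rows
    return A[i, j]*A[k, l] - A[i, k]*A[j, l] + A[i, l]*A[j, k]

# All 35 quartic Pfaffians vanish identically in (x, y, z, t, u, v), so the
# image of the map defined by the quadrics satisfies the equations of the
# degenerate variety.
assert all(sp.expand(pfaffian4(M, r)) == 0 for r in combinations(range(7), 4))
```
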
###### Remark 3.10.
The images of the 4-dimensional projective spaces containing the twisted cubic
form a pencil of smooth Fano fourfolds, any two of which meet in the plane which is
the image of the $\mathbb{P}^{3}$ spanned by the twisted cubic. The statement
follows from the fact that we can change coordinates in $\mathbb{P}^{5}$ and
hence we can assume that any two chosen Fano fourfolds obtained in this way
are $F$ and $F^{\prime}$ in Remark 3.1.
###### Lemma 3.11.
The singular locus of $\hat{G}_{2}$ is a plane.
###### Proof.
To see that the distinguished plane is singular it is enough to observe that
each line secant to the distinguished conic $Q$ is the common element of two
projective three-spaces contained in $\hat{G}_{2}$. These are the spaces of
lines corresponding to the points of intersections of the secant line with
$Q$. By the same argument it follows also that the divisor $D^{\prime}$ is
singular in the plane. To check smoothness outside the plane we argue as
follows. Clearly the system $|2H-E|$ on the blow up of
$\mathbb{P}^{5}$ in the twisted cubic separates points and tangent directions
outside the proper transform of the $\mathbb{P}^{3}$ spanned by the twisted
cubic. It remains to study the image of the exceptional divisor, which is
$D^{\prime}$. Now observe that for any $F_{1}$ and $F_{2}$ in the pencil of
Fano fourfolds described in Remark 3.10 there is a hyperplane in
$\mathbb{P}^{13}$ whose intersection with $\hat{G}_{2}$ decomposes in $F_{1}$,
$F_{2}$ and $D^{\prime}$. It follows that the singularities of $\hat{G}_{2}$
may occur only in the singularities of $D^{\prime}$ and in the base points of
the pencil. We hence need only to prove that $D^{\prime}$ is smooth outside
the distinguished plane. This follows directly from the description of the
complement of the plane in $D^{\prime}$ as a vector bundle over the product of
the twisted cubic with $\mathbb{P}^{1}$.
∎
###### Remark 3.12.
Observe that the map induced on $\mathbb{P}^{5}$ contracts exactly the secant
lines of the twisted cubic, each to a distinct point of the distinguished
$\mathbb{P}^{2}$.
###### Remark 3.13.
A generic codimension 2 section of $\hat{G}_{2}$ by 2 hypersurfaces is nodal.
We check this by taking a codimension 2 linear section and looking at its
singularity.
###### Lemma 3.14.
The variety $\hat{G}_{2}$ is a flat deformation of $G_{2}$.
###### Proof.
We observe that both varieties arise as linear sections of $G(2,V)$ by some
$\mathbb{P}^{10}$. Moreover we easily find an algebraic family with those as
fibers. Indeed consider the family parameterized by $t\in\mathbb{C}$ of
varieties given in $\mathbb{P}^{13}$ by the $4\times 4$ Pfaffians of the
matrices:
$\left(\begin{array}[]{ccccccc}0&-tf&e&g&h&i&a\\\ tf&0&-d&j&-g-tk&l&b\\\
-e&d&0&m&n&k&c\\\ -g&-j&-m&0&tc&-b&d\\\ -h&g+tk&-n&-tc&0&a&e\\\
-i&-l&-k&b&-a&0&f\\\ -a&-b&-c&-d&-e&-f&0\end{array}\right).$
For each $t\in\mathbb{C}$ the equations describe the variety of isotropic
five-spaces with respect to the form $\omega_{t}=x_{1}\wedge x_{2}\wedge
x_{3}\wedge x_{7}+x_{4}\wedge x_{5}\wedge x_{6}\wedge x_{7}+x_{2}\wedge
x_{3}\wedge x_{5}\wedge x_{6}+x_{1}\wedge x_{3}\wedge x_{4}\wedge
x_{6}+tx_{1}\wedge x_{2}\wedge x_{4}\wedge x_{5}$. The latter is a
non-degenerate four-form for $t\neq 0$. It follows that for $t\neq 0$ the
corresponding fiber of the family is isomorphic to $G_{2}$ and for $t=0$ it is
equal to $\hat{G}_{2}$.
The assertion then follows from the equality of their Hilbert polynomials,
which we compute using Macaulay2. ∎
## 4\. Further degenerations
Observe that one can further degenerate $\hat{G}_{2}$ by considering
degenerations of the twisted cubic $C$ in $\mathbb{P}^{5}$. In particular the
twisted cubic can degenerate to one of the following:
* •
the curve $C_{0}$ which is the sum of a smooth conic and a line intersecting
it in a point;
* •
a chain $C_{1}$ of three lines spanning a $\mathbb{P}^{3}$;
* •
a curve $C_{2}$ consisting of three lines passing through a common point and
spanning a $\mathbb{P}^{3}$.
Let us consider the three cases separately.
Let us start with the conic and the line. In this case we can assume that the
ideal of $C_{0}$ is given in $\mathbb{P}^{5}$ by $\\{u=0,v=0\\}$ and the
minors of the matrix
$\left(\begin{array}[c]{ccc}x,y,z\\\ t,x,0\end{array}\right).$
Then the image of $\mathbb{P}^{5}$ by the system of quadrics containing
$C_{0}$ can also be written as a section of $G(2,7)$ consisting of two-forms
killed by the four-form $\omega_{1}=x_{1}\wedge x_{2}\wedge x_{3}\wedge
x_{7}+x_{2}\wedge x_{3}\wedge x_{5}\wedge x_{6}+x_{1}\wedge x_{3}\wedge
x_{4}\wedge x_{6}$. To find the deformation family we consider the family of
varieties isomorphic to $\hat{G}_{2}$ corresponding to twisted cubics given by
$\\{u=0,v=0\\}$ and the minors of the matrix
$\left(\begin{array}[c]{ccc}x,y,z\\\ t,x,\lambda y\end{array}\right).$
We conclude by comparing Hilbert polynomials.
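That the degenerated ideal really cuts out a conic plus an incident line can be checked directly. A small sympy sketch (not from the paper; the coordinates $u,v$ of the ambient $\mathbb{P}^{5}$ are already set to zero, and $s,w$ are hypothetical parameters introduced only for this check):

```python
import sympy as sp

x, y, z, t, s, w = sp.symbols('x y z t s w')

# 2x2 minors of [[x, y, z], [t, x, 0]]: the ideal of C_0 inside {u = v = 0}.
g1 = x*x - y*t
g2 = -z*t
g3 = -z*x

# Conic component {z = 0, x^2 = yt}, parametrized by (x, y, t) = (s*w, s^2, w^2).
conic = {x: s*w, y: s**2, z: 0, t: w**2}
assert all(sp.expand(g.subs(conic)) == 0 for g in (g1, g2, g3))

# Line component {x = t = 0}, with free coordinates (y, z).
line = {x: 0, y: s, z: w, t: 0}
assert all(sp.expand(g.subs(line)) == 0 for g in (g1, g2, g3))

# Common points satisfy x = z = t = 0, leaving the single intersection point
# (x : y : z : t) = (0 : 1 : 0 : 0), so the conic and the line meet in a point.
```
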
###### Remark 4.1.
The forms $\omega_{0}$ and $\omega_{1}$ represent the only two orbits of forms
in $\bigwedge^{4}V$ whose corresponding isotropic varieties are flat
degenerations of $G_{2}$. To prove it we use the representatives of all 9
orbits contained in [1] and check one by one the invariants of varieties they
define using Macaulay2. In all other cases the dimension of the isotropic
variety is higher.
In the case of a chain of lines the situation is a bit different.
###### Proposition 4.2.
The variety $G_{2}$ admits a degeneration over a disc to a Gorenstein toric
Fano 5-fold whose only singularities are 3 conifold singularities in
codimension $3$ toric strata of degree 1.
###### Proof.
As $\hat{G}_{2}$ is a degeneration of $G_{2}$ over a disc it is enough to
prove that $\hat{G}_{2}$ admits such a degeneration. We know that the latter
is the image of $\mathbb{P}^{5}$ by the map defined by the system of quadrics
containing a twisted cubic $C$. Let us choose a coordinate system
$(x,y,z,t,u,v)$ such that $C$ is given in $\mathbb{P}^{5}$ by $\\{u=0,v=0\\}$
and the minors of the matrix
$\left(\begin{array}[c]{ccc}x,y,z\\\ t,x,y\end{array}\right),$
then choose the chain of lines $C_{1}$ to be defined by $\\{u=0,v=0\\}$ and
the minors of the matrix
$\left(\begin{array}[c]{ccc}0,y,z\\\ t,x,0\end{array}\right).$
Let $T$ be the variety in $\mathbb{P}^{13}$ defined as the closure of the
image of $\mathbb{P}^{5}$ by the system of quadrics containing $C_{1}$. It is
an anti-canonically embedded toric variety with corresponding dual reflexive
polytope:
$\begin{array}[]{ccccccc}(&0&0&1&0&0&)\\\ (&0&0&0&1&0&)\\\
(&-1&-1&-1&-1&-1&)\\\ (&0&0&0&0&1&)\\\ (&1&0&0&0&0&)\\\ (&0&1&0&0&0&)\\\
(&1&1&1&0&1&)\\\ (&0&0&-1&0&-1&)\\\ (&1&1&0&1&1&)\end{array}$
We check using Magma that the singular locus of the corresponding toric variety
consists of three conifold singularities along codimension 3 toric strata of
degree 1.
Consider the family of quadrics parameterized by $\lambda$ containing the
curves $C_{\lambda}$ defined by $\\{u=0,v=0\\}$ and the minors of the matrix
$\left(\begin{array}[c]{ccc}\lambda x,y,z\\\ t,x,\lambda y\end{array}\right).$
For each $\lambda\neq 0$ the equations of the image of $\mathbb{P}^{5}$ by the
corresponding system of quadrics agree with the minors of the matrix:
$\left(\begin{array}[]{ccccccc}0&0&\lambda e&g&h&i&a\\\ 0&0&-\lambda
d&j&-g&l&b\\\ -\lambda e&\lambda d&0&m&n&k&c\\\ -g&-j&-m&0&0&-\lambda b&d\\\
-h&g&-n&0&0&\lambda a&e\\\ -i&-l&-k&\lambda b&-\lambda a&0&f\\\
-a&-b&-c&-d&-e&-f&0\end{array}\right),$
in the coordinates
$(a,\dots,n)=(uy,vy,yt-x^{2},-vx,ux,y^{2}-xz,uv,-u^{2},uz,v^{2},-xy+zt,vz,vt,ut).$
The latter define a variety isomorphic to $\hat{G}_{2}$ for each $\lambda\neq 0$. It
is easy to check that this family degenerates to $T$ when $\lambda$ tends to
$0$. By comparing Hilbert polynomials we obtain that it is a flat degeneration
of $\hat{G}_{2}$, hence of $G_{2}$. ∎
In the case of the twisted cubic degenerating to three lines meeting in a
point we obtain a Gorenstein toric Fano 5-fold with 4 singular strata of
codimension 3 and degree 1 which is a flat deformation of $G_{2}$. The
corresponding dual reflexive polytope is:
$\begin{array}[c]{ccccccc}(&-1&-1&-1&-1&-1&)\\\ (&0&0&1&0&0&)\\\
(&0&0&0&1&0&)\\\ (&0&0&0&0&1&)\\\ (&1&0&0&0&0&)\\\ (&0&1&0&0&0&)\\\
(&1&1&1&1&0&)\\\ (&1&1&1&0&1&)\\\ (&1&1&0&1&1&)\\\ (&2&2&1&1&1&)\end{array}$
### 4.1. Application to mirror symmetry
One of the methods of computing mirrors to Calabi-Yau threefolds is to find
their degenerations to complete intersections in Gorenstein toric Fano
varieties. Let us present the method, contained in [3], in our context. We aim
to use the constructed toric degeneration to compute the mirror of the Calabi-
Yau threefold $X_{36}$. As the construction is still partially conjectural we
omit details in what follows.
Consider the degeneration of $G_{2}$ to $T$. Recall that $X_{36}$ is a generic
intersection of $G_{2}$ with a hyperplane and a quadric. On the other hand
when we intersect $T$ with a generic hyperplane and a generic quadric we get a
Calabi-Yau threefold $\hat{Y}$ with 6 nodes. It follows that $\hat{Y}$ is a
flat degeneration of $X_{36}$. Moreover $\hat{Y}$ admits a small resolution of
singularities, which is also a complete intersection in a toric variety. We
shall denote it by $Y$. The variety $Y$ is a smooth Calabi-Yau threefold
connected to $X_{36}$ by a conifold transition. Due to results of [2] the variety
$Y$ has a mirror family $\mathcal{Y}^{*}$ with generic element denoted by
$Y^{*}$. The latter is found explicitly as a complete intersection in a toric
variety obtained from the description of $Y$ by the method of nef partitions.
The authors in [2] prove that there is in fact a canonical isomorphism between
$H^{1,1}(Y)$ and $H^{1,2}(Y^{*})$. Let us consider the one parameter subfamily
$\mathcal{X}^{*}$ of the family $\mathcal{Y}^{*}$ corresponding to the
subspace of $H^{1,2}$ consisting of elements associated by the above
isomorphism to the pullbacks of Cartier divisors from $\hat{Y}$.
The delicate part of this mirror construction of $X_{36}$ is to prove that a
generic Calabi–Yau threefold from the subfamily $\mathcal{X}^{*}$ has 6 nodes
satisfying an appropriate number of relations. This is only a conjecture ([3,
Conj. 6.1.2]) which remains open, also in this case. Assume
that the conjecture is true. We then obtain a construction of the mirror
family of $X_{36}$ as a family of small resolutions of the elements of the
considered subfamily.
## 5\. A geometric bi-transition
In this section we construct two geometric transitions between Calabi–Yau
threefolds based on the map from Proposition 3.9.
Let us consider a generic section $X$ of $\hat{G_{2}}$ by a hyperplane and a
quadric. Observe that $X$ has exactly two nodes and admits a smoothing to a
Borcea Calabi-Yau threefold $X_{36}$ of degree 36. Observe moreover that $X$
contains a system of smooth K3 surfaces, any two of which intersect exactly in the
two nodes. Namely these are the intersections of the fourfolds in the pencil of
Remark 3.10 with the quadric and the hyperplane. Blowing up any of them gives a resolution of
singularities of $X$. Let us consider the second resolution i.e. the one with
the exceptional lines flopped. It is a Calabi-Yau threefold $Z$ with a
fibration by K3 surfaces of genus 6 and generic Picard number 1. Observe
moreover that according to proposition 3.9 the map $\varphi^{-1}$ factors
through the blow up $\tilde{\mathbb{P}}^{5}$ of $\mathbb{P}^{5}$ in the
twisted cubic $C$. Let $E$ be the exceptional divisor of the blow up and $H$
the pullback of the hyperplane from $\mathbb{P}^{5}$. In this context $Z$ is
the intersection of two generic divisors of type $|2H-E|$ and $|4H-2E|$
respectively.
###### Lemma 5.1.
The Picard number $\rho(Z)=2$.
###### Proof.
We follow the idea of [11]. Observe that both systems $|2H-E|$ and $|4H-2E|$
are base point free and big on $\tilde{\mathbb{P}}^{5}$. On
$\tilde{\mathbb{P}}^{5}$ both divisors contract the proper transform of
$\mathbb{P}^{3}$ to $\mathbb{P}^{2}$. It follows by [17, Thm. 6] that the
Picard group of $Z$ is isomorphic to the Picard group of
$\tilde{\mathbb{P}}^{5}$ which is of rank 2. ∎
Moreover $Z$ contains a divisor $D^{\prime}$ fibered by conics. On one hand
$D^{\prime}$ is the proper transform of the divisor $D$ from Proposition 3.3
under the considered resolution of singularities; on the other hand $D^{\prime}$
is the intersection of $Z$ with the exceptional divisor $E$. It follows that
$D^{\prime}$ is contracted to a twisted cubic in $\mathbb{P}^{5}$ by the
blowing down of $E$, and the contraction is primitive by Lemma 5.1. It follows
that $Z$ is connected with $X$ by a conifold transition involving a primitive
contraction of 2 lines, and with the complete intersection
$Y_{2,4}\subset\mathbb{P}^{5}$ by a geometric transition involving a
type III primitive contraction.
###### Remark 5.2.
We can look also from the other direction. Let $C$ be a twisted cubic, $Q_{2}$
a generic quadric containing it, and $Q_{4}$ a generic quartic singular along
it. Then the intersection $Q_{2}\cap Q_{4}$ contains the double twisted cubic
and two lines secant to it. Under the map defined by the system of quadrics
containing $C$, the singular cubic is blown up and the two secant lines are
contracted to 2 nodes.
## 6\. Polarized K3 surfaces of genus 10 with a $g^{1}_{5}$
In this section we investigate polarized K3 surfaces of genus 10 which appear
as sections of the varieties studied in this paper.
###### Proposition 6.1.
A polarized K3 surface $(S,L)$ which is a proper linear section of $G_{2}$
does not admit a $g^{1}_{5}$.
###### Proof.
Let us first prove the following lemma:
###### Lemma 6.2.
Let $p_{1},\dots,p_{5}$ be five points on $G(2,V)$ of which no two lie on a
line in $G(2,V)$ and no three lie on a conic in $G(2,V)$ and such that they
span a 3-space $P$. Then $\\{p_{1},\dots,p_{5}\\}\subset G(2,W)\subset G(2,V)$
for some five dimensional subspace $W$ of $V$.
###### Proof.
Let $p_{1},\dots,p_{5}$ correspond to planes $U_{1},\dots,U_{5}\subset V$. By
Lemma 3.4 we may assume that no four of these points lie on a plane. Assume
without loss of generality that $p_{1},\dots,p_{4}$ span the 3-space $P$. If
$\dim(U_{1}+U_{2}+U_{3}+U_{4})=6$ the assertion follows from [15, Lemma 2.3].
We need to exclude the case $U_{1}+U_{2}+U_{3}+U_{4}=V$. In this case
(possibly changing the choice of $p_{1},\dots,p_{4}$ from the set
$\\{p_{1},\dots,p_{5}\\}$) we may choose a basis $\\{v_{1},\dots,v_{7}\\}$ in
one of the two following ways: $v_{1},v_{2}\in U_{1}$,
$v_{3},v_{4}\in U_{2}$, $v_{5},v_{6}\in U_{3}$, and either
$v_{7},v_{1}+v_{3}+v_{5}\in U_{4}$ or $v_{7},v_{1}+v_{3}\in U_{4}$. Each point
of $P$ is then represented by a bi-vector
$w=av_{1}\wedge v_{2}+bv_{3}\wedge v_{4}+cv_{5}\wedge
v_{6}+dv_{7}\wedge(v_{1}+v_{3}+v_{5}),$
or
$w=av_{1}\wedge v_{2}+bv_{3}\wedge v_{4}+cv_{5}\wedge
v_{6}+dv_{7}\wedge(v_{1}+v_{3}),$
for some $a,b,c,d\in\mathbb{C}$. A simple calculation shows that $w^{2}=0$ if
and only if exactly one of $a,b,c,d$ is nonzero, which contradicts the
existence of $p_{5}$ as in the assumption. ∎
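The calculation behind the last step is easy to reproduce. The following sketch (not from the paper; plain Python with sympy, bivectors as dictionaries of sorted index tuples) expands $w\wedge w$ for the first of the two bases and confirms that its coefficients are exactly the pairwise products of $a,b,c,d$ up to a factor $\pm 2$:

```python
from itertools import combinations
import sympy as sp

def perm_sign(seq):
    """Sign of the permutation sorting a sequence of distinct integers."""
    sign = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def wedge(f, g):
    """Wedge product of multivectors encoded as {sorted index tuple: coeff}."""
    out = {}
    for S, p in f.items():
        for T, q in g.items():
            if set(S) & set(T):
                continue
            U = tuple(sorted(S + T))
            out[U] = sp.expand(out.get(U, 0) + p * q * perm_sign(S + T))
    return {k: c for k, c in out.items() if c != 0}

a, b, c, d = sp.symbols('a b c d')

# w = a v1^v2 + b v3^v4 + c v5^v6 + d v7^(v1+v3+v5); note vi^v7 = -v7^vi.
w = {(1, 2): a, (3, 4): b, (5, 6): c, (1, 7): -d, (3, 7): -d, (5, 7): -d}
sq = wedge(w, w)

# Each coefficient of w^2 is (up to +-2) a product of two distinct parameters,
# and all six pairwise products ab, ..., cd occur, so w^2 = 0 forces every
# pairwise product to vanish: at most one of a, b, c, d can be nonzero.
# (The second basis, with v7^(v1+v3), behaves the same way.)
pairs = {frozenset(pr) for pr in combinations((a, b, c, d), 2)}
seen = set()
for coeff in sq.values():
    factors = frozenset(s for s in (a, b, c, d) if coeff.has(s))
    assert len(factors) == 2
    seen.add(factors)
assert seen == pairs
```
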
Now assume that $L$ has a $g^{1}_{5}$. It follows from [15, (2.7)] that it is
given by five points on $L$ spanning a $\mathbb{P}^{3}$. By Lemma 6.2 these
points are contained in a section of $G_{2}$ with a $G(2,5)$. We conclude by
[13, Lem. 3.3] as five isolated points cannot be a linear section of a cubic
scroll by a $\mathbb{P}^{3}$. ∎
###### Proposition 6.3.
Every smooth polarized surface $(S,L)$ which appears as a complete linear
section of $\hat{G}_{2}$ is a $K3$ surface with a $g^{1}_{5}$.
###### Proof.
As $\hat{G}_{2}$ is a flat deformation of $G_{2}$, its smooth complete linear
sections of dimension 2 are K3 surfaces of genus 10. Moreover each of these
surfaces contains an elliptic curve of degree 5 which is a section of the Fano
fourfold $F$. ∎
Let us consider the converse. Let $(S,L)$ be a polarized K3 surface of genus 10
such that $L$ admits a $g^{1}_{5}$ induced by an elliptic curve $E$ and does
not admit a $g^{1}_{4}$. By the theorem of Green and Lazarsfeld [9] this is
the case for instance when $L$ admits a $g^{1}_{5}$ and admits neither
a $g^{1}_{4}$ nor a $g^{2}_{7}$.
We have $E.L=5$ and $E^{2}=0$ hence $h^{0}(O(L)|_{E})=5$ and
$h^{0}(O(L-E)|_{E})=5$. It follows from the standard exact sequence that
$h^{0}(O(L-E))\geq 6$ and $h^{0}(O(L-2E))\geq 1$. We claim that $|L-E|$ is
base point free: Indeed, denote by $D$ its moving part and $\Delta$ its fixed
part. Clearly $|D-E|$ is effective as $|L-2E|$ is. Observe that $D$ cannot be
of the form $kE^{\prime}$ with $E^{\prime}$ an elliptic curve: as
$D-E$ is effective we would have $E^{\prime}=E$, hence $k\leq 3$, which would
contradict $h^{0}(O(L-E))\geq 6$. Hence we may assume that $D$ is a smooth
irreducible curve and $h^{1}(O(D))=0$. By Riemann-Roch we have:
$4+D^{2}=2h^{0}(O(D))=2h^{0}(O(D+\Delta))\geq 4+(D+\Delta)^{2}$
and analogously:
$4=4+E^{2}=2h^{0}(O(E))=2h^{0}(O(E+\Delta))\geq 4+(E+\Delta)^{2},$
because $|D-E|$ being effective implies $\Delta$ is also the fixed part of
$|E+\Delta|$. It follows that $L.\Delta=(D+E+\Delta).\Delta\leq 0$, which
contradicts ampleness of $L$.
It follows from the claim that $|L-E|$ is big, nef, base point free and
$h^{0}(O(L-E))=6$. Observe that $|L-E|$ is not hyper-elliptic. Indeed, first
since $(L-E).E=5$, it cannot be a double genus 2 curve. Assume now that there
exists an elliptic curve $E^{\prime}$ such that $E^{\prime}.(L-E)=2$; then
$L.E^{\prime}\leq 4$, because $|L-2E|$ being effective implies
$(L-2E).E^{\prime}\geq 0$. This contradicts the nonexistence of a $g^{1}_{4}$ on $L$.
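For completeness, the intersection-number computation behind this bound (using $(L-E).E^{\prime}=2$ together with the effectivity of $|L-2E|$) reads:

```latex
L.E' = (L-E).E' + E.E' = 2 + E.E',
\qquad
0 \le (L-2E).E' = L.E' - 2\,E.E' = L.E' - 2\,(L.E'-2) = 4 - L.E',
```

so that $L.E^{\prime}\leq 4$ and $E^{\prime}$ would cut out a $g^{1}_{4}$ on the curves in $|L|$.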
Hence $|L-E|$ defines a birational morphism to a surface of degree 8 in
$\mathbb{P}^{5}$. Observe moreover that the image of an element in $|L-2E|$ is
a curve of degree 3 spanning a $\mathbb{P}^{3}$. The latter follows from the
fact that by the standard exact sequence
$0\longrightarrow O(E)\longrightarrow O(L-E)\longrightarrow
O_{\Gamma}(L-E)\longrightarrow 0$
for $\Gamma\in|L-2E|$ we have $h^{0}((L-E)|_{\Gamma})=4$.
Next we have two possibilities:
1. (1)
If the system $|L-E|$ is trigonal, then the image $\varphi_{|L-E|}(S)$ is
contained in a cubic threefold scroll. The latter is either the Segre
embedding of $\mathbb{P}^{1}\times\mathbb{P}^{2}$ or a cone over a cubic
rational normal scroll surface.
2. (2)
The surface $\varphi_{|L-E|}(S)$ is a complete intersection of three quadrics.
Moreover for the image $C=\varphi_{|L-E|}(\Gamma)$ we have the following
possibilities:
* •
Either $C$ is a twisted cubic,
* •
or $C$ is the union of a conic and a line,
* •
or $C$ is the union of three lines.
Consider now the composition $\psi$ of $\varphi_{|L-E|}$ with the birational
map given by quadrics in $\mathbb{P}^{5}$ containing $C$. It is given by a
subsystem of $|L|=|2(L-E)-(L-2E)|$. Moreover in every case above $\psi(S)$
spans a $\mathbb{P}^{10}$, because in each case the space of quadrics
containing $\varphi_{|L-E|}(S)$ is three dimensional. It follows that $\psi$
is defined on $S$ by the complete linear system $|L|$. Finally $(S,L)$ is
either a proper linear section of one of the three considered degenerations of
$G_{2}$ or a divisor in the blow up of a cubic scroll in a cubic curve.
In particular we have the following.
###### Proposition 6.4.
Let $(S,L)$ be a polarized K3 surface of genus 10 such that $L$ admits a
$g^{1}_{5}$ induced by an elliptic curve $E$ but no $g^{1}_{4}$. If moreover
$|L-E|$ is not trigonal, then $(S,L)$ is a proper linear section of one of the
four considered degenerations of $G_{2}$.
###### Remark 6.5.
The system $|L-E|$ is trigonal on $S$ if and only if there exists an elliptic
curve $E^{\prime}$ on $S$ such that one of the following holds:
1. (1)
$L.E^{\prime}=6$ and $E.E^{\prime}=3$
2. (2)
$L.E^{\prime}=5$ and $E.E^{\prime}=2$.
Now observe that in both cases we obtain a second $g^{1}_{5}$ on $L$. In the
first case it is given by the restriction of $E^{\prime}$ and in the second we
get at least a $g^{2}_{7}$ by restricting $|L-E-E^{\prime}|$, the latter gives
rise to a $g^{1}_{5}$ by composing the map with a projection from the singular
point of the image by the $g^{2}_{7}$ (there is a singular point by Noether’s
genus formula).
We can now easily prove Proposition 1.1.
###### Proof.
Indeed, the existence of exactly one $g^{1}_{5}$ excludes both the existence
of a $g^{2}_{7}$ and of a $g^{1}_{4}$, hence the $g^{1}_{5}$ is induced by an
elliptic pencil $|E|$ on $S$. Moreover, by Remark 6.5 we see that $|L-E|$ is
then trigonal. ∎
Proposition 1.2 follows directly from Proposition 6.4 and the fact that in
the more degenerate case we clearly get a higher Picard number due to the
decomposition of $C$.
###### Remark 6.6.
The K3 surfaces obtained as sections of the considered varieties fit into the
case $g=10$, $c=3$, $D^{2}=0$, and scroll of type $(2,1,1,1,1)$ from [10]
(observe that there is a misprint in the table there: $H^{0}(L-2D)$ should be
1 in this case). The embedding in the scroll corresponds to the induced
embedding in the projection of $\hat{G}_{2}$ from the distinguished plane.
## 7\. Acknowledgements
I would like to thank K. Ranestad, J. Buczyński and G. Kapustka for their help.
I acknowledge also the referee for useful comments. The project was partially
supported by SNF, No 200020-119437/1 and by MNSiW, N N201 414539.
## References
* [1] H. Abo, G. Ottaviani, Ch. Peterson, _Non-Defectivity of Grassmannians of planes_ , arXiv:0901.2601v1 [math.AG], to appear in J. Algebraic Geom.
* [2] V. V. Batyrev, L. A. Borisov, _On Calabi-Yau Complete Intersections in Toric Varieties_ , in “Higher Dimensional Complex Geometry”, M. Andreatta and T. Peternell eds., 1996, 37-65.
* [3] V. V. Batyrev, I. Ciocan-Fontanine, B. Kim, D. van Straten, _Conifold transitions and mirror symmetry for Calabi-Yau complete intersections in Grassmannians_. Nuclear Phys. B 514 (1998), no. 3, 640–666.
* [4] V. V. Batyrev, I. Ciocan-Fontanine, B. Kim, D. van Straten, _Mirror symmetry and toric degenerations of partial flag manifolds_. Acta Math. 184 (2000), no. 1, 1–39.
* [5] V. V. Batyrev, M. Kreuzer, _Constructing new Calabi-Yau 3-folds and their mirrors via conifold transitions_ , preprint 2008, arXiv:0802.3376v2.
* [6] W. Bosma, J. Cannon, C. Playoust, _The Magma algebra system. I. The user language_ , J. Symbolic Comput., 24 (1997), 235–265.
* [7] M. Brion, V. Alexeev, _Toric degenerations of spherical varieties_ , Selecta Math. (N.S.) 10 (2004), 453–478.
* [8] D. R. Grayson, M. E. Stillman: _Macaulay 2, a software system for research in algebraic geometry_ , Available at http://www.math.uiuc.edu/Macaulay2/
* [9] M. Green, R. Lazarsfeld, _Special divisors on curves on a K3 surface_ , Invent. Math. 89 (1987), 357–370.
* [10] T. Johnsen, A. L. Knutsen, _$K3$ projective models in scrolls_. Lecture Notes in Mathematics, 1842. Springer-Verlag, Berlin, 2004. viii+164
* [11] G. Kapustka, _Primitive contractions of Calabi–Yau threefolds II_ , J. Lond. Math. Soc. (2) 79 (2009), no. 1, 259–271.
* [12] M. Kapustka, _Geometric transitions between Calabi-Yau threefolds related to Kustin-Miller unprojections_ , arXiv:1005.5558v1 [math.AG].
* [13] M. Kapustka, K. Ranestad, _Vector bundles on Fano varieties of genus 10_ , preprint 2010, arXiv:1005.5528v1 [math.AG].
* [14] S. Mukai, _Curves, K3 surfaces and Fano 3-folds of genus $\leq 10$_, in Algebraic Geometry and Commutative Algebra in Honor of Masayoshi Nagata, pp. 357-377, Kinokuniya, Tokyo, 1988.
* [15] S. Mukai, _Curves and Grassmannians._ Algebraic geometry and related topics (Inchon, 1992), 19–40, Conf. Proc. Lecture Notes Algebraic Geom., I, Int. Press, Cambridge, MA, 1993.
* [16] S. Mukai, _Non-abelian Brill-Noether theory and Fano 3-folds_ [translation of Sugaku 49 (1997), no. 1, 1–24]. Sugaku Expositions 14 (2001), no. 2, 125–153.
* [17] G. V. Ravindra, V. Srinivas, _The Grothendieck-Lefschetz theorem for normal projective varieties_ , J. Algebraic Geom. 15 (2006), 563–590.
|
arxiv-papers | 2011-03-23T20:34:09 | 2024-09-04T02:49:17.914530
{ "license": "Public Domain", "authors": "Michal Kapustka", "submitter": "Michal Kapustka", "url": "https://arxiv.org/abs/1103.4623" }
|
1103.4771
|
# Lensing and Waveguiding of Ultraslow Pulses in an Atomic Bose-Einstein
Condensate
Devrim Tarhan (dtarhan@harran.edu.tr), Alphan Sennaroglu, Özgür E. Müstecaplıog̃lu
Department of Physics, Harran University, Osmanbey Yerleşkesi, Şanlıurfa, 63300, Turkey; Department of Physics, Koç University, Rumelifeneri yolu, Sarıyer, Istanbul, 34450, Turkey; Institute of Quantum Electronics, ETH Zurich, Wolfgang-Pauli-Strasse 16, CH-8093 Zurich, Switzerland
###### Abstract
We investigate lensing and waveguiding properties of an atomic Bose-Einstein
condensate for an ultraslow pulse generated by the electromagnetically induced
transparency method. We show that a significant time delay can be controllably
introduced between the lensed and guided components of the ultraslow pulse. In
addition, we present how the number of guided modes supported by the
condensate and the focal length can be controlled by the trap parameters or
temperature.
###### keywords:
Bose-Einstein condensate , Electromagnetically induced transparency ,
ultraslow pulse propagation , Waveguides , Lenses
###### PACS:
03.75.Nt , 42.50.Gy , 41.20.Jb , 42.82.Et , 42.79.Bh
††journal: Optics Communications
## 1 Introduction
Quantum interference effects, such as electromagnetically induced transparency
(EIT) [1, 2], can produce considerable changes in the optical properties of
matter and have been utilized to demonstrate ultraslow light propagation
through an atomic Bose-Einstein condensate (BEC) [3]. This has promised a
variety of new and appealing applications in coherent optical information
storage as well as in quantum information processing. However, the potential
of information storage in such systems is shadowed by their inherently low
data rates. To overcome this challenge, exploitation of transverse directions
for a multimode optical memory via three dimensional waveguiding of slow EIT
pulse [4] has been recently suggested for BECs [5]. Transverse confinement of
slow light is also quintessential for various proposals of high performance
intracavity and nanofiber slow light schemes (See e.g. Ref. [6] and references
therein). Furthermore, the temperature dependence of the group velocity of
ultraslow light in a cold gas has been investigated for interacting Bose gases [7].
A recent experiment, on the other hand, has drawn attention to the fact that
ultracold atomic systems with graded index profiles may not necessarily have
perfect transverse confinement due to simultaneously competing effects of lensing and
waveguiding [8]. The experiment is based upon a recoil-induced resonance (RIR)
in the high gain regime, employed for an ultracold atomic system as a graded
index waveguiding medium. As a result of large core radius with high
refractive index contrast, and strong dispersion due to RIR, radially confined
multimode slow light propagation has been realized [8]. As also noted in the
experiment, a promising and intriguing regime would have few modes where
guided nonlinear optical phenomena could happen [8].
It has already been shown that the few mode regime of ultraslow waveguiding
can be accessed by taking advantage of the sharp density profile of BEC and
the strong dispersion provided by the usual EIT [5]. Present work aims to
reconsider this result by taking into account the simultaneous lensing
component. On one hand, the lensing could be imagined as a disadvantage
against reliable high capacity quantum memory applications. Our investigations
do aid to comprehend the conditions of efficient transverse confinement. On
the other hand, we argue that because the lensing component is also strongly
delayed with a time scale that can be observably large relative to the
waveguiding modes, such spatially resolved slow pulse splitting can offer
intriguing possibilities for creating and manipulating flying bits of
information, especially in the nonlinear regime. Indeed, earlier proposals to
split ultraslow pulses in some degrees of freedom (typically polarization),
face many challenges of complicated multi-level schemes, multi-EIT windows,
and high external fields [9]. Quite recently birefringent lensing in atomic
gaseous media in EIT setting has been discussed [10]. The proposed splitting
of lensing and guiding modes is both intuitively and technically clear, and
easy to implement in EIT setting, analogous to the RIR experiment.
The paper is organized as follows: After describing our model system briefly
in Sec. 2, EIT scheme for an interacting BEC is presented in Sec. 3.
Subsequently we focus on the lensing effect of the ultraslow pulse while
briefly reviewing known waveguiding results in Sec. 4. The main results
and their discussion are given in Sec. 5. Finally, we conclude in Sec. 6.
## 2 Model System
We consider oblique incidence of a Gaussian beam pulse onto the cigar shaped
condensate as depicted in Fig. 1. Due to the particular shape of the
condensate, two fractions of the incident Gaussian beam, one along the long
($z$) axis of the condensate and one parallel to the short ($r$) axis, exhibit
different propagation characteristics. The $r$-fraction exhibits the lensing
effect and is focused at a focal length ($f$), while the axial component is
guided in a multimode or single-mode formation.
Figure 1: Lensing and waveguiding effects in a slow Gaussian beam scheme with
an ultracold atomic system.
The angle of incidence ($\theta$) controls the fraction of the probe power
converted either to the lensing mode or to the guiding modes. When both
lensing and waveguiding simultaneously happen in an ultraslow pulse
propagation setup, an intriguing possibility arises. Different density
profiles along the radial and axial directions translate to different time
delays of the focused and guided modes. Due to the significant difference in
the optical path lengths of the guided and focused components, an adjustable
relative time delay can be generated between these two components. As a
result, these two components become spatially separated. The fraction of the
beam parallel to the short axis of the condensate undergoes propagation in a
lens-like quadratic index medium, resulting in a change in the spot size, or
beam waist, of the output beam. We call this the lensing effect and the
corresponding fraction the lensed fraction. The fraction of the beam
propagating along the long axis propagates in the weakly-guided regime of a
graded index medium and can be described in terms of LP modes. This is called
the guiding effect and the corresponding fraction the guided fraction.
We use the usual Gaussian beam transformation methods under paraxial
approximation to estimate the focal length. In addition our aim is to estimate
time delay between these two fractions. The output lensed and guided fractions
of the incident beam are delayed in time relative to each other. In general
temporal splitting depends on modal, material and waveguide dispersions. For a
simple estimation of relative time delay of these components, we consider only
the lowest order modes in the lensed and the guided fractions, and take the
optical paths as the effective lengths of the corresponding short and long
axes of the condensate. In this case we ignore the small contributions of
modal and waveguide dispersions and determine the group velocity, same for
both fractions, by assuming a constant peak density of the condensate in the
material dispersion relation.
## 3 EIT scheme for an interacting BEC
A Bose gas at low temperature can be decomposed into a condensate part and a
thermal part. Following Ref. [11], the density profile of the BEC can be written
as $\rho(\vec{r})=\rho_{c}(\vec{r})+\rho_{{\rm th}}(\vec{r})$, where $\rho_{{\rm
c}}(\vec{r})=[(\mu(T)-V(\vec{r}))/U_{0}]\Theta(\mu-V(\vec{r}))$ is the density
of the condensed atoms and $\rho_{{\rm th}}$ is the density of the thermal
ideal Bose gas. Here $U_{0}=4\pi\hbar^{2}a_{s}/m$, where $m$ is the atomic mass
and $a_{s}$ is the atomic s-wave scattering length; $\Theta(\cdot)$ is the
Heaviside step function and $T_{C}$ is the critical temperature. The external trapping
potential is $V(\vec{r})=(m/2)(\omega_{r}^{2}r^{2}+\omega_{z}^{2}z^{2})$ with
trap frequencies $\omega_{r},\omega_{z}$ for the radial and axial directions,
respectively. At temperatures below $T_{c}$, the chemical potential $\mu$ is
evaluated by $\mu(T)=\mu_{TF}(N_{0}/N)^{2/5}$, where $\mu_{TF}$ is the
chemical potential obtained under Thomas-Fermi approximation,
$\mu_{TF}=((\hbar\omega_{t})/2)(15Na_{s}/a_{h})^{2/5}$, with
$\omega_{t}=(\omega_{z}\omega_{r}^{2})^{1/3}$ and
$a_{h}=\sqrt{\hbar/(m\omega_{t})}$, the average harmonic
oscillator length scale. The condensate fraction is given by
$N_{0}/N=1-x^{3}-s\zeta(2)/\zeta(3)x^{2}(1-x^{3})^{2/5}$, where $x=T/T_{c}$
and $\zeta$ is the Riemann zeta function. The scaling parameter $s$ is given
by $s=\mu_{TF}/k_{B}T_{C}=(1/2)\zeta(3)^{1/3}(15N^{1/6}a_{s}/a_{h})^{2/5}$.
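As a numerical illustration of these formulas, the following sketch evaluates the condensate fraction $N_{0}/N$ and the chemical potential $\mu(T)$. The atom number, scattering length, and trap frequencies below are assumed example values in the spirit of the parameters quoted later in the text, not prescriptions.

```python
import numpy as np

# Physical constants (SI)
hbar = 1.054571817e-34   # J s
amu = 1.66053906660e-27  # kg

# Assumed example parameters (23Na-like)
N = 8e6                      # atom number
m = 23 * amu                 # atomic mass
a_s = 2.75e-9                # s-wave scattering length (m)
omega_r = 2 * np.pi * 160.0  # radial trap frequency (rad/s)
omega_z = 2 * np.pi * 40.0   # axial trap frequency (rad/s)

zeta2, zeta3 = np.pi**2 / 6.0, 1.2020569031595943  # zeta(2), zeta(3)

omega_t = (omega_z * omega_r**2) ** (1.0 / 3.0)  # geometric-mean trap frequency
a_h = np.sqrt(hbar / (m * omega_t))              # average oscillator length
mu_TF = 0.5 * hbar * omega_t * (15 * N * a_s / a_h) ** 0.4  # Thomas-Fermi chemical potential

# Scaling parameter s = (1/2) zeta(3)^(1/3) (15 N^(1/6) a_s / a_h)^(2/5)
s = 0.5 * zeta3 ** (1.0 / 3.0) * (15 * N ** (1.0 / 6.0) * a_s / a_h) ** 0.4

def condensate_fraction(x):
    """N0/N as a function of reduced temperature x = T/Tc."""
    return 1.0 - x**3 - s * (zeta2 / zeta3) * x**2 * (1.0 - x**3) ** 0.4

def mu_T(x):
    """Chemical potential mu(T) = mu_TF (N0/N)^(2/5) below Tc."""
    return mu_TF * condensate_fraction(x) ** 0.4
```

Both quantities decrease monotonically as the temperature approaches $T_{c}$, as expected.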
Treating the condensate in equilibrium and under the Thomas-Fermi approximation
(TFA) is common in the ultraslow light literature and is generally a good
approximation, because the density of an ultracold atomic medium changes slowly
during the weak probe propagation: the propagation takes place on the order of
microseconds, while the atomic dynamics is on millisecond time scales. Due to
the weak probe propagation under EIT conditions, most of the atoms remain in
the lowest state [12].
validity of TFA further depends on the length scale of the harmonic potential.
If the length scale is much larger than the healing length, TFA with the
harmonic potential works fine. The healing length of the BEC is defined as
$\xi=[1/(8\pi na_{s})]^{1/2}$ [13] where $n$ is the density of an atomic Bose-
Einstein condensate, which can be taken as $n=\rho(0,0)$. The parameters
considered in this work are within the range of validity of the TFA. The
interaction of the atomic BEC with a strong probe and the coupling pump field
may drive the BEC out of equilibrium; this non-equilibrium regime is beyond
the scope of the present paper.
We consider that, besides the probe pulse, there is a relatively strong
coupling field with Rabi frequency $\Omega_{c}$ interacting with the condensate
atoms in a $\Lambda$-type three level scheme. The upper level is coupled to
each level of the lower doublet either by the probe or by the coupling field transition.
Under the weak probe condition, susceptibility $\chi$ for the probe transition
can be calculated as a linear response as most of the atoms remain in the
lowest state. Assuming local density approximation, neglecting local field,
multiple scattering and quantum corrections and employing steady state
analysis we find the well-known EIT susceptibility [1, 14], $\chi_{i},i=r,z$,
for either radial ($r$) or axial ($z$) fraction of the probe pulse. Total EIT
susceptibility for BEC in terms of the density $\rho$ can be expressed as
$\chi_{i}=\rho_{i}\,\chi_{1}$ in the framework of local density approximation.
Here $\chi_{1}$ is the single atom response given by
$\displaystyle\chi_{1}=\frac{{\left|{\mu}\right|^{2}}}{{\varepsilon_{0}\hbar}}\frac{{i(-i\Delta+\Gamma_{2}/2)}}{{{(\Gamma_{2}/2-i\Delta)(\Gamma_{3}/2-i\Delta)+\Omega_{C}^{2}/4}}},$
(1)
where $\Delta$ is the detuning from the resonant probe transition. For the
ultracold atoms, and assuming co-propagating laser beams, the Doppler shift in
the detuning is neglected. $\mu$ is the dipole matrix element for the probe
transition. It can also be expressed in terms of the resonant wavelength
$\lambda$ of the probe transition via
$|\mu|^{2}=3\varepsilon_{0}\hbar\lambda^{3}\gamma/8\pi^{2}$, where $\gamma$ is the
radiation decay rate of the upper level. $\Gamma_{2}$ and $\Gamma_{3}$ denote
the dephasing rates of the atomic coherences of the lower doublet. At the
probe resonance, the imaginary part of $\chi$ becomes negligible, turning an
optically opaque medium transparent.
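For concreteness, Eq. (1) can be evaluated numerically. The sketch below uses the standard relation $|\mu|^{2}=3\varepsilon_{0}\hbar\lambda^{3}\gamma/8\pi^{2}$ between the dipole matrix element and the decay rate, and rates of the kind quoted in Sec. 4; these specific values are assumptions of the example.

```python
import numpy as np

eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)
hbar = 1.054571817e-34   # J s

# Illustrative parameters (assumed values)
lam = 589e-9             # probe wavelength (m)
gamma = 2 * np.pi * 1e7  # radiative decay rate of the upper level (rad/s)
Gamma2 = 7e3             # lower-doublet dephasing rate (rad/s)
Gamma3 = 0.5 * gamma     # upper-level dephasing rate
Omega_c = 2.0 * gamma    # coupling Rabi frequency
mu_sq = 3 * eps0 * hbar * lam**3 * gamma / (8 * np.pi**2)  # |mu|^2 from gamma

def chi1(Delta):
    """Single-atom EIT susceptibility of Eq. (1); Delta is the probe detuning (rad/s)."""
    num = 1j * (-1j * Delta + Gamma2 / 2)
    den = (Gamma2 / 2 - 1j * Delta) * (Gamma3 / 2 - 1j * Delta) + Omega_c**2 / 4
    return (mu_sq / (eps0 * hbar)) * num / den
```

Near resonance the real part crosses zero with a steep positive slope (the origin of ultraslow group velocities), while the imaginary part is strongly suppressed compared with its value outside the transparency window.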
## 4 Propagation of beam through a quadratic index medium
### 4.1 Lensing effect
We can neglect the thermal part of the Bose gas due to the high index contrast
between the condensate and the thermal gas background, so that $\rho=\rho_{c}$.
We specifically consider a gas of $N=8\times 10^{6}$ 23Na atoms with
$\Gamma_{3}=0.5\gamma$, $\gamma=2\pi\times 10^{7}$ Hz, $\Gamma_{2}=7\times
10^{3}$ Hz, and $\Omega_{c}=2\gamma$. We take $\omega_{r}=160$ Hz and
$\omega_{z}=40$ Hz. For these parameters, we evaluate $\chi^{\prime}=0.02$ and
$\chi^{\prime\prime}=0.0004$ at $\Delta=0.1\gamma$, where $\chi^{\prime}$ and
$\chi^{\prime\prime}$ are the real and imaginary parts of $\chi$,
respectively. Neglecting $\chi^{\prime\prime}$, the refractive index becomes
$n=\sqrt{1+\chi^{\prime}}$. In the $z$ direction it can be written as [15, 16]
$\displaystyle n(z)=n_{0}[1-\beta_{z}^{2}z^{2}]^{1/2},$ (2)
where $n_{0}=(1+\mu\chi_{1}^{\prime}/U_{0})^{1/2}$ and the quadratic index
coefficient is $\beta_{z}^{2}=\chi_{1}^{\prime}m\omega_{z}^{2}/(2U_{0}n_{0}^{2})$.
The Thomas-Fermi radius for the axial coordinate is given by
$R_{TF_{z}}=\sqrt{2\mu(T)/m\omega_{z}^{2}}$. Expanding Eq. (2) in a Taylor
series, the refractive index reduces to
$\displaystyle n(z)\approx n_{0}[1-\frac{1}{2}\beta_{z}^{2}z^{2}].$ (3)
Figure 2: $z$ dependence of the refractive index of a condensate of $8\times
10^{6}$ 23Na atoms at $T=296$ nK under an off-resonant EIT scheme. The other
parameters used are $\Omega_{c}=1.5\gamma$, $\Delta=0.1\gamma$,
$M=23$ amu, $\lambda_{0}=589$ nm, $\gamma=2\pi\times 10^{7}$ Hz,
$\Gamma_{3}=0.5\gamma$, $\Gamma_{2}=7\times 10^{3}$ Hz.
The refractive index as a function of $z$ is shown in Fig. 2. With such a
refractive index profile, the atomic medium can act as a thin lens for the
component of the probe in the radial direction. In the case of a medium with a
quadratic index variation, we can estimate the focal length by using
geometrical or beam optics [15, 16]. In the paraxial geometrical optics
regime, where the angle made by the beam ray with the optic axis is small, the
differential equation satisfied by the ray height $s(r)$ is
$d^{2}s/dr^{2}+\beta_{z}^{2}s=0$ [15, 16]. The initial ray height and initial
slope are $r_{i}$ and $\theta_{i}={ds/dr}|_{i}$, respectively. The solutions of
the differential equation are
$s(r)=r_{i}\cos(\beta_{z}r)+(1/\beta_{z})\sin(\beta_{z}r)\theta_{i}$ and
$\theta(r)=ds/dr=-\beta_{z}\sin(\beta_{z}r)r_{i}+\cos(\beta_{z}r)\theta_{i}$.
Hence, the ray transformation matrix $M_{T}$ which connects the initial and
final ray vectors is given by Eq. (6), where $r$ is the propagation length in
the medium. [15, 16, 17]
$\displaystyle
M_{T}=\left(\begin{array}[]{cc}\cos(\beta_{z}r)&\frac{1}{\beta_{z}}\sin(\beta_{z}r)\\\
-\beta_{z}\sin(\beta_{z}r)&\cos(\beta_{z}r)\end{array}\right).$ (6)
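The matrix of Eq. (6) can be coded directly. A quick check (a sketch, with arbitrary example numbers) confirms that it is unimodular, and that for a thin slab of length $d$ its lower-left entry reduces to $-\beta_{z}^{2}d$, i.e. a thin lens of focal length $1/(\beta_{z}^{2}d)$, consistent with Eq. (12).

```python
import numpy as np

def quadratic_index_ray_matrix(beta, d):
    """ABCD ray-transfer matrix of Eq. (6) for propagation length d
    in a quadratic-index medium with index coefficient beta."""
    return np.array([[np.cos(beta * d), np.sin(beta * d) / beta],
                     [-beta * np.sin(beta * d), np.cos(beta * d)]])

# A collimated ray (height 1, slope 0) crosses the axis after a quarter
# period beta*d = pi/2, the focusing condition quoted below Eq. (8).
ray_out = quadratic_index_ray_matrix(2.0, np.pi / 4) @ np.array([1.0, 0.0])
```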
The same ray transformation matrix given by Eq. (6) can be used to investigate
the effect of the quadratic medium on the light in the beam optics regime.
Consider a collimated Gaussian beam incident on a focusing ultra cold medium
represented by $M_{T}$. The $q$ parameter $q(r)$ of the Gaussian beam which
describes the spot size and the radius of curvature of the beam is given by
$\displaystyle\frac{1}{q(r)}=\frac{1}{R(r)}-i\frac{\lambda}{n\pi w(r)^{2}},$
(7)
where $w(r)$ is the spot-size function and $n$ is the medium refractive index.
If the incident beam is collimated, $R(r)\rightarrow\infty$, and the initial
$q$-parameter $q_{1}$ becomes $iz_{0}$, where the Rayleigh range $z_{0}$ is
given by $z_{0}=\pi w_{0}^{2}/\lambda$. Here, $w_{0}$ is the initial beam
waist on the BEC, and we assume that the background refractive index is nearly
$1$. After a distance $r$ inside the BEC, the transformed $q$-parameter
$q_{2}$ is given by
$\displaystyle
q_{2}=\frac{iz_{0}\cos(\beta_{z}r)+\frac{1}{\beta_{z}}\sin(\beta_{z}r)}{-iz_{0}\beta_{z}\sin(\beta_{z}r)+\cos(\beta_{z}r)}.$
(8)
We have two cases to consider for focusing. If the BEC is very thin, the focal
length will be approximately given by $1/(\beta_{z}^{2}L_{r})$, where $L_{r}$ is
the transverse width of the BEC. If the BEC length is not negligible, the next
collimated beam will be formed inside the BEC at the location $L_{f}$, given
by $\beta_{z}L_{f}=\pi/2$.
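Both cases can be checked with the standard ABCD transformation of the $q$ parameter, $q_{2}=(Aq_{1}+B)/(Cq_{1}+D)$. The sketch below (with arbitrary example values of $\beta_{z}$ and $z_{0}$) verifies that a collimated input is again at a waist, i.e. $q$ purely imaginary, after $\beta_{z}L_{f}=\pi/2$.

```python
import numpy as np

def quadratic_index_abcd(beta, d):
    """ABCD matrix of Eq. (6) for length d in the quadratic-index medium."""
    return np.array([[np.cos(beta * d), np.sin(beta * d) / beta],
                     [-beta * np.sin(beta * d), np.cos(beta * d)]])

def propagate_q(q1, M):
    """Gaussian-beam q-parameter transformation q2 = (A q1 + B)/(C q1 + D)."""
    (A, B), (C, D) = M
    return (A * q1 + B) / (C * q1 + D)

beta, z0 = 2000.0, 1e-3   # example values (1/m, m)
q2 = propagate_q(1j * z0, quadratic_index_abcd(beta, np.pi / (2 * beta)))
# q2 = i/(beta^2 z0): purely imaginary, so the transmitted beam is at a waist.
```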
### 4.2 Waveguiding effect
Similarly to the previous treatment, the refractive index profile in the
radial direction can be written as
$\displaystyle
n(r)=\cases{n_{0}[1-\frac{1}{2}\,\beta_{r}^{2}\,r^{2}]^{1/2}&$r\leq
R_{TF_{r}}$.\cr 1&$r\geq R_{TF_{r}}$},$ (9)
where $n_{0}=(1+\mu\chi_{1}^{\prime}/U_{0})^{1/2}$ and
$\beta_{r}^{2}=\chi_{1}^{\prime}m\omega_{r}^{2}/(2U_{0}n_{0}^{2})$. The Thomas-Fermi
radius is given by $R_{TF_{r}}=\sqrt{2\mu(T)/m\omega_{r}^{2}}$.
Figure 3: $r$ dependence of the refractive index of the condensate. The
parameters are the same as in Fig. 2.
The refractive index given in Eq. (9) is plotted as a function of $r$
in Fig. 3. Expanding Eq. (9) in a Taylor series, the refractive index reduces to
$n(r)\approx n_{0}(1-\frac{1}{2}\beta_{r}^{2}r^{2})$. Such an index behavior is
analogous to the one of a graded index fiber [5]. Slow light propagation
through the condensate can be described similar to that of the weakly guided
regime of the graded index fiber, where the optical modes can be given in
terms of linearly polarized modes (LP modes). The mode profiles are determined
by solving the wave equation which reduces to the Helmholtz equation [5].
We use the cylindrical coordinates as the refractive index $n(r)$ is axially
symmetric. The wave equation for the axial fraction of the probe field is
$[\nabla^{2}+k^{2}]E=0$, where $k^{2}=k_{z}^{2}+k_{r}^{2}$ and $\nabla^{2}$ is
the Laplacian operator in cylindrical coordinate. Here $k_{r}$ is the radial
wave number and $k_{z}$ is the propagation constant in the $z$ direction. The
solution for the wave equation is $E=\psi(r)\cos(l\phi)\exp[i(\omega
t-k_{z}z)]$. Here $l=0,1,2,3,\ldots$ and $\phi$ is the azimuthal angle. If we put
this solution in the wave equation, we get the Helmholtz radial equation
$\displaystyle[d^{2}/dr^{2}+(1/r)d/dr+p^{2}(r)]\psi(r)=0,$ (10)
in which $p^{2}(r)=(k_{0}^{2}n^{2}(r)-k_{z}^{2}-l^{2}/r^{2})$ [15]. Here
$k_{0}$ can be expressed in terms of resonant wavelength $\lambda$ of the
probe transition $k_{0}=2\pi/\lambda=1.07\times 10^{7}$ (1/m).
We use the transfer matrix method developed in Ref. [5] in order to solve the
Helmholtz equation, Eq. (10).
We assume that the atomic cloud can be described by layers of constant
refractive index whose indices monotonically increase towards the center of
the cloud. By taking a sufficiently large number of thin layers, such a
discrete model can represent the true behavior of the refractive index. For
constant index, analytical solutions of the Helmholtz equation can be found in
constant index, analytical solutions of the Helmholtz equation can be found in
terms of Bessel functions. Such solutions are then matched at the shell
boundaries of the layers. Doing this for all the layers, electromagnetic
boundary conditions provide a recurrence relation for the Bessel function
coefficients, whose solution yields the wave number $k$. The mode profiles
determined by this method are shown in Figs. 6 and 7.
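As an independent cross-check of this layered construction (a finite-difference sketch, not the transfer-matrix method itself), the substitution $u=\sqrt{r}\,\psi$ turns Eq. (10) into the symmetric eigenproblem $u^{\prime\prime}+[k_{0}^{2}n^{2}(r)-(l^{2}-\tfrac{1}{4})/r^{2}]u=k_{z}^{2}u$, whose guided eigenvalues satisfy $k_{0}^{2}<k_{z}^{2}<k_{0}^{2}n_{0}^{2}$. The Thomas-Fermi radius and index contrast below are assumed round numbers.

```python
import numpy as np

lam = 589e-9                  # probe wavelength (m)
k0 = 2 * np.pi / lam
n0 = np.sqrt(1.02)            # peak index from chi' = 0.02
R_tf = 5e-6                   # assumed Thomas-Fermi radius (m)
beta_r = np.sqrt(n0**2 - 1) / (n0 * R_tf)   # chosen so that n(R_tf) = 1

def n_sq(r):
    """Truncated parabolic (graded) index profile of Eq. (9), squared."""
    return np.where(r <= R_tf, n0**2 * (1.0 - beta_r**2 * r**2), 1.0)

l = 0                         # azimuthal index of the LP(l,m) modes
N, R_max = 800, 3 * R_tf      # grid size and box radius (Dirichlet walls)
r = np.linspace(R_max / N, R_max, N)
dr = r[1] - r[0]

# Symmetric tridiagonal discretization of the radial operator
diag = -2.0 / dr**2 + k0**2 * n_sq(r) - (l**2 - 0.25) / r**2
off = np.full(N - 1, 1.0 / dr**2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
kz_sq = np.linalg.eigvalsh(H)

# Guided LP modes with azimuthal index l
guided = kz_sq[(kz_sq > k0**2) & (kz_sq < k0**2 * n0**2)]
n_guided = guided.size
```

For these numbers a handful of $l=0$ modes fall inside the guided window; repeating the count over $l$ gives a total consistent with the normalized-frequency estimate discussed in Sec. 5.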
## 5 Results and Discussions
### 5.1 Lensing Properties
The condensate can act as a lens or a guiding medium depending on the Rayleigh
range of the probe relative to the effective length of the condensate [8]. For
the cigar shaped BEC geometry we consider, the radial fraction of the probe is
subject to lensing while the axial fraction is guided. Let us first examine
the focal length of the condensate for the radial fraction of the incident
probe. For that aim we need to determine the effective radial length
$\displaystyle L_{r}=\left[\frac{1}{N}\int_{V}d^{3}r\,r^{2}\rho(r,z)\right]^{1/2},$ (11)
corresponding to the radial rms width of the density distribution. The radial
component of the probe field enters the medium and converges at a focal
distance determined by
$f=\frac{1}{\beta_{z}^{2}L_{r}}.$ (12)
Figure 4: Behavior of the focal length with respect to the radial trap
frequency and the scattering length. The solid curve is for $a_{s}=5$ nm
and the dashed curve is for $a_{s}=7$ nm. The other parameters are the same as
in Fig. 2.
The focal length is plotted in Fig. 4 as a function of the radial trap
frequency and the scattering length at a temperature of $T=296$ nK. The
increase in the radial trap frequency leads to a decrease in the radial size
of the BEC, which causes the rise of the focal length seen in Fig. 4. In
contrast, the size of the condensate increases with atom-atom interactions,
characterized by the scattering length $a_{s}$, which causes an overall
reduction of the focal length.
### 5.2 Waveguiding Properties
For the cigar shaped BEC geometry we consider, the axial fraction of the
probe is guided. The effective axial length is determined by
$\displaystyle
L_{z}=\left[\frac{4\pi}{N}\int_{0}^{\infty}\,r\mathrm{d}r\int_{0}^{\infty}\,\mathrm{d}zz^{2}\rho(r,z)\right]^{1/2}.$
(13)
The axial length $L_{z}$ is an effective length corresponding to the axial
width of the density distribution. Group velocities ($v_{gi},i=r,z$) of the
different density profiles for both radial and axial directions can be
calculated from the susceptibility using the relations
$\displaystyle\frac{1}{{v_{gi}}}$ $\displaystyle=$
$\displaystyle\frac{1}{c}+\frac{\pi}{\lambda}\frac{{\partial\chi_{i}}}{{\partial\Delta}}.$
(14)
Here, $c$ is the speed of light, and the imaginary part of the susceptibility
is negligibly small relative to its real part. EIT can be used to achieve
ultraslow light velocities, owing to the steep dispersion of the EIT
susceptibility $\chi$ [3]. As described in Sec. 2, we estimate the relative
time delay by considering only the lowest order modes of the lensed and guided
fractions, taking the optical paths as the effective lengths of the
corresponding short and long axes of the condensate; the significant
difference between these optical path lengths then generates an adjustable
relative time delay between the focused and guided components.
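This estimate can be sketched numerically by differentiating the real part of the EIT susceptibility. The peak density $\rho$ and effective lengths $L_{r}$, $L_{z}$ below are assumed placeholder values of the order suggested by the text, not the values used for Fig. 5.

```python
import numpy as np

c = 2.99792458e8         # speed of light (m/s)
eps0 = 8.8541878128e-12
hbar = 1.054571817e-34

# Assumed example parameters
lam = 589e-9
gamma = 2 * np.pi * 1e7
Gamma2, Gamma3 = 7e3, 0.5 * gamma
Omega_c = 0.5 * gamma
rho = 2.5e19              # assumed peak density (m^-3)
L_r, L_z = 15e-6, 150e-6  # assumed effective radial and axial lengths (m)
mu_sq = 3 * eps0 * hbar * lam**3 * gamma / (8 * np.pi**2)

def chi(Delta):
    """Total EIT susceptibility rho * chi_1 at probe detuning Delta."""
    num = 1j * (-1j * Delta + Gamma2 / 2)
    den = (Gamma2 / 2 - 1j * Delta) * (Gamma3 / 2 - 1j * Delta) + Omega_c**2 / 4
    return rho * (mu_sq / (eps0 * hbar)) * num / den

def group_velocity(Delta=0.0, h=1.0):
    """Eq. (14), with d(Re chi)/d(Delta) taken by a central difference."""
    dchi = (chi(Delta + h).real - chi(Delta - h).real) / (2.0 * h)
    return 1.0 / (1.0 / c + (np.pi / lam) * dchi)

v_g = group_velocity()
t_Dr, t_Dz = L_r / v_g, L_z / v_g  # delays of lensed and guided fractions
```

With these numbers $v_{g}$ comes out at a few m/s, so the delay difference $t_{Dz}-t_{Dr}$ is on the scale of tens of microseconds, the order reported in Fig. 5.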
Figure 5: Thermal behavior of the time delays of the guided and focused
fractions. The solid and dashed lines are for $t_{Dz}$ and $t_{Dr}$,
respectively. The parameters are the same as in Fig. 2 except
$\Omega_{c}=0.5\gamma$.
Taking into account transverse confinement, ultraslow Gaussian beam
propagation along the long axis of a BEC can be described in terms of multiple
LP modes, propagating at different ultraslow speeds [5]. In the geometry and
system we consider here, for instance, approximately $44$ modes can be found
at temperature $T=42$ nK. We compare the delay time of the slowest mode
(the lowest guided mode) with that of the lensing mode which can be
respectively calculated by $t_{Dz}=L_{z}/v_{gz}$ and $t_{Dr}=L_{r}/v_{gr}$.
Thermal behaviors of the time delays are shown in Fig. 5, which indicates
that, due to the different refractive indices seen by the axial and radial
fractions of the probe, a significant temporal delay, $\sim 50\,\mu$s, can
occur between them at low temperatures, $T\sim 42$ nK. The time delays of both
fractions decrease with temperature, and the relative difference between them
diminishes as well. The condensed cloud shrinks with increasing temperature,
so the effective lengths of the BEC diminish. Therefore, just below $T_{C}$,
the time delays drop to zero.
At low temperatures, waveguiding modes can occur in an atomic Bose-Einstein
condensate. Translation of the Thomas-Fermi density profile of the condensate
into the refractive index makes the medium gain waveguiding characteristics
analogous to those of a graded index fiber [5, 8]. For the same set of
parameters, for which the radial
fraction of the probe undergoes lensing effect, the axial fraction is guided
in multiple LP modes, as the corresponding Rayleigh range is larger than the
effective axial length. The lowest two LP modes are shown in Fig. (6), and
Fig. (7). The total number of modes that can be supported by the condensate is
determined by the dimensionless normalized frequency $V$ which is defined as
$V=(\omega/c)R(n_{0}^{2}-1)^{1/2}$ where $R=\sqrt{2\mu(T)/m\omega_{r}^{2}}$
[5]. The radius of a condensate that would support only single mode, LP00, is
found to be $R\sim 1\,\mu$m. In principle this suggest that simultaneous
lensing and single LP mode guiding should also be possible.
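As a rough numerical cross-check of the mode-count estimate and the single-mode radius quoted above, the sketch below evaluates $V=(\omega/c)R(n_{0}^{2}-1)^{1/2}$ and the graded-index mode-count estimate $\sim V^{2}/4$. The wavelength, peak refractive index, and radius are assumed placeholder values, not the paper's EIT parameters, and the single-mode cutoff $V=2.405$ is used only as an order-of-magnitude guide.

```python
import math

# Assumed placeholder parameters (NOT the paper's values): a near-infrared
# probe and a weakly index-enhanced condensate of transverse radius R.
c = 3.0e8            # speed of light, m/s
wavelength = 780e-9  # assumed probe wavelength, m
n0 = 1.1             # assumed peak refractive index
R = 5e-6             # assumed transverse condensate radius, m

omega = 2 * math.pi * c / wavelength
V = (omega / c) * R * math.sqrt(n0 ** 2 - 1)   # normalized frequency
n_modes = V ** 2 / 4                           # rough graded-index mode count

# Radius at which only the lowest mode survives, from V(R_sm) = 2.405.
R_sm = 2.405 * c / (omega * math.sqrt(n0 ** 2 - 1))
```

With these assumed numbers the guide is strongly multimode, and $R_{\rm sm}$ comes out below a micron, the same order as the $R\sim 1\,\mu$m quoted above.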
Figure 6: Contour plot of intensity of the $\psi_{00}\exp[i(\omega
t-\beta_{z}z)]$ (LP00 mode). The parameters are the same as in Fig. (5).
Figure 7: Contour plot of intensity of the $\psi_{10}\cos(\phi)\exp[i(\omega
t-\beta_{z}z)]$ (LP10 mode). The parameters are the same as in Fig. (5).
## 6 Conclusion
We investigate simultaneous lensing and ultraslow waveguiding properties of an
atomic condensate under EIT conditions by using a quadratic-index model of the
medium. The focal length, relative time delay, and multiple-guided-mode
characteristics are determined taking into account the three-dimensional
nature of the system. In particular, the dependence of the focal length on
atom-atom interactions via the s-wave scattering length, on the trap
frequency, and on temperature is examined. Our results reveal how to select a
suitable set of system parameters to tune the Rayleigh range and the aspect
ratio of the cloud so as to make either lensing or guiding more favorable
along a particular direction. We have shown that the focal length can be
calibrated by the transverse trap frequency, the incoming wavelength, the
scattering length, and the temperature. In addition, time-delayed splitting of
the ultraslow Gaussian beam into radial and axial fractions is found.
## Acknowledgements
We thank Z. Dutton for valuable and useful discussions. D.T. was supported by
TUBITAK-Kariyer grant No. 109T686. Ö.E.M. acknowledges support by TUBITAK
(109T267) and DPT-UEKAE quantum cryptology center.
## References
* [1] S.E. Harris, Physics Today 50 (1997) 36-42.
* [2] M. Fleischhauer, A. Imamoglu, and J. P. Marangos, Rev. Mod. Phys. 77 (2005) 633.
* [3] L.V. Hau, S.E. Harris, Z. Dutton, C.H. Behroozi, Nature 397 (1999) 594-598.
* [4] J. Cheng, S. Han, Y. Yan, Phys. Rev. A 72 (2005) 021801(R).
* [5] D. Tarhan, N. Postacioglu, Ö. E. Müstecaplıoğlu, Opt. Lett. 32 (2007) 1038.
* [6] F. L. Kien, and K. Hakuta, Phys. Rev. A 79 (2009) 043813.
* [7] G. Morigi, and G. S. Agarwal, Phys. Rev. A 79 (2000) 013801.
* [8] M. Vengalattore, and M. Prentiss, Phys. Rev. Lett. 95 (2005) 243601.
* [9] G. S. Agarwal, and S. Dasgupta, Phys. Rev. A 65 (2002) 053811.
* [10] H. R. Zhang, L. Zhou and C. P. Sun, Phys. Rev. A 80 (2009) 013812.
* [11] M. Naraschewski, D.M. Stamper-Kurn, Phys. Rev. A 58 (1998) 2423.
* [12] Z. Dutton, and L. V. Hau, Phys. Rev. A 70 (2004) 053831.
* [13] C. J. Pethick and H. Smith, Bose-Einstein Condensation in Dilute Gases, Cambridge University Press, Cambridge, 2002.
* [14] M.O. Scully, M.S. Zubairy, Quantum Optics, Cambridge University Press, Cambridge, 1997.
* [15] A. Yariv, Optical Electronics, Saunders College Publishing / Holt, Rinehart and Winston, 1991.
* [16] A. Sennaroglu, Photonics and Laser Engineering: Principles, Devices and Applications, McGraw-Hill, New York, 2010.
* [17] L. Casperson, A. Yariv, Appl. Phys. Lett. 12 (1968) 355.
|
arxiv-papers
| 2011-03-24T14:14:29 |
2024-09-04T02:49:17.921465
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Devrim Tarhan, Alphan Sennaroglu, Ozgur E. Mustecaplioglu",
"submitter": "Devrim Tarhan",
"url": "https://arxiv.org/abs/1103.4771"
}
|
1103.4959
|
# On mean-square boundedness of stochastic linear systems with quantized
observations
Debasish Chatterjee , Peter Hokayem , Federico Ramponi and John Lygeros
D. Chatterjee, P. Hokayem, F. Ramponi, J. Lygeros are with the Automatic
Control Laboratory, ETH Zürich, Physikstrasse 3, 8092 Zürich, Switzerland.
{chatterjee,hokayem,ramponif,lygeros}@control.ee.ethz.ch
###### Abstract.
We propose a procedure to design a state-quantizer with _finite_ alphabet for
a marginally stable stochastic linear system evolving in $\mathbb{R}^{d}$, and
a bounded policy based on the resulting quantized state measurements to ensure
bounded second moment in closed-loop.
This research was partially supported by the Swiss National Science Foundation
under grant 200021-122072 and by the European Commission under the project
Feednetback FP7-ICT-223866 (www.feednetback.eu).
## 1\. Introduction and Result
Consider the linear control system
($\ast$) $x_{t+1}=Ax_{t}+Bu_{t}+w_{t},\quad x_{0}\text{ given},\quad
t=0,1,\ldots,$
where the state $x_{t}\in\mathbb{R}^{d}$ and the control
$u_{t}\in\mathbb{R}^{m}$, $(w_{t})_{t\in\mathbb{N}_{0}}$ is a mean-zero
sequence of noise vectors, and $A$ and $B$ are matrices of appropriate
dimensions. It is assumed that instead of perfect measurements of the state,
quantized state measurements are available by means of a quantizer
$\mathfrak{q}:\mathbb{R}^{d}\longrightarrow Q$, with $Q\subset\mathbb{R}^{d}$
a set of vectors in $\mathbb{R}^{d}$ called the alphabet, whose elements are the bins.
Our objective is to construct a quantizer with finite alphabet and a
corresponding control policy such that the magnitude of the control is
_uniformly bounded_ , i.e., for some $U_{\max}>0$ we have
$\left\lVert{u_{t}}\right\rVert\leqslant U_{\max}$ for all $t$, the alphabet
$Q$ is _finite_, and the states of ($\ast$) are _mean-square
bounded_ in closed-loop.
Stabilization with quantized state measurements has a rich history, see e.g.,
[2, 4, 1, 3, 7, 8] and the references therein. While most of the literature
investigates quantization techniques for stabilization under communication
constraints, especially of systems with eigenvalues outside the closed unit
disc, our result is directed towards "maximally coarse" quantization, with a
finite alphabet, of Lyapunov stable systems; communication constraints are not
addressed in this work. The authors are not aware of any prior work dealing
with stabilization with finite alphabet in the context of Lyapunov stable
systems. Observe that unlike deterministic systems, local stabilization of
stochastic systems with unbounded noise, at least one eigenvalue with
magnitude greater than $1$, and bounded inputs is impossible.
###### Assumption 1.
* $\circ$
The matrix $A$ is Lyapunov stable—the eigenvalues of $A$ have magnitude at
most $1$, and those on the unit circle have equal geometric and algebraic
multiplicities.
* $\circ$
The pair $(A,B)$ is reachable in $\kappa$ steps, i.e.,
$\operatorname{rank}\begin{pmatrix}B&AB&\cdots&A^{\kappa-1}B\end{pmatrix}=d$.
* $\circ$
$(w_{t})_{t\in\mathbb{N}_{0}}$ is a mean-zero sequence of mutually independent
noise vectors satisfying
$C_{4}\coloneqq\sup_{t\in\mathbb{N}_{0}}\mathsf{E}\bigl{[}\left\lVert{w_{t}}\right\rVert^{4}\bigr{]}<\infty$.
* $\circ$
$\left\lVert{u_{t}}\right\rVert\leqslant U_{\max}$ for all
$t\in\mathbb{N}_{0}$.$\diamondsuit$
The policy that we construct below belongs to the class of $\kappa$-history-dependent policies, where the history is that of the quantized states. We
refer the reader to our earlier article [6] for the basic setup, various
definitions, and in particular to [6, §3.4] for the details about a change of
basis in $\mathbb{R}^{d}$ that shows that it is sufficient to consider $A$
orthogonal. We let
$\mathcal{R}_{k}(A,M)\coloneqq\begin{pmatrix}A^{k-1}M&\cdots&AM&M\end{pmatrix}$
for a matrix $M$ of appropriate dimension, $M^{+}\coloneqq
M^{\mathsf{T}}(MM^{\mathsf{T}})^{-1}$ denote the Moore-Penrose pseudoinverse
of $M$ in case the latter has full row rank, and
$\sigma_{\min}(M),\sigma_{\max}(M)$ denote the minimal and maximal singular
values of $M$, respectively. $I$ denotes the $d\times d$ identity matrix. For a vector
$v\in\mathbb{R}^{d}$, let
$\Pi_{v}(\cdot)\coloneqq\bigl{\langle}{\cdot},\tfrac{v}{\left\lVert{v}\right\rVert}\bigr{\rangle}\tfrac{v}{\left\lVert{v}\right\rVert}$
and $\Pi_{v}^{\perp}(\cdot)\coloneqq I-\Pi_{v}(\cdot)$ denote the projections
onto the span of $v$ and its orthogonal complement, respectively. For $r>0$
the radial $r$-saturation function
$\operatorname{sat}_{r}:\mathbb{R}^{d}\longrightarrow\mathbb{R}^{d}$ is defined
as
$\operatorname{sat}_{r}(y)\coloneqq\min\\{r,\left\lVert{y}\right\rVert\\}\frac{y}{\left\lVert{y}\right\rVert}$,
and let $B_{r}\subset\mathbb{R}^{d}$ denote the open $r$ ball centered at $0$
with $\partial B_{r}$ being its boundary. We have the following theorem:
###### Theorem 2.
Consider the system ($\ast$), and suppose that Assumption 1 holds. Assume
that the quantizer is such that there exists a constant $r$ satisfying:
1. a)
$\displaystyle{r>\frac{\sqrt{\kappa}\,\sigma_{\max}(\mathcal{R}_{\kappa}(A,I))\sqrt[4]{C_{4}}}{\cos(\varphi)-\sin(\varphi)}}$,
where $\varphi\in[0,\pi/4[$ is the maximal angle between $z$ and
$\mathfrak{q}(z)$, $z\not\in B_{r}$, and
2. b)
$\mathfrak{q}(z)=\mathfrak{q}(\operatorname{sat}_{r}(z))\in\partial B_{r}$ for
every $z\not\in B_{r}$.
Finally, assume that $U_{\max}\geqslant
r/\sigma_{\min}(\mathcal{R}_{\kappa}(A,B))$. Then successive $\kappa$-step
applications of the control policy
$\begin{pmatrix}u_{\kappa t}\\\ \vdots\\\
u_{\kappa(t+1)-1}\end{pmatrix}\coloneqq-\mathcal{R}_{\kappa}(A,B)^{+}A^{\kappa}\mathfrak{q}(x_{\kappa
t}),\quad t\in\mathbb{N}_{0},$
ensures that
$\sup_{t\in\mathbb{N}_{0}}\mathsf{E}_{x_{0}}\bigl{[}\left\lVert{x_{t}}\right\rVert^{2}\bigr{]}<\infty$.
Observe that Theorem 2 outlines a procedure for constructing a quantizer with
finitely many bins, an example of which on $\mathbb{R}^{2}$ is depicted in
Figure 1. We see from the hypotheses of Theorem 2 that the quantizer has no
large gap between the bins on the $r$-sphere, and is “radial”; the
quantization rule for states inside $B_{r}$ does not matter insofar as mean-
square boundedness of the states is concerned. As a consequence of the control
policy in Theorem 2, the control alphabet is also finite with
$\kappa\left\lvert{Q}\right\rvert$ elements. Moreover, note that as
$\varphi\searrow 0$, i.e., as the “density” of the bins on the $r$-sphere
increases, we recover the policy proposed in [6], and in particular, the lower
bound on $U_{\max}$ in [6].
Figure 1. Pictorial depiction of the proposed quantization scheme in
$\mathbb{R}^{2}$, with
$\\{\mathbf{q}_{0}=0,\mathbf{q}_{1},\ldots,\mathbf{q}_{8}\\}$ being the set of
bins. The various projections are computed for a generic state $z$ outside the
$r$-ball centered at the origin.
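The radial saturation and projection operators used in the theorem and depicted in Figure 1 can be sketched in a few lines. This is a generic illustration over plain tuples, with names of our own choosing, not code accompanying the paper:

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def sat_r(y, r):
    # Radial r-saturation: points outside the closed r-ball are pulled
    # back radially onto its boundary; points inside are left unchanged.
    n = norm(y)
    if n == 0.0 or n <= r:
        return tuple(y)
    return tuple(r * x / n for x in y)

def proj(v, z):
    # Pi_v(z): orthogonal projection of z onto span{v}.
    s = sum(a * b for a, b in zip(v, z)) / sum(x * x for x in v)
    return tuple(s * x for x in v)

def proj_perp(v, z):
    # Pi_v^perp(z) = z - Pi_v(z): projection onto the orthogonal complement.
    return tuple(a - b for a, b in zip(z, proj(v, z)))
```

By construction $\Pi_{v}(z)+\Pi_{v}^{\perp}(z)=z$, and $\operatorname{sat}_{r}$ preserves direction, which is what hypothesis b) of Theorem 2 exploits.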
## 2\. Proof of Theorem 2
We assume that the random variables $w_{t}$ are defined on some probability
space $(\Omega,\mathfrak{F},\mathsf{P})$. Hereafter
$\mathsf{E}^{\mathfrak{F}^{\prime}}[\cdot]$ denotes conditional expectation
for a $\sigma$-algebra $\mathfrak{F}^{\prime}\subset\mathfrak{F}$.
We need the following immediate consequence of [5, Theorem 1].
###### Proposition 3.
Let $(\xi_{t})_{t\in\mathbb{N}_{0}}$ be a sequence of nonnegative random
variables on some probability space $(\Omega,\mathfrak{F},\mathsf{P})$, and
let $(\mathfrak{F}_{t})_{t\in\mathbb{N}_{0}}$ be any filtration to which
$(\xi_{t})_{t\in\mathbb{N}_{0}}$ is adapted. Suppose that there exist
constants $b>0$, and $J,M<\infty$, such that $\xi_{0}\leqslant J$, and for all
$t$:
$\displaystyle\mathsf{E}^{\mathfrak{F}_{t}}[\xi_{t+1}-\xi_{t}]\leqslant-b\quad\text{on
the event }\\{\xi_{t}>J\\},\quad\text{and}$
$\displaystyle\mathsf{E}\bigl{[}\left\lvert{\xi_{t+1}-\xi_{t}}\right\rvert^{4}\big{|}\xi_{0},\ldots,\xi_{t}\bigr{]}\leqslant
M.$
Then there exists a constant $\gamma=\gamma(b,J,M)>0$ such that
$\displaystyle{\sup_{t\in\mathbb{N}_{0}}\mathsf{E}\bigl{[}\xi_{t}^{2}\bigr{]}\leqslant\gamma}$.
_Proof of Theorem 2: _ Let $\mathfrak{F}_{t}$ be the $\sigma$-algebra
generated by $\\{x_{s}\mid s=0,\ldots,t\\}$. Since $\mathfrak{q}$ is a
measurable map, it is clear that $(\mathfrak{q}(x_{t}))_{t\in\mathbb{N}_{0}}$
is $(\mathfrak{F}_{t})_{t\in\mathbb{N}_{0}}$-adapted.
We have, for $t\in\mathbb{N}_{0}$, on $\\{\left\lVert{x_{\kappa
t}}\right\rVert>r\\}$,
$\displaystyle\mathsf{E}^{\mathfrak{F}_{\kappa
t}}\bigl{[}\left\lVert{x_{\kappa(t+1)}}\right\rVert-\left\lVert{x_{\kappa
t}}\right\rVert\bigr{]}=\mathsf{E}^{\mathfrak{F}_{\kappa
t}}\bigl{[}\left\lVert{A^{\kappa}x_{\kappa
t}+\mathcal{R}_{\kappa}(A,B)\bar{u}_{\kappa t}+\bar{w}_{\kappa
t}}\right\rVert-\left\lVert{x_{\kappa t}}\right\rVert\bigr{]},$
where $\bar{u}_{\kappa t}\coloneqq\begin{pmatrix}u_{\kappa t}\\\ \vdots\\\
u_{\kappa(t+1)-1}\end{pmatrix}\in\mathbb{R}^{\kappa m}$, and
$\bar{w}_{\kappa t}\coloneqq\mathcal{R}_{\kappa}(A,I)\begin{pmatrix}w_{\kappa
t}\\\ \vdots\\\ w_{\kappa(t+1)-1}\end{pmatrix}\in\mathbb{R}^{d}$ is
zero-mean noise. Indeed, we have
$\displaystyle\mathsf{E}^{\mathfrak{F}_{\kappa t}}$
$\displaystyle\bigl{[}\left\lVert{A^{\kappa}x_{\kappa
t}+\mathcal{R}_{\kappa}(A,B)\bar{u}_{\kappa t}+\bar{w}_{\kappa
t}}\right\rVert-\left\lVert{x_{\kappa t}}\right\rVert\bigr{]}$
$\displaystyle\leqslant\mathsf{E}^{\mathfrak{F}_{\kappa
t}}\bigl{[}\left\lVert{A^{\kappa}x_{\kappa
t}+\mathcal{R}_{\kappa}(A,B)\bar{u}_{\kappa
t}}\right\rVert-\left\lVert{x_{\kappa
t}}\right\rVert\bigr{]}+\sqrt{\kappa}\,\sigma_{\max}(\mathcal{R}_{\kappa}(A,I))\sqrt[4]{C_{4}}.$
Selecting the controls $\bar{u}_{\kappa
t}=-\mathcal{R}_{\kappa}(A,B)^{+}A^{\kappa}\mathfrak{q}(x_{\kappa t})$ as in
the theorem and using the fact that $\mathfrak{q}(x_{\kappa t})=\Pi_{x_{\kappa
t}}(x_{\kappa t})+\Pi_{x_{\kappa t}}^{\perp}(x_{\kappa t})$, we arrive at
$\displaystyle\mathsf{E}^{\mathfrak{F}_{\kappa t}}$
$\displaystyle\bigl{[}\left\lVert{A^{\kappa}x_{\kappa
t}+\mathcal{R}_{\kappa}(A,B)\bar{u}_{\kappa
t}}\right\rVert-\left\lVert{x_{\kappa t}}\right\rVert\bigr{]}$
$\displaystyle=\left\lVert{A^{\kappa}x_{\kappa
t}-A^{\kappa}\mathfrak{q}(x_{\kappa t})}\right\rVert-\left\lVert{x_{\kappa
t}}\right\rVert$ $\displaystyle\leqslant\left\lVert{A^{\kappa}x_{\kappa
t}-\operatorname{sat}_{r}(A^{\kappa}x_{\kappa
t})}\right\rVert-\left\lVert{x_{\kappa
t}}\right\rVert+\left\lVert{\Pi_{x_{\kappa
t}}^{\perp}(A^{\kappa}\mathfrak{q}(x_{\kappa t}))}\right\rVert$
$\displaystyle\qquad+\left\lVert{\operatorname{sat}_{r}(A^{\kappa}x_{\kappa
t})-\Pi_{x_{\kappa t}}(A^{\kappa}\mathfrak{q}(x_{\kappa t}))}\right\rVert$
$\displaystyle=-r+\left\lVert{A^{\kappa}\operatorname{sat}_{r}(x_{\kappa
t})-\Pi_{x_{\kappa t}}(A^{\kappa}\mathfrak{q}(x_{\kappa
t}))}\right\rVert+\left\lVert{\Pi_{x_{\kappa
t}}^{\perp}(A^{\kappa}\mathfrak{q}(x_{\kappa t}))}\right\rVert$
$\displaystyle=-r+\left\lVert{A^{\kappa}\operatorname{sat}_{r}(x_{\kappa
t})-\Pi_{x_{\kappa
t}}\bigl{(}A^{\kappa}\mathfrak{q}(\operatorname{sat}_{r}(x_{\kappa
t}))\bigr{)}}\right\rVert$ $\displaystyle\qquad+\left\lVert{\Pi_{x_{\kappa
t}}^{\perp}\bigl{(}A^{\kappa}\mathfrak{q}(\operatorname{sat}_{r}(x_{\kappa
t}))\bigr{)}}\right\rVert\;\;\text{by hypothesis \ref{t:main:radial}}$
$\displaystyle\leqslant-r+r(1-\cos(\varphi))+r\sin(\varphi)$
$\displaystyle\leqslant-b\quad\text{for some $b>0$ by hypothesis
\ref{t:main:maxangle}}.$
Moreover, we see that for $t\in\mathbb{N}_{0}$, since $A$ is orthogonal,
$\mathsf{E}\Bigl{[}\Bigl{|}\left\lVert{x_{\kappa(t+1)}}\right\rVert-\left\lVert{x_{\kappa
t}}\right\rVert\Bigr{|}^{4}\,\Big{|}\,\\{\left\lVert{x_{\kappa
s}}\right\rVert\\}_{s=0}^{t}\Bigr{]}=\mathsf{E}\Bigl{[}\Bigl{|}\left\lVert{x_{\kappa(t+1)}}\right\rVert-\left\lVert{A^{\kappa}x_{\kappa
t}}\right\rVert\Bigr{|}^{4}\,\Big{|}\,\\{\left\lVert{x_{\kappa
s}}\right\rVert\\}_{s=0}^{t}\Bigr{]}\\\
=\mathsf{E}\bigl{[}\bigl{|}\left\lVert{A^{\kappa}x_{\kappa
t}+\mathcal{R}_{\kappa}(A,B)\bar{u}_{\kappa t}+\bar{w}_{\kappa
t}}\right\rVert-\left\lVert{A^{\kappa}x_{\kappa
t}}\right\rVert\bigr{|}^{4}\,\big{|}\,\\{\left\lVert{x_{\kappa
s}}\right\rVert\\}_{s=0}^{t}\bigr{]}\\\
\leqslant\mathsf{E}\bigl{[}\left\lVert{\mathcal{R}_{\kappa}(A,B)\bar{u}_{\kappa
t}+\bar{w}_{\kappa
t}}\right\rVert^{4}\,\big{|}\,\\{\left\lVert{x_{\kappa
s}}\right\rVert\\}_{s=0}^{t}\bigr{]}\leqslant M
for some $M>0$ since $\bar{u}_{\kappa t}$ is bounded in norm and
$\mathsf{E}\bigl{[}\left\lVert{w_{t}}\right\rVert^{4}\bigr{]}\leqslant C_{4}$
for each $t$.
We let $J\coloneqq\max\\{\left\lVert{x_{0}}\right\rVert,r\\}$. It remains to
define $\xi_{t}\coloneqq\left\lVert{x_{\kappa t}}\right\rVert$ and appeal to
Proposition 3 with the above definition of $(\xi_{t})_{t\in\mathbb{N}_{0}}$ to
conclude that there exists some $\gamma=\gamma(b,J,M)>0$ such that
$\sup_{t\in\mathbb{N}_{0}}\mathsf{E}\bigl{[}\xi_{t}^{2}\bigr{]}=\sup_{t\in\mathbb{N}_{0}}\mathsf{E}_{x_{0}}\bigl{[}\left\lVert{x_{\kappa
t}}\right\rVert^{2}\bigr{]}\leqslant\gamma$. A standard argument, e.g., as in
[6, Proof of Lemma 9], shows that this is enough to guarantee
$\sup_{t\in\mathbb{N}_{0}}\mathsf{E}_{x_{0}}\bigl{[}\left\lVert{x_{t}}\right\rVert^{2}\bigr{]}\leqslant\gamma^{\prime}$
for some $\gamma^{\prime}>0$. $\square$
## 3\. Simulation
The figure below shows the average of the squared norm of the state over $1000$
runs of the system:
$x_{t+1}=\begin{pmatrix}\cos(\pi/3)&-\sin(\pi/3)\\\
\sin(\pi/3)&\cos(\pi/3)\end{pmatrix}x_{t}+\begin{pmatrix}1\\\
0\end{pmatrix}u_{t}+w_{t},$
where $x_{0}=\bigl{(}10\;\;10\bigr{)}^{\mathsf{T}}$,
$w_{t}\sim{\mathcal{N}}(0,I_{2})$, and where $u_{t}$ is chosen respectively
according to the policy proposed in this article and the one proposed in [6].
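A minimal numeric sketch of the policy on this example is given below, under simplifying assumptions: an idealized quantizer $\mathfrak{q}(x)=x$ and no noise, in which case one $\kappa$-step application ($\kappa=2$ here) returns the state exactly to the origin. Since $\mathcal{R}_{2}(A,B)=\begin{pmatrix}AB&B\end{pmatrix}$ is square and invertible for this pair, the Moore-Penrose pseudoinverse reduces to the ordinary inverse.

```python
import math

th = math.pi / 3
A = [[math.cos(th), -math.sin(th)],
     [math.sin(th),  math.cos(th)]]   # the rotation matrix of the example
B = [1.0, 0.0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

A2 = matmul(A, A)
AB = matvec(A, B)
# Reachability matrix R_2(A, B) = [AB  B]; 2x2 and invertible here.
R2 = [[AB[0], B[0]], [AB[1], B[1]]]
det = R2[0][0] * R2[1][1] - R2[0][1] * R2[1][0]
R2inv = [[ R2[1][1] / det, -R2[0][1] / det],
         [-R2[1][0] / det,  R2[0][0] / det]]

def policy(qx):
    # (u_0, u_1) = -R_2(A,B)^{-1} A^2 q(x): cancels A^2 q(x) over two steps.
    u = matvec(R2inv, matvec(A2, qx))
    return [-u[0], -u[1]]
```

With $x_{0}=(10\;\;10)^{\mathsf{T}}$ and $w_{t}\equiv 0$, one checks that $x_{2}=A^{2}x_{0}+ABu_{0}+Bu_{1}=A^{2}(x_{0}-\mathfrak{q}(x_{0}))=0$.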
## References
* [1] R. W. Brockett and D. Liberzon, Quantized feedback stabilization of linear systems, IEEE Transactions on Automatic Control, 45 (2000), pp. 1279–1289.
* [2] D. F. Delchamps, Stabilizing a linear system with quantized state feedback, IEEE Transactions on Automatic Control, 35 (1990), pp. 916–924.
* [3] N. Elia and S. Mitter, Stabilization of linear systems with limited information, Automatic Control, IEEE Transactions on, 46 (2002), pp. 1384–1400.
* [4] G. N. Nair, F. Fagnani, S. Zampieri, and R. J. Evans, Feedback control under data rate constraints: An overview, Proceedings of the IEEE, 95 (2007), pp. 108–137.
* [5] R. Pemantle and J. S. Rosenthal, Moment conditions for a sequence with negative drift to be uniformly bounded in $L^{r}$, Stochastic Processes and their Applications, 82 (1999), pp. 143–155.
* [6] F. Ramponi, D. Chatterjee, A. Milias-Argeitis, P. Hokayem, and J. Lygeros, Attaining mean square boundedness of a marginally stable stochastic linear system with a bounded control input, IEEE Transactions on Automatic Control, 55 (2010), pp. 2414–2418.
* [7] S. Tatikonda, A. Sahai, and S. Mitter, Stochastic linear control over a communication channel, IEEE Transactions on Automatic Control, 49 (2004), pp. 1549–1561.
* [8] S. Yüksel, Stochastic stabilization of noisy linear systems with fixed-rate limited feedback, IEEE Transactions on Automatic Control, 55 (2010), pp. 2847–2853.
|
arxiv-papers
| 2011-03-25T12:34:14 |
2024-09-04T02:49:17.928628
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Debasish Chatterjee, Peter Hokayem, Federico Ramponi, and John Lygeros",
"submitter": "Debasish Chatterjee",
"url": "https://arxiv.org/abs/1103.4959"
}
|
1103.5062
|
# Discrete Simulation of Power Law Noise
Neil Ashby
University of Colorado, Boulder CO
email: ashby@boulder.nist.gov
Affiliate, NIST, Boulder, CO
###### Abstract
A method for simulating power law noise in clocks and oscillators is presented
based on modification of the spectrum of white phase noise, then Fourier
transforming to the time domain. Symmetric real matrices are introduced whose
traces–the sums of their eigenvalues–are equal to the Allan variances, in
overlapping or non-overlapping forms, as well as for the corresponding forms
of the modified Allan variance. Diagonalization of these matrices leads to
expressions for the probability distributions for observing a variance at an
arbitrary value of the sampling or averaging interval $\tau$, and hence for
estimating confidence in the measurements. A number of applications are
presented for the common power-law noises.
## 1 Introduction
The characterization of clock performance by means of average measures such as
Allan variance, Hadamard variance, Theo, and modified forms of such variances,
is widely applied within the time and frequency community as well as by most
clock and oscillator fabricators. Such variances are measured by comparing the
times $t_{k}$ on a device under test, with the times at regular intervals
$k\tau_{0}$ on a perfect reference, or at least on a better reference.
Imperfections in performance of the clock under test are studied by analyzing
noise in the time deviation sequence $x_{k}=t_{k}-k\tau_{0}$, or the
fractional frequency difference during the sampling interval $\tau=s\tau_{0}$:
$\Delta_{k,s}^{(1)}=(x_{k+s}-x_{k})/(s\tau_{0}).$ (1)
The frequency spectrum of fractional frequency differences can usually be
adequately characterized by linear superposition of a small set of types of
power law noise. The frequency spectrum of the fractional frequency
differences of a particular noise type is given by a one-sided spectral
density
$S_{y}(f)=h_{\alpha}f^{\alpha},\quad f>0.$ (2)
(The units of $S_{y}(f)$ are Hz-1.) For the common power-law noise types,
$\alpha$ varies in integral steps from +2 down to -2 corresponding
respectively to white phase modulation, flicker phase modulation, white
frequency modulation, flicker frequency modulation, and random walk of
frequency.
Simulation of clock noise can be extremely useful in testing software
algorithms that use various windowing functions and Fourier transform
algorithms to extract spectral density and stability information from measured
time deviations, and especially in predicting the probability for observing a
particular value of some clock stability variance. This paper develops a
simple simulation method for a time difference sequence that guarantees the
spectral density will have some chosen average power law dependence.
Expressions for the common variances and their modified forms are derived here
that agree with expressions found in the literature, with some exceptions.
This approach also leads to predictions of probabilities for observing a
variance of a particular type at particular values of the sampling time. A
broad class of probability functions naturally arises. These only rarely
correspond to chi-squared distributions.
This paper is organized as follows. Sect. 2 introduces the basic simulation
method, and Sect. 3 applies the method to the overlapping Allan variance. Sect.
4 shows how diagonalization of the averaged squared second-difference
operator, applied to the simulated time series, leads to expressions for the
probability of observing a value of the variance for some chosen value of the
sampling or averaging time. Expressions for the mean squared deviation of the
mean of the variance itself are derived in Sect. 6. The approach is used to
discuss the modified Allan variance in Sect. 7, and the non-overlapping form
of the Allan variance is treated in Sect. 8. Appendix 1 discusses evaluation
of a contour integral for the probability, first introduced in Sect. 4, in the
general case.
## 2 Discrete Time Series
We imagine the noise amplitudes at Fourier frequencies $f_{m}$ are generated
by a set of $N$ normally distributed random complex numbers $w_{n}$ having
mean zero and variance $\sigma$, that would by themselves generate a simulated
spectrum for white phase noise. These random numbers are divided by a function
of the frequency, $|f_{m}|^{\lambda}$, producing a spectral density that has
the desired frequency characteristics. For ordinary power law noise, the
exponent $\lambda$ is a multiple of $1/2$, but it could be anything. The
frequency noise is then transformed to the time domain, producing a time
series with the statistical properties of the selected power law noise. The
Allan variance, Modified Allan variance, Hadamard Variance, variances with
dead time, and other quantities of interest can be calculated using either the
frequency noise or the time series.
In the present paper we discuss applications to calculation of various
versions of the Allan variance. Of considerable interest are results for the
probability of observing a value of the Allan variance for particular values
of the sampling time $\tau$ and time series length $N$. The derivations in
this paper are theoretical predictions. A natural frequency cutoff occurs at
$f_{h}=1/(2\tau_{0})$, where $\tau_{0}$ is the time between successive time
deviations. This number is not necessarily related in an obvious way to some
hardware bandwidth. The measurements are assumed to be made at the times
$k\tau_{0}$, and the time errors or residuals relative to the reference clock
are denoted by $x_{k}$. The averaging or sampling time is denoted by
$\tau=s\tau_{0}$, where $s$ is an integer. The total length of time of the
entire measurement series is $T=N\tau_{0}$. The possible frequencies that
occur in the Fourier transform of the time residuals are
$f_{m}=\frac{m}{N\tau_{0}}\,,\quad\quad-\frac{N}{2}+1\leq m\leq\frac{N}{2}\,.$
(3)
Noise Sequences. In order that a set of noise amplitudes in the frequency
domain represent a real series in the time domain, the amplitudes must satisfy
the reality condition
$w_{-m}=(w_{m})^{\ast}.$ (4)
$N$ random numbers are placed in $N/2$ real and $N/2$ imaginary parts of the
positive and negative frequency spectrum. Thus if $w_{m}=u_{m}+iv_{m}$ where
$u_{m}$ and $v_{m}$ are independent uncorrelated random numbers, then
$(w_{m})^{\ast}=u_{m}-iv_{m}$. Since the frequencies $\pm 1/(2\tau_{0})$
represent essentially the same contribution, $v_{N/2}$ will not appear. We
shall assume the variance of the noise amplitudes is such that
$\big{<}(w_{m})^{*}w_{n}\big{>}=\big{<}u^{2}+v^{2}\big{>}\delta_{mn}=2\sigma^{2}\delta_{mn};\quad
m\neq 0,N/2.$ (5)
Also, $\big{<}w_{m}^{2}\big{>}=\big{<}u^{2}-v^{2}+2iuv\big{>}=0$ for $m\neq N/2$. The index $m$
runs from $-N/2+1$ to $N/2$. In order to avoid division by zero, we shall
always assume that the Fourier amplitude corresponding to zero frequency
vanishes. This only means that the average of the time residuals in the time
series will be zero, and has no effect on any variance that involves time
differences.
We perform a discrete Fourier transform of the frequency noise and obtain the
amplitude of the $k^{th}$ member of the time series for white PM:
$x_{k}=\frac{\tau_{0}^{2}}{\sqrt{N}}\sum_{m=-N/2+1}^{N/2}e^{-\frac{2\pi
imk}{N}}w_{m}.$ (6)
The factor $\tau_{0}^{2}$ is inserted so that the time series will have the
physical dimensions of time if $w_{m}$ has the dimensions of frequency. We
then multiply each frequency component by $|f_{0}/f_{m}|^{\lambda}$. This will
generate the desired power-law form of the spectral density. The time series
will be represented by
$X_{k}=\frac{\tau_{0}^{2}}{\sqrt{N}}\sum_{m=-N/2+1}^{N/2}\frac{|f_{0}|^{\lambda}}{|f_{m}|^{\lambda}}e^{-\frac{2\pi
imk}{N}}w_{m}.$ (7)
The constant factor $|f_{0}|^{\lambda}$ has been inserted to maintain the
physical units of the time series. The noise level is determined by $f_{0}$.
For this to correspond to commonly used expressions for the one-sided spectral
density, Eq. (2), we shall assume that
$\frac{\tau_{0}^{2}|f_{0}|^{\lambda}}{\sqrt{N}}=\sqrt{\frac{h_{\alpha}}{16\pi^{2}\sigma^{2}(N\tau_{0})}}\,.$
(8)
We shall show that if $2\lambda=2-\alpha$ the correct average spectral density
is obtained. The simulated time series is
$X_{k}=\sqrt{\frac{h_{\alpha}}{16\pi^{2}\sigma^{2}(N\tau_{0})}}\sum_{m}\frac{e^{-\frac{2\pi
imk}{N}}}{|f_{m}|^{\lambda}}w_{m}\,.$ (9)
The average (two-sided) spectral density of the time residuals is obtained
from a single term in Eq. (9):
$s_{x}(f_{m})=\frac{h_{\alpha}}{16\pi^{2}\sigma^{2}(N\tau_{0})f_{m}^{2\lambda}}\bigg{<}\frac{w_{m}w_{m}^{*}}{\Delta
f}\bigg{>}=\frac{h_{\alpha}}{8\pi^{2}f_{m}^{2\lambda}}$ (10)
where $\Delta f=1/(N\tau_{0})$ is the spacing between successive allowed
frequencies. The average (two-sided) spectral density of fractional frequency
fluctuations is given by the well-known relation
$s_{y}(f)=(2\pi f)^{2}s_{x}(f)\,,$ (11)
and the one-sided spectral density is
$S_{y}(f)=\cases{0,&$f<0$;\cr 2s_{y}(f)=h_{\alpha}f^{\alpha},&$f>0$\,,\cr}$
(12)
where $2\lambda=2-\alpha$.
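The recipe of this section can be sketched directly. The toy implementation below uses a plain $O(N^{2})$ DFT and Gaussian amplitudes with the reality condition of Eq. (4) imposed by hand; the overall normalization constants of Eqs. (8)-(9) are omitted, and the function name and defaults are our own:

```python
import cmath
import math
import random

def simulate_power_law(N, alpha, tau0=1.0, sigma=1.0, seed=0):
    # N must be even.  Draw white complex Gaussian amplitudes w_m, impose
    # w_{-m} = conj(w_m), scale by |f_m|^(-lambda) with 2*lambda = 2 - alpha,
    # and transform to the time domain (normalization constants omitted).
    rng = random.Random(seed)
    lam = (2 - alpha) / 2
    w = {0: 0.0}                     # zero-frequency amplitude set to zero
    for m in range(1, N // 2):
        w[m] = complex(rng.gauss(0, sigma), rng.gauss(0, sigma))
        w[-m] = w[m].conjugate()
    w[N // 2] = complex(rng.gauss(0, sigma), 0.0)   # Nyquist term is real
    x = []
    for k in range(N):
        acc = 0j
        for m in range(-N // 2 + 1, N // 2 + 1):
            if m == 0:
                continue
            fm = abs(m) / (N * tau0)
            acc += w[m] / fm ** lam * cmath.exp(-2j * math.pi * m * k / N)
        x.append(acc / math.sqrt(N))
    return x
```

The reality condition makes the imaginary parts of the returned series vanish up to roundoff, as required of a series of time residuals.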
## 3 Overlapping Allan Variance
Consider the second-difference operator defined by
$\Delta_{j,s}^{(2)}=\frac{1}{\sqrt{2\tau^{2}}}(X_{j+2s}-2X_{j+s}+X_{j}).$ (13)
The fully overlapping Allan variance is formed by averaging the square of this
quantity over all possible values of $j$ from 1 to $N-2s$. Thus
$\sigma_{y}^{2}(\tau)=\bigg{<}\frac{1}{N-2s}\sum_{j=1}^{N-2s}(\Delta_{j,s}^{(2)})^{2}\bigg{>}.$
(14)
In terms of the time series, Eq. (9), the second difference can be reduced
using elementary trigonometric identities:
$\displaystyle\Delta_{j,s}^{(2)}=\sqrt{\frac{h_{\alpha}}{32\pi^{2}\tau^{2}\sigma^{2}(N\tau_{0})}}\times\sum_{m}\frac{w_{m}}{|f_{m}|^{\lambda}}\bigg{(}e^{-\frac{2\pi
im(j+2s)}{N}}-2e^{-\frac{2\pi im(j+s)}{N}}+e^{-\frac{2\pi im(j)}{N}}\bigg{)}$
$\displaystyle=-\sqrt{\frac{h_{\alpha}}{2\pi^{2}\tau^{2}\sigma^{2}(N\tau_{0})}}\sum_{m}\frac{w_{m}}{|f_{m}|^{\lambda}}e^{-\frac{2\pi
im(j+s)}{N}}\bigg{(}\sin\frac{\pi ms}{N}\bigg{)}^{2}\,.$ (15)
We form the averaged square of $\Delta_{j,s}^{(2)}$ by multiplying the real
quantity times its complex conjugate, then averaging over all possible values
of $j$.
$\displaystyle\sigma_{y}^{2}(\tau)=\frac{h_{\alpha}}{2\pi^{2}\tau^{2}(N\tau_{0})(N-2s)}\sum_{m,n,j}\bigg{<}\frac{w_{m}w_{n}^{*}}{\sigma^{2}}\bigg{>}\frac{\bigg{(}\sin\big{(}\frac{\pi
ms}{N}\big{)}\sin\big{(}\frac{\pi
ns}{N}\big{)}\bigg{)}^{2}}{|f_{m}f_{n}|^{\lambda}}\times e^{-\frac{2\pi i(m-n)(j+s)}{N}}\,.$ (16)
The average of the product of random variables only contributes $2\sigma^{2}$
when $m=n$ (see Eq. (5)), except when $m=n=N/2$, where
$\big{<}|w_{N/2}|^{2}\big{>}=\sigma^{2}$. The Allan variance reduces to
$\sigma_{y}^{2}(\tau)=\frac{h_{\alpha}}{\pi^{2}\tau^{2}(N\tau_{0})}\bigg{(}\sum_{m}\frac{\bigg{(}\sin\big{(}\frac{\pi
ms}{N}\big{)}\bigg{)}^{4}}{|f_{m}|^{2\lambda}}+\frac{1}{2}\frac{\bigg{(}\sin\big{(}\frac{\pi
s}{2}\big{)}\bigg{)}^{4}}{\big{(}f_{N/2}\big{)}^{2\lambda}}\bigg{)}\,,$ (17)
since every term in the sum over $j$ contributes the same amount. The zero
frequency term is excluded from the sum. For convenience we introduce the
abbreviation
$K=\frac{2h_{\alpha}}{\pi^{2}\tau^{2}(N\tau_{0})}\,.$ (18)
If we sum over positive frequencies only, a factor of 2 comes in except for
the most positive frequency and so
$\sigma_{y}^{2}(\tau)=K\bigg{(}\sum_{m>0}^{N/2-1}\frac{\bigg{(}\sin\big{(}\frac{\pi
ms}{N}\big{)}\bigg{)}^{4}}{f_{m}^{2\lambda}}+\frac{1}{4}\frac{\bigg{(}\sin\frac{\pi
s}{2}\bigg{)}^{4}}{(f_{N/2})^{2\lambda}}\bigg{)}.$ (19)
If the frequencies are spaced densely enough to pass from the sum to an
integral, then $\Delta f=(N\tau_{0})^{-1}$ and
$\frac{1}{N\tau_{0}}\sum_{m}F(|f_{m}|)\rightarrow\int F(|f|)\,df$ (20)
and we obtain the well-known result[6]
$\sigma_{y}^{2}(\tau)=2\int_{0}^{f_{h}}\frac{S_{y}(f)\,df}{\pi^{2}\tau^{2}f^{2}}\bigg{(}\sin(\pi
f\tau)\bigg{)}^{4}\,.$ (21)
Similar arguments lead to known expressions for the non-overlapping version of
the Allan variance as well as for the modified Allan variance. These will be
discussed in later sections.
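The fully overlapping estimator of Eqs. (13)-(14) is straightforward to transcribe; the sketch below uses 0-based indexing (so $j$ runs over $0,\ldots,N-2s-1$) and is meant as a reference implementation, not optimized code:

```python
import math

def overlapping_avar(x, s, tau0=1.0):
    # Overlapping Allan variance: average the squared second differences
    # Delta_{j,s}^{(2)} = (x_{j+2s} - 2 x_{j+s} + x_j) / sqrt(2 tau^2)
    # over all admissible j.
    N = len(x)
    tau = s * tau0
    acc = 0.0
    for j in range(N - 2 * s):
        d2 = (x[j + 2 * s] - 2.0 * x[j + s] + x[j]) / math.sqrt(2.0 * tau * tau)
        acc += d2 * d2
    return acc / (N - 2 * s)
```

A pure linear ramp in the time residuals (a constant frequency offset) has vanishing second differences, so the estimator returns exactly zero on it.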
## 4 Confidence Estimates
In the present section we shall develop expressions for the probability of
observing a particular value $A_{o}$ for the overlapping Allan variance in a
single measurement, or in a single simulation run. $A_{o}$ is a random
variable representing a possible value of the overlapping variance. We use a
subscript “o” to denote the completely overlapping case. To save writing, we
introduce the following abbreviations:
$\displaystyle F_{m}^{j}=\frac{\bigg{(}\sin\bigg{(}\frac{\pi
ms}{N}\bigg{)}\bigg{)}^{2}}{|f_{m}|^{\lambda}}\cos\bigg{(}\frac{2\pi
m(j+s)}{N}{}\bigg{)}$ $\displaystyle
G_{m}^{j}=\frac{\bigg{(}\sin\bigg{(}\frac{\pi
ms}{N}\bigg{)}\bigg{)}^{2}}{|f_{m}|^{\lambda}}\sin\bigg{(}\frac{2\pi
m(j+s)}{N}{}\bigg{)}$ (22)
The dependence on $s$ is suppressed, but is to be understood. We write the
second difference in terms of a sum over positive frequencies only, keeping in
mind that the most positive and the most negative frequencies only contribute
a single term since $\sin(\pi(j+s))=0$. The imaginary contributions cancel,
and we obtain
$\Delta_{j,s}^{(2)}=\sqrt{K}\sum_{m>0}\big{(}F_{m}^{j}\frac{u_{m}}{\sigma}+G_{m}^{j}\frac{v_{m}}{\sigma}\big{)}\,.$
(23)
There is no term in $v_{N/2}$. It is easy to see that the overlapping Allan
variance is given by
$\sigma_{y}^{2}(\tau)=\frac{K}{N-2s}\sum_{j}\sum_{m>0}\bigg{(}(F_{m}^{j})^{2}+(G_{m}^{j})^{2}\bigg{)}\,.$
(24)
To compute the probability that a particular value $A_{o}$ is observed for the
Allan variance, given all the possible values that the random variables
$u_{1},v_{1},...u_{N/2}$ can have, we form the integral
$P(A_{o})=\int\delta\bigg{(}A_{o}-\frac{1}{N-2s}\sum_{j}\big{(}\Delta_{j,s}^{(2)}\big{)}^{2}\bigg{)}\prod_{m>0}\bigg{(}e^{-\frac{u_{m}^{2}+v_{m}^{2}}{2\sigma^{2}}}\frac{du_{m}dv_{m}}{2\pi\sigma^{2}}\bigg{)}\,.$
(25)
The delta function constrains the averaged second difference to the specific
value $A_{o}$ while the random variables
$u_{1},v_{1},...u_{m},v_{m},...u_{N/2}$ range over their (normally
distributed) values. There is no integral for $v_{N/2}$. Inspecting this
probability and Eq. (23) for the second difference indicates that we can
dispense with the factors of $\sigma^{-1}$ and work with normally distributed
random variables having variance unity. Henceforth we set $\sigma=1$.
The exponent involving the random variables is a quadratic form that can be
written in matrix form by introducing the $(N-1)$-dimensional column vector $U$
(the zero frequency component is excluded)
$U^{T}=[u_{1}\,v_{1}\,...u_{m}\,v_{m},...v_{N/2-1},u_{N/2}]\,.$ (26)
Then
$\frac{1}{2}\sum_{m>0}(u_{m}^{2}+v_{m}^{2})=\frac{1}{2}U^{T}U=\frac{1}{2}U^{T}{\bf
1}U,$ (27)
where ${\bf 1}$ represents the unit matrix. The delta-function in Eq. (25) can
be written in exponential form by introducing one of its well-known
representations, an integral over all angular frequencies $\omega$:[5]
$P(A_{o})=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}e^{i\omega\big{(}A_{o}-\frac{1}{N-2s}\sum_{j}\big{(}\Delta_{j,s}^{(2)}\big{)}^{2}\big{)}}\prod_{m>0}\bigg{(}e^{-\frac{u_{m}^{2}+v_{m}^{2}}{2\sigma^{2}}}\frac{du_{m}dv_{m}}{2\pi\sigma^{2}}\bigg{)}\,.$
(28)
The contour of integration goes along the real axis in the complex $\omega$
plane.
The squared second difference is a complicated quadratic form in the random
variables $u_{1},v_{1},...u_{m},v_{m},...u_{N/2}$. If this quadratic form
could be diagonalized without materially changing the other quadratic terms in
the exponent, then the integrals could be performed in spite of the imaginary
factor $i$ in the exponent. To accomplish this we introduce a column vector
$C^{j}$ that depends on $j,m,s,N$ and whose transpose is
$\displaystyle(C^{j})^{T}=[F_{1}^{j},G_{1}^{j},...F_{m}^{j},G_{m}^{j},...G_{N/2-1}^{j},F_{N/2}^{j}]\,.$
(29)
Dependence on $s$ is not written explicitly but is understood. The column
vector has $N-1$ real components. It contains all the dependence of the second
difference on frequency and on the particular power law noise. We use indices
$\\{m,n\\}$ as matrix (frequency) indices. The (scalar) second difference
operator can be written very compactly as a matrix product
$\Delta_{j,s}^{(2)}=\sqrt{K}(C^{j})^{T}U=\sqrt{K}U^{T}C^{j}.$ (30)
Then
$\frac{1}{N-2s}\sum_{j}\bigg{(}\Delta_{j,s}^{(2)}\bigg{)}^{2}=U^{T}\bigg{(}\frac{K}{N-2s}\sum_{j}C^{j}(C^{j})^{T}\bigg{)}U.$
(31)
The matrix
$H_{o}=\frac{K}{N-2s}\sum_{j}C^{j}(C^{j})^{T}$ (32)
is real and symmetric. $H_{o}$ is also Hermitian and therefore has real
eigenvalues. A real symmetric matrix can be diagonalized by an orthogonal
transformation,[1, 2] which we denote by $O$. Although we shall not need to
determine this orthogonal transformation explicitly, it could be found by
first finding the eigenvalues $\epsilon$ and eigenvectors $\psi_{\epsilon}$ of
$H_{o}$, by solving the equation
$H_{o}\psi_{\epsilon}=\epsilon\psi_{\epsilon}\,.$ (33)
The transformation $O$ is a matrix of dimension $(N-1)\times(N-1)$ consisting
of the components of the normalized eigenvectors placed in columns. Then
$H_{o}O=OE\,,$ (34)
where $E$ is a diagonal matrix with entries equal to the eigenvalues of the
matrix $H_{o}$. Then since the transpose of an orthogonal matrix is the
inverse of the matrix,
$O^{T}H_{o}O=E\,.$ (35)
The matrix $H_{o}$ is thus diagonalized, at the cost of introducing a linear
transformation of the random variables:
$\frac{K}{N-2s}\sum_{j}\big{(}\Delta_{j,s}^{(2)}\big{)}^{2}=U^{T}H_{o}U=U^{T}OO^{T}H_{o}OO^{T}U=(U^{T}O)E(O^{T}U)\,.$
(36)
We introduce $N-1$ new random variables by means of the transformation:
$V=O^{T}U\,.$ (37)
Then the term in the exponent representing the Gaussian distributions is
$-\frac{1}{2}U^{T}{\bf 1}U=-\frac{1}{2}U^{T}O1O^{T}U=-\frac{1}{2}V^{T}{\bf
1}V=-\frac{1}{2}\sum_{n=1}^{N-1}V_{n}^{2}\,.$ (38)
The Gaussian distributions thus remain unchanged in form.
Further, the determinant of an orthogonal matrix is $\pm 1$, because the
transpose of the matrix is also the inverse:
$\det(O^{-1}O)=1=\det(O^{T}O)=\big{(}\det(O)\big{)}^{2}.$ (39)
Therefore, changes in the volume element are simple since the volume element
for the new variables is
$\displaystyle dV_{1}dV_{2}...dV_{N-1}=\bigg{|}\det\bigg{(}\frac{\partial
V_{m}}{\partial U_{n}}\bigg{)}\bigg{|}dU_{1}dU_{2}...dU_{N-1}$
$\displaystyle=|\det(O)|dU_{1}dU_{2}...dU_{N-1}$
$\displaystyle=dU_{1}dU_{2}...dU_{N-1}.$ (40)
After completing the diagonalization,
$\frac{1}{N-2s}\sum_{j}\big{(}\Delta_{j,s}^{(2)}\big{)}^{2}=\sum_{i}\epsilon_{i}V_{i}^{2}\,.$
(41)
The probability is therefore
$P(A_{o})=\int\frac{d\omega}{2\pi}e^{i\omega\big{(}A_{o}-\sum_{k}\epsilon_{k}V_{k}^{2}\big{)}}\prod_{i}\bigg{(}e^{-\frac{V_{i}^{2}}{2}}\frac{dV_{i}}{\sqrt{2\pi}}\bigg{)}\,.$
(42)
An eigenvalue of zero will not contribute in any way to this probability since
the random variable corresponding to a zero eigenvalue just integrates out.
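The diagonalization of Eqs. (32)-(41) can be illustrated in a few lines of Python. Random vectors stand in for the $C^{j}$ of Eq. (29), and all dimensions are illustrative:

```python
import numpy as np

# Sketch of Eqs. (32)-(41): build a matrix of the factorized form
# H = (1/J) * sum_j c_j c_j^T from random vectors c_j (stand-ins for C^j),
# then verify that U^T H U = sum_i eps_i V_i^2 with V = O^T U.
rng = np.random.default_rng(0)
N1, J = 7, 4                      # N-1 components, J = N-2s terms in the sum
C = rng.normal(size=(J, N1))      # row j plays the role of (C^j)^T
H = C.T @ C / J                   # real, symmetric, positive semi-definite
eps, O = np.linalg.eigh(H)        # eigenvalues and the orthogonal matrix O
U = rng.normal(size=N1)           # a sample of the Gaussian vector of Eq. (26)
V = O.T @ U                       # the transformed variables of Eq. (37)
print(np.isclose(U @ H @ U, np.sum(eps * V**2)))   # True
print(np.sum(eps > 1e-12))        # only J non-zero eigenvalues (rank of H)
```

The second printed line anticipates the rank counting discussed below: a sum of $J$ outer products has at most $J$ non-zero eigenvalues.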
Let the eigenvalue $\epsilon_{i}$ have multiplicity $\mu_{i}$, by which is
meant that the eigenvalue $\epsilon_{i}$ is repeated $\mu_{i}$ times.
Integration over the random variables then gives a useful form for the
probability:
$P(A_{o})=\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}\frac{e^{i\omega
A_{o}}}{\prod_{i}(1+2i\epsilon_{i}\omega)^{\mu_{i}/2}}\,.$ (43)
Finally the contour integral may be deformed and closed in the upper half
complex plane where it encloses the singularities of the integrand. This is
discussed in Appendix 1. Knowing the probability, one may integrate with
respect to the variance to find the cumulative distribution, and then find the
limits corresponding to a 50% probability of observing the variance.
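Equation (43) is the Fourier inversion of the characteristic function of $A=\sum_{i}\epsilon_{i}V_{i}^{2}$, namely $E[e^{-i\omega A}]=\prod_{i}(1+2i\epsilon_{i}\omega)^{-1/2}$. This identity is easy to check by Monte Carlo; the eigenvalues and the frequency below are arbitrary stand-ins:

```python
import numpy as np

# Monte Carlo check of the characteristic function underlying Eq. (43):
# for A = sum_i eps_i V_i^2 with unit-variance Gaussians V_i,
# E[exp(-i w A)] = prod_i (1 + 2i eps_i w)^(-1/2).
rng = np.random.default_rng(1)
eps = np.array([0.7, 0.3, 0.1])
w = 0.8
A = (eps * rng.normal(size=(500_000, eps.size))**2).sum(axis=1)
mc = np.exp(-1j * w * A).mean()
exact = np.prod((1 + 2j * eps * w)**-0.5)
print(abs(mc - exact))   # small residual, of Monte Carlo size
```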
Properties of the eigenvalues. First, it is easily checked that the
probability is correctly normalized by integrating over all $A_{o}$ and using
properties of the delta-function:
$\displaystyle\int
P(A_{o})dA_{o}=\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}\frac{\int
e^{i\omega
A_{o}}dA_{o}}{\prod_{i}(1+2i\epsilon_{i}\omega)^{\mu_{i}/2}}=\int_{-\infty}^{+\infty}\frac{\delta(\omega)d\omega}{\prod_{i}(1+2i\epsilon_{i}\omega)^{\mu_{i}/2}}$
$\displaystyle=\int_{-\infty}^{+\infty}d\omega\delta(\omega)=1\,.$ (44)
Second, the eigenvalues are all either positive or zero. The eigenvalue
equation for the eigenvector labeled by $\epsilon$ is:
$\frac{K}{N-2s}\sum_{j}C^{j}(C^{j})^{T}\psi_{\epsilon}=\epsilon\psi_{\epsilon}\,.$
(45)
Multiply on the left by $\psi_{\epsilon}^{T}$; assuming the vector has been
normalized, we obtain
$\epsilon=\frac{K}{N-2s}\sum_{j}\bigg{(}(C^{j})^{T}\psi_{\epsilon}\bigg{)}^{2}\geq
0\,.$ (46)
Thus every eigenvalue must be positive or zero.
Next let us calculate the trace of $H_{o}$. Since the trace is not changed by
an orthogonal transformation,
$\displaystyle{\rm Trace}(O^{T}H_{o}O)={\rm Trace}(H_{o}OO^{T})={\rm
Trace}(H_{o}OO^{-1})$ $\displaystyle={\rm
Trace}(H_{o})=\sum_{i}\epsilon_{i}\,.$ (47)
The sum of the diagonal elements of $H_{o}$ equals the sum of the eigenvalues
of $H_{o}$. If we then explicitly evaluate the sum of the diagonal elements of
$H_{o}$ we find
$\displaystyle\sum_{i}\epsilon_{i}=\frac{K}{N-2s}\sum_{j}{\rm
Trace}\big{(}C^{j}(C^{j})^{T})$
$\displaystyle=\frac{K}{N-2s}\sum_{j}\sum_{m>0}\bigg{(}(F_{m}^{j}\big{)}^{2}+\big{(}G_{m}^{j})^{2})\bigg{)}$
$\displaystyle=K\sum_{m>0}\frac{\bigg{(}\sin\frac{\pi ms}{N}\bigg{)}^{4}}{f_{m}^{2\lambda}}=\sigma_{y}^{2}(\tau)\,.$ (48)
Every term labeled by $j$ contributes the same amount. We obtain the useful
result that the overlapping Allan variance is equal to the sum of the
eigenvalues of the matrix $H_{o}$. Similar results can be established for many
of the other types of variances.
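The trace identity of Eq. (48) can be verified directly by assembling the $C^{j}$ vectors from the $F$ and $G$ of Eq. (22). The parameters below ($N$, $s$, $\lambda$, $\tau_{0}$, and $K=1$) are illustrative; the special normalization of the $m=N/2$ term is ignored here, which is harmless since that term vanishes for even $s$:

```python
import numpy as np

# Direct check of Eq. (48): trace(H_o) = K * sum_m sin^4(pi m s/N) / f_m^(2*lambda),
# with every j contributing the same amount to the trace.
N, s, lam, tau0, K = 32, 4, 1.0, 1.0, 1.0   # lambda = 1 corresponds to white FM
m = np.arange(1, N // 2 + 1)                # positive frequencies m = 1 .. N/2
fm = m / (N * tau0)
amp = np.sin(np.pi * m * s / N)**2 / fm**lam
rows = []
for j in range(1, N - 2 * s + 1):           # j = 1 .. N-2s, as in Eq. (32)
    phase = 2 * np.pi * m * (j + s) / N
    rows.append(np.concatenate([amp * np.cos(phase), amp * np.sin(phase)]))
C = np.array(rows)
H = K / (N - 2 * s) * C.T @ C               # the matrix H_o of Eq. (32)
direct = K * np.sum(np.sin(np.pi * m * s / N)**4 / fm**(2 * lam))
print(np.isclose(np.trace(H), direct))      # True
```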
Distribution of eigenvalues. The eigenvalue equation
$H_{o}\psi_{\mu}=\epsilon\psi_{\mu}$ produces many zero eigenvalues,
especially when $\tau$ is large. The dimension of the matrix $H_{o}$ is
therefore much larger than necessary. Numerical calculation indicates that for
the completely overlapping Allan variance, the eigenvalue equation has a total
of $N-1$ eigenvalues, but only $N-2s$ non-zero eigenvalues; the number of
significant eigenvalues is in fact equal to the number of terms in the sum
over $j$ in the equation:
$\frac{K}{N-2s}\sum_{n}\sum_{j=1}^{N-2s}(C_{m})^{j}(C_{n})^{j}\psi_{n}=\epsilon\psi_{m}\,.$
(49)
The factorized form of $H_{o}$, that arises on squaring a difference operator,
permits the reduction of the size of the matrix that is to be diagonalized. We
introduce the quantities
$\phi_{\mu}^{j}=\sum_{n}(C_{n})^{j}\psi_{n\mu}\,.$ (50)
We are using the Greek index $\mu$ to label a non-zero eigenvalue and the
index $\nu$ to label a zero eigenvalue. The eigenvalue equation becomes
$\frac{K}{N-2s}\sum_{j}(C_{m})^{j}\phi_{\mu}^{j}=\epsilon\psi_{m\mu}\,.$ (51)
Multiply by $(C_{m})^{l}$ and sum over the frequency index $m$. Then
$\frac{K}{N-2s}\sum_{m,j}(C_{m})^{l}(C_{m})^{j}\phi_{\mu}^{j}=\epsilon\phi_{\mu}^{l}\,.$
(52)
This is an eigenvalue equation with reduced dimension $N-2s$ rather than
$N-1$, since the number of possible values of $j$ is $N-2s$. The eigenvalue
equation can be written in terms of a reduced matrix $H_{red}$, given by
$(H_{red})^{lj}=\frac{K}{N-2s}\sum_{m}(C_{m})^{l}(C_{m})^{j}\,.$ (53)
The indices $l,j$ run from 1 to $N-2s$. Eigenvalues generated by Eq. (52) are
all non-zero. To prove this, multiply Eq. (52) by $\phi_{\mu}^{l}$ and sum
over $l$. We obtain
$\frac{K}{N-2s}\sum_{m}\bigg{(}\sum_{l}(C_{m})^{l}\phi_{\mu}^{l}\bigg{)}^{2}=\epsilon\sum_{l}\big{(}\phi_{\mu}^{l}\big{)}^{2}\,.$
(54)
The eigenvalue cannot be zero unless
$\sum_{l}(C_{m})^{l}\phi_{\mu}^{l}=0$ (55)
for every $m$. The number of such conditions, however, is larger than the number
$N-2s$ of variables, so the only way this can be satisfied is if
$\phi_{\mu}^{l}=0$, a trivial solution. Therefore to obtain normalizable
eigenvectors from Eq. (52), the corresponding eigenvalues must all be
positive. This is true even though some of these conditions may be trivially satisfied if the factor $\sin(\pi ms/N)$ vanishes, which happens sometimes
when
$ms=MN$ (56)
where $M$ is an integer. Every time a solution of Eq. (56) occurs, two equations
relating components of $\phi_{\mu}^{l}$ are lost. Suppose there were $n$
solutions to Eq. (56); then the number of conditions lost would be $2n$. The
number of variables is $N-2s$ and the number of conditions left in Eq. (55)
would be $N-1-2n$. The excess of conditions over variables is thus
$N-1-2n-(N-2s)=2(s-n)-1\,.$ (57)
In Appendix 2 we prove that under all circumstances $2(s-n)-1>0$.
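The dimension reduction of Eqs. (49)-(53) rests on the fact that the non-zero spectrum of a factorized matrix is preserved when the factors are multiplied in the opposite order. A small sketch with random stand-ins for the $C^{j}$:

```python
import numpy as np

# For H = (1/J) sum_j c_j c_j^T of size (N-1) x (N-1), the J x J reduced
# matrix with entries (1/J) c_l . c_j (Eq. (53)) has exactly the same
# non-zero eigenvalues as H.
rng = np.random.default_rng(2)
N1, J = 9, 3
C = rng.normal(size=(J, N1))
H = C.T @ C / J                              # (N-1) x (N-1), rank at most J
H_red = C @ C.T / J                          # the J x J matrix of Eq. (53)
big = np.sort(np.linalg.eigvalsh(H))[-J:]    # the J non-zero eigenvalues of H
print(np.allclose(big, np.sort(np.linalg.eigvalsh(H_red))))   # True
```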
We temporarily drop the subscript $o$ since the remainder of the results in
this section are valid for any of the variances. If the eigenvalues are found
and the appropriate matrix is diagonalized, we may compute the probability for
observing a value of the overlapping variance, denoted by the random variable
$A$, by
$\displaystyle
P(A)=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}e^{i\omega\big{(}A-V^{T}EV\big{)}}\prod_{i}\bigg{(}\frac{e^{-V_{i}^{2}/2}dV_{i}}{\sqrt{2\pi}}\bigg{)}$
$\displaystyle=\int\frac{d\omega}{2\pi}\frac{e^{i\omega
A}}{\prod(1+2i\epsilon_{i}\omega)^{\mu_{i}/2}}\,.$ (58)
Case of a single eigenvalue. If a single eigenvalue occurs once only, the
general probability expression, Eq. (43), has a single factor in the
denominator:
$P(A)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{d\omega e^{i\omega
A}}{\sqrt{1+2i\omega\epsilon}}\,.$ (59)
The integral is performed by closing the contour in the upper half complex
$\omega$ plane. There is a branch point on the imaginary axis at
$\omega=i/(2\epsilon)$, and a branch line from that point to infinity.
Evaluation of the integral gives:
$P(A)=\frac{1}{\sqrt{2\pi\sigma_{y}^{2}(\tau)}}\frac{e^{-A/(2\sigma_{y}^{2}(\tau))}}{\sqrt{A}}$
(60)
This is a chi-squared distribution with exactly one degree of freedom. The
computation of the confidence interval for a given $s$ is simple. The
cumulative probability obtained from Eq. (60) is
$\Phi(A)=\int_{0}^{A}\frac{1}{\sqrt{2\pi\sigma_{y}^{2}(\tau)}}\frac{e^{-x/(2\sigma_{y}^{2}(\tau))}}{\sqrt{x}}dx={\rm erf}\bigg{(}\sqrt{\frac{A}{2\sigma_{y}^{2}(\tau)}}\bigg{)}\,.$ (61)
The $\pm 25$% limits on the probability of observing a value $A$ are then
found to be $1.323\sigma_{y}^{2}$ and $0.1015\sigma_{y}^{2}$, respectively. An
example of this is plotted in Figure 2. ($A$ is a variance, not a deviation.)
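The quoted limits follow from inverting the cumulative distribution of the one-degree-of-freedom chi-squared law, $\Phi(A)={\rm erf}(\sqrt{A/(2\sigma_{y}^{2})})$, so that $A=2\sigma_{y}^{2}\,{\rm erfinv}(p)^{2}$ at cumulative probability $p$. A short check (using SciPy's inverse error function):

```python
from scipy.special import erfinv

# Reproduce the limits quoted after Eq. (61) for the chi-squared
# distribution with one degree of freedom, in units of sigma_y^2.
lo = 2 * erfinv(0.25)**2     # lower limit, cumulative probability 25%
hi = 2 * erfinv(0.75)**2     # upper limit, cumulative probability 75%
print(lo, hi)                # ~0.1015 and ~1.323
```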
Case of two distinct non-zero eigenvalues. For the overlapping variance, when
$s$ has its maximum value $N/2-1$ there are two unequal eigenvalues. The
probability integral can be performed by closing the contour in the upper half
plane and gives the expression
$P(A)=\frac{1}{2\sqrt{\epsilon_{1}\epsilon_{2}}}e^{-\frac{A}{4}\big{(}\frac{1}{\epsilon_{1}}+\frac{1}{\epsilon_{2}}\big{)}}I_{0}\bigg{(}\frac{A}{4}\big{(}\frac{1}{\epsilon_{2}}-\frac{1}{\epsilon_{1}}\big{)}\bigg{)}\,.$
(62)
where $I_{0}$ is the modified Bessel function of order zero. The probability
is correctly normalized. It differs from a chi-squared distribution in that
the density does not have a singularity at $A=0$. This is illustrated in
Figure 2.
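Both stated properties of Eq. (62), correct normalization and mean $\epsilon_{1}+\epsilon_{2}$, can be confirmed by quadrature. The eigenvalues below are illustrative, and the exponentially scaled Bessel function `i0e` is used to avoid overflow at large argument:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e

# Check that the two-eigenvalue density of Eq. (62) integrates to one and
# has mean eps1 + eps2 (the sum of the eigenvalues, as it must).
e1, e2 = 0.8, 0.3
a = 0.25 * (1 / e1 + 1 / e2)
b = 0.25 * abs(1 / e2 - 1 / e1)
P = lambda A: np.exp(-(a - b) * A) * i0e(b * A) / (2 * np.sqrt(e1 * e2))
norm, _ = quad(P, 0, np.inf)
mean, _ = quad(lambda A: A * P(A), 0, np.inf)
print(norm, mean)   # ~1.0 and ~1.1 = eps1 + eps2
```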
Evaluation of the contour integral when there are more than two distinct
eigenvalues is discussed in Appendix 1. If an eigenvalue occurs an even number
$2n$ times, the corresponding singularity becomes a pole of order $n$ and a
chi-squared probability distribution may result; this has only been observed
to occur for white PM.
## 5 Variance of the Variance
One of the goals of this investigation is to estimate the uncertainty in the
variance when some value of the variance is measured. This can be rigorously
defined if the probability $P(A_{o})$ is available, but it may be difficult to
evaluate the integral in Eq. (43). In this case, one might be interested in a
measure such as the rms value of the variance, measured with respect to the
mean variance $\sigma_{y}^{2}(\tau)$. One method for obtaining this measure is
obtained from the probability without introducing a Fourier integral
representation for the delta function:
$\displaystyle\int A_{o}^{2}P(A_{o})dA_{o}=\int
A_{o}^{2}\delta(A_{o}-\sum_{i}\epsilon_{i}V_{i}^{2})\prod_{m}\bigg{(}e^{-V_{m}^{2}/2}\frac{dV_{m}}{\sqrt{2\pi}}\bigg{)}dA_{o}$
$\displaystyle=\int\bigg{(}\sum_{i}\epsilon_{i}V_{i}^{2}\bigg{)}^{2}\delta(A_{o}-\sum_{i}\epsilon_{i}V_{i}^{2})\prod_{m}\bigg{(}e^{-V_{m}^{2}/2}\frac{dV_{m}}{\sqrt{2\pi}}\bigg{)}dA_{o}$
$\displaystyle=\int\bigg{(}\sum_{i}\epsilon_{i}V_{i}^{2}\bigg{)}^{2}\prod_{m}\bigg{(}e^{-V_{m}^{2}/2}\frac{dV_{m}}{\sqrt{2\pi}}\bigg{)}\,.$
(63)
Expanding the square in the integrand, there are $N$ fourth-order terms and
$N(N-1)$ cross terms so we get
$\displaystyle\int
A_{o}^{2}P(A_{o})dA_{o}=\sum_{i}\epsilon_{i}^{2}\big{<}V^{4}\big{>}+2\sum_{i\neq
j}\epsilon_{i}\epsilon_{j}\big{<}V^{2}\big{>}^{2}$
$\displaystyle=3\sum_{i}\epsilon_{i}^{2}+2\sum_{i\neq j}\epsilon_{i}\epsilon_{j}\,.$ (64)
To obtain the variance of the variance, we must subtract
$\big{<}A_{o}\big{>}^{2}=\sum_{i}\epsilon_{i}^{2}+2\sum_{i\neq
j}\epsilon_{i}\epsilon_{j}\,.$ (65)
The result is
$\big{<}A_{o}^{2}\big{>}-\big{<}A_{o}\big{>}^{2}=2\sum_{i}\epsilon_{i}^{2}\,.$
(66)
Therefore the rms deviation of the variance from the mean variance equals
$\sqrt{2}$ times the square root of the sum of the squared eigenvalues. This
result is not very useful if there are only a small number of eigenvalues,
since the factor $\sqrt{2}$ can make the rms deviation from the mean of the
variance larger than the variance. This is because the probability
distribution of the variance is not a normal distribution, and can lead to
unreasonably large estimates of confidence intervals.
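The identity of Eq. (66) is straightforward to confirm by Monte Carlo; the eigenvalues below are arbitrary stand-ins:

```python
import numpy as np

# Monte Carlo check of Eq. (66): Var(A) = 2 * sum_i eps_i^2 for
# A = sum_i eps_i V_i^2 with independent unit-variance Gaussians V_i.
rng = np.random.default_rng(3)
eps = np.array([0.9, 0.5, 0.2, 0.05])
A = (eps * rng.normal(size=(400_000, eps.size))**2).sum(axis=1)
print(A.var(), 2 * np.sum(eps**2))   # the two values nearly agree
```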
If the eigenvalues or the probabilities are not readily available, a similar
confidence estimate can be obtained from an alternative form of Eq. (66) by
considering the trace of the square of $H_{o}$:
$2\sum_{i}\epsilon_{i}^{2}=2\sum_{i,j}\big{(}\sum_{m>0}(F_{m}^{i}F_{m}^{j}+G_{m}^{i}G_{m}^{j})\big{)}^{2}\,.$
(67)
To prove this, consider diagonalization of $H_{o}$ by the orthogonal
transformation $O$, and compute
$\displaystyle 2\sum_{i}\epsilon_{i}^{2}=2{\rm Trace}(E^{2})=2{\rm
Trace}(O^{T}H_{o}OO^{T}H_{o}O)=2{\rm Trace}H_{o}^{2}$
$\displaystyle=\frac{2K^{2}}{(N-2s)^{2}}\sum_{m,n}\bigg{(}\sum_{i}\big{(}(C_{m})^{i}(C_{n})^{i}\big{)}\sum_{j}\big{(}(C_{n})^{j}(C_{m})^{j}\big{)}\bigg{)}$
$\displaystyle=\frac{2K^{2}}{(N-2s)^{2}}\sum_{i,j}\bigg{(}\sum_{m}\big{(}(C_{m})^{i}(C_{m})^{j}\big{)}\sum_{n}\big{(}(C_{n})^{j}(C_{n})^{i}\big{)}\bigg{)}$
$\displaystyle=\frac{2K^{2}}{(N-2s)^{2}}\sum_{i,j}\bigg{(}\sum_{m}\big{(}F_{m}^{i}F_{m}^{j}+G_{m}^{i}G_{m}^{j}\big{)}\bigg{)}^{2}\,.$
(68)
If there are not too many terms in the sums over $i$ and $j$ then the sum over
the frequency index in Eq. (67) becomes
$\sum_{m}\big{(}F_{m}^{i}F_{m}^{j}+G_{m}^{i}G_{m}^{j}\big{)}=\sum_{m>0}\frac{\big{(}\sin\frac{\pi
ms}{N}\big{)}^{4}}{f_{m}^{2\lambda}}\cos\frac{2\pi m(i-j)}{N}\,.$ (69)
This leads to some useful approximation schemes, but these will not be
discussed further in this paper.
Usually the variance of the variance is larger than the range of possible
values of the variance computed from the $\pm 25$% limits obtained from
calculated probabilities. An example is given in Figure 1, where the
confidence intervals are plotted for the non-overlapping variance for
$N=1024$. For $s$ ranging between 342 and 511 there is only one eigenvalue, so
the rms deviation of the variance from the mean variance is $\sqrt{2}$ times the
variance. This gives an upper limit on the confidence interval that is 1.414
times the variance, larger than that obtained from the actual probability
distribution. The medium heavy lines in Figure 1 show the true $\pm 25$%
probability limits obtained from calculations such as Eq. (10).
Figure 1: Comparisons of average non-overlapping Allan variance (light line)
with rms deviations from the mean of the variance (heavy lines) and true $\pm
25$% limits (medium heavy lines) for $N=1024$ data items in the time series.
Results for the Allan variance for two independent simulation runs are plotted
for comparison.
## 6 Modified Allan Variance
The modified Allan variance is defined so that averages over time are
performed before squaring and averaging. We use a subscript $m$ to distinguish
this form from the overlapping form of the variance. The definition is
$\sigma_{m}^{2}(\tau)=\bigg{<}\bigg{(}\frac{1}{s}\sum_{j=1}^{s}\Delta_{j,s}^{(2)}\bigg{)}^{2}\bigg{>}\,.$
(70)
Summing the expression for the second-difference given in Eq. (3),
$\displaystyle\frac{1}{s}\sum_{j}^{s}\Delta_{j,s}^{(2)}=-\sqrt{\frac{h_{\alpha}}{2s^{2}\pi^{2}\tau^{2}\sigma^{2}(N\tau_{0})}}\sum_{m}\frac{w_{m}}{|f_{m}|^{\lambda}}\big{(}\sin\frac{\pi
ms}{N}\big{)}^{2}$ $\displaystyle\times\bigg{(}\frac{\sin\frac{\pi
ms}{N}}{\sin\frac{\pi m}{N}}\bigg{)}e^{-\frac{\pi im(3s+1)}{N}}\,.$ (71)
Squaring this, we write the complex conjugate of one factor and obtain the
ensemble average
$\sigma_{m}^{2}(\tau)=\frac{h_{\alpha}}{2s^{2}\pi^{2}\tau^{2}\sigma^{2}(N\tau_{0})}\sum_{m,n}\frac{\big{<}w_{m}w_{n}^{*}\big{>}}{|f_{m}f_{n}|^{\lambda}}\frac{\bigg{(}\sin\frac{\pi ms}{N}\bigg{)}^{6}}{\big{(}\sin\frac{\pi m}{N}\big{)}^{2}}\,.$ (72)
Using Eq. (5) and writing the result in terms of a sum over positive
frequencies only,
$\sigma_{m}^{2}(\tau)=2\sum_{m>0}\frac{S_{y}(f_{m})}{\pi^{2}s^{2}\tau^{2}f_{m}^{2}(N\tau_{0})}\frac{\bigg{(}\sin\frac{\pi
ms}{N}\bigg{)}^{6}}{\bigg{(}\sin\frac{\pi m}{N}\bigg{)}^{2}}\,.$ (73)
Passing to an integral when the frequency spacing is sufficiently small,
$\sigma_{m}^{2}(\tau)=2\int_{0}^{f_{h}}\frac{dfS_{y}(f)}{\pi^{2}s^{2}\tau^{2}f^{2}}\frac{\big{(}\sin(\pi
f\tau)\big{)}^{6}}{\big{(}\sin(\pi f\tau_{0})\big{)}^{2}}\,.$ (74)
This agrees with previously derived expressions[12]. The appearance of the
sine function in the denominator of this expression makes the explicit
evaluation of the integral difficult. In Table 1 we give the leading
contributions to the modified Allan variance for very large values of the
averaging time $\tau$. In general there are additional oscillatory
contributions with small amplitudes that are customarily neglected. These
results do not agree, however, with those published in [7]; the leading terms do agree with those published in [8].
Noise Type | $S_{y}(f)$ | Mod $\sigma_{y}^{2}(\tau)$ | $\lambda$ | $\alpha$
---|---|---|---|---
White PM | $h_{2}f^{2}$ | $\frac{3h_{2}}{\pi^{2}\tau^{3}}\big{(}1+\frac{5}{18s^{3}}\big{)}$ | 0 | 2
Flicker PM | $h_{1}f$ | $\frac{3h_{1}\ln(256/27)}{8\pi^{2}\tau^{2}}$ | $\frac{1}{2}$ | 1
White FM | $h_{0}$ | $\frac{h_{0}}{4\tau}\big{(}1+\frac{1}{2s^{2}}\big{)}$ | 1 | 0
Flicker FM | $h_{-1}f^{-1}$ | $2h_{-1}\ln\big{(}\frac{3^{27/16}}{4}\big{)}$ | $\frac{3}{2}$ | -1
Random Walk | $h_{-2}f^{-2}$ | $h_{-2}\pi^{2}\tau\big{(}\frac{11}{20}+\frac{1}{12s^{2}}+\frac{1}{40s^{4}}\big{)}$ | 2 | -2
Table 1: Asymptotic expressions for the Modified Allan Variance, in the limit
of large sampling times $\tau=s\tau_{0}$.
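The white FM entry of Table 1 can be checked against a direct numerical evaluation of the integral in Eq. (74). The sketch below takes $\tau_{0}=1$, an illustrative $s$, and the Nyquist cutoff $f_{h}=1/(2\tau_{0})$; agreement is limited only by the small oscillatory corrections neglected in the table:

```python
import numpy as np

# Check the white FM row of Table 1 against Eq. (74): for S_y = h0 the
# table gives Mod sigma_y^2 ~ (h0/(4*tau)) * (1 + 1/(2*s^2)).
h0, tau0, s = 1.0, 1.0, 16
tau = s * tau0
f = np.linspace(1e-7, 0.5 / tau0, 2_000_000)
integrand = (2 * h0 / (np.pi**2 * s**2 * tau**2 * f**2)
             * np.sin(np.pi * f * tau)**6 / np.sin(np.pi * f * tau0)**2)
mod = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(f))  # trapezoid
table = h0 / (4 * tau) * (1 + 1 / (2 * s**2))
print(mod, table)   # agree to well under a percent for this s
```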
Eigenvalue structure for modified Allan variance. For the modified case,
instead of Eq. (32) there are two summations; we use a subscript $m$ on $H$ to
denote the modified form:
$H_{m}=\frac{K}{s^{2}}\sum_{j}C^{j}\sum_{k}(C^{k})^{T}\,.$ (75)
We then seek solutions of the eigenvalue equation
$H_{m}\psi_{\epsilon}=\frac{K}{s^{2}}\sum_{j}C^{j}\sum_{k}(C^{k})^{T}\psi_{\epsilon}=\epsilon\psi_{\epsilon}\,.$
(76)
Multiply by $\sum_{l}(C^{l})^{T}$ and sum over the frequency index.
$\frac{K}{s^{2}}\sum_{l,j,m}(C_{m})^{l}(C_{m})^{j}\bigg{(}\sum_{k}(C^{k})^{T}\psi_{\epsilon}\bigg{)}=\epsilon\bigg{(}\sum_{l}(C^{l})^{T}\psi_{\epsilon}\bigg{)}\,.$ (77)
The matrix equation has been reduced to a scalar equation for the quantity
$\phi=\sum_{k}(C^{k})^{T}\psi_{\epsilon}\,.$ (78)
Here the matrix product of $(C^{k})^{T}$ with $\psi_{\epsilon}$ entails a sum
over all frequencies. Since we have a scalar eigenvalue equation for
$\epsilon$, there can be one and only one eigenvalue, which is easily seen to
be the same as given by Eq. (73) for each sampling time $\tau$:
$\epsilon=\frac{K}{s^{2}}\sum_{l,k,m}(C_{m})^{l}(C_{m})^{k}=\sigma_{m}^{2}(\tau)\,.$ (79)
The probability distribution will be of the form of Eq. (60), a chi-squared
distribution with one degree of freedom.
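The single-eigenvalue structure of Eqs. (75)-(79) follows because $H_{m}$ is an outer product of the vector $\sum_{j}C^{j}$ with itself. A sketch with random stand-ins for the $C^{j}$ and $K=1$:

```python
import numpy as np

# H_m = (K/s^2) * (sum_j C^j)(sum_k C^k)^T is rank one, so it has a single
# non-zero eigenvalue, equal to |sum_j C^j|^2 * K/s^2.
rng = np.random.default_rng(4)
s, N1 = 3, 8
c = rng.normal(size=(s, N1)).sum(axis=0)   # the vector sum_j C^j
H_m = np.outer(c, c) / s**2
eps = np.linalg.eigvalsh(H_m)              # sorted ascending
print(np.sum(eps > 1e-12), np.isclose(eps[-1], c @ c / s**2))   # 1 True
```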
## 7 Non-overlapping Allan variance
In the non-overlapping form of the Allan variance, the only change with
respect to Eq. (14) is that the sum over $j$ goes from 1 in steps of $s$ up to
$j_{max}\leq N-2s$. We denote the number of values of $j$ by $n_{max}$. The
average non-overlapping variance is the same as that for the overlapping case,
but the probability distributions are different. The matrix $H_{no}$ takes the
form
$(H_{no})_{mn}=\frac{K}{n_{max}}\sum_{j=1,1+s,...}^{j_{max}}(C_{m})^{j}(C_{n})^{j}\,.$ (80)
The subscript “no” labels a non-overlapping variance, and the indices $m$ and
$n$ label frequencies. Here there are only $n_{max}$ terms in the sum since
the values of $j$ skip by $s$. $n_{max}$ is given by:
$n_{max}=\lfloor\frac{N-1}{s}\rfloor-1\,,$ (81)
where $\lfloor Q\rfloor$ denotes the largest integer less than or equal to
$Q$. In order to diagonalize $H_{no}$ and compute probabilities, we look for
eigenvalues by seeking solutions of:
$\sum_{n}(H_{no})_{mn}\psi_{n}=\epsilon\psi_{m}.$ (82)
We define a reduced eigenvector by
$\phi_{j}=\sum_{n}(C_{n})^{j}\psi_{n}.$ (83)
The eigenvalue equation reduces to
$\frac{K}{n_{max}}\sum_{j}(C_{m})^{j}\phi_{j}=\epsilon\psi_{m}.$ (84)
Multiply this equation by $(C_{m})^{l}$ and sum over the frequency labels $m$.
Then
$\frac{K}{n_{max}}\sum_{j}\sum_{m}(\big{(}C_{m})^{l}(C_{m})^{j}\big{)}\phi_{j}=\epsilon\phi_{l}\,.$
(85)
Let
$J_{lj}=\frac{K}{n_{max}}\sum_{m}(C_{m})^{l}(C_{m})^{j}=\frac{K}{n_{max}}(C^{l})^{T}C^{j};$
(86)
the sum over frequencies is accomplished by the matrix multiplication. Just as
for the overlapping case, if we calculate the trace of the matrix $J_{lj}$, we
find that since each term in the sum over $j$ contributes the same amount,
${\rm Trace}(J_{lj})=\frac{K}{n_{max}}\sum_{j}(C^{j})^{T}C^{j}=\sigma_{no}^{2}(\tau)\,.$ (87)
Because the trace remains unchanged under an orthogonal transformation, the
non-overlapping Allan variance will be equal to the sum of the eigenvalues of
Eq. (85). The probability functions for a given $\tau$ can differ from those
for the overlapping or modified cases because the number of eigenvalues and
their multiplicities may be different. The eigenvalues are still all greater
than or equal to zero.
If the eigenvalues are found and the matrix $J_{lj}$ is diagonalized, we may
compute the probability for observing a value of the non-overlapping variance,
denoted by the random variable $A_{no}$, by
$\displaystyle
P(A_{no})=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}e^{i\omega\big{(}A_{no}-U^{T}JU\big{)}}\prod_{i}\frac{e^{-U_{i}^{2}/2}}{\sqrt{2\pi}}$
$\displaystyle=\int\frac{d\omega}{2\pi}\frac{e^{i\omega
A_{no}}}{\prod(1+2i\epsilon_{i}\omega)^{\mu_{i}/2}}\,.$ (88)
For example, when there are only two distinct non-zero eigenvalues, the matrix
$J_{lk}$ will have elements
$\displaystyle J_{1,1}=J_{1+s,1+s}=\frac{1}{2}\sigma_{no}^{2};$ $\displaystyle
J_{1,1+s}=J_{1+s,1}=\int_{0}^{f_{h}}\frac{S_{y}(f)df}{(\pi\tau
f)^{2}}\big{(}\sin(\pi\tau f)\big{)}^{4}\cos(2\pi\tau f)\,.$ (89)
The eigenvalues are then
$\displaystyle\epsilon_{1}=\frac{1}{2}\sigma_{no}^{2}+|J_{1,1+s}|$
$\displaystyle\epsilon_{2}=\frac{1}{2}\sigma_{no}^{2}-|J_{1,1+s}|\,.$ (90)
The probability is of the form of Eq. (62).
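The eigenvalues in Eq. (90) are the standard result for a symmetric $2\times 2$ matrix with equal diagonal entries; a one-line numerical confirmation with placeholder matrix elements:

```python
import numpy as np

# Eq. (90) in miniature: a symmetric 2x2 matrix with equal diagonal entries
# a and off-diagonal entries b has eigenvalues a - |b| and a + |b|.
a, b = 0.5, -0.18
eigs = np.sort(np.linalg.eigvalsh(np.array([[a, b], [b, a]])))
print(eigs)   # [a - |b|, a + |b|]
```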
Figure 2: Simulation of Allan variance with $N=64$, $s=19$. For this case
there are two distinct eigenvalues in the matrix for the non-overlapping case.
The variance for $s=19$ was extracted from each of 4000 independent runs and a
histogram of the values obtained was constructed for comparison with the
probability, Eq. (62). Chi-squared distributions with 1, 2, and 3 degrees of
freedom are plotted for comparison.
Figure 2 shows an example of this for flicker FM noise, for $N=64$ items in
the time series. The variance corresponding to $\tau=19\tau_{0}$ was extracted
from each of 4000 independent simulation runs and a histogram of the resulting
values was plotted. Good agreement with the predicted probability distribution
can be seen.
## 8 Other variances
The analysis methods developed in this paper can be extended to other
variances. For example, the Theo variance[13] is defined by an overlapping
average,
$\displaystyle\sigma_{Th}^{2}(s,\tau_{0},N)=\frac{1}{N-s}\sum_{i=1}^{N-s}\frac{4}{3\tau^{2}}\times$
$\displaystyle\sum_{\delta=0}^{s/2-1}\frac{1}{s/2-\delta}\bigg{(}(X_{i}-X_{i-\delta+s/2})-(X_{i+s}-X_{i+\delta+s/2})\bigg{)}^{2}\,,$ (91)
where $s$ is assumed to be even, and $\tau=3s\tau_{0}/4$. For any of the power
law noises, and after passing to an integral, this expression can be
transformed with the aid of elementary trigonometric identities to
$\sigma_{Th}^{2}(\tau,\tau_{0},N)=\frac{8}{3}\int_{0}^{f_{h}}\frac{S_{y}(f)df}{(\pi\tau
f)^{2}}\sum_{\kappa=1}^{2\tau/3\tau_{0}}\frac{1}{\kappa}\bigg{(}\sin(\pi
f\kappa\tau_{0})\sin\big{(}\pi
f(\frac{4}{3}\tau-\kappa\tau_{0})\big{)}\bigg{)}^{2}\,.$ (92)
The integer $\kappa$ in the denominator makes this variance more difficult to
evaluate, but a factorized quadratic form can still be constructed and the
eigenvalue structure and resulting probabilities can be analyzed.
The Hadamard variance is defined in terms of a third difference, and is widely
used to characterize clock stability when clock drift is a significant
issue[14]. A third difference operator may be defined as
$\Delta_{j,s}^{(3)}=\frac{1}{\sqrt{6\tau^{2}}}\big{(}X_{j+3s}-3X_{j+2s}+3X_{j+s}-X_{j}\big{)}\,.$
(93)
The completely overlapping Hadamard variance is the average of the square of
this third difference:
$\displaystyle\sigma_{H}^{2}(\tau)=\frac{1}{N-3s}\sum_{j=1}^{N-3s}\big{(}\Delta_{j,s}^{(3)}\big{)}^{2}$
$\displaystyle\rightarrow\frac{8}{3}\int_{0}^{f_{h}}\frac{S_{y}(f)df}{(\pi\tau
f)^{2}}\big{(}\sin(\pi\tau f)\big{)}^{6}\,.$ (94)
These methods can be applied to cases in which there is dead time between
measurements of average frequency during the sampling intervals. Suppose for
example that the measurements consist of intervals of length $\tau=s\tau_{0}$
during which an average frequency is measured, separated by dead time
intervals of length $D-\tau$ during which no measurements are available. Let
the index $j$ label the measurement intervals with $j=1,2,...N$. An
appropriate variance can be defined in terms of the difference between the
average frequency in the $j^{th}$ interval and that in the interval labeled by
$j+r$:
$\Delta_{j,r,s}^{(2)}=\frac{1}{\sqrt{2}}\big{(}\overline{y}_{j+r,s}-\overline{y}_{j,s}\big{)}\,,$
(95)
where $\overline{y}_{j,s}$ is the average frequency in the interval $j$ of
length $s\tau_{0}$. Then an appropriate variance can be defined as
$\Psi(\tau,D)=\bigg{<}\big{(}\Delta_{j,r,s}^{(2)}\big{)}^{2}\bigg{>}\,.$ (96)
If the measurements are sufficiently densely spaced that it is possible to
pass to an integral, this can be shown to reduce to
$\Psi(\tau,D)=\frac{2}{D\tau}\int_{0}^{f_{h}}df\frac{S_{y}(f)}{(\pi
f)^{2}}\big{(}\sin(\pi frD)\big{)}^{2}\big{(}\sin(\pi f\tau)\big{)}^{2}\,.$
(97)
When $D=\tau$ and $r=1$ there is no real dead time and this variance reduces
to the ordinary Allan variance.
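The reduction is algebraic: with $D=\tau$ and $r=1$ the integrand of Eq. (97) coincides term by term with the Allan-variance integrand of Eq. (21). A pointwise check on an illustrative grid, with the common factor $S_{y}(f)$ omitted:

```python
import numpy as np

# With D = tau and r = 1, the integrand of Eq. (97) equals that of Eq. (21).
tau, r = 2.0, 1
D = tau
f = np.linspace(0.01, 10.0, 1000)
dead = (2.0 / (D * tau) * np.sin(np.pi * f * r * D)**2
        * np.sin(np.pi * f * tau)**2 / (np.pi * f)**2)
allan = 2.0 * np.sin(np.pi * f * tau)**4 / (np.pi**2 * tau**2 * f**2)
print(np.allclose(dead, allan))   # True
```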
## 9 Summary and Conclusion
In this paper a method of simulating time series for the common power-law
noises has been developed and applied to several variances used to
characterize clock stability. These include overlapping and non-overlapping
forms of the Allan variance, and the Modified Allan variance. Diagonalization
of quadratic forms for the average variances leads to expressions for the
probabilities of observing particular values of the variance for a given
sampling time $\tau=s\tau_{0}$. The probabilities are expressed in terms of
integrals depending on the eigenvalues of matrices formed from squares of the
second differences that are used to define the variance. Generally speaking,
the number of eigenvalues is equal to the number of terms occurring in the sum
used to define averages of the second-difference operator, and this number
gets smaller as the sampling time $\tau$ gets larger. The probability
distribution $P(A)$ for some variance $A$ is useful in estimating the $\pm
25$% confidence interval about the average variance. The eigenvalues are
usually distinct; only for white PM have eigenvalues been observed to occur
with multiplicities other than unity. Such multiplicity is required in order that chi-squared probability distributions result. Methods for computing the
probabilities have been presented in a few useful cases. Results presented in
this paper are confined to various forms of the Allan variance.
Other methods of simulating power law noise have been published; the present
approach differs from that of [3] in that no causality condition is imposed. The methods developed here will be applied to other variances in future papers.
## 10 Appendix 1. Evaluation of contour integrals for probability
In almost all cases except for white PM, the eigenvalues are all distinct. We
show here how the probability function, Eq. (43), can then be reduced to a sum
of real integrals. For each of the eigenvalues, we introduce the quantity
$r_{k}=1/(2\epsilon_{k})\,.$ (98)
The integral becomes
$P(A)=\prod_{k}\big{(}\frac{\sqrt{r_{k}}}{e^{\pi
i/4}}\big{)}\int\frac{d\omega}{2\pi}\frac{e^{i\omega A}}{\prod_{k}(\omega-
ir_{k})^{1/2}}\,.$ (99)
By Jordan’s lemma [11], the contour of integration can be deformed into the
upper half plane. The addition of a circular arc at radius $|\omega|=\infty$
contributes nothing. Each of the square root factors in Eq. (99) has a branch
point at $\omega=ir_{k}$ with a branch line extending to $+i\infty$ along the
imaginary axis. We define the complex argument of each such factor by
$-\frac{3\pi}{2}<\arg(\omega-ir_{k})<\frac{\pi}{2}\,.$ (100)
Figure 3: Contour deformed to run along the branch line on the $y$-axis. A pair of
segments is shown with an odd number of singularities below the segments. The
branch line required by square-root factors of the form $\sqrt{\omega-ir_{k}}$
is defined by the angles, either $\pi/2$ or $-3\pi/2$, of the directed segments
relative to the Re$(\omega)$ axis. Contributions from pairs of segments that
lie above an even number of branch points cancel out.
All branch lines extend along the positive $y$-axis to infinity. The largest
eigenvalues give singularities closest to the real axis. Fig. 3 illustrates
the resulting contour. Around each branch point is a circular portion which
contributes nothing because as the radius $\delta$ of each circle approaches
zero, the contribution to the integral approaches zero as $\sqrt{\delta}$. The
integral then consists of straight segments where $\omega=iy\pm\delta$, where
$\delta$ approaches zero. Two such straight segments are illustrated in Fig.
3, one on each side of the branch line along the $y$-axis. Suppose the
interval of interest is placed so that out of a total of $M$ eigenvalues, $n$
of them are below the interval (three are shown in the figure). For the
contribution to the integral on the left, $y$ is decreasing and we can account
for this by reversing the limits and introducing a minus sign. The phase
factor contributed by $n$ factors with branch points below the interval is
$\frac{1}{\big{(}e^{-\frac{3\pi i}{4}}\big{)}^{n}}=\bigg{(}e^{\frac{3\pi
i}{4}}\bigg{)}^{n}\,.$ (101)
The contribution from branch points above the interval of interest is the same
on both sides of the $y$-axis, and is
$\bigg{(}e^{\frac{i\pi}{4}}\bigg{)}^{M-n}\,.$ (102)
The factor in front of the integral sign in Eq. (99) is the same on both sides
of the $y$-axis and is
$\bigg{(}e^{-\frac{i\pi}{4}}\bigg{)}^{M}\,.$ (103)
The total phase factor of this contribution, including a factor $i$ that comes
from setting $\omega=iy$ is thus
$-i\bigg{(}e^{-\frac{i\pi}{4}}\bigg{)}^{M}\bigg{(}e^{\frac{i\pi}{4}}\bigg{)}^{M-n}\bigg{(}e^{\frac{3\pi
i}{4}}\bigg{)}^{n}=-i\bigg{(}e^{\frac{\pi i}{2}}\bigg{)}^{n}\,.$ (104)
For the integral along the segment on the positive side of the $y$-axis, the
only difference is that the phase of each of the $n$ contributions from branch
points below the interval changes from $3\pi/4$ to $-\pi/4$. The phase factor
for this part of the contour is thus
$+i\bigg{(}e^{-\frac{i\pi}{4}}\bigg{)}^{M}\bigg{(}e^{\frac{i\pi}{4}}\bigg{)}^{M-n}\bigg{(}e^{\frac{-\pi
i}{4}}\bigg{)}^{n}=+i\bigg{(}e^{\frac{-\pi i}{2}}\bigg{)}^{n}\,.$ (105)
If $n$ is even, the two contributions cancel. If $n=2m+1$ is odd, then the
contributions add up with a factor $2(-1)^{m}$. The probability is thus always
real and consists of contributions with alternating signs, with every other
interval left out.
In summary, the contour integral contributions from portions of the imaginary
axis in the complex $\omega$ plane that have an even number of branch points
below the interval will not contribute to the integral. For example, if there
are four distinct eigenvalues the probability will reduce to
$\displaystyle
P(A)=\frac{\sqrt{r_{1}r_{2}r_{3}r_{4}}}{\pi}\int_{r_{1}}^{r_{2}}\frac{e^{-yA}dy}{\sqrt{(y-r_{1})(r_{2}-y)(r_{3}-y)(r_{4}-y)}}$
$\displaystyle-\frac{\sqrt{r_{1}r_{2}r_{3}r_{4}}}{\pi}\int_{r_{3}}^{r_{4}}\frac{e^{-yA}dy}{\sqrt{(y-r_{1})(y-r_{2})(y-r_{3})(r_{4}-y)}}\,.$
(106)
Such results have been used to evaluate the probabilities for certain sampling
intervals for flicker fm noise in Sect. 4.
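Eq. (106) is straightforward to evaluate numerically: on each interval the
integrand has endpoint singularities of exactly the Chebyshev form
$1/\sqrt{(y-a)(b-y)}$, which Gauss-Chebyshev quadrature absorbs. The sketch
below is our own illustration, not code from the paper; the function names and
the example eigenvalues are invented, and `prob_density` evaluates $P(A)$ for
four distinct $r_{k}=1/(2\epsilon_{k})$.

```python
import math

def chebyshev_nodes(a, b, n):
    # Gauss-Chebyshev rule: int_a^b f(y)/sqrt((y-a)(b-y)) dy ~ (pi/n) * sum f(y_k)
    return [(a + b) / 2 + (b - a) / 2 * math.cos((2 * k - 1) * math.pi / (2 * n))
            for k in range(1, n + 1)]

def prob_density(A, r, n=200):
    """Eq. (106): P(A) for four distinct eigenvalue parameters r1 < r2 < r3 < r4,
    where r_k = 1/(2*eps_k) as in Eq. (98)."""
    r1, r2, r3, r4 = r
    pref = math.sqrt(r1 * r2 * r3 * r4) / math.pi
    # interval (r1, r2): one branch point below it, so it contributes with + sign
    s1 = (math.pi / n) * sum(math.exp(-y * A) / math.sqrt((r3 - y) * (r4 - y))
                             for y in chebyshev_nodes(r1, r2, n))
    # interval (r3, r4): three branch points below it, so it contributes with - sign
    s2 = (math.pi / n) * sum(math.exp(-y * A) / math.sqrt((y - r1) * (y - r2))
                             for y in chebyshev_nodes(r3, r4, n))
    return pref * (s1 - s2)
```

A quick self-check: the density vanishes at $A=0$ and integrates to one, as
expected for a sum of four scaled $\chi^{2}_{1}$ terms.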
## 11 Appendix 2. Proof that Eq. (56) generates positive eigenvalues
In this Appendix we show that under all circumstances Eq. (56) gives rise to
positive eigenvalues.
Obviously if $N$ is a prime number Eq. (56) can never be satisfied ($n=0$).
Consider the case $s=N-1$. Then $m=MN$ and there are no solutions since $m\leq
N/2$. In general both $m$ and $s$ may contain factors that divide $M$ or $N$.
Suppose $s=ab$ where at least one of the factors $a,b$ is greater than 1,
where $a$ divides $M$ and $b$ divides $N$. Then let
$M=am_{a},\quad N=bn_{b}\,.$ (107)
Then $m=m_{a}n_{b}$. The case $m=m_{a}n_{b}>N/2=bn_{b}/2$ cannot happen because $m\leq
N/2$. If $m_{a}n_{b}=N/2=bn_{b}/2$, then the number of solutions is $n=1$ and
$2(s-n)-1=2(ab-1)-1\geq 2(2-1)-1>0$. If $m=m_{a}n_{b}<N/2=bn_{b}/2$, then
there may be solutions $m=m_{a}n_{b},\ m=2m_{a}n_{b},\ m=3m_{a}n_{b},\ldots$ up to
$bn_{b}/2$. The number of such solutions is
$n=\lfloor\frac{bn_{b}}{m_{a}n_{b}}\rfloor=\lfloor\frac{b}{m_{a}}\rfloor$
(108)
where $\lfloor x\rfloor$ means the largest integer less than or equal to $x$.
The excess of conditions is then
$2(s-n)-1=2(ab-\lfloor\frac{b}{m_{a}}\rfloor)>1\,,$ (109)
which proves the assertion.
## References
* [1] G. Strang, _Linear Algebra and its Applications_ , $2^{nd}$ ed., Academic Press (New York) 1980, Sect. 5.5.
* [2] R. Stoll,_Linear Algebra and Matrix Theory_ , McGraw-Hill (New York) 1980.
* [3] N. J. Kasdin and T. Walter, _Discrete Simulation of Power Law Noise_ , Proc. 1992 IEEE Frequency Control Symposium, pp 274-283.
* [4] _Characterization of Frequency and Phase Noise_ , Report 580 of the International Radio Consultative Committee (C.C.I.R.), 1986, pp 142-150. Reprinted in NIST Technical Note 1337, pp TN-162-TN-170 (see [12]).
* [5] M. J. Lighthill, _Introduction to Fourier Analysis and Generalized Functions_ , Cambridge University Press (1958).
* [6] J. A. Barnes, A. R. Chi, L. S. Cutler, D. J. Healey, D. B. Leeson, T. E. McGunigal, J. A. Mullen, Jr., W. L. Smith, R. L. Sydnor, R. F. C. Vessot, and G. M. R. Winkler, _Characterization of Frequency Stability_ , IEEE Trans. Inst. Meas. 1M-20(2), May 1971; Sect. III. discusses the one-sided spectral density.
* [7] P. Lesage and T. Ayi, _Characterization of Frequency Stability: Analysis of the Modified Allan Variance and Properties of Its Estimate_ , IEEE Trans. Inst. Meas. Vol. IM-33 (4), pp 332-336, December 1984 (see TN-1337, pp TN-259-TN-263).
* [8] D. A. Howe and F. Vernotte, _Generalization of the Total variance approach to the modified Allan variance_ , Proc. 1999 PTTI Mtg., pp. 267-276.
* [9] S. Bregni and L. Jmoda, _Improved Estimation of the Hurst Parameter of Long-Range Dependent Traffic Using the Modified Hadamard Variance_ , Proceedings of the IEEE ICC, June 2006.
* [10] M. Abramowitz and I. Stegun, _Handbook of Mathematical Functions_ , NBS Applied Mathematics Series 55, U.S. Government Printing Office (Washington, D.C.) 1964.
* [11] V. I. Smirnov,_Complex Variables-Special Functions_ , III.2, “A Course of Higher Mathematics,” Pergamon Press 1964 Sect. 60, pp 232-233.
* [12] D. B. Sullivan, D. W. Allan, D. A. Howe and F. L. Walls, eds., _Characterization of Clocks and Oscillators_ NIST Technical Note 1337 March 1990 U. S. Government Printing Office, Washington, D.C. p. TN-9
* [13] D. A. Howe and T. Peppler, “Very long-term frequency stability: Estimation using a special-purpose statistic,” in _Proc. 2003 Joint Meeting IEEE Int. Frequency Control Symp. and European Frequency and Time Forum Conf.,_ April 2004, pp. 233-238.
* [14] S. A. Hutsell, J. A. Buisson, J. D. Crum, H. S. Mobbs and W. G. Reid, _Proc. 27 th Annual PTTI Systems and Applications Meeting_, San Diego, CA. Nov. 29-Dec. 1 1995 pp. 291-301
|
arxiv-papers
| 2011-03-25T18:59:21 |
2024-09-04T02:49:17.934144
|
{
"license": "Public Domain",
"authors": "Neil Ashby",
"submitter": "Neil Ashby",
"url": "https://arxiv.org/abs/1103.5062"
}
|
1103.5071
|
# Study of the effect of cost policies in the convergence of selfish
strategies in Pure Nash Equilibria in Congestion Games
Vissarion Fisikopoulos
###### Abstract
In this work we study competitive situations among users of a set of global
resources. More precisely, we study the effect of the cost policies used by
these resources on the convergence time to a pure Nash equilibrium. The work
is divided in two parts. In the theoretical part we prove lower and upper
bounds on the convergence time for various cost policies. We then implement
all the models we study and provide experimental results. These results follow
the theoretical ones, with one exception, which is the most interesting among
the experiments: in the case of coalitional users the theoretical upper bound
is pseudo-polynomial in the number of users, but the experimental results show
that the convergence time is polynomial.
Extended Abstract111The complete thesis is available, in Greek only, here:
http://users.uoa.gr/$\sim$vfisikop/projects/thesis.pdf
## 1 Introduction
The general goal of this work is the study of competitive situations among
users of a set of global resources. In order to analyze and model these
situations we use game-theoretic tools [OR94], such as Nash equilibria,
congestion games and coordination mechanisms. Every global resource debits a
cost to its users. We assume that the users are selfish; i.e., their sole
objective is the maximization of their personal benefit. A Nash equilibrium
(NE) is a situation in which no user can increase his or her personal benefit
by unilaterally changing strategy.
More specifically, we are interested in the KP-model or parallel links222Read
[Kon06] for a survey on algorithmic problems in the parallel links model. model
with $n$ users (jobs) and $m$ edges (machines), and we study methods of
convergence to pure Nash equilibria, in which all the strategies a user can
select are deterministic. In general, a game does not always have a pure Nash
equilibrium; however, we study cases in which at least one always exists. We
define the cost policy of an edge as the function which computes the cost of
each user of this edge.
## 2 Framework
One method of convergence to a pure Nash equilibrium is, starting from an
initial configuration, to allow all users to selfishly change their strategies
(one after the other) until they reach a pure Nash equilibrium. We are
interested in the convergence time to pure Nash equilibria, that is, the
number of these selfish moves. First, we study the makespan cost policy, in
which each edge charges its total load to everyone that uses it. In the
simplest case, the whole procedure is divided into several steps. At each
step, the priority algorithm chooses one user from the set of users that
benefit by changing their current strategy. For this model, named the
ESS-model, the convergence time is in the worst case exponential in the number
of users. We present the effect of several priority algorithms on the
convergence time and results for the major cases of edges (identical, related,
unrelated) [EDKM03].
Another approach, with applications to distributed systems, is the concurrent
change of strategies (rerouting) [EDM05], in which more than one user can
change strategy simultaneously. This model is more powerful than ESS because
of its real-life applications, but we do not analyze it in this work.
An extension of the ESS-model is that of coalitions, in which the users can
form alliances. This model comes from cooperative game theory. In this case we
have to deal with groups of users selfishly changing their group strategies.
We restrict our attention to coalitions of at most 2 users in the identical
machines model introduced in [FKS06]. The pairs of coalitional users can
exchange their machines, making a 2-flip move as a kind of pair migration.
There is a pseudo-polynomial upper bound on the convergence time to NE in this
model.
Another model of convergence, a little different from those stated above, is
the construction of an algorithm that assigns strategies to the users
unselfishly without increasing the social cost. Informally, social cost is a
global metric of system performance depending on the users’ strategies. This
model is named nashification, and the algorithm nashify provides convergence
to a pure Nash equilibrium in a polynomial number of steps without increasing
the social cost [FGL+03].
As far as coordination mechanisms are concerned, they are sets of cost
policies for the edges that provide incentives to the selfish users so that
they converge to a pure Nash equilibrium with decreased social cost.
In this paper, we study the effect of coordination mechanisms on the
convergence time. We study the following cases:
Cost Policies: (1) Makespan, (2) Shortest Job/User First (SJF), (3) Longest
Job/User First (LJF), (4) First In First Out (FIFO).
Priority Algorithms: (a) max weight job/user (maw), (b) min weight job/user
(miw), (c) fifo, (d) random.
Note that each machine uses a cost policy, the next user to migrate is chosen
using a priority algorithm, and the above combinations can result in linear,
polynomial or exponential convergence.
In the setting of coalitions there is a need for tie-breaking when we have to
choose among pairs of users. We study two algorithms:
Coalition Priority Algorithms: (i) max weight pair (map), (ii) min weight pair
(mip).
Note that pairs are ordered by comparing the difference of the weights within
each pair.
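To make the four cost policies concrete, here is a minimal sketch (our own
illustrative code, not from the thesis). `queue` holds the user ids on one
machine in arrival order, `weights` maps user id to weight, each function
returns the cost the machine charges user `i`, and ties in SJF/LJF are avoided
by assuming distinct weights.

```python
def makespan_cost(queue, weights, i):
    # every user on the machine is charged its total load
    return sum(weights[u] for u in queue)

def sjf_cost(queue, weights, i):
    # shortest job first: i waits for all strictly lighter users, then runs
    return sum(weights[u] for u in queue if weights[u] < weights[i]) + weights[i]

def ljf_cost(queue, weights, i):
    # longest job first: i waits for all strictly heavier users, then runs
    return sum(weights[u] for u in queue if weights[u] > weights[i]) + weights[i]

def fifo_cost(queue, weights, i):
    # first in first out: i waits for everyone that arrived before it
    return sum(weights[u] for u in queue[:queue.index(i) + 1])
```

Under SJF a user's cost depends only on lighter users, and under LJF only on
heavier ones, which is the intuition behind which priority algorithm stabilizes
each policy.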
## 3 Results
The paper’s results are divided in two categories: theoretical and
experimental.
### Theoretical Results.
We study the convergence time for the SJF, LJF and FIFO policies. In
particular, for FIFO we prove a tight linear bound in the identical machines
case and a pseudo-polynomial bound in the unrelated machines case.
###### Lemma 1
Under the FIFO cost policy in the ESS-model, there is an upper bound of
$\frac{n^{2}}{2}w_{max}$ steps for convergence to NE in the unrelated machines
case and $n-1$ steps in the identical machines case. This result is
independent of the priority algorithm.
Proof Sketch. Note that in every step the cost of at least one user decreases,
while the cost of no user increases. Also, there is a potential function that
decreases by at least one unit at each step (under the integer weights
assumption). Combining this with the maximum potential
$P(0)\leq\frac{n^{2}}{2}w_{max}$, where $w_{max}$ is the maximum weight, we
obtain the desired bound for unrelated machines. In the identical machines
case we only have to observe that once a user migrates, it has no benefit from
migrating again. This is true because at each step the minimum load does not
decrease, so our result holds. $\square$
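The identical machines bound is easy to check empirically. The sketch below is
our own illustration, not code from the thesis; the function name, the shuffled
round-robin initial assignment, and the use of the random priority algorithm
are all invented choices. It runs ESS dynamics under FIFO, where a user's cost
is its completion time in arrival order and a best response joins the
least-loaded machine.

```python
import random

def fifo_ess_moves(weights, m, seed=0):
    """ESS dynamics under the FIFO cost policy on m identical machines.
    Returns the number of selfish moves made before a pure NE is reached."""
    rng = random.Random(seed)
    n = len(weights)
    order = list(range(n))
    rng.shuffle(order)
    queues = [[] for _ in range(m)]
    for k, i in enumerate(order):
        queues[k % m].append(i)              # every machine starts non-empty (n >= m)

    def load(j):
        return sum(weights[u] for u in queues[j])

    def cost(i):
        # FIFO completion time: the weights of users ahead of i, plus i's own
        q = next(q for q in queues if i in q)
        return sum(weights[u] for u in q[:q.index(i) + 1])

    moves = 0
    while True:
        target = min(range(m), key=load)     # a best response joins the least-loaded queue
        movers = [i for i in range(n) if load(target) + weights[i] < cost(i)]
        if not movers:
            return moves                     # pure Nash equilibrium reached
        i = rng.choice(movers)               # the "random" priority algorithm
        next(q for q in queues if i in q).remove(i)
        queues[target].append(i)
        moves += 1
```

On every trial the number of moves stays at most $n-1$, matching the lemma.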
###### Lemma 2
Under the LJF (SJF) cost policy in the ESS-model, the maw (miw) priority
algorithm gives an upper bound of $n$ steps for the identical machines case.
Proof Sketch. As with the FIFO policy, we have to prove that once a user
migrates, it has no benefit from migrating again. Let $N=\\{1,2,\ldots,n\\}$
be the set of users sorted in non-decreasing weight order and
$N_{t}=\\{t+1,\ldots,n\\}$ the set of users with weight greater than or equal
to that of user $t$. We prove the desired result by induction on the number of
steps. $\square$
In [ILMS05] there is a proof of a quadratic upper bound on the convergence
time for unrelated machines using the LJF and SJF policies, but only for a
restricted case. However, our experiments provide exponential lower bounds for
the general case.
### Experimental Results.
We implement333The code in C is open and available here:
http://users.uoa.gr/$\sim$vfisikop/projects/thesis_code/kp_velt.c all the
above-mentioned models and analyze them experimentally.
The main use of this analysis is to provide us with lower bounds. In our
experiments there are 3 parameters: the priority algorithm, the cost policy,
and the priority algorithm for coalitions. The following weight distributions
are used:
* (a) $10\%$ of the users have weight $10^{n/10}$ and $90\%$ have weight $1$.
* (b) $50\%$ of the users have weight $10^{n/10}$ and $50\%$ have weight $1$.
* (c) $90\%$ of the users have weight $10^{n/10}$ and $10\%$ have weight $1$.
* (d) weights drawn at random from the set $[1,10^{n/10}]$.
* (e) each user has his/her id as weight ($w_{1}=1,w_{2}=2,\ldots,w_{n}=n$).
In all cases the experimental results follow the theoretical ones, with one
exception, which is the most interesting among the experiments. In the case of
coalitions of at most 2 users, the theoretical upper bound is
pseudo-polynomial in the number of users, but the experimental results show
that the convergence time is polynomial. In particular, the experiments show
that the number of 2-flips is quadratic using the mip algorithm and linear
using the map algorithm (Figures 3 and 4). In addition, the 2-flips are a
large percentage of the total migration steps, so we conjecture that the
pseudo-polynomial theoretical upper bound can be improved to a polynomial one.
Figure 1: Maw algorithm with SJF policy and weight case (d) and (e) (y axis in
logarithmic scale)
Figure 2: Miw algorithm with LJF policy and weight case (d) and (e) (y axis in
logarithmic scale)
Figure 3: Maw algorithm with Makespan policy using mip algorithm for
coalitions and weight case (d)
Figure 4: Maw algorithm with Makespan policy using map algorithm for
coalitions and weight case (d)
## 4 Conclusions
### Identical Machines.
Linear convergence is achieved when the priority algorithm benefits from the
cost policy and chooses sequences of migrating users so as to satisfy the
property: “a user is stabilized after its migration”. In the Makespan and LJF
policies this is obtained by the maw priority algorithm; in the SJF policy, by
the miw priority algorithm; in the FIFO policy, by any priority algorithm.
Additionally, FIFO has the advantage that a machine does not need to know its
users’ weights, which is useful in applications in which the users do not know
their own weights.
Exponential convergence in the identical machines case occurs when the
priority algorithm operates “against” the machines’ policy. Intuitively, this
happens because the algorithm chooses the most “unstable” user for migration.
In the SJF (LJF) policies this is obtained by the maw (miw) priority algorithm.
Table 1 sums up the results on the convergence time to NE for the identical
machines case. The lower bounds are constructed from the experiments (see for
example Figures 1 and 2). The upper bounds for the FIFO, SJF and LJF policies
follow from Lemmas 1 and 2, and the results for the Makespan policy are taken
from [EDKM03].
| IDENTICAL MACHINES | Makespan | FIFO | SJF | LJF |
| --- | --- | --- | --- | --- |
| maw (lower, upper) | $n$, $n$ | $n$, $n$ | $c^{n}$, – | $n$, $n$ |
| miw (lower, upper) | $n^{2}$, – | $n$, $n$ | $n$, $n$ | $c^{n}$, – |
Table 1: Lower and upper bounds on the convergence time to NE for the
identical machines case. The results for the Makespan policy are taken from
[EDKM03].
### Unrelated Machines.
In this case the upper bounds are either exponential or pseudo-polynomial, and
the FIFO policy again seems to be the best. Table 2 sums up the results on the
convergence time to NE for the unrelated machines case.
| UNRELATED MACHINES | Makespan (Upper) | FIFO (Upper) |
| --- | --- | --- |
| maw | $2mW_{max}+m2^{W_{max}/m+W_{max}}$ | $\frac{n^{2}}{2}W_{max}$ |
| miw | $2^{W_{max}}$ | $\frac{n^{2}}{2}W_{max}$ |
Table 2: Upper bounds on the convergence time to NE for the unrelated machines
case. The results for the Makespan policy are taken from [EDKM03].
## 5 Future Work
As a possible extension, one can study other cost policies and priority
algorithms and compare them with the existing ones. The above models can also
be studied under the notion of the price of anarchy. Is a cost policy that is
good with respect to convergence time to NE also good with respect to the
price of anarchy? What is FIFO’s price of anarchy? The cases of related and
unrelated machines also have many open problems. Finally, in games among
coalitions it is interesting to study coalitions with more than 2 players.
## 6 Acknowledgements
This work is a part of my bachelor’s thesis at the University of Patras,
Department of Computer Engineering & Informatics. It was supervised by Paul
Spirakis and Spiros Kontogiannis.
## References
* [EDKM03] Eyal Even-Dar, Alexander Kesselman, and Yishay Mansour. Convergence time to nash equilibria. In ICALP, volume 2719 of Lecture Notes in Computer Science, pages 502–513. Springer, 2003.
* [EDM05] Eyal Even-Dar and Yishay Mansour. Fast convergence of selfish rerouting. In SODA, pages 772–781. SIAM, 2005.
* [FGL+03] Feldmann, Gairing, Lucking, Monien, and Rode. Nashification and the coordination ratio for a selfish routing game. In ICALP: Annual International Colloquium on Automata, Languages and Programming, 2003.
* [FKS06] Fotakis, Kontogiannis, and Spirakis. Atomic congestion games among coalitions. In ICALP: Annual International Colloquium on Automata, Languages and Programming, 2006.
* [ILMS05] Nicole Immorlica, Li Li, Vahab S. Mirrokni, and Andreas Schulz. Coordination mechanisms for selfish scheduling. In Xiaotie Deng and Yinyu Ye, editors, WINE, volume 3828 of Lecture Notes in Computer Science, pages 55–69. Springer, 2005.
* [Kon06] S. C. Kontogiannis. Algorithms for pure nash equilibria in the game of parallel links. 2006.
* [OR94] M.J. Osborne and A. Rubinstein. A Course in Game Theory. MIT Press, 1994.
|
arxiv-papers
| 2011-03-25T19:57:07 |
2024-09-04T02:49:17.940911
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Vissarion Fisikopoulos",
"submitter": "Vissarion Fisikopoulos",
"url": "https://arxiv.org/abs/1103.5071"
}
|
1103.5169
|
11institutetext: Ritchie Lee 22institutetext: Carnegie Mellon University
Silicon Valley, NASA Ames Research Park, Mail Stop 23-11, Moffett Field, CA,
94035 22email: ritchie.lee@sv.cmu.edu 33institutetext: David Wolpert
44institutetext: NASA Ames Research Center, Mail Stop 269-1, Moffett Field,
CA, 94035 44email: david.h.wolpert@nasa.gov
# Game theoretic modeling of pilot behavior during mid-air encounters
Ritchie Lee and David Wolpert
###### Abstract
We show how to combine Bayes nets and game theory to predict the behavior of
hybrid systems involving both humans and automated components. We call this
novel framework “Semi Network-Form Games,” and illustrate it by predicting
aircraft pilot behavior in potential near mid-air collisions. At present, at
the beginning of such potential collisions, a collision avoidance system in
the aircraft cockpit advises the pilots what to do to avoid the collision.
However, studies of mid-air encounters have found wide variability in pilot
responses to avoidance system advisories. In particular, pilots rarely
perfectly execute the recommended maneuvers, despite the fact that the
collision avoidance system’s effectiveness relies on their doing so. Rather,
pilots decide their actions based on all information available to them
(advisory, instrument readings, visual observations). We show how to build
this aspect into a semi network-form game model of the encounter and then
present computational simulations of the resultant model.
## 1 Introduction
Bayes nets have been widely investigated and commonly used to describe
stochastic systems BishopBook ; Darwiche09 ; RussellBook . Powerful techniques
already exist for the manipulation, inference, and learning of probabilistic
networks. Furthermore, these methods have been well-established in many
domains, including expert systems, robotics, speech recognition, and
networking and communications KollerBook . On the other hand, game theory is
frequently used to describe the behavior of interacting humans Crawford02 ;
Crawford08 . A vast amount of experimental literature exists (especially in
economic contexts, such as auctions and negotiations), which analyze and
refine human behavior models CamererBook ; Caplin10 ; Selten08 . These two
fields have traditionally been regarded as orthogonal bodies of work. However,
in this work we propose to create a modeling framework that leverages the
strengths of both.
Building on earlier approaches Camerer10 ; Koller03 , we introduce a novel
framework, “Semi Network-Form Game,” (or “semi net-form game”) that combines
Bayes nets and game theory to model hybrid systems. We use the term “hybrid
systems” to mean such systems that may involve multiple interacting human and
automation components. The semi network-form game is a specialization of the
complete framework “network-form game,” formally defined and elaborated in
WolpertNFG .
The issue of aircraft collision avoidance has recently received wide attention
from aviation regulators due to some alarming near mid-air collision (NMAC)
statistics SfeArticle . Many discussions call into question the effectiveness
of current systems, especially that of the onboard collision avoidance system.
This system, called “Traffic Alert and Collision Avoidance System (TCAS),” is
associated with many weaknesses that render it increasingly less effective as
traffic density grows exponentially. Some of these weaknesses include complex,
contorted advisory logic, vertical-only advisories, and unrealistic pilot
models. In this work, we demonstrate how the collision avoidance problem can
be modeled using a semi net-form game, and show how this framework can be used
to perform safety and performance analyses.
The rest of this chapter is organized as follows. In Section 2, we start by
establishing the theoretical fundamentals of semi net-form games. First, we
give a formal definition of the semi net-form game. Secondly, we motivate and
define a new game theoretic equilibrium concept called “level-K relaxed
strategies” that can be used to make predictions on a semi net-form game.
Motivated by computational issues, we then present variants of this
equilibrium concept that improve both computational efficiency and prediction
variance. In Section 3, we use a semi net-form game to model the collision
avoidance problem and discuss in detail the modeling of a 2-aircraft mid-air
encounter. First, we specify each element of the semi net-form game model and
describe how we compute a sample of the game theoretic equilibrium. Secondly,
we describe how to extend the game across time to simulate a complete
encounter. Then we present the results of a sensitivity analysis on the model
and examine the potential benefits of a horizontal advisory system. Finally,
we conclude via a discussion of semi net-form game benefits in Section 4 and
concluding remarks in Section 5.
## 2 Semi Network-Form Games
Before we formally define the semi net-form game and various player
strategies, we first define the notation used throughout the chapter.
### 2.1 Notation
Our notation is a combination of standard game theory notation and standard
Bayes net notation. The probabilistic simplex over a space $Z$ is written as
$\Delta_{Z}$. Any Cartesian product $\times_{y\in Y}\Delta_{Z}$ is written as
$\Delta_{Z\mid Y}$. So $\Delta_{Z\mid Y}$ is the space of all possible
conditional distributions of $z\in Z$ conditioned on a value $y\in Y$.
We indicate the size of any finite set $A$ as $|A|$. Given a function $g$ with
domain $X$ and a subset $Y\subset X$, we write $g(Y)$ to mean the set
$\\{g(x):x\in Y\\}$. We couch the discussion in terms of countable spaces, but
much of the discussion carries over to the uncountable case, e.g., by
replacing Kronecker deltas $\delta_{a,b}$ with Dirac deltas $\delta(a-b)$.
We use uppercase letters to indicate a random variable or its range, with the
context making the choice clear. We use lowercase letters to indicate a
particular element of the associated random variable’s range, i.e., a
particular value of that random variable. When used to indicate a particular
player $i$, we will use the notation $-i$ to denote all players excluding
player $i$. We will also use primes to indicate sampled or dummy variables.
### 2.2 Definition
A semi net-form game uses a Bayes net to serve as the underlying probabilistic
framework, consequently representing all parts of the system using random
variables. Non-human components such as automation and physical systems are
described using “chance” nodes, while human components are described using
“decision” nodes. Formally, chance nodes differ from decision nodes in that
their conditional probability distributions are pre-specified. Instead each
decision node is associated with a utility function, which maps an
instantiation of the net to a real number quantifying the player’s utility. To
fully specify the Bayes net, it is necessary to determine the conditional
distributions at the decision nodes to go with the distributions at the chance
nodes. We will discuss how to arrive at the players’ conditional distributions
(over possible actions), also called their “strategies,” later in Section 2.6.
We now formally define a semi network-form game as follows:
###### Definition 1
An ($N$-player) semi network-form game is a quintuple $(G,X,u,R,\pi)$ where
1. 1.
$G$ is a finite directed acyclic graph $\\{V,E\\}$, where $V$ is the set of
vertices and $E$ is the set of connecting edges of the graph. We write the set
of parent nodes of any node $v\in V$ as $pa(v)$ and its successors as
$succ(v)$.
2. 2.
$X$ is a Cartesian product of $|V|$ separate finite sets, each with at least
two elements, with the set for element $v\in V$ written as $X_{v}$, and the
Cartesian product of sets for all elements in $pa(v)$ written as $X_{pa(v)}$.
3. 3.
$u$ is a function $X\rightarrow{\mathbb{R}}^{N}$. We will typically view it as
a set of $N$ utility functions $u_{i}:X\rightarrow\mathbb{R}$.
4. 4.
$R$ is a partition of $V$ into $N+1$ subsets, the first $N$ of which have
exactly one element. The elements of $R(1)$ through $R(N)$ are called
“Decision” nodes, and the elements of $R(N+1)$ are “Chance” nodes. We write
$\textsf{D}\equiv\cup_{i=1}^{N}R(i)$ and $\textsf{C}\equiv R(N+1)$.
5. 5.
$\pi$ is a function from $v\in
R(N+1)\rightarrow\Delta_{{X_{v}}\mid\times_{v^{\prime}\in
pa(v)}{X_{v^{\prime}}}}$. (In other words, $\pi$ assigns to every $v\in
R(N+1)$ a conditional probability distribution of $v$ conditioned on the
values of its parents.)
Intuitively, ${X_{v}}$ is the set of all possible states at node $v$, $u_{i}$
is the utility function of player $i$, $R(i)$ is the decision node set by
player $i$, and $\pi$ is the fixed set of distributions at chance nodes. As an
example, a normal-form game MyersonBook is a semi net-form game in which $E$
is empty. As another example, let $v$ be a decision node of player $i$ that
has one parent, $v^{\prime}$. Then the conditional distribution
$P(X_{v^{\prime}}\mid X_{pa(v^{\prime})})$ is a generalization of an
information set.
A semi net-form game is a special case of a general network-form game
WolpertNFG . In particular, a semi net-form game allows each player to control
only one decision node, whereas the full network-form game makes no such
restrictions allowing a player to control multiple decision nodes in the net.
Branching (via “branch nodes”) is another feature not available in semi net-
form games. Like a net-form game, Multi-Agent Influence Diagrams Koller03
also allow multiple nodes to be controlled by each player. Unlike a net-form
game, however, they do not consider bounded rational agents, and have special
utility nodes rather than utility functions.
### 2.3 A Simple Semi Network-Form Game Example
We illustrate the basic understandings of semi net-form games using the simple
example shown in Figure 1. In this example, there are 6 random variables
($A,B,C,D,P_{1},P_{2}$) represented as nodes in the net; the edges between
nodes define the conditional dependence between random variables. For example,
the probability of $D$ depends on the values of $P_{1}$ and $P_{2}$, while the
probability of $A$ does not depend on any other variables. We distinguish
between the two types of nodes: chance nodes ($A,B,C,D$), and decision nodes
($P_{1},P_{2}$). As discussed previously, chance nodes differ from decision
nodes in that their conditional probability distributions are specified
a priori. Decision nodes do not have these distributions pre-specified, but
rather what is pre-specified are the utility functions ($U_{1}$ and $U_{2}$)
of those players. Using their utility functions, their strategies $P(P_{1}\mid
B)$ and $P(P_{2}\mid C)$ are computed to complete the Bayes net. This
computation requires the consideration of the Bayes net from each player’s
perspective.
Figure 1: A simple net-form game example: Fixed conditional probabilities are
specified for chance nodes ($A,B,C,D$), while utility functions are specified
for decision nodes ($P_{1},P_{2}$). Players try to maximize their expected
utility over the Bayes net.
Figure 2 illustrates the Bayes net from $P_{1}$’s perspective. In this view,
there are nodes that are observed ($B$), there are nodes that are controlled
($P_{1}$), and there are nodes that do not fall into any of these categories
($A,C,P_{2},D$), but appear in the player’s utility function. This arises from
the fact that in general the player’s utility function can be a function of
any variable in the net. As a result, in order to evaluate the expected value
of his utility function for a particular candidate action (sometimes we will
use the equivalent game theoretic term “move”), $P_{1}$ must perform inference
over these variables based on what he observes111We discuss the computational
complexity of a particular equilibrium concept later in Section 2.7. Finally,
the player chooses the action that gives the highest expected utility.
Figure 2: A simple net-form example game from player 1’s perspective: Using
information that he observes, the player infers over unobserved variables in
the Bayes net in order to set the value of his decision node.
### 2.4 Level-K Thinking
Level-K thinking is a game theoretic equilibrium concept used to predict the
outcome of human-human interactions. A number of studies Camerer10 ; Camerer89
; CamererBook ; CostaGomes09 ; Crawford07 ; Wright10 have shown promising
results predicting experimental data in games using this method. The concept
of level-K is defined recursively as follows. A level $K$ player plays (picks
his action) as though all other players are playing at level $K-1$, who, in
turn, play as though all other players are playing at level $K-2$, etc. The
process continues until level 0 is reached, where the player plays according
to a prespecified prior distribution. Notice that running this process for a
player at $K\geq 2$ results in ricocheting between players. For example, if
player A is a level 2 player, he plays as though player B is a level 1 player,
who in turn plays as though player A is a level 0 player. Note that player B
in this example may not be a level 1 player in reality – only that player A
assumes him to be during his reasoning process. Since this ricocheting process
between levels takes place entirely in the player’s mind, no wall clock time
is counted (we do not consider the time it takes for a human to run through
his reasoning process). We do not claim that humans actually think in this
manner, but rather that this process serves as a good model for predicting the
outcome of interactions at the aggregate level. In most games, $K$ is a fairly
low number for humans; experimental studies CamererBook have found $K$ to be
somewhere between 1 and 2.
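The recursion just described can be made concrete for a simple two-player normal-form game. The sketch below is a minimal illustration; the payoff matrix and the level-0 prior are made-up stand-ins, not taken from any study cited here.

```python
import numpy as np

def level_k_action(k, payoff_self, payoff_other, level0_dist):
    """Action of a level-k player who models the opponent as a level k-1
    player (hypothetical payoffs, for illustration only)."""
    if k == 0:
        # Level 0 plays according to a prespecified prior; we simply take
        # its mode here as a simplification.
        return int(np.argmax(level0_dist))
    # Predict the opponent's action by recursing one level down...
    opp = level_k_action(k - 1, payoff_other, payoff_self, level0_dist)
    # ...then best-respond to that predicted action.
    return int(np.argmax(payoff_self[:, opp]))

# Made-up symmetric 2x2 game: rows are own actions, columns the opponent's.
A = np.array([[3.0, 0.0], [5.0, 1.0]])
print(level_k_action(2, A, A, level0_dist=np.array([0.5, 0.5])))
```

Note how the call for a level-2 player "ricochets" down through an imagined level-1 opponent to a level-0 prior, exactly as in the player-A/player-B example above.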
Although this work uses level-K exclusively, we are by no means wedded to this
equilibrium concept. In fact, semi net-form games can be adapted to use other
models, such as Nash equilibrium, Quantal Response Equilibrium, Quantal
Level-K, and Cognitive Hierarchy. Studies CamererBook ; Wright10 have found
that performance of an equilibrium concept varies a fair amount depending on
the game. Thus it may be wise to use different equilibrium concepts for
different problems.
### 2.5 Satisficing
Bounded rationality as coined by Simon Simon56 stems from observing the
limitations of humans during the decision-making process. That is, humans are
limited by the information they have, cognitive limitations of their minds,
and the finite amount of time they have to make decisions. The notion of
satisficing Caplin10 ; Simon56 ; Simon82 states that humans are unable to
evaluate the probability of all outcomes with sufficient precision, and thus
often make decisions based on adequacy rather than by finding the true
optimum. Because decision-makers lack the ability and resources to arrive at
the optimal solution, they instead apply their reasoning only after having
greatly simplified the choices available.
Studies have shown evidence of satisficing in human decision-making. In recent
experiments Caplin10 , subjects were given a series of calculations (additions
and subtractions), and were told that they would be given a monetary prize
equal to the answer of the calculation that they choose. Although the
calculations were not difficult in nature, they did take effort to perform.
The study found that most subjects did not exhaustively perform all the
calculations, but instead stopped when a “high enough” reward was found.
### 2.6 Level-K Relaxed Strategies
We use the notions of level-K thinking and satisficing to motivate a new game
theoretic equilibrium concept called “level-K relaxed strategies.” For a
player $i$ to perform classic level-K reasoning CamererBook requires $i$ to
calculate best responses222We use the term best response in the game theoretic
sense, i.e., the player chooses the move with the highest expected utility. In
turn, calculating best responses often involves calculating the Bayesian
posterior probability of what information is available to the other players,
$-i$, conditioned on the information available to $i$. That posterior is an
integral, which typically cannot be evaluated in closed form.
In light of this, to use level-K reasoning, players must approximate those
Bayesian integrals. We hypothesize that real-world players do this using Monte
Carlo sampling. Or more precisely, we hypothesize that their behavior is
consistent with their approximating the integrals that way.
More concretely, given a node $v$, to form their best-response, the associated
player $i=R^{-1}(v)$ will want to calculate quantities of the form
$\mathrm{argmax}_{x_{v}}[\mathbb{E}(u_{i}\mid x_{v},x_{pa(v)})]$, where $u_{i}$ is
the player’s utility, $x_{v}$ is the variable set by the player (i.e. his
move), and $x_{pa(v)}$ is the realization of his parents that he observes. We
hypothesize that he (behaves as though he) approximates this calculation in
several steps. First, $M$ candidate moves are chosen via IID sampling the
player’s satisficing distribution. Now, for each candidate move, he must
estimate the expected utility resulting from playing that move. He does this
by sampling the posterior probability distribution $P^{K}(X_{V}\mid
x_{v},x_{pa(v)})$ (which accounts for what he knows), and computing the sample
expectation $\hat{u}^{K}_{i}$. Finally, he picks the move that has the highest
estimated expected utility. Formally, we give the following definition:
###### Definition 2
Consider a semi network-form game $(G,X,u,R,\pi)$ with level $K-1$ relaxed
strategies333We will define level-K relaxed strategies in Definition 3.
$\Lambda^{K-1}(X_{v^{\prime}}\mid X_{pa(v^{\prime})})$ defined for all
$v^{\prime}\in\textsf{D}$ and $K\geq 1$. For all nodes $v$ and sets of nodes
$Z$ in such a semi net-form game, define
1.
$U=V\setminus\\{v,pa(v)\\}$,
2.
$P^{K}(X_{v}\mid X_{pa(v)})=\pi(X_{v}\mid X_{pa(v)})$ if $v\in\textsf{C}$,
3.
$P^{K}(X_{v}\mid X_{pa(v)})=\Lambda^{K-1}(X_{v}\mid X_{pa(v)})$ if
$v\in\textsf{D}$, and
4.
$P^{K}(X_{Z})=\prod_{v^{\prime\prime}\in Z}P^{K}(X_{v^{\prime\prime}}\mid
X_{pa(v^{\prime\prime})})$.
###### Definition 3
Consider a semi network-form game $(G,X,u,R,\pi)$. For all $v\in\textsf{D}$,
specify an associated level 0 distribution $\Lambda^{0}(X_{v}\mid
x_{pa(v)})\in\Delta_{{X_{v}}\mid\times_{v^{\prime}\in pa(v)}{X_{v^{\prime}}}}$
and an associated satisficing distribution $\lambda(X_{v}\mid
x_{pa(v)})\in\Delta_{{X_{v}}\mid\times_{v^{\prime}\in
pa(v)}{X_{v^{\prime}}}}$. Also specify counting numbers $M$ and $M^{\prime}$.
For any $K\geq 1$, the level $K$ relaxed strategy of node $v\in\textsf{D}$ is
the conditional distribution $\Lambda^{K}(X_{v}\mid
x_{pa(v)})\in\Delta_{{X_{v}}\mid\times_{v^{\prime}\in pa(v)}{X_{v^{\prime}}}}$
sampled by running the following stochastic process independently for each
$x_{pa(v)}\in X_{pa(v)}$:
1.
Form a set $\\{x^{\prime}_{v}(j):j=1,\ldots,M\\}$ of IID samples of
$\lambda(X_{v}\mid x_{pa(v)})$ and then remove all duplicates. Let $m$ be the
resultant size of the set;
2.
For $j=1,\dots,m$, form a set
$\\{x^{\prime}_{V}(k;x^{\prime}_{v}(j)):k=1,\ldots M^{\prime}\\}$ of IID
samples of the joint distribution
$\displaystyle P^{K}(X_{V}\mid x^{\prime}_{v}(j),x_{pa(v)})$ $\displaystyle=$
$\displaystyle\prod_{v^{\prime}\in V}P^{K}(X_{v^{\prime}}\mid
X_{pa(v^{\prime})})\delta_{X_{pa(v)},x_{pa(v)}}\delta_{X_{v},x^{\prime}_{v}(j)};$
and compute
$\displaystyle\hat{u}^{K}_{i}(x^{\prime}_{U}(;x^{\prime}_{v}(j)),x^{\prime}_{v}(j),x_{pa(v)})$
$\displaystyle=$
$\displaystyle\frac{1}{M^{\prime}}\sum_{k=1}^{M^{\prime}}u_{i}(x^{\prime}_{V}(k,x^{\prime}_{v}(j)));$
where $x^{\prime}_{V}(;x^{\prime}_{v}(j))$ is shorthand for
$\\{x^{\prime}_{v^{\prime}}(k,x^{\prime}_{v}(j)):v^{\prime}\in
V,k=1,\ldots,M^{\prime}\\}$
3.
Return $x^{\prime}_{v}(j^{*})$ where
$j^{*}\equiv\mathrm{argmax}_{j}[\hat{u}^{K}_{i}(x^{\prime}_{U}(;x^{\prime}_{v}(j)),x^{\prime}_{v}(j),x_{pa(v)})]$.
Intuitively, the counting numbers $M$ and $M^{\prime}$ can be interpreted as a
measure of a player’s rationality. Take, for example, $M\rightarrow\infty$ and
$M^{\prime}\rightarrow\infty$. Then the player’s entire movespace would be
considered as candidate moves, and the expected utility of each candidate move
would be perfectly evaluated. Under these circumstances, the player will
always choose the best possible move, making him perfectly rational. On the
other hand, if $M=1$, the player chooses his move according to his satisficing
distribution, corresponding to random behavior.
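The stochastic process of Definition 3 can be sketched for a single decision node. In the code below, the satisficing and posterior samplers are hypothetical stand-ins for $\lambda$ and $P^{K}$, and the toy distributions at the bottom are invented purely for illustration.

```python
import random

def level_k_relaxed_move(satisficing_sample, posterior_sample, utility, M, M_prime):
    """One draw from a level-K relaxed strategy (Definition 3, sketched).

    satisficing_sample()  -> one candidate move x_v
    posterior_sample(x_v) -> one sample of the unobserved variables given x_v
    utility(x_v, hidden)  -> the player's utility for that joint realization
    """
    # Step 1: M IID candidate moves from the satisficing distribution, deduplicated.
    candidates = list({satisficing_sample() for _ in range(M)})
    best_move, best_estimate = None, float("-inf")
    for x_v in candidates:
        # Step 2: estimate the expected utility of x_v with M' posterior samples.
        u_hat = sum(utility(x_v, posterior_sample(x_v))
                    for _ in range(M_prime)) / M_prime
        # Step 3: keep the move with the highest estimated expected utility.
        if u_hat > best_estimate:
            best_move, best_estimate = x_v, u_hat
    return best_move

# Toy example: moves 0..4, hidden state uniform on 0..4, utility -|move - hidden|.
random.seed(0)
move = level_k_relaxed_move(
    satisficing_sample=lambda: random.randrange(5),
    posterior_sample=lambda x_v: random.randrange(5),
    utility=lambda x_v, h: -abs(x_v - h),
    M=4, M_prime=50)
print(move)
```

Setting `M=1` reduces the loop to a single satisficing draw, and letting `M` and `M_prime` grow recovers the perfectly rational limit described above.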
One of the strengths of Monte Carlo expectation estimation is that it is
unbiased RobertBook . This property carries over to level-K relaxed
strategies. More precisely, consider a level $K$ relaxed player $i$, deciding
which of his moves $\\{x^{\prime}_{v}(j):j\in 1,\ldots,m\\}$ to play for the
node $v$ he controls, given a particular set of values $x_{pa(v)}$ that he
observes. To do this he will compare the values
$\hat{u}^{K}_{i}(x^{\prime}_{U}(;x^{\prime}_{v}(j)),x^{\prime}_{v}(j),x_{pa(v)})$.
These values are all unbiased estimates of the associated conditional expected
utility444Note that the true expected conditional utility is not defined
without an associated complete Bayes net. However, we show in the proof of
Theorem 2.1 that the expected conditional utility is actually independent of the
probability $P_{\Gamma_{i}}(X_{v}\mid X_{pa(v)})$ and so it can be chosen
arbitrarily. We make the assumption that $P_{\Gamma_{i}}(x_{v}\mid
x_{pa(v)})\neq 0$ for mathematical formality to avoid dividing by zero in the
proof. evaluated under an equivalent Bayes Net $\Gamma_{i}$ defined in Theorem
2.1. Formally, we have the following:
###### Theorem 2.1
Consider a semi net-form game $(G,X,u,R,\pi)$ with associated satisficing
$\lambda(X_{v}\mid x_{pa(v)})$ and level 0 distribution $\Lambda^{0}(X_{v}\mid
x_{pa(v)})$ specified for all players.
Choose a particular player $i$ of that game, a particular level $K$, and a
player move $x_{v}=x^{\prime}_{v}(j)$ from Definition 3 for some particular
$j$. Consider any values $x_{pa(v)}$ where $v$ is the node controlled by
player $i$. Define $\Gamma_{i}$ as any Bayes net whose directed acyclic graph
is $G$, where for all nodes $v^{\prime}\in\textsf{C}$,
$P_{\Gamma_{i}}(X_{v^{\prime}}\mid X_{pa(v^{\prime})})=\pi(X_{v^{\prime}}\mid
X_{pa(v^{\prime})})$, for all nodes $v^{\prime\prime}\in\textsf{D}\setminus\\{v\\}$,
$P_{\Gamma_{i}}(X_{v^{\prime\prime}}\mid X_{pa(v^{\prime\prime})})=\Lambda^{K-1}(X_{v^{\prime\prime}}\mid X_{pa(v^{\prime\prime})})$, and where
$P_{\Gamma_{i}}(X_{v}\mid X_{pa(v)})$ is arbitrary so long as
$P_{\Gamma_{i}}(x_{v}\mid x_{pa(v)})\neq 0$. We also define the notation
$P_{\Gamma_{i}}(X_{Z})$ for a set Z of nodes to mean $\prod_{v^{\prime}\in
Z}P_{\Gamma_{i}}(X_{v^{\prime}}\mid X_{pa(v^{\prime})})$.
Then the expected value $\mathbb{E}(\hat{u}^{K}_{i}\mid
x^{\prime}_{v}(j),x_{pa(v)})$ evaluated under the associated level-K relaxed
strategy equals $\mathbb{E}(u_{i}\mid x_{v},x_{pa(v)})$ evaluated under the
Bayes net $\Gamma_{i}$.
###### Proof
Write
$\displaystyle\mathbb{E}(\hat{u}^{K}_{i}\mid x^{\prime}_{v}(j),x_{pa(v)})\;=$
$\displaystyle\int
dx^{\prime}_{V}(;x^{\prime}_{v}(j))\;P(x^{\prime}_{V}(;x^{\prime}_{v}(j))\mid
x^{\prime}_{v}(j),x_{pa(v)})\hat{u}^{K}_{i}(x^{\prime}_{V}(;x^{\prime}_{v}(j)))$
$\displaystyle=$ $\displaystyle\int
dx^{\prime}_{V}(;x^{\prime}_{v}(j))\;P(x^{\prime}_{V}(;x^{\prime}_{v}(j))\mid
x^{\prime}_{v}(j),x_{pa(v)})\frac{1}{M^{\prime}}\sum_{k=1}^{M^{\prime}}u_{i}(x^{\prime}_{V}(k,x^{\prime}_{v}(j)))$
$\displaystyle=$ $\displaystyle\frac{1}{M^{\prime}}\sum_{k=1}^{M^{\prime}}\int
dx^{\prime}_{V}(k,x^{\prime}_{v}(j))\;P^{K}(x^{\prime}_{V}(k,x^{\prime}_{v}(j))\mid
x^{\prime}_{v}(j),x_{pa(v)})u_{i}(x^{\prime}_{V}(k,x^{\prime}_{v}(j)))$
$\displaystyle=$ $\displaystyle\frac{1}{M^{\prime}}\sum_{k=1}^{M^{\prime}}\int
dX_{V}\;P^{K}(X_{V}\mid x_{v},x_{pa(v)})u_{i}(X_{U},x_{v},x_{pa(v)})$
$\displaystyle=$ $\displaystyle\int dX_{V}\;P^{K}(X_{V}\mid
x_{v},x_{pa(v)})u_{i}(X_{U},x_{v},x_{pa(v)})$ $\displaystyle=$
$\displaystyle\frac{\int
dX_{U}\;P^{K}(X_{U},x_{v},x_{pa(v)})u_{i}(X_{U},x_{v},x_{pa(v)})}{\int
dX_{U}\;P^{K}(X_{U},x_{v},x_{pa(v)})}$ $\displaystyle=$
$\displaystyle\frac{\int dX_{U}\;\prod_{v^{\prime}\in
U}P^{K}(X_{v^{\prime}}\mid X_{pa(v^{\prime})})\prod_{v^{\prime\prime}\in
pa(v)}P^{K}(x_{v^{\prime\prime}}\mid X_{pa(v^{\prime\prime})})P^{K}(x_{v}\mid
x_{pa(v)})u_{i}(X_{U},x_{v},x_{pa(v)})}{\int dX_{U}\;\prod_{z^{\prime}\in
U}P^{K}(X_{z^{\prime}}\mid X_{pa(z^{\prime})})\prod_{z^{\prime\prime}\in
pa(v)}P^{K}(x_{z^{\prime\prime}}\mid X_{pa(z^{\prime\prime})})P^{K}(x_{v}\mid
x_{pa(v)})}$ $\displaystyle=$ $\displaystyle\frac{\int
dX_{U}\;\prod_{v^{\prime}\in U}P^{K}(X_{v^{\prime}}\mid
X_{pa(v^{\prime})})\prod_{v^{\prime\prime}\in
pa(v)}P^{K}(x_{v^{\prime\prime}}\mid
X_{pa(v^{\prime\prime})})u_{i}(X_{U},x_{v},x_{pa(v)})}{\int
dX_{U}\;\prod_{z^{\prime}\in U}P^{K}(X_{z^{\prime}}\mid
X_{pa(z^{\prime})})\prod_{z^{\prime\prime}\in
pa(v)}P^{K}(x_{z^{\prime\prime}}\mid X_{pa(z^{\prime\prime})})}$
$\displaystyle=$ $\displaystyle\frac{\int dX_{U}\;\prod_{v^{\prime}\in
U}P_{\Gamma_{i}}(X_{v^{\prime}}\mid
X_{pa(v^{\prime})})\prod_{v^{\prime\prime}\in
pa(v)}P_{\Gamma_{i}}(x_{v^{\prime\prime}}\mid
X_{pa(v^{\prime\prime})})u_{i}(X_{U},x_{v},x_{pa(v)})}{\int
dX_{U}\;\prod_{z^{\prime}\in U}P_{\Gamma_{i}}(X_{z^{\prime}}\mid
X_{pa(z^{\prime})})\prod_{z^{\prime\prime}\in
pa(v)}P_{\Gamma_{i}}(x_{z^{\prime\prime}}\mid X_{pa(z^{\prime\prime})})}$
$\displaystyle=$ $\displaystyle\int dX_{V}\;P_{\Gamma_{i}}(X_{V}\mid
x_{v},x_{pa(v)})u_{i}(X_{U},x_{v},x_{pa(v)})$ $\displaystyle=$
$\displaystyle\mathbb{E}(u_{i}\mid
x_{v},x_{pa(v)})\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qed$
In other words, we can set $P^{K}(x_{v}\mid x_{pa(v)})$ arbitrarily (as long
as it is nonzero) and still have the utility estimate evaluated under the
associated level-K relaxed strategy be an unbiased estimate of the expected
utility conditioned on $x_{v}$ and $x_{pa(v)}$ evaluated under $\Gamma_{i}$.
Unbiasedness in level-K relaxed strategies is important because the player must
rely on a limited number of samples to estimate the expected utility of each
candidate move. The difference of two unbiased estimates is itself unbiased,
enabling the player to compare estimates of expected utility without bias.
### 2.7 Level-K d-Relaxed Strategies
A practical problem with relaxed strategies is that the number of samples may
grow very quickly with depth of the Bayes net. The following example
illustrates another problem:
###### Example 1
Consider a net form game with two simultaneously moving players, Bob and
Nature, both making $\mathbb{R}$-valued moves. Bob’s utility function is given
by the difference between his and Nature’s move.
So to determine his level 1 relaxed strategy, Bob chooses $M$ candidate moves
by sampling his satisficing distribution, and then Nature chooses $M^{\prime}$
(“level 0”) moves for each of those $M$ moves by Bob. In truth, one of Bob’s
$M$ candidate moves, $x_{Bob}^{*}$, is dominant555We use the term dominant in
the game theoretic sense, i.e., the move $x_{Bob}^{*}$ gives Bob the highest
expected utility no matter what move Nature makes. over the other $M-1$
candidate moves due to the definition of the utility function. However, since
there is an independent set of $M^{\prime}$ samples of Nature for each of
Bob’s moves, there is nonzero probability that Bob won’t return $x_{Bob}^{*}$,
i.e., his level 1 relaxed strategy has nonzero probability of returning some
other move.
As it turns out, a slight modification to the Monte Carlo process defining
relaxed strategies results in Bob returning $x_{Bob}^{*}$ with probability $1$
in Example 1 for many graphs $G$. This modification also reduces the explosion
in the number of Monte Carlo samples required for computing the players’
strategies.
This modified version of relaxed strategies works by setting aside a set $Y$
of nodes which are statistically independent of the state of $v$. Nodes in $Y$
do not have to be resampled for each value $x_{v}$. Formally, the set $Y$ will
be defined using the dependence-separation (d-separation) property concerning
the groups of nodes in the graph $G$ that defines the semi net-form game
KollerBook ; Koller03 ; Pearl00 . Accordingly, we call this modification
“d-relaxed strategies.” Indeed, by _not_ doing any such resampling, we can
exploit the “common random numbers” technique to improve the Monte Carlo
estimates RobertBook . Loosely speaking, to choose the move with the highest
estimate of expected utility requires one to compare all pairs of estimates
and thus implicitly evaluate their differences. Recall that the variance of a
difference of two estimates is given by
$Var(\chi-\upsilon)=Var(\chi)+Var(\upsilon)-2Cov(\chi,\upsilon)$. By using
d-relaxed strategies, we expect the covariance $Cov(\chi,\upsilon)$ to be
positive, reducing the overall variance in the choice of the best move.
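The variance identity above can be checked numerically. The sketch below uses arbitrary made-up test functions `f` and `g` in place of utility estimates, and compares the variance of their estimated difference with independent draws versus common random numbers.

```python
import random

def estimate_difference(n, common):
    """Monte Carlo estimate of E[f(Z)] - E[g(Z)] for arbitrary test functions.

    With common=True, both expectations reuse the same draws of Z (common
    random numbers), so Cov(chi, upsilon) > 0 and the variance of the
    difference shrinks; with common=False the draws are independent.
    """
    f = lambda z: (z + 0.5) ** 2   # arbitrary illustrative functions
    g = lambda z: z ** 2
    zs_f = [random.gauss(0, 1) for _ in range(n)]
    zs_g = zs_f if common else [random.gauss(0, 1) for _ in range(n)]
    return sum(map(f, zs_f)) / n - sum(map(g, zs_g)) / n

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

random.seed(1)
trials = 2000
indep = [estimate_difference(50, common=False) for _ in range(trials)]
crn = [estimate_difference(50, common=True) for _ in range(trials)]
print(variance(indep), variance(crn))  # the CRN variance is markedly smaller
```

Here $f(z)-g(z)=z+0.25$ under common draws, so only the variance of $z$ itself survives, mirroring how shared samples of $Y$ cancel when comparing two candidate moves.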
###### Definition 4
Consider a semi network-form game $(G,X,u,R,\pi)$ with level $K-1$ d-relaxed
strategies666We will define level-K d-relaxed strategies in Definition 5.
$\bar{\Lambda}^{K-1}(X_{v^{\prime}}\mid X_{pa(v^{\prime})})$ defined for all
$v^{\prime}\in\textsf{D}$ and $K\geq 1$. For all nodes $v$ and sets of nodes
$Z$ in such a semi net-form game, define
1.
$S^{v}=succ(v)$,
2.
$S^{-v}=V\setminus\\{v\cup S^{v}\\}$,
3.
$Y=V\setminus\\{v\cup pa(v)\cup S^{v}\\}$,
4.
$\bar{P}^{K}(X_{v}\mid X_{pa(v)})=\pi(X_{v}\mid X_{pa(v)})$ if
$v\in\textsf{C}$,
5.
$\bar{P}^{K}(X_{v}\mid X_{pa(v)})=\bar{\Lambda}^{K-1}(X_{v}\mid X_{pa(v)})$ if
$v\in\textsf{D}$, and
6.
$\bar{P}^{K}(X_{Z})=\prod_{v^{\prime\prime}\in
Z}\bar{P}^{K}(X_{v^{\prime\prime}}\mid X_{pa(v^{\prime\prime})})$.
Note that $Y\cup pa(v)=S^{-v}$ and $v\cup S^{v}\cup S^{-v}=V$. The motivation
for these definitions comes from the fact that $Y$ is precisely the set of
nodes that are d-separated from $v$ by $pa(v)$. As a result, when the player
who controls $v$ samples $X_{v}$ conditioned on the observed $x_{pa(v)}$, the
resultant value $x_{v}$ is statistically independent of the values of all the
nodes in $Y$. Therefore the same set of samples of the values of the nodes in
$Y$ can be reused for each new sample of $X_{v}$. This kind of reuse can
provide substantial computational savings in the reasoning process of the
player who controls $v$. We now consider the modified sampling process noting
that a level-K d-relaxed strategy is defined recursively in $K$, via the
sampling of $\bar{P}^{K}$. Note that in general, Definition 3 and Definition 5
do not lead to the same player strategies (conditional distributions) as seen
in Example 1.
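The node sets of Definition 4 are simple graph computations. The sketch below applies them to the net of Figure 1; the edges out of $A$ ($A\rightarrow B$ and $A\rightarrow C$) are an assumption for illustration, since the text only specifies the parents of $P_{1}$, $P_{2}$, and $D$.

```python
def descendants(children, v):
    """All nodes reachable from v along directed edges, i.e. succ(v)."""
    seen, stack = set(), [v]
    while stack:
        for c in children.get(stack.pop(), []):
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def d_relaxed_sets(children, parents, v):
    """Return (S_v, S_minus_v, Y) of Definition 4 for decision node v."""
    V = set(children) | {c for cs in children.values() for c in cs}
    S_v = descendants(children, v)             # S^v = succ(v)
    S_minus_v = V - ({v} | S_v)                # S^{-v}
    Y = V - ({v} | set(parents[v]) | S_v)      # d-separated from v given pa(v)
    return S_v, S_minus_v, Y

# Figure 1 net; the edges out of A are assumed (A->B, A->C).
children = {"A": ["B", "C"], "B": ["P1"], "C": ["P2"],
            "P1": ["D"], "P2": ["D"], "D": []}
parents = {"P1": ["B"]}
S_v, S_minus_v, Y = d_relaxed_sets(children, parents, "P1")
print(S_v, S_minus_v, Y)
```

For $v=P_{1}$ this yields $S^{v}=\{D\}$ and $Y=\{A,C,P_{2}\}$, and one can check the identity $Y\cup pa(v)=S^{-v}$ directly.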
###### Definition 5
Consider a semi network-form game $(G,X,u,R,\pi)$ with associated level 0
distributions $\Lambda^{0}(X_{v}\mid x_{pa(v)})$ and satisficing distributions
$\lambda(X_{v}\mid x_{pa(v)})$. Also specify counting numbers $M$ and
$M^{\prime}$.
For any $K\geq 1$, the level $K$ d-relaxed strategy of node $v\in\textsf{D}$,
where $v$ is controlled by player $i$, is the conditional distribution
$\bar{\Lambda}^{K}(X_{v}\mid
x_{pa(v)})\in\Delta_{{X_{v}}\mid\times_{v^{\prime}\in pa(v)}{X_{v^{\prime}}}}$
that is sampled by running the following stochastic process independently for
each $x_{pa(v)}\in X_{pa(v)}$:
1.
Form a set $\\{x^{\prime}_{v}(j):j=1,\ldots,M\\}$ of IID samples of
$\lambda(X_{v}\mid x_{pa(v)})$ and then remove all duplicates. Let $m$ be the
resultant size of the set;
2.
Form a set $\\{x^{\prime}_{S^{-v}}(k):k=1,\ldots,M^{\prime}\\}$ of IID samples
of the distribution over $X_{S^{-v}}$ given by
$\displaystyle\bar{P}^{K}(X_{S^{-v}}\mid x_{pa(v)})$ $\displaystyle=$
$\displaystyle\prod_{v^{\prime}\in{S^{-v}}}\bar{P}^{K}(X_{v^{\prime}}\mid
X_{pa(v^{\prime})})\delta_{X_{pa(v)},x_{pa(v)}};$
3.
For $j=1,\dots,m$, form a set
$\\{x^{\prime}_{S^{v}}(k,x^{\prime}_{v}(j)):k=1,\ldots,M^{\prime}\\}$ of IID
samples of the distribution over $X_{S^{v}}$ given by
$\displaystyle\bar{P}^{K}(X_{S^{v}}\mid
x^{\prime}_{Y}(;),x^{\prime}_{v}(j),x_{pa(v)})$ $\displaystyle=$
$\displaystyle\prod_{v^{\prime}\in S^{v}}\bar{P}^{K}(X_{v^{\prime}}\mid
X_{pa(v^{\prime})})\prod_{v^{\prime\prime}\in
S^{-v}}\delta_{X_{v^{\prime\prime}},x^{\prime}_{v^{\prime\prime}}(k)}\delta_{X_{v},x^{\prime}_{v}(j)};$
and compute
$\displaystyle\bar{u}^{K}_{i}(x^{\prime}_{Y}(;),x^{\prime}_{S^{v}}(;x^{\prime}_{v}(j)),x^{\prime}_{v}(j),x_{pa(v)})\;=$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\frac{1}{M^{\prime}}\sum_{k=1}^{M^{\prime}}u_{i}(x^{\prime}_{Y}(k),x^{\prime}_{S^{v}}(k,x^{\prime}_{v}(j)),x^{\prime}_{v}(j),x_{pa(v)});$
where $x^{\prime}_{Y}(;)$ is shorthand for $\\{x^{\prime}_{v^{\prime}}(k):v^{\prime}\in
Y,k=1,\ldots,M^{\prime}\\}$ and $x^{\prime}_{S^{v}}(;x^{\prime}_{v}(j))$ is
shorthand for
$\\{x^{\prime}_{S^{v}}(k,x^{\prime}_{v}(j)):k=1,\ldots,M^{\prime}\\}$.
4.
Return $x^{\prime}_{v}(j^{*})$ where
$j^{*}\equiv\mathrm{argmax}_{j}[\bar{u}^{K}_{i}(x^{\prime}_{Y}(;),x^{\prime}_{S^{v}}(;x^{\prime}_{v}(j)),x^{\prime}_{v}(j),x_{pa(v)})]$.
Definition 5 requires directly sampling from a conditional probability, which
requires rejection sampling. This is highly inefficient if $pa(v)$ has low
probability, and actually impossible if $pa(v)$ is continuous. For these
computational considerations, we introduce a variation of the previous
algorithm based on likelihood-weighted sampling rather than rejection
sampling. Although the procedure, as we shall see in Definition 7, is only
able to estimate the player’s expected utility up to a proportionality
constant (due to the use of likelihood-weighted sampling KollerBook ), we
point out that this is sufficient since proportionality is all that is
required to choose between candidate moves. Note that the un-normalized
likelihood-weighted level-K d-relaxed strategy, like the level-K d-relaxed
strategy, is defined recursively in $K$.
###### Definition 6
Consider a semi network-form game $(G,X,u,R,\pi)$ with unnormalized
likelihood-weighted level $K-1$ d-relaxed strategies777We will define
unnormalized likelihood-weighted level-K d-relaxed strategies in Definition 7.
$\tilde{\Lambda}^{K-1}(X_{v^{\prime}}\mid X_{pa(v^{\prime})})$ defined for all
$v^{\prime}\in\textsf{D}$ and $K\geq 1$. For all nodes $v$ and sets of nodes
$Z$ in such a semi net-form game, define
1.
$\tilde{P}^{K}(X_{v}\mid X_{pa(v)})=\pi(X_{v}\mid X_{pa(v)})$ if
$v\in\textsf{C}$,
2.
$\tilde{P}^{K}(X_{v}\mid X_{pa(v)})=\tilde{\Lambda}^{K-1}(X_{v}\mid
X_{pa(v)})$ if $v\in\textsf{D}$, and
3.
$\tilde{P}^{K}(X_{Z})=\prod_{v^{\prime\prime}\in
Z}\tilde{P}^{K}(X_{v^{\prime\prime}}\mid X_{pa(v^{\prime\prime})})$.
###### Definition 7
Consider a semi network-form game $(G,X,u,R,\pi)$ with associated level 0
distributions $\Lambda^{0}(X_{v}\mid x_{pa(v)})$ and satisficing distributions
$\lambda(X_{v}\mid x_{pa(v)})$. Also specify counting numbers $M$ and
$M^{\prime}$, and recall the meaning of set $Y$ from Definition 4.
For any $K\geq 1$, the un-normalized likelihood-weighted level $K$ d-relaxed
strategy of node $v$, where node $v$ is controlled by player $i$, is the
conditional distribution $\tilde{\Lambda}^{K}(X_{v}\mid
x_{pa(v)})\in\Delta_{{X_{v}}\mid\times_{v^{\prime}\in pa(v)}{X_{v^{\prime}}}}$
that is sampled by running the following stochastic process independently for
each $x_{pa(v)}\in X_{pa(v)}$:
1.
Form a set $\\{x^{\prime}_{v}(j):j=1,\ldots,M\\}$ of IID samples of
$\lambda(X_{v}\mid x_{pa(v)})$, and then remove all duplicates. Let $m$ be the
resultant size of the set;
2.
Form a set of weight-sample pairs
$\\{(w^{\prime}(k),x^{\prime}_{S^{-v}}(k)):k=1,\ldots M^{\prime}\\}$ by
setting $x^{\prime}_{pa(v)}=x_{pa(v)}$, IID sampling the distribution over
$X_{Y}$ given by
$\displaystyle\tilde{P}^{K}(X_{Y})$ $\displaystyle=$
$\displaystyle\prod_{v^{\prime}\in Y}\tilde{P}^{K}(X_{v^{\prime}}\mid
X_{pa(v^{\prime})})$
and setting
$\displaystyle w^{\prime}(k)$ $\displaystyle=$
$\displaystyle\prod_{v^{\prime}\in pa(v)}\tilde{P}^{K}(x_{v^{\prime}}\mid
x^{\prime}_{pa(v^{\prime})}(k));$
3.
For $j=1,\dots,m$, form a set
$\\{x^{\prime}_{S^{v}}(k,x^{\prime}_{v}(j)):k=1,\ldots M^{\prime}\\}$ of IID
samples of the distribution over $X_{S^{v}}$ given by
$\displaystyle\tilde{P}^{K}(X_{S^{v}}\mid
x^{\prime}_{Y}(;),x^{\prime}_{v}(j),x_{pa(v)})$ $\displaystyle=$
$\displaystyle\prod_{v^{\prime}\in S^{v}}\tilde{P}^{K}(X_{v^{\prime}}\mid
X_{pa(v^{\prime})})\prod_{v^{\prime\prime}\in
S^{-v}}\delta_{X_{v^{\prime\prime}},x^{\prime}_{v^{\prime\prime}}(k)}\delta_{X_{v},x^{\prime}_{v}(j)};$
and compute
$\displaystyle\tilde{u}_{i}(x^{\prime}_{Y}(;),x^{\prime}_{S^{v}}(;x^{\prime}_{v}(j)),x^{\prime}_{v}(j),x_{pa(v)})\;=$
$\displaystyle\qquad\qquad\qquad\qquad\frac{1}{M^{\prime}}\sum_{k=1}^{M^{\prime}}w^{\prime}(k)u_{i}(x^{\prime}_{Y}(k),x^{\prime}_{S^{v}}(k,x^{\prime}_{v}(j)),x^{\prime}_{v}(j),x_{pa(v)});$
4.
Return $x^{\prime}_{v}(j^{*})$ where
$j^{*}\equiv\mathrm{argmax}_{j}[\tilde{u}_{i}(x^{\prime}_{Y}(;),x^{\prime}_{S^{v}}(;x^{\prime}_{v}(j)),x^{\prime}_{v}(j),x_{pa(v)})]$.
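The weights $w^{\prime}(k)$ in Definition 7 are standard likelihood weighting. A minimal sketch for a two-node net (with an invented prior and likelihood) shows why the un-normalized weighted sum suffices up to a proportionality constant:

```python
import random

def likelihood_weighted_expectation(prior_sample, evidence_likelihood, f, n):
    """Estimate E[f(Z) | evidence] without rejection sampling: draw Z from
    its prior and weight each draw by P(evidence | Z)."""
    total, weight_sum = 0.0, 0.0
    for _ in range(n):
        z = prior_sample()
        w = evidence_likelihood(z)  # the likelihood weight
        total += w * f(z)
        weight_sum += w
    # The un-normalized sum `total` already suffices to rank candidate moves,
    # since weight_sum is shared; dividing recovers the conditional expectation.
    return total / weight_sum

# Toy net Z -> E with Z ~ Uniform{0,1} and P(E=1 | Z=z) = 0.9 if z == 1 else 0.1.
random.seed(0)
est = likelihood_weighted_expectation(
    prior_sample=lambda: random.randrange(2),
    evidence_likelihood=lambda z: 0.9 if z == 1 else 0.1,
    f=lambda z: z,
    n=20000)
print(est)
```

Unlike rejection sampling, every draw contributes here, which is what makes the approach workable when the observed $pa(v)$ has low probability or is continuous.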
#### Computational Complexity
Let $N$ be the number of players. Intuitively, as each level $K$ player
samples the Bayes net from their perspective, they initiate samples by all
other players at level $K-1$. These players, in turn, initiate samples by all
other players at level $K-2$, continuing until level 1 is reached (since level
0 players do not sample the Bayes net).
As an example, Figure 3 enumerates the number of Bayes net samples required to
perform level-K d-relaxed sampling for $N=3$ where all players reason at
$K=3$. Each square represents performing the Bayes net sampling process once.
As shown in the figure, the sampling process of $P_{A}$ at level 3 initiates
sampling processes in the two other players, $P_{B}$ and $P_{C}$, at level 2.
This cascading effect continues until level 1 is reached, and is repeated from
the top for $P_{B}$ and $P_{C}$ at level 3. In general, when all players play
at the same level $K$, this may be conceptualized as having $N$ trees of
degree $N-1$ and depth $K$; therefore having a computational complexity
proportional to $\sum_{j=0}^{K-1}(N-1)^{j}N$, or $O(N^{K})$. In other words,
the computational complexity is polynomial in the number of players and
exponential in the number of levels. Fortunately, experiments CamererBook ;
CostaGomes09 have found $K$ to be small in human reasoning.
Figure 3: Computational complexity of level-K d-relaxed strategies with $N=3$
and $K=3$: Each box represents a single execution of the algorithm. The
computational complexity is found to be $O(N^{K})$.
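The counting argument behind Figure 3 can be sketched directly. The recursion below assumes, as in the text, that all players reason at the same level and that level 0 players do not sample the Bayes net.

```python
def num_bayes_net_runs(N, K):
    """Total Bayes net sampling processes when all N players reason at level K:
    each level-k run (k >= 1) triggers a level k-1 run for each of the other
    N-1 players; level 0 players do not sample the net."""
    def runs(k):
        return 0 if k == 0 else 1 + (N - 1) * runs(k - 1)
    return N * runs(K)

# N = 3 players all reasoning at K = 3, as enumerated in Figure 3:
total = num_bayes_net_runs(3, 3)
print(total)
# Agrees with the closed form N * sum_{j=0}^{K-1} (N-1)^j:
assert total == 3 * sum(2 ** j for j in range(3))
```

The count grows like $N(N-1)^{K-1}$, i.e. $O(N^{K})$, confirming the polynomial-in-players, exponential-in-levels behavior stated above.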
## 3 Using Semi Net-Form Games to Model Mid-Air Encounters
TCAS is an aircraft collision avoidance system currently mandated by the
International Civil Aviation Organization to be fitted to all aircraft with a
maximum take-off mass of over 5,700 kg (12,566 lbs) or authorized to carry more
than 19 passengers. It is an onboard system designed to operate independently
of ground-based air traffic management systems to serve as the last layer of
safety in the prevention of mid-air collisions. TCAS continuously monitors the
airspace around an aircraft and warns pilots of nearby traffic. If a potential
threat is detected, the system will issue a Resolution Advisory (RA), i.e.,
recommended escape maneuver, to the pilot. The RA is presented to the pilot in
both a visual and audible form. Depending on the aircraft, visual cues are
typically implemented on either an instantaneous vertical speed indicator, a
vertical speed tape that is part of a primary flight display, or using pitch
cues displayed on the primary flight display. Audibly, commands such as
“Climb, Climb!” or “Descend, Descend!” are heard.
If both (own and intruder) aircraft are TCAS-equipped, the issued RAs are
coordinated, i.e., the system will recommend different directions to the two
aircraft. This is accomplished via the exchange of “intents” (coordination
messages). However, not all aircraft in the airspace are TCAS-equipped (e.g.,
general aviation aircraft); those that are not equipped cannot issue RAs.
While TCAS has performed satisfactorily in the past, there are many
limitations to the current TCAS system. First, since TCAS input data is very
noisy in the horizontal direction, issued RAs are in the vertical direction
only, greatly limiting the solution space. Secondly, TCAS is composed of many
complex deterministic rules, rendering it difficult for authorities
responsible for the maintenance of the system (i.e., Federal Aviation
Administration) to understand, maintain, and upgrade. Thirdly, TCAS assumes
straight-line aircraft trajectories and does not take into account flight plan
information. This leads to a high false-positive rate, especially in the
context of closely-spaced parallel approaches.
This work focuses on addressing one major weakness of TCAS: the design
assumption of a deterministic pilot model. Specifically, TCAS assumes that a
pilot receiving an RA will delay for 5 seconds, and then accelerate at 1/4 g
to execute the RA maneuver precisely. Although pilots are trained to obey in
this manner, a recent study of the Boston area Kuchar07 has found that only
13% of RAs are obeyed – the aircraft response maneuver met the TCAS design
assumptions in vertical speed and promptness. In 64% of the cases, pilots were
in partial compliance – the aircraft moved in the correct direction, but did
not move as promptly or as aggressively as instructed. Shockingly, the study
found that in 23% of RAs, the pilots actually responded by maneuvering the
aircraft in the opposite direction of that recommended by TCAS (a number of
these cases of non-compliance may be attributed to visual flight
rules888Visual flight rules are a set of regulations set forth by the Federal
Aviation Administration which allow a pilot to operate an aircraft relying on
visual observations (rather than cockpit instruments).). As air traffic
density is expected to double in the next 30 years FAA10 , the safety risks of
using such a system will increase dramatically.
Pilot interviews have offered many insights toward understanding these
statistics. The main problem is a mismatch between the pilot model used to
design the TCAS system and the behavior exhibited by real human pilots. During
a mid-air encounter, the pilot does not blindly execute the RA maneuver.
Instead, he combines the RA with other sources of information (e.g., his
instrument panel and visual observations) to judge his best course of action. In
doing this, he quantifies the quality of a course of action in terms of a
utility function, or degree of happiness, defined over possible results of
that course of action. That utility function involves not only proximity to
the other aircraft in the encounter, but also how drastic a maneuver the
pilot makes. For example, if the pilot believes that a collision
is unlikely based on his observations, he may opt to ignore the alarm and
continue on his current course, thereby avoiding any loss of utility incurred
by maneuvering. This is why a pilot will rationally decide to ignore alarms
with a high probability of being false.
When designing TCAS, a high false alarm rate need not be bad in and of itself.
Rather, what is bad is a high false alarm rate that, combined with the pilots'
utility functions, results in pilot behavior which is undesirable at the system level.
This more nuanced perspective allows far more powerful and flexible design of
alarm systems than simply worrying about the false positive rate. Here, this
perspective is elaborated. We use a semi net-form game for predicting the
behavior of a system comprising automation and humans who are motivated by
utility functions and anticipation of one another’s behavior.
Recall the definition of a semi net-form game via a quintuple ($G,X,u,R,\pi$)
in Definition 1. We begin by specifying each component of this quintuple. To
increase readability, sometimes we will use (and mix) the equivalent notation
$Z=X_{Z}$, $z=x_{Z}$, and $z^{\prime}=x^{\prime}_{Z}$ for a node $Z$
throughout the TCAS modeling.
### 3.1 Directed Acyclic Graph $G$
The directed acyclic graph $G$ for a 2-aircraft encounter is shown in Figure
4. At any time $t$, the true system state of the mid-air encounter is
represented by the world state $S$, which includes the states of all aircraft.
Since the pilots (the players in this model) and TCAS hardware are not able to
observe the world state perfectly, a layer of nodes is introduced to model
observational noise and incomplete information. The variable $W_{i}$
represents pilot $i$’s observation of the world state, while $W_{TCAS_{i}}$
represents the observations of TCAS $i$’s sensors. A simplified model of the
current TCAS logic is then applied to $W_{TCAS_{i}}$ to emulate an RA $T_{i}$.
Each pilot uses his own observations $W_{i}$ and $T_{i}$ to choose an aircraft
maneuver command $A_{i}$. Finally, we produce the outcome $H$ by simulating
the aircraft states forward in time using a model of aircraft kinematics, and
calculate the social welfare $F$. We will describe the details of these
variables in the following sections.
Figure 4: Bayes net diagram of a 2-aircraft mid-air encounter: Each pilot
chooses a vertical rate to maximize his expected utility based on his TCAS
alert and a noisy partial observation of the world state.
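As a concrete (and purely illustrative) sketch, the parent structure of this Bayes net can be written down directly; the node names below are ours and mirror Figure 4 rather than any actual implementation:

```python
# Parent map for the 2-aircraft encounter DAG of Figure 4.
# Node names are illustrative, not from any real codebase.
parents = {
    "S": [],                   # world state (states of all aircraft)
    "W_1": ["S"],              # pilot 1's noisy observation of S
    "W_2": ["S"],              # pilot 2's noisy observation of S
    "W_TCAS_1": ["S"],         # TCAS 1 sensor observation
    "W_TCAS_2": ["S"],         # TCAS 2 sensor observation
    "T_1": ["W_TCAS_1"],       # RA emulated from TCAS 1's observation
    "T_2": ["W_TCAS_2"],       # RA emulated from TCAS 2's observation
    "A_1": ["W_1", "T_1"],     # pilot 1's maneuver command
    "A_2": ["W_2", "T_2"],     # pilot 2's maneuver command
    "H": ["S", "A_1", "A_2"],  # outcome from kinematic simulation
    "F": ["H"],                # social welfare
}

def topological_order(parents):
    """Return the nodes in an order where every parent precedes its children,
    i.e., the order in which the net can be sampled forward."""
    order, seen = [], set()
    def visit(n):
        if n in seen:
            return
        for p in parents[n]:
            visit(p)
        seen.add(n)
        order.append(n)
    for n in parents:
        visit(n)
    return order
```

Sampling the net in `topological_order(parents)` order corresponds to the forward simulation described in this section.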
### 3.2 Variable Spaces $X$
#### Space of World State $S$
The world state $S$ contains all the states used to define the mid-air
encounter environment. It includes 10 states per aircraft to represent
kinematics and pilot commands (see Table 3.2) and 2 states per aircraft to
indicate TCAS intent. Recall that TCAS has coordination functionality, where
it broadcasts its intents to other aircraft to avoid issuing conflicting RAs.
The TCAS intent variables are used to remember whether an aircraft has
previously issued an RA, and if so, what was the sense (direction).
Table 1: A description of aircraft kinematic states and pilot inputs.
Variable | Units | Description
---|---|---
$x$ | ft | Aircraft position in x direction
$y$ | ft | Aircraft position in y direction
$z$ | ft | Aircraft position in z direction
$\theta$ | rad | Heading angle
$\dot{\theta}$ | rad/s | Heading angle rate
$\dot{z}$ | ft/s | Aircraft vertical speed
$f$ | ft/s | Aircraft forward speed
$\phi_{c}$ | rad | Commanded aircraft roll angle
$\dot{z}_{c}$ | ft/s | Commanded aircraft vertical speed
$f_{c}$ | ft/s | Commanded aircraft forward speed
#### Space of TCAS Observation $W_{TCAS_{i}}$
Being a physical system, TCAS does not have full and direct access to the
world state. Rather, it must rely on noisy partial observations of the world
to make its decisions. $W_{TCAS_{i}}$ captures these observational
imperfections, modeling TCAS sensor noise and limitations. Note that each
aircraft has its own TCAS hardware and makes its own observations of the
world. Consequently, observations are made from a particular aircraft’s
perspective. To be precise, we denote $W_{TCAS_{i}}$ to represent the
observations that TCAS $i$ makes of the world state, where TCAS $i$ is the
TCAS system on board aircraft $i$. Table 3.2 describes each variable in
$W_{TCAS_{i}}$. Variables are real-valued (or positively real-valued where the
negative values do not have physical meaning).
Table 2: A description of $W_{TCAS_{i}}$ variables.
The intent variable takes one of four values:

1. No intent received.
2. Intent received with an up sense.
3. Intent received with a level-off sense.
4. Intent received with a down sense.

We choose $Q(W_{TCAS_{i}}\mid w_{TCAS_{i}})$ to be a tight Gaussian
distribution centered about $w_{TCAS_{i}}$. (We assume independent noise for
each of the variables $r_{h},\dot{r}_{h},\dot{h},h,h_{i}$, with standard
deviations of $5,2,2,2,2$, respectively; these variables are described in
Table 3.2.)
The trick, as always with importance sampling Monte Carlo, is to choose a
proposal distribution that will result in low variance, and that is nonzero at
all values where the integrand is nonzero RobertBook . In this case, so long
as the proposal distribution over $s^{\prime}$ has full support, the second
condition is met. So the remaining issue is how much variance there will be.
Since $Q(W_{TCAS_{i}}\mid w_{TCAS_{i}})$ is a tight Gaussian by the choice of
proposal distribution, values of $w^{\prime}_{TCAS_{i}}$ will be very close to
values of $w_{TCAS_{i}}$, causing $P(t_{i}\mid w^{\prime}_{TCAS_{i}})$ to be
much more likely to equal 1 than 0. To reduce the variance even further,
rather than form $M^{\prime}$ samples of the distribution, samples of the
proposal distribution are generated until $M^{\prime}$ of them have nonzero
posterior.
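A minimal sketch of this accept-until-$M'$ scheme, with hypothetical placeholder callables for the proposal and the posterior check:

```python
import random

def sample_until_nonzero(propose, posterior_is_nonzero, m_prime, max_tries=100000):
    """Draw from the proposal until m_prime samples have nonzero posterior.

    propose() returns a candidate w'; posterior_is_nonzero(w') checks whether
    P(t_i | w') > 0. This mirrors the variance-reduction trick in the text:
    rather than fixing the number of draws, we fix the number of *accepted*
    draws. (Illustrative sketch, not the authors' actual implementation.)"""
    accepted = []
    for _ in range(max_tries):
        w = propose()
        if posterior_is_nonzero(w):
            accepted.append(w)
            if len(accepted) == m_prime:
                return accepted
    raise RuntimeError("proposal rarely hits nonzero posterior")

# Toy usage: a tight Gaussian proposal centered on the observed value 10.0,
# with a posterior that is nonzero only above a threshold.
random.seed(0)
samples = sample_until_nonzero(
    propose=lambda: random.gauss(10.0, 0.5),
    posterior_is_nonzero=lambda w: w > 9.0,
    m_prime=20,
)
```

Because the proposal is tight around the observation, nearly every draw is accepted, which is exactly why the scheme keeps the variance low.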
We continue at step 3. For each candidate action $a^{\prime}_{i}(j)$, we
estimate its expected utility by sampling the outcome $H$ from
$\tilde{P}^{K}(H\mid x^{\prime}_{Y}(;),a^{\prime}_{i}(j),w_{i},t_{i})$, and
computing the estimate $\tilde{u}_{i}^{K}$. The weighting factor compensates
for our use of a proposal distribution to sample variables rather than
sampling them from their natural distributions. Finally, in step 4, we choose
the move $a^{\prime}_{i}(j^{*})$ that has the highest expected utility
estimate.
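Steps 3 and 4 above amount to a weighted Monte Carlo argmax. A hedged sketch, with placeholder callables standing in for the model components:

```python
def best_move(candidates, simulate_outcome, utility, k_samples=50):
    """Estimate each candidate action's expected utility by weighted
    sampling of outcomes, then return the argmax (steps 3-4 in the text).

    simulate_outcome(a) returns one sampled outcome H together with its
    importance weight (compensating for the proposal distribution);
    utility(h) is the player's utility. All callables are placeholders."""
    best, best_u = None, float("-inf")
    for a in candidates:
        total_w, total_wu = 0.0, 0.0
        for _ in range(k_samples):
            h, w = simulate_outcome(a)  # outcome sample and its weight
            total_w += w
            total_wu += w * utility(h)
        u_hat = total_wu / total_w if total_w > 0 else float("-inf")
        if u_hat > best_u:
            best, best_u = a, u_hat
    return best
```

With unit weights this reduces to a plain sample-average argmax; the weights matter only when outcomes are drawn from a proposal rather than their natural distributions.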
### 3.7 Encounter Simulation
Up until now, we have presented a game which describes a particular instant
$t$ in time. In order to simulate an encounter to any degree of realism, we
must consider how this game evolves with time.
#### Time Extension of the Bayes Net
Note that the timing of decisions is in reality stochastic as well as
asynchronous. (We are referring to the time at which the player makes his
decision, not the amount of time it takes for him to decide. Recall that
level-K reasoning occurs only in the mind of the decision-maker and thus does
not require any wall clock time.) However, to consider a stochastic and asynchronous
timing model would greatly increase the model’s complexity. For example, the
pilot would need to average over the other pilots’ decision timing and vice
versa. As a first step, we choose a much simpler timing model and make several
simplifying assumptions. First, each pilot only gets to choose a single move,
and he does so when he receives his initial RA. This move is maintained for
the remainder of the encounter. Secondly, each pilot decides his move by
playing a simultaneous move game with the other pilots (the game described by
($G,X,u,R,\pi$)). These assumptions effectively remove the timing
stochasticity from the model.
Modeling the interaction as a simultaneous move game is an approximation,
since it precludes a player from anticipating the timing of the other players'
moves. Formally, such anticipation would introduce an extra dimension into the
level-K reasoning: the player would need to sample not only the other players'
moves, but also the timing of those moves, over all times past and future.
However, because a player cannot observe another player's move directly (due
to delays in pilot and aircraft response), it makes no difference to him
whether that move was made in the past or simultaneously. This makes it
reasonable to model the game as simultaneous move at the time of decision. The
subtlety is that a player's reasoning should account for when his opponent
made his move, by imagining what the opponent would have seen at his own time
of decision; since our time horizons are short, neglecting this is a
reasonable approximation.
Figure 5 shows a revised Bayes net diagram – this time showing the extension
to multiple time steps. Quantities indicated by hatching in the figure are
passed between time steps. There are two types of variables to be passed.
First, we have the aircraft states. Second, recall that TCAS intents are
broadcasted to other aircraft as a coordination mechanism. These intents must
also be passed on to influence future RAs.
Figure 5: Time-extended Bayes net diagram of a 2-aircraft mid-air encounter:
We use a simple timing model that allows each pilot to make a single move at
the time he receives his TCAS alert.
#### Simulation Flow Control
Using the time-extended Bayes net as the basis for an inner loop, we add flow
control to manage the simulation. Figure 6 shows a flow diagram for the
simulation of a single encounter. An encounter begins by randomly initializing
a world state from the encounter generator (to be discussed in Section 3.7).
From here, the main loop begins.
First, the observational ($W_{i}$ and $W_{TCAS_{i}}$) and TCAS ($T_{i}$) nodes
are sampled. If a new RA is issued, the pilot receiving the new RA is allowed
to choose a new move via a modified level-K d-relaxed strategy (described in
Section LABEL:sec:computeStrategies). Otherwise, the pilots maintain their
current path. Note that in our model, a pilot may only make a move when he
receives a new RA. Since TCAS strengthenings and reversals (i.e., updates or
revisions to the initial RA) are not modeled, this implies that each pilot
makes a maximum of one move per encounter. Given the world state and pilot
commands, the aircraft states are simulated forward one time step, and social
welfare (to be discussed in Section 3.8) is calculated. If a near mid-air
collision (NMAC) is detected (defined as two aircraft separated by less than
100 ft vertically and 500 ft horizontally), then the encounter ends in
collision and a social welfare value of zero is assigned. If an NMAC did not
occur, successful resolution conditions (all aircraft have successfully passed
each other) are checked. On successful resolution, the encounter ends without
collision and the minimum approach distance $d_{min}$ is returned. If neither
of the end conditions are met, the encounter continues at the top of the loop
by sampling observational and TCAS nodes at the following time step.
Figure 6: Flow diagram of the encounter simulation process: We initialize the
encounter using an encounter generator, and simulate forward in time using
pilot commands and aircraft kinematics. The encounter ends when the aircraft
have either collided or have successfully passed each other.
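The loop of Figure 6 can be sketched as follows; the callables are placeholders for the model components, and only the NMAC thresholds (100 ft vertical, 500 ft horizontal) come from the text:

```python
def is_nmac(ac1, ac2):
    """Near mid-air collision: within 100 ft vertically and 500 ft
    horizontally (thresholds from the text; positions in ft)."""
    horiz = ((ac1["x"] - ac2["x"]) ** 2 + (ac1["y"] - ac2["y"]) ** 2) ** 0.5
    vert = abs(ac1["z"] - ac2["z"])
    return vert < 100.0 and horiz < 500.0

def simulate_encounter(init, step, resolved, d_min_of, max_steps=1000):
    """Skeleton of the Figure 6 flow (illustrative; model details omitted).

    init() draws a world state from the encounter generator; step(state)
    samples the observational, TCAS, and pilot nodes and advances the
    kinematics one time step; resolved(state) checks whether the aircraft
    have passed each other; d_min_of(state) is the running minimum
    approach distance."""
    state = init()
    for _ in range(max_steps):
        state = step(state)
        a1, a2 = state["aircraft"]
        if is_nmac(a1, a2):
            return 0.0              # collision: social welfare zero
        if resolved(state):
            return d_min_of(state)  # success: return minimum approach distance
    return d_min_of(state)
```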
#### Encounter Generation
The purpose of the encounter generator is to randomly initialize the world
states in a manner that is genuinely representative of reality. For example,
the encounters generated should be of realistic encounter geometries and
configurations. One way to approach this would be to use real data, and
moreover, devise a method to generate fictitious encounters based on learning
from real ones, such as that described in KochenderferATC344 ;
KochenderferATC345 . For now, the random geometric initialization described in
KochenderferItaly11 Section 6.1 is used. (The one modification is that
$t_{target}$, the initial time to collision between aircraft, is generated
randomly from a uniform distribution between 40 s and 60 s rather than fixed
at 40 s.)
#### Aircraft Kinematics Model
Since aircraft kinematic simulation is performed at the innermost step, its
implementation has a major impact on overall system performance. To
address computational considerations, a simplified aircraft kinematics model
is used in place of full aircraft dynamics. We justify these first-order
kinematics in two ways: First, we note that high-frequency modes are unlikely to
have a high impact at the time scales ($\sim 1$ min.) that we are dealing with
in this modeling. Secondly, modern flight control systems operating on most
(especially commercial) aircraft provide a fair amount of damping of high-
frequency modes as well as provide a high degree of abstraction. We make the
following assumptions in our model:
1. 1.
Only kinematics are modeled. Aerodynamics are not modeled. The assumption is
that modern flight control systems abstract this from the pilot.
2. 2.
No wind. Wind is not considered in this model.
3. 3.
No sideslip. This model assumes that the aircraft velocity vector is always
fully-aligned with its heading.
4. 4.
Pitch angle is abstracted. Pitch angle is ignored. Instead, the pilot directly
controls vertical rate.
5. 5.
Roll angle is abstracted. Roll angle is ignored. Instead, the pilot directly
controls heading rate.
Figure 7 shows the functional diagram of the kinematics model. The input
commands are first applied as inputs to first-order linear filters to update
$\dot{\theta}$, $\dot{z}$, and $f$; these quantities are then used in the
kinematic calculations to update the position and heading of the aircraft at
the next time step. Intuitively, the filters provide the appropriate time
response (transient) characteristics for the system, while the kinematic
calculations approximate the effects of the input commands on the aircraft’s
position and heading.
Figure 7: Aircraft kinematics model functional diagram: Pilot commands are
passed to filters to model aircraft transient response to first order. Then
aircraft kinematic equations based on forward Euler integration are applied.
The kinematic update equations, based on the forward Euler integration method,
are given by:

$\theta_{t+\Delta t}=\theta_{t}+\Delta t\cdot\dot{\theta}_{t}$

$x_{t+\Delta t}=x_{t}+\Delta t\cdot f_{t}\cdot\cos\theta_{t}$

$y_{t+\Delta t}=y_{t}+\Delta t\cdot f_{t}\cdot\sin\theta_{t}$

$z_{t+\Delta t}=z_{t}+\Delta t\cdot\dot{z}_{t}$
Recall that a first-order filter requires two parameters: an initial value and
a time constant. We set the filter’s initial value to the pilot commands at
the start of the encounter, thus starting the filter at steady-state. The
filter time constants are chosen by hand (using the best judgment of the
designers) to approximate the behavior of mid-size commercial jets. Refinement
of these parameters is the subject of future work.
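A sketch of one filter-plus-Euler update, combining the equations above with first-order command filters. The filter discretization, the time constants, and the use of a commanded heading rate in place of the commanded roll angle $\phi_{c}$ are all illustrative assumptions, not the authors' calibrated values:

```python
import math

def first_order_filter(value, command, tau, dt):
    """One forward-Euler step of a first-order lag toward the commanded
    value (assumed discretization; time constant tau in seconds)."""
    return value + (dt / tau) * (command - value)

def kinematic_step(ac, dt):
    """Filter the pilot commands, then apply the forward-Euler kinematic
    update from the text. `ac` holds the Table 1 states; a commanded
    heading rate theta_dot_c stands in for the commanded roll angle."""
    # Transient response: first-order filters toward the commands
    # (time constants here are placeholders, not tuned values).
    ac["theta_dot"] = first_order_filter(ac["theta_dot"], ac["theta_dot_c"], tau=2.0, dt=dt)
    ac["z_dot"] = first_order_filter(ac["z_dot"], ac["z_dot_c"], tau=3.0, dt=dt)
    ac["f"] = first_order_filter(ac["f"], ac["f_c"], tau=5.0, dt=dt)
    # Kinematic update (matches the equations above term by term).
    ac["theta"] += dt * ac["theta_dot"]
    ac["x"] += dt * ac["f"] * math.cos(ac["theta"])
    ac["y"] += dt * ac["f"] * math.sin(ac["theta"])
    ac["z"] += dt * ac["z_dot"]
    return ac
```

Starting the filters at the initial commands, as the text describes, simply means initializing each filtered state equal to its command so the filter begins at steady state.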
#### Modeling Details Regarding the Pilot’s Move $A_{i}$
Recall that a pilot only gets to make a move when he receives a new RA. In
fact, since strengthenings and reversals are not modeled, the pilot will begin
the scenario with a vertical speed, and get at most one chance to change it.
At his decision point, the pilots engage in a simultaneous move game
(described in Section LABEL:sec:computeStrategies) to choose an aircraft
escape maneuver. To model pilot reaction time, a 5-second delay is inserted
between the time the player chooses his move, and when the aircraft maneuver
is actually performed.
### 3.8 Social Welfare $F$
The social welfare function, specified a priori, maps an
instantiation of the Bayes net variables to a real number $F$. As a player’s
degree of happiness is summarized by his utility, social welfare is used to
quantify the degree of happiness for the system as a whole. Consequently, this
is the system-level metric that the system designer or operator seeks to
maximize. As there are no restrictions on how to set the social utility
function, it is up to the system designer to decide the system objectives. In
practice, regulatory bodies such as the Federal Aviation Administration or the
International Civil Aviation Organization will likely be interested in
defining the social welfare function in terms of a balance of safety and
performance metrics. For now, social welfare is chosen to be the minimum
approach distance $d_{min}$. In other words, the system is interested in
aircraft separation.
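Under one plausible reading of $d_{min}$ as the minimum 3-D separation over the encounter (the text does not pin down the metric), the social welfare computation is simply:

```python
def min_approach_distance(traj1, traj2):
    """Candidate social welfare F: the minimum 3-D separation (ft) between
    two aircraft over a simulated encounter. Each trajectory is a list of
    (x, y, z) positions sampled at the same time steps. The 3-D Euclidean
    metric is our assumption, not a definition from the text."""
    return min(
        ((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2) ** 0.5
        for (x1, y1, z1), (x2, y2, z2) in zip(traj1, traj2)
    )
```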
### 3.9 Example Encounter
To look at average behavior, one would execute a large number of encounters to
collect statistics on $F$. To gain a deeper understanding of encounters,
however, we examine encounters individually. Figure 8 shows 10 samples of the
outcome distribution for an example encounter. Obviously, only a single
outcome occurs in reality, but the trajectory spreads provide an insightful
visualization of the distribution of outcomes. In this example, we can see (by
visual inspection) that a mid-air collision is unlikely to occur in this
encounter. Furthermore, we see that probabilistic predictions by semi net-form
game modeling provide a much more informative picture than the deterministic
predicted trajectory that the TCAS model assumes (shown by the thicker
trajectory).
Figure 8: Predicted trajectories sampled from the outcome distribution of an
example encounter: Each aircraft proceeds on a straight-line trajectory until
the pilot receives an RA. At that point, the pilot uses level-K d-relaxed
strategies to decide what vertical rate to execute. The resultant trajectories
from 10 samples of the vertical rate are shown. The trajectory assumed by TCAS
is shown as the thicker trajectory.
### 3.10 Sensitivity Analysis
Because of their sampling nature, the level-K relaxed strategy and its
variants are well-suited for use with Monte Carlo techniques. In particular, such
techniques can be used to assess performance of the overall system by
measuring social welfare $F$ (as defined in Section 3.8). Observing how $F$
changes while varying parameters of the system can provide invaluable insights
about a system. To demonstrate the power of this capability, parameter studies
were performed on the mid-air encounter model, and sample results are shown in
Figures 9-12. In each case, we observe expected social welfare while selected
parameters are varied. Each datapoint represents the average of 1800
encounters.
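Such a parameter study can be sketched as a simple Monte Carlo harness (illustrative only; `run_encounter` stands in for one full encounter simulation):

```python
import random

def expected_social_welfare(run_encounter, n=1800, seed=0):
    """Average F over n simulated encounters; the text averages 1800
    encounters per datapoint. run_encounter(rng) returns one F sample."""
    rng = random.Random(seed)
    return sum(run_encounter(rng) for _ in range(n)) / n

def parameter_sweep(values, make_encounter_fn, n=1800):
    """Vary one model parameter and record expected F at each setting,
    as in Figures 9-12. make_encounter_fn(v) returns a run_encounter
    closure with the parameter fixed to v. (Placeholder harness.)"""
    return {v: expected_social_welfare(make_encounter_fn(v), n=n) for v in values}
```

Each figure in this section corresponds to one such sweep, with the varied parameter fed into the encounter model through the closure.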
In Figure 9, the parameters $M_{W}$ and $M_{W_{TCAS}}$, which are multiples on
the noise standard deviations of $W$ and $W_{TCAS}$ respectively, are plotted
versus social welfare $F$. It can be seen that as the pilot and TCAS system’s
observations get noisier (e.g. due to fog or faulty sensors), social welfare
decreases. This agrees with our intuition. A noteworthy observation is that
social welfare decreases faster with $M_{W}$ (i.e., when the pilot has a poor
visual observation of his environment) than with $M_{W_{TCAS}}$ (i.e., noisy
TCAS sensors). This would be especially relevant to, for example, a funder who
is allocating resources for developing less noisy TCAS sensors versus more
advanced panel displays for better situational awareness.
Figure 9: Impacts of observational noise on social welfare: Social welfare is
plotted against multiples on the noise standard deviations of $W$ and
$W_{TCAS}$. We observe that social welfare decreases much faster with increase
in $M_{W}$ than with increase in $M_{W_{TCAS}}$. This means that according to
our model, pilots receive more information from their general observations of
the world state than from the TCAS RA.
Figure 10 shows the dependence of social welfare on selected TCAS internal
logic parameters DMOD and ZTHR KochenderferATC360 . These parameters are
primarily used to define the size of safety buffers around the aircraft, and
thus it makes intuitive sense to observe an increase in $F$ (in the manner
that we’ve defined it) as these parameters are increased. Semi net-form game
modeling gives full quantitative predictions in terms of a social welfare
metric.
Figure 10: Impacts of TCAS parameters DMOD and ZTHR on social welfare: We
observe that social welfare increases as DMOD and ZTHR are increased. This
agrees with our intuition since these parameters are used to define the size
of safety buffers around the aircraft.
Figure 11 plots player utility weights versus social welfare. In general, the
results agree with intuition that higher $\alpha_{1}$ (stronger desire to
avoid collision) and lower $\alpha_{2}$ (weaker desire to stay on course) lead
to higher social welfare. These results may be useful in quantifying the
potential benefits of training programs, regulations, incentives, and other
pilot behavior-shaping efforts.
Figure 11: Impacts of player utility weights (see Section LABEL:sec:UtilFcn)
on social welfare: We observe that higher $\alpha_{1}$ (more weight to
avoiding collision) and lower $\alpha_{2}$ (less weight to maintaining current
course) lead to higher social welfare.
Figure 12 plots model parameters $M$ and $M^{\prime}$ versus $F$. Recall from
our discussion in Section 2.6 that these parameters can be interpreted as a
measure of the pilot’s rationality. As such, we point out that these
parameters are not ones that can be controlled, but rather ones that should be
set as closely as possible to reflect reality. One way to estimate the “true”
$M$ and $M^{\prime}$ would be to learn them from real data. (Learning model
parameters is the subject of a parallel research project.) A plot like Figure
12 allows a quick assessment of the sensitivity of $F$ to $M$ and
$M^{\prime}$.
Figure 12: Impacts of pilot model parameters $M$ and $M^{\prime}$ (see
Definition 7) on social welfare: We observe that as these parameters are
increased, there is an increase in social welfare. This agrees with our
interpretation of $M$ and $M^{\prime}$ as measures of rationality.
### 3.11 Potential Benefits of a Horizontal RA System
Recall that due to high noise in TCAS’ horizontal sensors, the current TCAS
system issues only vertical RAs. In this section, we consider the potential
benefits of a horizontal RA system. The goal of this work is not to propose a
horizontal TCAS system design, but to demonstrate how semi net-form games can
be used to evaluate new technologies.
In order to accomplish this, we make a few assumptions. Without loss of
generality, we refer to the first aircraft to issue an RA as aircraft 1, and
the second aircraft to issue an RA as aircraft 2. First, we notice that the
variable $W_{TCAS_{i}}$ does not contain relative heading information, which
is crucial to properly discriminating between various horizontal geometric
configurations. In KochenderferATC371 , Kochenderfer et al. demonstrated that
it is possible to track existing variables (range, range rate, bearing to
intruder, etc.) over time using an unscented Kalman filter to estimate
relative heading and velocity of two aircraft. Furthermore, estimates of the
steady-state tracking variances for these horizontal variables were provided.
For simplicity, this work does not attempt to reproduce these results, but
instead simply assumes that these variables exist and are available to the
system.
Secondly, until now the pilots have been restricted to making only vertical
maneuvers. This restriction is now removed, allowing pilots to choose moves
that have both horizontal and vertical components. However, we continue to
assume enroute aircraft, and thus aircraft heading rates are initialized to
zero at the start of the encounter. Finally, we assume that the horizontal RA
system is an augmentation to the existing TCAS system rather than a
replacement. As a result, we first choose the vertical component using mini
TCAS as was done previously, then select the horizontal RA component using a
separate process.
As a first step, we consider a reduced problem where we optimize the
horizontal RA for aircraft 2 only; aircraft 1 is always issued a maintain
heading horizontal RA. (Considering the full problem would require backward
induction, which we do not tackle at this time.) For the game theoretic
reasoning to be consistent, we make the assumption that the RA issuing order
is known to not only the TCAS systems, but also the pilots. Presumably, the
pilots would receive this order information via their instrument displays. To
optimize the RA horizontal component for aircraft 2, we perform an exhaustive
search over each of the five candidate horizontal RAs (hard left, moderate
left, maintain heading, moderate right, and hard right) to determine its
expected social welfare. The horizontal RA with the highest expected social
welfare is selected and issued to the pilot. To compute expected social
welfare, we simulate a number of counterfactual scenarios of the remainder of
the encounter, and then average over them.
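The exhaustive search over the five candidate horizontal RAs can be sketched as follows, with `simulate_remainder` a placeholder for sampling one counterfactual continuation of the encounter:

```python
HORIZONTAL_RAS = ["hard left", "moderate left", "maintain heading",
                  "moderate right", "hard right"]

def choose_horizontal_ra(candidates, simulate_remainder, n_samples=100):
    """Exhaustive search for aircraft 2's horizontal RA: for each
    candidate, average social welfare over sampled counterfactual
    continuations, then issue the argmax. simulate_remainder(ra)
    samples one continuation and returns its social welfare.
    (Sketch under stated assumptions; callables are placeholders.)"""
    best_ra, best_f = None, float("-inf")
    for ra in candidates:
        f_hat = sum(simulate_remainder(ra) for _ in range(n_samples)) / n_samples
        if f_hat > best_f:
            best_ra, best_f = ra, f_hat
    return best_ra
```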
To evaluate its performance, we compare the method described above (using
exhaustive search) to a system that issues a ‘maintain heading’ RA to both
aircraft. Figure 13 shows the distribution of social welfare for each system.
Not only does the exhaustive search method show a higher expected value of
social welfare, it also displays an overall distribution shift, which is
highly desirable. By considering the full shape of the distribution rather
than just its expected value, we gain much more insight into the behavior of
the underlying system.
Figure 13: A comparison of social welfare for two different horizontal RA
systems: Not only does the expected value of social welfare increase by using
the exhaustive search method, we also observe a shift upwards in the entire
probability distribution.
## 4 Advantages of Semi Net-Form Game Modeling
There are many distinct benefits to using semi net-form game modeling. We
elaborate in the following section.
1. 1.
Fully probabilistic. A semi net-form game is a fully probabilistic model,
and thus represents all quantities in the system using random variables. As a
result, not only are the probability distributions available for states of the
Bayes net, they are also available for any metrics derived from those states.
For the system designer, the probabilities offer an additional dimension of
insight for design. For regulatory bodies, the notion of considering full
probability distributions to set regulations represents a paradigm shift from
the current mode of aviation operation.
2. 2.
Modularity. A major strength of using a Bayes net as the basis of modeling is
that it decomposes a large joint probability distribution into smaller ones using
conditional independence. In particular, these smaller pieces have well-
defined inputs and outputs, making them easily upgraded or replaced without
affecting the entire net. One can imagine an ongoing modeling process that
starts by using very crude models at the beginning, then progressively
refining each component into higher fidelity ones. The interaction between
components is facilitated by using the common language of probability.
3. 3.
Computational human behavior model. Human-In-The-Loop (HITL) experiments
(those that involve human pilots in mid- to high-fidelity simulation
environments) are very tedious and expensive to perform because they involve
carefully crafted test scenarios and human participants. For the above
reasons, HITL experiments produce very few data points relative to the number
needed for statistical significance. On the other hand, semi net-form games
rely on mathematical modeling and numerical computations, and thus produce
data at much lower cost.
4. 4.
Computational convenience. Because semi net-form game algorithms are based on
sampling, they enjoy many inherent advantages. First, Monte Carlo algorithms
are easily parallelized to multiple processors, making them highly scalable
and powerful. Secondly, we can improve the performance of our algorithms by
using more sophisticated Monte Carlo techniques.
## 5 Conclusions and Future Work
In this chapter, we defined a framework called “Semi Network-Form Games,” and
showed how to apply that framework to predict pilot behavior in NMACs. As we
have seen, such a predictive model of human behavior enables not only powerful
analyses but also design optimization. Moreover, that method has many
desirable features which include modularity, fully-probabilistic modeling
capabilities, and computational convenience.
Since this study was performed using simplified models as well as
uncalibrated parameters, the authors caution that further studies should be
pursued to verify these findings. The authors point out that the primary focus of this
work is to demonstrate the modeling technology, and thus a follow-on study is
recommended to refine the model using experimental data.
In future work, we plan to further develop the ideas in semi network-form
games in the following ways. First, we will explore the use of alternative
numerical techniques for calculating the conditional distribution describing a
player’s strategy $P(X_{v}\mid x_{pa(v)})$, such as using variational
calculations and belief propagation KollerBook . Secondly, we wish to
investigate how to learn semi net-form game model parameters from real data.
Lastly, we will develop a software library to facilitate semi net-form game
modeling, analysis and design. The goal is to create a comprehensive tool that
not only enables easy representation of any hybrid system using a semi net-
form game, but also houses ready-to-use algorithms for performing learning,
analysis and design on those representations. We hope that such a tool would
be useful in augmenting the current verification and validation process of
hybrid systems in aviation.
By building powerful models such as semi net-form games, we hope to augment
the current qualitative methods (i.e., expert opinion, expensive HITL
experiments) with computational human models to improve safety and performance
for all hybrid systems.
###### Acknowledgements.
We give warm thanks to Mykel Kochenderfer, Juan Alonso, Brendan Tracey, James
Bono, and Corey Ippolito for their valuable feedback and support. We also
thank the NASA Integrated Resilient Aircraft Control (IRAC) project for
funding this work.
*[NMAC]: near mid-air collision
*[TCAS]: Traffic Alert and Collision Avoidance System
*[RA]: Resolution Advisory
*[RAs]: Resolution Advisories
*[HITL]: Human-In-The-Loop
*[NMACs]: near mid-air collisions
*[IRAC]: Integrated Resilient Aircraft Control
|
arxiv-papers
| 2011-03-26T21:52:52 |
2024-09-04T02:49:17.947237
|
{
"license": "Public Domain",
"authors": "Ritchie Lee and David H. Wolpert",
"submitter": "Ritchie Lee",
"url": "https://arxiv.org/abs/1103.5169"
}
|
1103.5301
|
# On the Growth Rate of Non-Enzymatic Molecular Replicators
Harold Fellermann Center for Fundamental Living Technology, University of
Southern Denmark, Campusvej 55, 5230 Odense M, Denmark ICREA-Complex Systems
Lab, Universitat Pompeu Fabra (GRIB), Dr Aiguader 80, 08003 Barcelona, Spain
harold@ifk.sdu.dk Steen Rasmussen Center for Fundamental Living Technology,
University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark Santa Fe
Institute, 1399 Hyde Park Road, Santa Fe NM 87501, USA
###### Abstract
It is well known that non-enzymatic template directed molecular replicators
$X+nO\stackrel{{\scriptstyle k}}{{\longrightarrow}}2X$ exhibit parabolic
growth $d[X]/dt\propto k[X]^{1/2}$. Here, we analyze the dependence of the
effective replication rate constant $k$ on hybridization energies,
temperature, strand length, and sequence composition. First we derive
analytical criteria for the replication rate $k$ based on simple thermodynamic
arguments. Second we present a Brownian dynamics model for oligonucleotides
that allows us to simulate their diffusion and hybridization behavior. The
simulation is used to generate and analyze the effect of strand length,
temperature, and to some extent sequence composition, on the hybridization
rates and the resulting optimal overall rate constant $k$. Combining the two
approaches allows us to semi-analytically depict a fitness landscape for
template directed replicators. The results indicate a clear replication
advantage for longer strands at low temperatures.
non-enzymatic molecular replication; growth rate; product inhibition; reaction
kinetics; Brownian dynamics
## I Introduction
Optimizing the yield of non-enzymatically self-replicating biopolymers is of
great interest for many basic science and application areas. Clearly, the
early organisms could not emerge with a fully developed enzymatic gene
replication machinery, so it is plausible that the first organisms had to rely
on non-enzymatic replication Gil:1986 ; Mon:2008 ; Cle:2009 . Most bottom up
protocell models also rely on non-enzymatic biopolymer replication Ras:2004 ;
Ras:2004b ; Ras:2008b ; Szo:2001 ; Man:2008 ; Han:2009 , which is also true
for a variety of prospective molecular computing and manufacturing
applications.111For example, see the European Commission sponsored projects
MatchIT and ECCell. Common to all of these research areas is the interest in
obtaining an optimal replication yield in the absence of modern enzymes.
Depending on the details, the biopolymer can be deoxyribonucleic acid (DNA),
ribonucleic acid (RNA), peptide nucleic acid (PNA), etc. In the following, we
refer to them collectively as XNA.
Figure 1: Minimal template directed replicator: two complementary oligomers
hybridize to a template strand (upper part). An irreversible ligation reaction
transforms the oligomers into the complementary copy of the template. The
newly obtained double strand can dehybridize (lower part) thus allowing for
iteration of the process. We assume that ligation is rate limiting, which
implies that hybridization and dehybridization are in local equilibrium.
Conceptually, XNA replication proceeds in three basic steps: (a) association,
or _hybridization_ of $n$ nucleotide monomers or oligomers with a single
stranded, complementary template; (b) formation of covalent bonds in a
condensation reaction, called _polymerization_ in case of monomer condensation
and _ligation_ in case of oligomers; and finally (c) dissociation, or
_dehybridization_ of the newly formed complementary strand:
$X+n\,O\underbrace{\xrightleftharpoons[k^{-}_{\text{O}}]{k^{+}_{\text{O}}}}_{\text{(a)}}XO_{n}\underbrace{\xrightleftharpoons[k^{-}_{\text{L}}]{k^{+}_{\text{L}}}}_{\text{(b)}}X\bar{X}+(n-1)\,W\underbrace{\xrightleftharpoons[k^{+}_{\text{T}}]{k^{-}_{\text{T}}}}_{\text{(c)}}X+\bar{X}+(n-1)\,W.$
(1)
Here, $X$ and $\bar{X}$ denote the template strand and its complement, $O$
denotes monomers/oligomers, $W$ is the leaving group of the condensation
reaction, and
$K_{i}=\frac{k^{+}_{i}}{k^{-}_{i}}=e^{-\Delta G_{i}/k_{\text{B}}T}\quad
i\in\\{\text{O},\text{L},\text{T}\\}$ (2)
are the equilibrium constants of the three reactions. Note that the left hand
transition of reaction scheme (1) is an abbreviation of a multi-step process
that accounts for all $n$ individual oligomer hybridizations and
dehybridizations, which is only partly captured by the net reaction process.
The covalent condensation reaction is entirely activation limited. For
nucleotide monophosphates, the leaving group corresponds to water, which (due
to its high concentration in aqueous solution) pushes the equilibrium to the
hydrolyzed state. Product yields are significantly increased when using
activated nucleotides, such as nucleotide triphosphate or imidazole.
Due to its complex reaction mechanism, non-enzymatic XNA replication from
monomers suffers from various complications, namely inefficient extension of
sequences containing consecutive adenine and thymine nucleotides Wu:1992 ;
Wu:1992b , as
well as side reactions such as partial replication and random strand
elongation Fer:2007 . Consequently, experiments with activated nucleotides
typically show little yield in aqueous solution, although results can be
enhanced by employing surfaces (e.g. clay) or up-concentration in water-ice
Mon:2008 ; Mon:2010b .
Replication from short activated oligomers, on the other hand, does produce
high yields for both RNA and DNA (Kie:1986, ; Sie:1994, ; Bag:1996, ;
Joy:1984, ; Lin:2009, , and references therein). In particular, this
observation has led to the development of _minimal replicator systems_, in
which ligation of two oligomers is sufficient to form the complementary
replica (see Fig. 1). One reason why these systems outperform replicators
that draw from monomers is that the side reactions above are expected to
occur, if at all, only to a negligible extent.
Neglecting both the production of waste and the hydrolysis of the
ligation product, but explicitly taking into account the individual oligomer
associations, minimal replicator systems (here for the case of a self-
complementary template) can be written as
$X+2O\xrightleftharpoons[k^{-}_{\text{O}}]{k^{+}_{\text{O}}}XO+O\xrightleftharpoons[k^{-}_{\text{O}}]{k^{+}_{\text{O}}}XO_{2}\xrightarrow{k_{\text{L}}}X_{2}\xrightleftharpoons[k^{+}_{\text{T}}]{k^{-}_{\text{T}}}2X.$
(3)
In this article, we first develop a theoretical expression for the template
directed replication rate of minimal replicator systems as a function of
strand length and temperature. This analytical model provides transparent
physical relations for how temperature, strand length, and sequence
composition impact
the overall replication rate. We then present a 3D, implicit solvent,
constrained Brownian Dynamics model for short nucleotide strands, i.e. strands
with negligible secondary structures. The model does not attempt to be
(quantitatively) predictive. In particular, we do not attempt to calibrate
interaction parameters to experimental data, which prevents any sequence
prediction. Rather, it is our aim to demonstrate that many of the replication
properties of oligonucleotides arise from rather general statistical physics.
The simulation is used to measure diffusion coefficients,
effective reaction radii, and hybridization rates and their dependence on
temperature, strand length, and, to some extent, sequence information. This
allows us to qualitatively obtain equilibrium constants $K_{\text{O}}$ and
$K_{\text{T}}$ and to qualitatively sketch the effective replication rate $k$
as a function of strand length and temperature. Our analysis focuses on
minimal replicator systems in the context of chemical replication experiments
as employed in protocell research and manufacturing applications, where the
researcher controls reactant concentrations as well as most experimental
parameters. However, we also discuss the impact of our findings in the context
of origin of life research, where possible side reactions cannot be neglected.
## II Parabolic Growth and Replication Rate
Following and extending the derivation of Refs. Wil:1998 ; Roc:2006 , we
assume that ligation is the rate limiting step. This translates into the
following conditions for the rate constants:
$\begin{array}[]{ll}k_{\text{L}}[XO_{2}]\ll
k_{\text{O}}^{+}[X][O],&k_{\text{L}}[XO_{2}]\ll k_{\text{O}}^{+}[XO][O]\\\
k_{\text{L}}[XO_{2}]\ll k_{\text{O}}^{-}[XO],&k_{\text{L}}[XO_{2}]\ll
k_{\text{O}}^{-}[XO_{2}]\\\ k_{\text{L}}[XO_{2}]\ll
k_{\text{T}}^{+}[X]^{2},&k_{\text{L}}[XO_{2}]\ll
k_{\text{T}}^{-}[X_{2}]\end{array}$ (4)
One can then assume a steady state of the hybridization/dehybridization
reactions and express the total template concentration
$[X]_{\text{total}}=[X]+[XO]+[XO_{2}]+2[X_{2}]$ in terms of equilibrium
constants as
$[X]_{\text{total}}=\left(1+K_{\text{O}}[O]+K_{\text{O}}^{2}[O]^{2}\right)[X]+2K_{\text{T}}[X]^{2}.$
(5)
When solved for $[X]$, this gives
$[X]=\frac{1}{4K_{\text{T}}}\sqrt{8K_{\text{T}}[X]_{\text{total}}+(1+K_{\text{O}}[O]+K_{\text{O}}^{2}[O]^{2})^{2}}\\\
-\frac{1+K_{\text{O}}[O]+K_{\text{O}}^{2}[O]^{2}}{4K_{\text{T}}}.$ (6)
Template directed replication typically suffers from product inhibition, where
most templates are in double strand configuration, i.e.
$K_{\text{T}}[X]_{\text{total}}\gg 1$. Over the course of the reaction, this
is tantamount to saying that $\sqrt{8K_{\text{T}}[X]_{\text{total}}}\gg$
1+K_{\text{O}}[O]+K_{\text{O}}^{2}[O]^{2}$. This allows us to approximate
$\sqrt{8K_{\text{T}}[X]_{\text{total}}+(1+K_{\text{O}}[O]+K_{\text{O}}^{2}[O]^{2})^{2}}\\\
\approx\sqrt{8K_{\text{T}}[X]_{\text{total}}}+\sqrt{(1+K_{\text{O}}[O]+K_{\text{O}}^{2}[O]^{2})^{2}}\\\
+\mathcal{O}([X]_{\text{total}})$ (7)
and simplify (6) to
$[X]=\sqrt{\frac{[X]_{\text{total}}}{{2K_{\text{T}}}}}+\mathcal{O}([X]_{\text{total}}).$
(8)
This is a lower bound of the single strand concentration, which is approached
in the limit of vanishing oligomer concentration. By combining (3) and (8), we
get
$\displaystyle\frac{d[X]_{\text{total}}}{dt}$
$\displaystyle=k_{\text{L}}[XO_{2}]=k_{\text{L}}K_{\text{O}}^{2}[O]^{2}[X]$
$\displaystyle\approx k\;[O]^{2}\sqrt{[X]_{\text{total}}}$ (9)
with
$k=k_{\text{L}}\frac{K_{\text{O}}^{2}}{\sqrt{2K_{\text{T}}}}.$ (10)
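As a numerical sanity check, the approximation (8) can be compared with the exact positive root of the quadratic equation (5). The following Python sketch uses illustrative parameter values, not fitted to any experiment:

```python
import math

def x_single_exact(x_total, K_O, K_T, O):
    """Positive root of 2*K_T*X**2 + b*X - x_total = 0 with
    b = 1 + K_O*O + (K_O*O)**2, i.e. Eq. (6)."""
    b = 1.0 + K_O * O + (K_O * O) ** 2
    return (math.sqrt(8.0 * K_T * x_total + b * b) - b) / (4.0 * K_T)

def x_single_approx(x_total, K_T):
    """Product-inhibited approximation, Eq. (8)."""
    return math.sqrt(x_total / (2.0 * K_T))

# strong product inhibition (K_T * x_total >> 1) and dilute oligomers
exact = x_single_exact(x_total=1.0, K_O=0.1, K_T=1e4, O=0.01)
approx = x_single_approx(x_total=1.0, K_T=1e4)
rel_err = abs(exact - approx) / exact
print(rel_err)  # below 1% in this regime
```

The agreement degrades as $K_{\text{O}}[O]$ grows, consistent with (8) holding in the limit of vanishing oligomer concentration.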
This well-established parabolic growth law is known to qualitatively alter
evolutionary dynamics of XNA based minimal replicators and to promote
coexistence of replicators rather than selection of the fittest Sza:1989 ;
Kie:1991 .222 In particular, it has been shown that under parabolic growth
conditions, competing replicators $X_{i}$ grow when sufficiently rare:
$[X_{i}]<\left(\frac{k_{i}}{k_{\text{base}}}\frac{\sum_{j}[X_{j}]}{\sum_{j}[X_{j}]^{1/2}}\right)^{2}\Longrightarrow\frac{d}{dt}[X_{i}]>0.$
The equation captures the connection between the growth rate $k_{i}$ and its
selective pressure, such that replicator species with a high growth rate are
also assigned a high evolutionary fitness. See Ref. Sza:1989 for the
derivation. It is this relation that allows us to speak of a _fitness
landscape_ when referring to the functional shape of $k$. Consequently,
several strategies have been designed to overcome product inhibition in order
to reestablish Darwinian evolution and survival of _only_ the fittest Lut:1998
; Roc:2006 ; Zha:2006 ; Lin:2009 . Most of these approaches hinge on a
mechanism to lower the hybridization tendency of the product to the template.
In this article, however, we accept parabolic growth and instead focus on the
effective growth rate.
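For a buffered (constant) oligomer concentration, the growth law (9) integrates in closed form to $[X]_{\text{total}}(t)=(\sqrt{[X]_{0}}+\tfrac{1}{2}k[O]^{2}t)^{2}$, which is what is meant by parabolic growth. A quick numerical check with arbitrary illustrative values of $k$ and $[O]$:

```python
def parabolic_closed_form(x0, k, O, t):
    """Closed-form solution of dx/dt = k * O**2 * sqrt(x) for constant O."""
    return (x0 ** 0.5 + 0.5 * k * O ** 2 * t) ** 2

def integrate_euler(x0, k, O, t_end, dt=1e-4):
    """Forward-Euler integration of the same ODE, for comparison."""
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        x += dt * k * O ** 2 * x ** 0.5
        t += dt
    return x

x_num = integrate_euler(1.0, k=0.1, O=1.0, t_end=1.0)
x_exact = parabolic_closed_form(1.0, 0.1, 1.0, 1.0)
print(abs(x_num - x_exact))  # tiny discretization error
```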
The key observation of equation (10) is that, due to the steady state
assumption, the overall growth rate is independent of the individual
association and dissociation rates $k_{i}^{+},k_{i}^{-}$ but depends only on
the equilibrium constants $K_{\text{O}}$ and $K_{\text{T}}$. Expressed in free
energy changes, Eq. (10) becomes
$k=e^{\log A+\left(\frac{1}{2}\Delta G_{\text{T}}-2\Delta
G_{\text{O}}-\Delta G^{\ddagger}_{\text{L}}\right)/k_{\text{B}}T},$
(11)
where $A$ and $\Delta G^{\ddagger}_{\text{L}}$ are the pre-exponential factor
and activation energy of the ligation reaction, respectively, and we have used
the Arrhenius equation
$k_{\text{L}}=Ae^{-\Delta G^{\ddagger}_{\text{L}}/k_{\text{B}}T}.$ (12)
We further observe that any potential optimum of (10) must obey
$\displaystyle 2{k_{\text{L}}}^{\prime}K_{\text{T}}K_{\text{O}}$
$\displaystyle=k_{\text{L}}K_{\text{O}}{K_{\text{T}}}^{\prime}-4k_{\text{L}}K_{\text{T}}K_{\text{O}}^{\prime}$
(13)
where the prime indicates differentiation with respect to any variable. Note that
derivatives of $k_{\text{L}},K_{\text{T}}$, and $K_{\text{O}}$ can be taken
with respect to parameters such as temperature and template length. In
sequence space, however, we do not have an ordering that would allow us to
perform derivatives. Therefore, equation (13) can only give us partial
information about an optimal growth rate.
It is well-known that the equilibrium constants $K_{\text{O}}$ and
$K_{\text{T}}$ depend on various parameters such as temperature, salt
concentration, strand length, and sequence information – all being relevant
control parameters when designing replication experiments or delimiting origin
of life conditions Owc:1998 ; Blo:2000 . Furthermore, the two equilibrium
constants are interdependent, as one expects $K_{\text{T}}$ to rise with
increasing $K_{\text{O}}$.
Qualitatively, the free energy of XNA hybridization obeys a form given by
$\displaystyle\Delta G(N,T)$ $\displaystyle=N\Delta G_{\text{base}}+\Delta
G_{\text{init}}$ $\displaystyle=N(\Delta H_{\text{base}}-T\Delta
S_{\text{base}})$ $\displaystyle\quad+\Delta H_{\text{init}}-T\Delta
S_{\text{init}},$ (14)
where $N$ signifies the strand length, $\Delta G_{\text{base}}$ is the
(maximal) energy change per base, $\Delta G_{\text{init}}$ is the initiation
energy and $\Delta H_{\text{base}},\Delta S_{\text{base}}$ are negative,
whereas $\Delta H_{\text{init}},\Delta S_{\text{init}}$ are positive. The
right hand side of the equation expresses a saturation in the free energy per
base as a function of the strand length; the free energy gain for each base
pairing asymptotically becomes constant for long strands Pol:1966b . Inserting
(14) into (11) and separating out the rate constant for the ligation reaction
$k_{\text{L}}$, we obtain:
$\displaystyle\frac{K_{\text{O}}^{2}}{\sqrt{2K_{\text{T}}}}$
$\displaystyle\propto e^{{\left(\dfrac{1}{2}\Delta G(N,T)-2\Delta
G(N/2,T)\right)/k_{\text{B}}T}}$ $\displaystyle\propto
e^{-\left(\dfrac{1}{2}N\Delta G_{\text{base}}+\dfrac{3}{2}\Delta
G_{\text{init}}\right)/k_{\text{B}}T},$ (15)
which, when differentiated with respect to $T$, yields a positive dependence
on temperature iff
$\displaystyle\frac{d}{d\mathrm{T}}\frac{K_{\text{O}}^{2}}{\sqrt{2K_{\text{T}}}}>0\quad\Longleftrightarrow\quad$
$\displaystyle N<-\frac{3\Delta H_{\text{init}}}{\Delta H_{\text{base}}}$
Since $\Delta H_{\text{init}}\\!>\\!0$ and $\Delta
H_{\text{base}}\\!\leq\\!0$, this critical strand length is strictly positive.
It may come as a surprise that $K_{\text{O}}^{2}/\sqrt{2K_{\text{T}}}$ can
increase with decreasing temperature – the regime where templates are
primarily inhibited by the product. The result becomes understandable when one
considers that oligomers, with their weaker hybridization, barely associate
with the template if the temperature is raised.
Reintroducing the ligation reaction, this relation gets refined to
$\displaystyle k$
$\displaystyle=k_{\text{L}}\frac{K_{\text{O}}^{2}}{\sqrt{2K_{\text{T}}}}$
$\displaystyle=e^{\log A-\left(\dfrac{1}{2}N\Delta
G_{\text{base}}+\dfrac{3}{2}\Delta G_{\text{init}}+\Delta
G^{\ddagger}_{\text{L}}\right)/k_{\text{B}}T},$ (16)
with the critical strand length
$\frac{dk}{d\mathrm{T}}>0\quad\Longleftrightarrow\quad N<N^{*}=-\frac{3\Delta
H_{\text{init}}+2\Delta H^{\ddagger}_{\text{L}}}{\Delta H_{\text{base}}}.$
(17)
In words: we can identify a critical strand length $N^{*}$ above which the
overall replication rate $k$ increases with decreasing temperature. This
critical strand length is determined by the hybridization enthalpies $\Delta
H_{\text{base}},\Delta H_{\text{init}}$, and activation enthalpy change
$\Delta H^{\ddagger}_{\text{L}}$ of ligation.
Fig. 2 depicts the graph of the replication rate landscape (16) that clearly
identifies the optimum of equation (13) as a saddle point. The corresponding
temperature $T^{*}$ where $k$ changes its scaling with respect to strand
length is – independent of the ligation reaction – given by
$\frac{dk}{d\mathrm{N}}>0\quad\Longleftrightarrow\quad T<T^{*}=\frac{\Delta
H_{\text{base}}}{\Delta S_{\text{base}}}.$ (18)
Figure 2: Effective replication rate $k$ (given by equation 16) as a function
of strand length and temperature. For strands below a critical length $N^{*}$
(here 10) the rate increases with temperature, for strands longer than
$N^{*}$, the replication rate grows with decreasing temperature. The value of
$N^{*}$ is determined through equation (17). Note the saddle point of the
surface where $T^{*}$ and $N^{*}$ intersect (Eq. 13) ($\Delta
H_{\text{base}}\\!=\\!-1.5k_{\text{B}}T^{\prime}$, $\Delta
S_{\text{base}}\\!=\\!-1k_{\text{B}}$, $\Delta
H_{\text{init}}\\!=\\!0.50k_{\text{B}}T^{\prime}$, $\Delta
S_{\text{init}}\\!=\\!1.25k_{\text{B}}$, $\Delta
H^{\ddagger}_{\text{L}}\\!=\\!5.25k_{\text{B}}T^{\prime}$, $A=10^{3}$)
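With the illustrative parameter values listed in the caption of Fig. 2, the qualitative statements of Eqs. (16)-(18) are easy to reproduce. The sketch below works in reduced units with $k_{\text{B}}=1$ and assumes a vanishing ligation activation entropy, so that $\Delta G^{\ddagger}_{\text{L}}=\Delta H^{\ddagger}_{\text{L}}$:

```python
import math

# illustrative parameters from the Fig. 2 caption (reduced units, k_B = 1)
dH_base, dS_base = -1.5, -1.0
dH_init, dS_init = 0.5, 1.25
dH_lig, A = 5.25, 1e3

def k_rate(N, T):
    """Effective replication rate, Eq. (16), with Delta G = Delta H - T*Delta S
    and the ligation activation entropy assumed to vanish."""
    dG_base = dH_base - T * dS_base
    dG_init = dH_init - T * dS_init
    return math.exp(math.log(A) - (0.5 * N * dG_base + 1.5 * dG_init + dH_lig) / T)

N_star = -(3 * dH_init + 2 * dH_lig) / dH_base  # critical length, Eq. (17)
T_star = dH_base / dS_base                      # critical temperature, Eq. (18)
print(N_star, T_star)

# below N* the rate grows with T, above N* it grows as T drops
print(k_rate(4, 1.2) > k_rate(4, 1.0))    # True
print(k_rate(12, 1.2) < k_rate(12, 1.0))  # True
```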
Can we obtain a higher replication rate by using non-symmetric oligomers? The
rationale behind this strategy is to increase the binding affinity of one
oligomer in the hope of decreasing product inhibition. A simple refinement of equation
(15) allows us to capture this approach with our model:
$\displaystyle\frac{K_{\text{O}_{1}}K_{\text{O}_{2}}}{\sqrt{2K_{\text{T}}}}$
$\displaystyle\propto e^{{\left(\frac{1}{2}\Delta G(N,T)-\Delta
G(N_{1},T)-\Delta G(N_{2},T)\right)/k_{\text{B}}T}}$ $\displaystyle\propto
e^{\left(\left(\frac{1}{2}N-N_{1}-N_{2}\right)\Delta
G_{\text{base}}-\frac{3}{2}\Delta G_{\text{init}}\right)/k_{\text{B}}T}$ (19)
where $N_{1}+N_{2}=N$ denote the lengths of oligomer strands $O_{1}$ and
$O_{2}$. Thus, according to our simple thermodynamic considerations, non-
symmetric variants of the replication process do not show more yield than the
corresponding symmetric system: the binding affinity gained for the long
oligomer strand is paid to hybridize the short oligomer strand.
Fig. 2 seemingly implies that replication rates grow beyond any limit for long
templates, which is unphysical. To resolve this inconsistency, it is important
to remember that our findings are only valid in the regime where ligation is
rate limiting. For very long XNA strands, however, double strands are so
stable that dehybridization of the ligation product is expected to become the
rate limiting step. Independent of the exact shape of the growth law, the
dominant factor of the effective growth rate is given by
$k_{\text{T}}^{-}=k^{+}\,e^{(N\Delta G_{\text{base}}+\Delta
G_{\text{init}})/k_{\text{B}}T}$ (20)
where $k^{+}$ summarizes both pathways of either product rehybridization or
hybridization of oligomers followed by ligation. As $k^{+}$ is composed of
hybridization (i.e. diffusion plus orientational alignment) and ligation
events, it varies only slightly with sequence length when compared to
dehybridization rates for the case of large $N$. Therefore, the effective
replication rate will be governed by the scaling
$k\propto e^{N\Delta G_{\text{base}}/k_{\text{B}}T}$ (21)
with the limit
$\lim_{N\rightarrow\infty}k=0,$
since $\Delta G_{\text{base}}<0$. As a consequence, we expect a full non-
equilibrium study of the replication process to show a proper maximum in the
replication rate as a function of strand length.
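The crossover argument can be visualized with a cartoon model: take the effective rate to be limited by whichever of the two regimes, ligation-limited growth (scaling as in Eq. (16)) or product dehybridization (scaling as in Eq. (21)), is slower. The prefactors below are made up purely for illustration and do not come from the paper:

```python
import math

dG_base = -0.5  # per-base hybridization free energy at low T (illustrative)

def k_ligation_limited(N):
    # scaling of Eq. (16): grows with N, since dG_base < 0
    return math.exp(-0.5 * N * dG_base)

def k_dehybridization_limited(N):
    # scaling of Eq. (21): decays with N (made-up prefactor of 100)
    return 100.0 * math.exp(N * dG_base)

# the slower regime limits the effective rate
rates = [min(k_ligation_limited(N), k_dehybridization_limited(N))
         for N in range(1, 41)]
N_opt = rates.index(max(rates)) + 1
print(N_opt)  # 6: the combined rate peaks at an intermediate strand length
```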
## III Spatially Resolved Replicator Model
Spatially resolved template-directed replicators have been previously
simulated in the Artificial Life community using two-dimensional cellular
automata and continuous virtual physics Hut:2002 ; Smi:2003 ; Fer:2007 . The
model we present here is conceptually similar to, but simpler than other
coarse-grained DNA models (e.g., Kle:1998, ; Tep:2005, ; Dru:2000, ). Compared
to our earlier work on hybridization and ligation Fel:2007b , the model
presented here is less computationally expensive while simultaneously being
broader in its range of application.
We model nucleic acid strands as chains of hard spheres that are connected by
rigid bonds. Each sphere has mass $m$, radius $r$, position and velocity
$(\mathbf{x}_{i},\mathbf{v}_{i})\in\mathbb{R}^{3}\times\mathbb{R}^{3}$, as well as moment
of inertia $\theta$, orientation and angular momentum
$(\bm{\omega}_{i},\mathbf{L}_{i})\in\mathbb{S}^{2}\times\mathbb{R}^{3}$
representing the spatial orientation of the respective nucleotide. Further,
each sphere has a type $t_{i}\in\\{\text{A},\text{T},\text{C},\text{G}\\}$,
and we define A and T (C and G) to be complementary. The model is implicit in
the sense that solvent molecules are not represented explicitly, but only
through their effect on the nucleotide strands. We model the (translational
and rotational) motion of each sphere by a _Langevin equation_
$\displaystyle\dot{\mathbf{x}}_{i}$
$\displaystyle=\mathbf{v}_{i}$ (22a) $\displaystyle
m\dot{\mathbf{v}}_{i}$
$\displaystyle=-\bm{\nabla}U_{i}(\mathbf{x},\bm{\omega})-\gamma\mathbf{v}_{i}+\bm{\xi}_{i}$
(22b) $\displaystyle\dot{\bm{\omega}}_{i}$
$\displaystyle=\frac{1}{\theta}\;\mathbf{L}_{i}\times\bm{\omega}_{i}$ (22c)
$\displaystyle\dot{\mathbf{L}}_{i}$
$\displaystyle=-\bm{\nabla}\hat{U}_{i}(\mathbf{x},\bm{\omega})-\gamma\frac{\mathbf{L}_{i}}{\theta}+\bm{\hat{\xi}}_{i}.$
(22d)
Here, $\gamma$ is the friction coefficient, and $\bm{\xi},\bm{\hat{\xi}}$ are
zero mean random variables accounting for thermal fluctuations. Together,
friction and thermal noise act as a thermostat: they equilibrate the kinetic
energy with an external heat bath whose temperature is set by the following
_fluctuation-dissipation theorem_ Kub:1966 :
$\displaystyle\left<\bm{\xi}_{i}(t);\bm{\xi}_{j}(t^{\prime})\right>$
$\displaystyle=2\gamma k_{\text{B}}T\;\delta_{ij}\delta(t-t^{\prime})$ (23a)
$\displaystyle\left<\bm{\hat{\xi}}_{i}(t);\bm{\hat{\xi}}_{j}(t^{\prime})\right>$
$\displaystyle=2\frac{\gamma}{\theta}k_{\text{B}}T\;\delta_{ij}\delta(t-t^{\prime}).$
(23b)
Hence, a temperature change directly translates into a change of the Brownian
noise amplitude. We use the moment of inertia for solid spheres
$\theta=\frac{2}{5}mr^{2}$ – noting that one could, in principle, use moment
of inertia tensors to reflect the geometry of the individual nucleobases.
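The fluctuation-dissipation relation (23a) can be checked in isolation for a single free sphere: with the potential terms switched off, Eq. (22b) reduces to an Ornstein-Uhlenbeck process whose stationary velocity variance per component is $k_{\text{B}}T/m$. A sketch in the paper's reduced units ($m=1$, $\gamma=3$, $k_{\text{B}}T=1$), using the exact one-step update of the OU process rather than the paper's constrained Verlet scheme:

```python
import math
import random

random.seed(0)
m, gamma, kT, dt = 1.0, 3.0, 1.0, 0.05

# exact update of the free Langevin (Ornstein-Uhlenbeck) velocity process:
# dv = -(gamma/m) v dt + sqrt(2 gamma kT)/m dW
decay = math.exp(-gamma / m * dt)
kick = math.sqrt(kT / m * (1.0 - decay ** 2))

v, samples = 0.0, []
for step in range(200_000):
    v = decay * v + kick * random.gauss(0.0, 1.0)
    if step > 1000:  # discard the equilibration transient
        samples.append(v * v)

var = sum(samples) / len(samples)
print(var)  # close to kT/m = 1
```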
Equations (22a) - (22d) are solved under the constraints
$\displaystyle\left|\mathbf{x}_{i}-\mathbf{x}_{j}\right|$
$\displaystyle=r_{\text{bond}}\quad\text{if $i,j$ bonded}$ (24a)
$\displaystyle\left|\mathbf{x}_{i}-\mathbf{x}_{j}\right|$
$\displaystyle=2r\quad\text{if
}\left|\mathbf{x}_{i}-\mathbf{x}_{j}\right|<2r\>$ (24b)
$\displaystyle\left|\bm{\omega}_{i}\right|$ $\displaystyle=1$ (24c)
to account for rigid bonds (24a) and hard spheres (24b). By setting
$r_{\text{bond}}<2r$, we ensure that strands do not penetrate each other.
We define the following angles (see Fig. 3):
$\displaystyle\cos\theta_{i}=\left<\frac{\mathbf{x}_{j}-\mathbf{x}_{i}}{r_{\text{bond}}}\cdot\frac{\mathbf{x}_{k}-\mathbf{x}_{i}}{r_{\text{bond}}}\right>$
$i,j$ and $i,k$ bonded
$\displaystyle\cos\phi_{ij}=\left<\frac{\mathbf{x}_{j}-\mathbf{x}_{i}}{r_{\text{bond}}}\cdot\bm{\omega}_{i}\right>$
$i,j$ bonded
$\displaystyle\cos\omega_{ij}=\left<\bm{\omega}_{i}\cdot\bm{\omega}_{j}\right>$
$i,j$ bonded
$\displaystyle\cos\psi_{ij}=\left<\frac{\mathbf{x}_{j}-\mathbf{x}_{i}}{\left|\mathbf{x}_{i}-\mathbf{x}_{j}\right|}\cdot\bm{\omega}_{i}\right>$
$i,j$ not bonded.
Figure 3: Geometry of the nucleotide strands. The figure shows the angles
that define intra- and intermolecular interactions for one nucleobase (shaded
in grey).
As much of the molecular geometry is already determined through the
constraints, the intramolecular potentials $U$ and $\hat{U}$ only need to
account for strand stiffness (25a), base orientation (25b), and $\pi$-stacking
(25c). We set:
$\displaystyle U^{\text{bend}}_{i}$
$\displaystyle=a_{\text{bend}}\left(\dfrac{\theta_{i}}{\pi}-1\right)^{2}\quad\text{if
$i$ not terminal }$ (25a) $\displaystyle\hat{U}^{\text{ortho}}_{ij}$
$\displaystyle=\hat{a}_{\text{ortho}}\left(\dfrac{\phi_{ij}}{\pi}-\dfrac{1}{2}\right)^{2}$
(25b) $\displaystyle\hat{U}^{\text{parallel}}_{ij}$
$\displaystyle=\hat{a}_{\text{parallel}}\left(\dfrac{\omega_{ij}}{\pi}-0\right)^{2}.$
(25c) The minimum energy state of these potentials is a stretched-out
nucleotide strand with base orientations perpendicular to the strand and
parallel to each other.
In addition, we define the following intermolecular potentials between non-
bonded complementary nucleobases $i$ and $j$:
$\displaystyle U^{\text{hybrid}}_{ij}$
$\displaystyle=-a_{\text{hybrid}}\;d\left(\left|\mathbf{x}_{i}-\mathbf{x}_{j}\right|\right)\cos\psi_{ij}$
(25d) $\displaystyle\hat{U}^{\text{hybrid}}_{ij}$
$\displaystyle=-\hat{a}_{\text{hybrid}}\;d\left(\left|\mathbf{x}_{i}-\mathbf{x}_{j}\right|\right)\left(\frac{\psi_{ij}}{\pi}-1\right)^{2},$
(25e)
if $\left|\mathbf{x}_{i}-\mathbf{x}_{j}\right|<r_{\text{c}}$. The shift and
weighting function
$d(r_{ij})=\frac{1}{2}\left[\cos\left(\frac{r_{ij}-2r}{r_{\text{c}}-2r}\pi\right)+1\right]$
ensures that the potentials take on a minimum at particle contact and level
out to zero at the force cutoff radius $r_{\text{c}}$. Equation (25d) allows
for a nucleobase $i$ to attract its complement $j$ along the direction of
$\bm{\omega}_{i}$, while (25e) orients $\bm{\omega}_{i}$ toward the
complement.
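The endpoint behavior of $d(r_{ij})$ is easy to verify: it equals one at particle contact ($r_{ij}=2r$) and decays smoothly to zero at the cutoff $r_{ij}=r_{\text{c}}$. A sketch with $r=0.25$ and $r_{\text{c}}=1$ as in Table 1:

```python
import math

r, r_c = 0.25, 1.0  # particle radius and force cutoff radius (Table 1)

def d(r_ij):
    """Shift/weighting function: 1 at contact (r_ij = 2r), 0 at the cutoff."""
    return 0.5 * (math.cos((r_ij - 2 * r) / (r_c - 2 * r) * math.pi) + 1.0)

assert abs(d(2 * r) - 1.0) < 1e-12  # contact: full interaction strength
assert abs(d(r_c)) < 1e-12          # cutoff: interaction levels out to zero
print(d(0.75))  # 0.5 at the midpoint between contact and cutoff
```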
Taking the above potentials together, we define
$\displaystyle U_{i}(\mathbf{x},\bm{\omega})$
$\displaystyle=U^{\text{bend}}_{i}(\mathbf{x})+\\!\\!\\!\\!\\!\\!\\!\\!\sum_{\begin{subarray}{c}i,j\\\
\text{non-bonded}\\\
\text{complementary}\end{subarray}}\\!\\!\\!\\!\\!\\!\\!\\!U^{\text{hybrid}}_{ij}(\mathbf{x},\bm{\omega})$
(25f) $\displaystyle\hat{U}_{i}(\mathbf{x},\bm{\omega})$
$\displaystyle=\\!\\!\sum_{\begin{subarray}{c}i,j\\\
\text{bonded}\end{subarray}}\\!\\!\left(\hat{U}^{\text{ortho}}_{ij}(\mathbf{x},\bm{\omega})+\hat{U}^{\text{parallel}}_{ij}(\bm{\omega})\right)$
$\displaystyle\quad+\\!\\!\\!\\!\\!\\!\\!\\!\sum_{\begin{subarray}{c}i,j\\\
\text{non-bonded}\\\
\text{complementary}\end{subarray}}\\!\\!\\!\\!\\!\\!\\!\\!\hat{U}^{\text{hybrid}}_{ij}(\mathbf{x},\bm{\omega})$
(25g)
Equations (22a) to (24c) are numerically integrated using a Velocity Verlet
algorithm that, in each iteration, first computes unconstrained coordinates
which are afterwards corrected with a Shake algorithm to satisfy the
constraints Ryc:1977 .333Note that our approach would not work in the absence
of a thermostat: to describe rotational motion properly, one would need to
define orientations and angular momenta in a local reference frame that moves
with the extended object to which the oriented point particle belongs. In this
manner, rotational motion of the extended object gets propagated down to the
angular momenta of the particles it consists of (A QShake algorithm would in
addition be needed to properly conserve angular momenta in the constraints).
While this approach is computationally significantly more cumbersome, we
expect the result to be similar for the above model, in which rotation of
extended objects is propagated down to its constituting particles through
angular potentials and an overdamped thermostat. Typical system
configurations are shown in Fig. 1.
parameter | value | comment | Eqs.
---|---|---|---
$m$ | $1$ | particle mass | (22b) - (22d)
$\gamma$ | $3$ | friction coefficient | (22b), (22d)
$k_{\text{B}}T_{0}$ | $1$ | equilibrium temperature | (23a), (23b)
$\Delta t$ | $0.05$ | numerical time step |
$r$ | $0.25$ | particle radius | (24b)
$r_{\text{bond}}$ | $0.45$ | bond length | (24a)
$r_{\text{c}}$ | $1$ | force cutoff radius | (25d) - (25e)
$a_{\text{bend}}$ | $5$ | strand stiffness | (25a)
$\hat{a}_{\text{ortho}}$ | $2.5$ | angular stretching | (25b)
$\hat{a}_{\text{parallel}}$ | $1$ | angular alignment | (25c)
$\hat{a}_{\text{hybrid}}$ | $10$ | angular hybridization | (25e)
$a_{\text{hybrid}}$ | $1$ | complementary attraction | (25d)
Table 1: Model parameters in reduced units (unless otherwise noted).
## IV Simulation Results
In the subsequent analyses, we will employ reduced units, i.e., $m=1$,
$r_{\text{c}}=1$, and $k_{\text{B}}T_{0}=1$ define the units of mass, length,
and energy. From this, the natural unit of time follows as
$\tau=r\sqrt{m/k_{\text{B}}T_{0}}.$
The parameters $r$ and $r_{\text{bond}}$ are chosen to prevent crossing of
strands ($r_{\text{bond}}<2r$). The ratio $r_{\text{bond}}/r$ determines the
double strand geometry, which is modeled more sparsely than in actual nucleic
acid strands in order to compensate for the relatively shallow potentials of
the coarse-grained model. The ratio $r/r_{\text{c}}$ determines the distance
at which complementary bases “feel each other” and has been set to two times
the bead diameter.
The amplitudes $a_{\text{bend}}$, $\hat{a}_{\text{ortho}}$,
$\hat{a}_{\text{parallel}}$ of the potential functions are chosen in order to
promote stacked single strands for temperatures up to at least
$3k_{\text{B}}T_{0}$ and to loosely match the persistence length of DNA (three
nucleotides) at unit temperature. Finally, the values for
$\hat{a}_{\text{hybrid}}$ and $a_{\text{hybrid}}$ are chosen to enable
hybridization at unit temperature. We point out that our model utilizes a high
value $\hat{a}_{\text{hybrid}}$ in order to promote fast hybridization. A list
of all model parameters is given in Table 1.
Note that it is not mandatory to relate one bead of the model to one physical
nucleotide. Instead, each bead could also represent a short XNA subsequence
(e.g. 2-4 nucleotides). While this would result in a closer match of the ratio
$r_{\text{bond}}/r$, the amplitudes of the potential functions would need to
be adapted to reflect the changed representation.
### IV.1 Diffusion
In dilute solution, DNA diffusion depends primarily on temperature and strand
length, as opposed to primary or secondary structure. In the limit of low
Reynolds numbers, the diffusion coefficient of a sphere is given by the
_Einstein-Stokes equation_
$D=\frac{k_{\text{B}}T}{6\pi\eta r}$ (26)
where $\eta$ is the viscosity of the medium and $r$ the radius of the sphere.
In order to compare our model polymer diffusion to (26), we perform
simulations of single homopolymers (e.g. poly-C) and determine the diffusion
coefficient from its measured mean square displacement
$D=\frac{1}{6}\frac{\left\langle\left|\mathbf{x}(\Delta t)-\mathbf{x}(0)\right|^{2}\right\rangle}{\Delta t}$
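As a sanity check of this estimator, the same formula recovers the known diffusion coefficient of a plain Gaussian random walk (for step standard deviation $\sigma$ per dimension and time step $\Delta t$, $D=\sigma^{2}/2\Delta t$). The following sketch is independent of the model code and purely illustrative:

```python
import random

def estimate_diffusion(n_walkers=600, n_steps=500, dt=0.05, sigma=0.1, seed=2):
    """Estimate D from the mean square displacement of independent 3-D
    Gaussian random walks: D = <|x(t) - x(0)|^2> / (6 t)."""
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_walkers):
        x = [0.0, 0.0, 0.0]
        for _ in range(n_steps):
            for k in range(3):
                x[k] += rng.gauss(0.0, sigma)
        msd += sum(c * c for c in x)
    msd /= n_walkers
    return msd / (6.0 * n_steps * dt)
```

With $\sigma=0.1$ and $\Delta t=0.05$ the estimate converges to $D=0.1$ as the number of walkers grows.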
Fig. 4 shows results for strands of lengths $N=1,\ldots,10$ and temperatures
$k_{\text{B}}T=1,\ldots,3$.
Figure 4: Diffusion coefficients measured for different strand lengths and
temperatures (symbols) fitted to the prediction of the Einstein-Stokes
relation (solid lines). For each parameter pair, 40 simulation runs over
$1000\tau$ have been averaged.
The data establishes a scaling relation between our model parameter $N$ and
the Stokes radius $r$, which is _a priori_ unknown. For the general scaling
relation $r\propto N^{\nu}$, we obtain the most likely exponent from a fit
via $\nu$ and $\eta$ as $\nu=1.06$. By setting $\nu=1$, and equivalently
$r\propto N$, we obtain the best agreement between measurement and theory (by
fitting via $\eta$ only) for
$\eta=0.061k_{\text{B}}T\tau^{2}/r_{\text{c}}^{2}$ (see solid lines in Fig.
4).
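The exponent fit reduces to linear regression after taking logarithms: Einstein-Stokes with $r\propto N^{\nu}$ gives $\log D=\mathrm{const}-\nu\log N$. A minimal sketch of that step (function name and synthetic data are illustrative, not taken from the paper):

```python
import math

def fit_stokes_exponent(Ns, Ds):
    """Fit D proportional to N**(-nu) (Einstein-Stokes with r ~ N**nu)
    by least-squares regression of log D on log N; returns nu."""
    xs = [math.log(n) for n in Ns]
    ys = [math.log(d) for d in Ds]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # D ~ N**(-nu), so the slope is -nu
```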
### IV.2 Radius of gyration
Figure 5: Radius of gyration measured for different strand lengths and
bending potentials (symbols) fitted to the prediction of the Flory mean field
theory (solid lines). For each parameter pair, 40 simulation runs over
$400\tau$ have been averaged. The upper panel shows results for homopolymers
(e.g. poly-C), the lower panel compares those to radii of self-complementary
strands. The plots also show the boundaries for maximally stretched chains
($\nu=1$ – upper dotted line) and the expectation value of an ideal chain
($\nu=3/5$ – lower dotted line).
Again in dilute solution, the radius of gyration
$R_{\text{g}}^{2}=\frac{1}{N-1}\sum_{i=1}^{N}\left|\mathbf{x}_{i}-\mathbf{x}_{\text{mean}}\right|^{2},$
with $\mathbf{x}_{\text{mean}}$ being the center of gravity of the chain, is
expected to depend on chain length and temperature (or equally the backbone
stiffness $a_{\text{bend}}$). As opposed to diffusion, we do expect the radius
of gyration to change with the primary structure of the nucleotide strand. For
homopolymers, we expect $R_{\text{g}}$ to be well described by the Flory mean
field model Ter:2002
$R_{\text{g}}\propto(N-1)^{\nu}.$
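The radius-of-gyration definition above translates directly into code; the following hypothetical helper implements it for a list of bead coordinates:

```python
def radius_of_gyration(coords):
    """R_g = sqrt( (1/(N-1)) * sum_i |x_i - x_mean|^2 ), as in the text."""
    n = len(coords)
    dim = len(coords[0])
    mean = [sum(c[k] for c in coords) / n for k in range(dim)]
    s = sum(sum((c[k] - mean[k]) ** 2 for k in range(dim)) for c in coords)
    return (s / (n - 1)) ** 0.5
```

Fitting $\log R_{\text{g}}$ against $\log(N-1)$, analogously to the diffusion fit of the previous subsection, then yields the Flory exponent $\nu$.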
We perform simulations of single homopolymers and self-complementary
nucleotide strands and determine the radius of gyration. Fig. 5 shows results
for strands of lengths $N=4,\ldots,16$ and various backbone stiffness values.
It is found that the Flory model is a good prediction, not only for
homopolymers, but also for self-complementary strands. As expected, the radius
of gyration is smaller for self-complementary strands. For
$a_{\text{bend}}=0.2$, we find the radius of gyration of self-complementary
strands to be slightly larger than the radius of gyration of a homopolymer
with half the length – implying that the strand is almost always in a hairpin
configuration. For stronger backbone stiffness values, the effect is reduced.
### IV.3 Melting behavior
Figure 6: Systems of size $10^{3}$ are initialized with two complementary
strands of length $N$. The sequence information is taken from the $N$ central
nucleotides of the master sequence denoted in each panel (e.g., $N=6$ implies
sequence CATACG in the first panel). Each system is simulated over
$50000\tau$, and the average fraction $\chi$ of hybridized nucleobases is
determined. Error bars show the average and standard deviation of 40
measurements. Solid lines show the theoretical prediction
$\chi(T)=\left(1+e^{\frac{\Delta H-T\Delta S}{RT}}\right)^{-1}$ fitted
individually to each data set via $\Delta S$ and $\Delta H$. Melting
temperatures $T_{\text{m}}$ are obtained from the relation
$\chi(T_{\text{m}})=0.5$, and their scaling as a function of strand length is
depicted in the insets for the cases where enough melting points had been
observed.
We analyze the melting behavior $[X]_{2}\leftrightharpoons 2[X]$ of
complementary nucleotide strands as a function of temperature for various
strand lengths and sequences. We consider a base to be hybridized if there is
a complementary base of another strand within a maximal distance of
$r_{\text{c}}$. Denoting the fraction of hybridized nucleobases with
$0\leq\chi\leq 1$, we can compare the melting curves to the theoretical
prediction
$\chi(T)=\left(1+e^{\frac{\Delta H-T\Delta S}{RT}}\right)^{-1}$ (27)
where $\Delta H,\Delta S$ are constants depending on template length,
sequence, and concentration.
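In reduced units ($R=k_{\text{B}}=1$), Eq. (27) and the melting condition $\chi(T_{\text{m}})=1/2$ take a particularly simple form: the exponent vanishes at $T_{\text{m}}$, so $T_{\text{m}}=\Delta H/\Delta S$. A sketch:

```python
import math

def chi(T, dH, dS, R=1.0):
    """Fraction of hybridized bases in the two-state model, Eq. (27):
    chi(T) = 1 / (1 + exp((dH - T*dS) / (R*T)))."""
    return 1.0 / (1.0 + math.exp((dH - T * dS) / (R * T)))

def melting_temperature(dH, dS):
    """chi(T_m) = 1/2 requires dH - T_m*dS = 0, i.e. T_m = dH / dS."""
    return dH / dS
```

For a hybridizing pair both $\Delta H$ and $\Delta S$ are negative, so $T_{\text{m}}>0$ and $\chi\to 1$ at low temperature.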
Fig. 6 shows melting curves for 18 different sequences and fits (via $\Delta
H_{i},\Delta S_{i}$) to the theoretical prediction, where each panel analyzes
sequences that are subsequences of a common master sequence denoted in each
panel. The graphs clearly show how the average hybridization increases with
strand length for each master sequence. Insets, where present, emphasize that
the inverse of the melting point, at which $\chi(T_{\text{m}})=0.5$, scales
linearly with the inverse of the strand length.
Comparing the individual panels to each other, we find that melting
temperatures for strands of equal length are higher for sequences with
identical adjacent bases. In fact, the melting behavior is dominated by the
presence of identical adjacent bases: adding a single nucleobase to a strand
that consists otherwise only of identical pairs (i.e., moving from length 4 to
6 and from length 8 to 10 in panel two) has no significant impact on the
observed melting temperature. We attribute this behavior to the fact that
dehybridized nucleotides of a partly molten strand find more potential
binding partners, which reinforces the stability of the partly molten complex
and thereby promotes recombination of the products.
We emphasize that our model is not suited to obtain quantitative sequence
dependent melting temperatures. While it is well known that not only strand
composition, but also the actual arrangement of bases influences the melting
temperature San:1998 , the magnitude of this effect is not expected to be
captured quantitatively by our model. Nevertheless, the results confirm that
sequence information affects the melting temperature, and that the melting
temperature rises when a strand is elongated.
Up to now, we have analyzed hybridization of two complementary strands of
equal length. How is the stability of the hybridization complex affected if
one of the strands is replaced by two oligomers of half the length? We analyze
the master sequence CTACTAGGGGGG. Its left half is similar to the first
sequence of Fig. 6 with respect to non-identical neighboring bases. The right
half has been chosen for its strong hybridization tendency. We run experiments
as before and measure the hybridization of the left oligomer. By comparing its
equilibrium rate to the one for two templates of half the length, we can
determine how the dangling right hand side affects the equilibrium rate (e.g.,
we compare the hybridization of a 4-mer to an 8-mer template to the
hybridization of two 4-mers.) Fig. 7 shows that the hybridization fractions
$\chi_{\text{O}}(N)$ and $\chi_{\text{T}}(N/2)$ are comparable for the
analyzed sequence. We expect, however, that $\chi_{\text{O}}$ decreases when
the two oligomers have more interaction possibilities than in the selected
master sequence.
Figure 7: Melting curves for an oligomer that hybridizes to the left hand
side of the master sequence in the presence of the right hand side oligomer.
Data is obtained with the procedure described in Fig. 6. For the analyzed
master sequence, the results are comparable to those of two complementary
strands of length $N/2$ (dotted lines).
### IV.4 Effective replication rate
We can roughly equate
$\chi\equiv\frac{2[X_{2}]}{[X]_{\text{total}}},\qquad
1-\chi\equiv\frac{[X]}{[X]_{\text{total}}}$
and obtain an estimate for the equilibrium constant
$K_{\text{T}}=\frac{[X_{2}]}{[X]^{2}}\equiv\frac{\chi}{2(1-\chi)^{2}}\frac{1}{[X]_{\text{total}}}$
(28)
from the measurements. This equation has to be taken with some caution because
the measured hybridization times reflect a non-trivial relation between
diffusing reactants and rehybridization of partly molten complexes – both
scaling differently with concentration. To truly obtain $K$, one is advised to
repeat the simulations with varying concentrations, i.e. box size. By means of
Eq. (28), we convert the melting data from Sec. IV.3 to obtain hybridization
energy changes
$\displaystyle\Delta G_{\text{T}}=-k_{\text{B}}T\log K_{\text{T}},$
$\displaystyle N\Delta G_{\text{base}}+\Delta H_{\text{init}}-T\Delta S_{\text{init}}=-k_{\text{B}}T\log\frac{\chi}{2(1-\chi)^{2}}+k_{\text{B}}T\log[X]_{\text{total}}.$
In the latter equation, $-k_{\text{B}}\log{[X]_{\text{total}}}$ denotes the
translational entropy for a box of size $[X]_{\text{total}}^{-1}$, while
$\Delta S_{\text{init}}$ accounts for the configurational entropy of a single
strand. We combine both entropy terms, $\Delta S^{\prime}=\Delta
S_{\text{init}}+k_{\text{B}}\log[X]_{\text{total}}^{-1}$, and fit
$\displaystyle N\Delta G_{\text{base}}+\Delta H_{\text{init}}-T\Delta
S^{\prime}$ $\displaystyle=-k_{\text{B}}T\log\frac{\chi}{2(1-\chi)^{2}},$
which allows for better comparison to the melting temperature plots, as
$\Delta G_{\text{T}}=0\Longleftrightarrow\chi=1/2$.
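The conversion from a measured hybridization fraction to Eq. (28) and to the concentration-free free-energy difference fitted above can be sketched as follows (reduced units, $k_{\text{B}}=1$; function names are illustrative):

```python
import math

def K_from_chi(chi, x_total):
    """Equilibrium constant estimate of Eq. (28):
    K_T = chi / (2 * (1 - chi)**2) / [X]_total."""
    return chi / (2.0 * (1.0 - chi) ** 2) / x_total

def dG_prime(chi, T, kB=1.0):
    """Concentration-free part fitted in the text:
    N*dG_base + dH_init - T*dS' = -kB*T*log(chi / (2*(1-chi)**2)),
    which vanishes exactly at chi = 1/2."""
    return -kB * T * math.log(chi / (2.0 * (1.0 - chi) ** 2))
```

At $\chi=1/2$ the argument of the logarithm equals one, reproducing the stated equivalence $\Delta G_{\text{T}}=0\Longleftrightarrow\chi=1/2$.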
Determining hybridization energies is difficult because hybridization is very
stable at low temperatures, particularly for long XNA strands. Dehybridization
then becomes a rare event, which requires unfeasibly long simulation runs in
order to sample equilibrium distributions. Consequently, data for low
temperatures and long strands is overshadowed by noise and we have excluded
such data from the analysis. For the regime that is accessible to simulation,
Fig. 8 shows the measured hybridization energies fitted to the theoretical
model of Eq. (14) – see figure caption for details. The data follows the
linear trend of the model and can recover the proper temperature scaling.
However, we also observe deviations from the analytical prediction for
$T>1.9$. The plot confirms that hybridization energy changes are close to zero
at the melting temperature of each double strand. For the simulation, where
$[X]_{\text{total}}=0.001$, we obtain $\Delta S_{\text{init}}=1.33$, which
confirms that $\Delta G_{\text{init}}$ is primarily entropic.
Figure 8: Hybridization energy changes $\Delta G_{\text{T}}$ obtained from
the measurements of section IV.3, sequence ACGCATACGCATACGCATAC (symbols),
fitted to the analytical model of equation (14) via the parameters $\Delta
H_{\text{base}}=-1.81$, $\Delta S_{\text{base}}=-0.756$, $\Delta
H_{\text{init}}=0.470$, and $\Delta S^{\prime}=-5.58$. Since
$[X]_{\text{total}}=0.001$, we can estimate $\Delta S_{\text{init}}=1.33$.
Unfortunately, the removal of noisy simulation results implies that we do not
have measurements for the regime where the analytical model predicts the most
features. To nevertheless obtain estimates for these points, we perform the
same simulations as before but start with a perfectly hybridized complex. For
short strands, the difference in initial conditions is negligible, as
hybridization and dehybridization equilibrate within the simulated time span.
For long strands / strands at low temperatures, the sampled $\chi$ values
progressively overestimate the equilibrium time fraction of hybridization.
The rationale behind these dehybridization measurements is the following: for
long strands and low temperatures dehybridization of the ligation product
becomes the rate limiting step and we are in the regime of Eq. (21). Here, the
effective replication rate is primarily governed by the rate of product
dehybridization which in turn gives us an upper bound for the replication
rate. By combining the results of the two scenarios, we implicitly relate the
simulated time span with an assumed ligation rate.
Plugging the measured constants $K_{\text{T}}$ and $K_{\text{O}}$ into Eq.
(10), we obtain a fitness landscape for minimal replicators which is depicted
in Fig. 9. The colored surface shows the effective equilibrium constant
$K_{\text{O}}^{2}/\sqrt{2K_{\text{T}}}$, obtained from the original
measurements. The grey surface shows the results from the dehybridization
experiments. Finally, the fitness landscape obtained via Eq. (14) is shown as
a mesh. For the analyzed master sequence and range of observation, the
effective oligomer complex concentration
$K_{\text{O}}^{2}/\sqrt{2K_{\text{T}}}$ varies over 13 orders of magnitude
with highest rates for long strands ($N\geq 8$) and low temperatures
($k_{\text{B}}T\leq 0.8$).
Figure 9: Effective equilibrium constant
$K_{\text{O}}^{2}/\sqrt{2K_{\text{T}}}$, obtained from the measurements of
Fig. 8 (colored) compared to the theoretical prediction of Eq. (14) (mesh).
Data shaded in gray is extrapolated from dehybridization experiments.
Contrary to the analytical derivations, the numerical results of the
dehybridization experiments indicate a saturation and possibly a decrease of
the replication rate for long strands at low temperatures, thereby supporting
our hypothesis that the effective rate indeed possesses an optimum when
dehybridization and ligation occur on comparable time scales.
The numerical simulations do not incorporate the ligation reaction. To include
its temperature dependence, we superpose the Arrhenius equation (12) with
parameters as in Fig. 2 onto Fig. 9 and obtain the replication rate landscape
shown in Fig. 10. The resulting figure features a critical strand length
$N^{*}\approx 8$ at which the temperature scaling inverts. With the parameters
obtained from the data fits, the critical temperature $T^{*}\approx
2.37\;T_{0}$ lies outside the analyzed area.
Figure 10: Final replication rate $k$ as a function of template length and
temperature. The figure is produced by superposing the data from Fig. 9 with
the Arrhenius equation for the ligation reaction following Eq. (16) with
$A=10^{3},\Delta H^{\ddagger}_{\text{L}}=6.52k_{\text{B}}T^{\prime}$. For this
parametrization, the critical strand length $N^{*}$ above which the
temperature dependence of the reaction inverts is $8$.
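The superposition used for Fig. 10 can be sketched as the product of the effective equilibrium factor $K_{\text{O}}^{2}/\sqrt{2K_{\text{T}}}$ and an Arrhenius ligation rate. The default parameters below are taken from the figure caption ($A=10^{3}$, $\Delta H^{\ddagger}_{\text{L}}=6.52$); the function itself is an illustrative stand-in, not the authors' analysis code:

```python
import math

def replication_rate(K_O, K_T, T, A=1e3, dH_act=6.52, kB=1.0):
    """Effective replication rate sketch: Arrhenius ligation rate
    A*exp(-dH_act/(kB*T)) times the equilibrium factor
    K_O**2 / sqrt(2*K_T), mirroring the superposition of Fig. 10."""
    ligation = A * math.exp(-dH_act / (kB * T))
    return ligation * K_O ** 2 / math.sqrt(2.0 * K_T)
```

For fixed hybridization constants the rate grows with $T$ through the Arrhenius factor; the inversion above $N^{*}$ enters through the strong $N$- and $T$-dependence of $K_{\text{O}}$ and $K_{\text{T}}$.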
## V Discussion
The common strategy to increase the yield in template directed replication
experiments is to increase the concentration of oligomers. This is certainly
viable, and the fact that the growth rate $k$ is proportional to the square of
the oligomer concentration encourages this approach. Our investigation,
however, indicates that oligomer concentration can be outweighed drastically
by factors such as temperature, template length, as well as sequence
information, which all influence the replication rate at least exponentially
and thus over many orders of magnitude. These findings are consistent for the
simple analytical expressions (Fig. 2), for the simulations (Fig. 8) as well
as for the combined analysis (Figs. 9 and 10).
Perhaps contrary to intuition, we find the highest growth rates for long
replicators and low temperatures. This finding can be explained by the fact
that the effective growth rate of minimal replicators features a critical
strand length $N^{*}$ at which the temperature dependence of the overall
replication reaction inverts: below $N^{*}$ the replication rate is dominated
by the ligation reaction and its positive temperature scaling, whereas above
$N^{*}$, the negative temperature scaling of the hybridization reactions
becomes dominant, recall Fig. 10.
We observe that hybridization rates are highly sequence dependent. In
particular, our spatially resolved simulations reveal that adjacent identical
nucleobases can drastically stabilize the hybridization complex. We expect
that the overall replication process is primarily sequence specific near the
ligation sites, as it is known that mismatches near the ligation site
affect the ligation the most Blo:2000 .
We also find that there is no difference in the replication rates of symmetric
versus asymmetric replicators. This can be seen from Eq. (18), where only the
sum of the oligomer lengths appears: while the longer oligomer of an asymmetric
replicator has a high binding affinity to the template and therefore promotes
the formation of a hybridization complex, the short oligomer has a smaller
binding affinity, such that the total asymmetric hybridization complex is as
stable as its symmetric counterpart.
We emphasize that our approach hinges on the assumption that ligation is the
rate limiting step of the replication reaction. Due to the temperature scaling
of the diffusion, hybridization, and ligation processes, our approach breaks
down for very low temperatures or very long template strands. As discussed in
Section II, Eq. (21), for long strands and low temperatures the
dehybridization of the templates becomes the rate limiting step. Exactly what
happens in the transition region between these two limits requires a more
detailed non-equilibrium analysis and is outside the scope of this
investigation. The grey shaded area in Figs. 9 and 10 depicts the expected
landscape for the replication rate as we approach this transition zone from
the regime where ligation is rate limiting, and it is clearly seen how the
replication rate levels off as temperature decreases and the sequence length
increases. In any event, we would expect the existence of a true optimal
temperature for a given strand length, and equally a true optimal strand
length for a given temperature, such that replication rates are maximized.
In the context of origin of life research, where the temperature is given by
the environment, but nucleic acid strands are subject to mutations, our
findings suggest the existence of a critical temperature $T^{*}$, below which
evolution would select longer nucleic acid replicators that could have
resulted from mutational elongations, thereby promoting an increase of the
potential information storage associated with these molecules. Thus, these
results shed new light on the “Snowball Earth” hypothesis. This argument,
however, assumes that both templates and oligomers elongate to maintain the
structure of a minimal replicator that replicates by ligation of two oligomers
only. Needless to say, long oligomers of a specific sequence are less frequent
in a random pool of substrate, such that they do not necessarily benefit from
their increased stability.
Most importantly, in the context of minimal replicator experiments and
applications, e.g. in protocell as well as molecular computing and fabrication
research, our findings suggest a qualitative recipe for obtaining high
replication yields, as they relate experimentally accessible data such as
melting temperatures and ligation rate to the critical strand length (Eq. 17)
and temperature (Eq. 18).
## Acknowledgements
This work has benefited from discussions with the members of the FLinT Center
for Fundamental Living Technology, University of Southern Denmark. In
particular, we acknowledge P.-A. Monnard and C. Svaneborg, as well as the
anonymous reviewers of the journal _Entropy_ for helpful feedback.
## References
* (1) Gilbert, W. The RNA world. Nature 1986, 319, 618.
* (2) Monnard, P.A. The dawn of the RNA world: RNA Polymerization from monoribonucleotides under prebiotically plausible conditions. In Prebiotic Evolution and Astrobiology; Wong, J.T.F.; Lazcano, A., Eds.; Landes Bioscience: Austin, TX, USA, 2008.
* (3) Cleaves, J.H. Prebiotic chemistry, the primordial replicator and modern protocells. In Protocells: Bridging nonliving and living matter; Rasmussen, S.; Bedau, M.; Chen, L.; Deamer, D.; Krakauer, D.; Packard, N.; Stadler, P., Eds.; MIT Press, 2009; p. 583.
* (4) Rasmussen, S.; Chen, L.; Deamer, D.; Krakauer, D.C.; Packard, N.H.; Stadler, P.F.; Bedau, M.A. Transitions from nonliving to living matter. Science 2004, 303, 963–965.
* (5) Rasmussen, S.; Chen, L.; Stadler, B.M.R.; Stadler, P.F. Proto-Organism Kinetics: Evolutionary Dynamics of Lipid Aggregates with Genes and Metabolism. Orig. Life Evol. Biosph. 2004, 34, 171–180.
* (6) Rasmussen, S.; Bailey, J.; Boncella, J.; Chen, L.; Collis, G.; Colgate, S.; DeClue, M.; Fellermann, H.; Goranovic, G.; Jiang, Y.; Knutson, C.; Monnard, P.A.; Mouffouk, F.; Nielson, M.; Sen, A.; Shreve, A.; Tamulis, A.; Travis, B.; Weronski, P.; Zhang, J.; Zhou, X.; Ziock, H.J.; Woodruff, W. Assembly of a minimal protocell. In Protocells: Bridging Nonliving and Living Matter; Rasmussen, S.; Bedau, M.; Chen, L.; Deamer, D.; Krakauer, D.; Packard, N.; Stadler, P., Eds.; MIT Press: Cambridge, USA, 2008; pp. 125–156.
* (7) Szostak, J.W.; Bartel, D.P.; Luisi, P.L. Synthesizing life. Nature 2001, 409, 387–390.
* (8) Mansy, S.S.; Schrum, J.P.; Krishnamurthy, M.; Tobé, S.; Treco, D.A.; Szostak, J.W. Template-directed synthesis of a genetic polymer in a model protocell. Nature 2008, 454, 122–125.
* (9) Hanczyc, M. Steps towards creating a synthetic protocell. In Protocells: Bridging Nonliving and Living Matter; Rasmussen, S.; Bedau, M.; Chen, L.; Deamer, D.; Krakauer, D.; Packard, N.; Stadler, P., Eds.; MIT Press: Cambridge, USA, 2009; p. 107.
* (10) Wu, T.; Orgel, L.E. Nonenzymic template-directed synthesis on oligodeoxycytidylate sequences in hairpin oligonucleotides. J. Am. Chem. Soc. 1992, 114, 317–322.
* (11) Wu, T.; Orgel, L. Nonenzymatic template-directed synthesis on hairpin oligonucleotides. 3. Incorporation of adenosine and uridine residues. J. Am. Chem. Soc. 1992, 114, 7963–7969.
* (12) Fernando, C.; Kiedrowski, G.v.; Szathmáry, E. A stochastic model of nonenzymatic nucleic acid replication: “Elongators” sequester replicators. J. Mol. Evol. 2007, 64, 572–585.
* (13) Monnard, P.A.; Dörr, M.; Löffler, P. Possible role of ice in the synthesis of polymeric compounds. 38th COSPAR Scientific Assembly, held 15-18 July 2010, Bremen, 2010.
* (14) Kiedrowski, G.v. A Self-replicating hexadeoxynucleotide. Angew. Chem. 1986, 25, 932–935.
* (15) Sievers, D.; Kiedrowski, G.v. Self-replication of complementary nucleotide-based oligomers. Nature 1994, 369, 221–224.
* (16) Bag, B.G.; Kiedrowski, G.v. Templates, autocatalysis and molecular replication. Pure & App. Chem. 1996, 68.
* (17) Joyce, G.F. Non-enzyme template-directed synthesis of RNA copolymers. Orig. Life Evol. Biosph. 1984, 14, 613–620.
* (18) Lincoln, T.A.; Joyce, G.F. Self-sustained replication of an RNA enzyme. Science 2009, 323, 1229–1232.
* (19) Wills, P.; Kauffman, S.; Stadler, B.; Stadler, P. Selection dynamics in autocatalytic systems: Templates replicating through binary ligation. Bulletin of Mathematical Biology 1998, 60, 1073–1098.
* (20) Rocheleau, T.; Rasmussen, S.; Nielson, P.E.; Jacobi, M.N.; Ziock, H. Emergence of protocellular growth laws. Philos. Trans. R. Soc. B 2007, 362, 1841–1845.
* (21) Szathmáry, E.; Gladkih, I. Sub-exponential growth and coexistence of non-enzymatically replicating templates. J. Theor. Biol. 1989, 138, 55–58.
* (22) Kiedrowski, G.v.; Wlotzka, B.; Helbing, J.; Matzen, M.; Jordan, S. Parabolic growth of a self-replicating hexadeoxynucleotide bearing a 3’-5’-phosphoamidate linkage. Angew. Chem. Int. Ed. 1991, 30, 423–426.
* (23) Luther, A.; Brandsch, R.; Kiedrowski, G.v. Surface-promoted replication and exponential amplification of DNA analogues. Nature 1998, 396, 245–248.
* (24) Zhang, D.Y.; Yurke, B. A DNA superstructure-based replicator without product inhibition. Nat. Comput. 2006, 5, 183–202.
* (25) Owczarzy, R.; Vallone, P.M.; Gallo, F.J.; Paner, T.M.; Lane, M.J.; Benight, A.S. Predicting sequence-dependent melting stability of short duplex DNA oligomers. Biopolymers 1998, 44, 217–239.
* (26) Bloomfield, V.A.; Crothers, D.M.; Tinoco, I. Nucleic Acids; University Science Books: Sausalito, CA, USA, 2000.
* (27) Poland, D.; Scheraga, H.A. Occurrence of a Phase Transition in Nucleic Acid Models. J. Chem. Phys. 1966, 45, 1456–1463.
* (28) Hutton, T.J. Evolvable self-replicating molecules in an artificial chemistry. Artif. Life 2002, 8, 341–356.
* (29) Smith, A.; Turney, P.; Ewaschuk, R. Self-replicating machines in continuous space with virtual physics. Artif. Life 2003, 9, 21–40.
* (30) Klenin, K.; Merlitz, H.; Langowski, J. A Brownian Dynamics program for the simulation of linear and circular DNA and other wormlike chain polyelectrolytes. Biophysical Journal 1998, 74, 780–788.
* (31) Tepper, H.L.; Voth, G.A. A coarse-grained model for double-helix molecules in solution: Spontaneous helix formation and equilibrium properties. J. Chem. Phys. 2005, 122.
* (32) Drukker, K.; Schatz, G.C. A model for simulating dynamics of DNA denaturation. J. Chem. Phys. B 2000, 104, 6108–6111.
* (33) Fellermann, H.; Rasmussen, S.; Ziock, H.J.; Solé, R. Life-cycle of a minimal protocell: a dissipative particle dynamics (DPD) study. Artif. Life 2007, 13, 319–345.
* (34) Kubo, R. The fluctuation-dissipation theorem. Rep. Prog. Phys. 1966, 29.
* (35) Ryckaert, J.P.; Ciccotti, G.; Berendsen, H.J.C. Numerical Integration of the Cartesian Equations of Motion of a System with Constraints: Molecular Dynamics of n-Alkanes. J. Comp. Phys. 1977, 23, 327.
* (36) Teraoka, I. Polymer solutions – An introduction to physical properties; Wiley Interscience: New York, NY, USA, 2002.
* (37) SantaLucia, J. A unified view of polymer, dumbbell, and oligonucleotide DNA nearest-neighbor thermodynamics. Proc. Nat. Acad. Sci. USA 1998, 95, 1460–1465.
(arXiv:1103.5301, by Harold Fellermann and Steen Rasmussen, submitted 2011-03-28; license: CC BY 3.0.)

arXiv:1103.5405
# Networked Controller Design using Packet Delivery Prediction in Mesh
Networks
###### Abstract
Much of the current theory of networked control systems uses simple point-to-
point communication models as an abstraction of the underlying network. As a
result, the controller has very limited information on the network conditions
and performs suboptimally. This work models the underlying wireless multihop
mesh network as a graph of links with transmission success probabilities, and
uses a recursive Bayesian estimator to provide packet delivery predictions to
the controller. The predictions are a joint probability distribution on future
packet delivery sequences, and thus capture correlations between successive
packet deliveries. We look at finite horizon LQG control over a lossy
actuation channel and a perfect sensing channel, both without delay, to study
how the controller can compensate for predicted network outages.
_Network Estimation and Packet Delivery Prediction for Control over Wireless
Mesh Networks._ Phoebus Chen, Chithrupa Ramesh, and Karl H. Johansson. ACCESS
Linnaeus Centre, Automatic Control, School of Electrical Engineering, KTH
Royal Institute of Technology, SE-100 44 Stockholm, Sweden. Stockholm 2010,
TRITA-EE:043. The work was supported by the EU project FeedNetBack, the
Swedish Research Council, the Swedish Strategic Research Foundation, the
Swedish Governmental Agency for Innovation Systems, and the Knut and Alice
Wallenberg Foundation.
## 1 Introduction
Increasingly, control systems are operated over large-scale, networked
infrastructures. In fact, several companies today are introducing devices that
communicate over low-power wireless mesh networks for industrial automation
and process control [1, 2]. While wireless mesh networks can connect control
processes that are physically spread out over a large space to save wiring
costs, these networks are difficult to design, provision, and manage [3, 4].
Furthermore, wireless communication is inherently unreliable, introducing
packet losses and delays, which are detrimental to control system performance
and stability.
Research in the area of Networked Control Systems (NCSs) [5] addresses how to
design control systems which can account for the lossy, delayed communication
channels introduced by a network. Traditional tasks in control systems design,
like stability/performance analysis and controller/estimator synthesis, are
revisited, with network models providing statistics about packet losses and
delays. In the process, the studies highlight the benefits and drawbacks of
different system architectures. For example, Figure 1 depicts the general
system architecture of a networked control system over a mesh network proposed
by Robinson and Kumar [6]. A fundamental architecture problem is how to choose
the best location to place the controllers, if they can be placed at any of
the sensors, actuators, or communication relay nodes in the network. One
insight from Schenato et al. [7] is that if the controller can know whether
the control packet reaches the actuator, e.g., we place the controller at the
actuator, then the optimal LQG controller and estimator can be designed
separately (the separation principle).
Figure 1: A networked control system over a mesh network, where the
controllers can be located on any node.
To gain more insight into how to architect and design NCSs, two limitations in
the approach of many current NCS research studies need to be addressed. The
first limitation is the use of simple packet-delivery models over a point-to-
point link or a star topology to represent networks that are often multihop
and more complex. The second limitation is the treatment of the network as
something designed and fixed a priori, before the design of the control
system. Very little information is passed through the interface between the
network and the control system, which limits how the two “layers” can interact
to tune the controller to the network conditions, and vice versa.
### 1.1 Related Work
Schenato et al. [7] and Ishii [8] study stability and controller synthesis for
different control system architectures, but they both model networks as i.i.d.
Bernoulli processes that drop packets on a single link. The information passed
through the interface between the network and the control system is the packet
drop probability of the link, which is assumed to be known and fixed. Seiler
and Sengupta [9] study stability and $\mathcal{H}_{\infty}$ controller
synthesis when the network is modeled as a packet-dropping link described by a
two-state Markov chain (Gilbert-Elliott model), where the information passed
through the network-controller interface are the transition probabilities of
the Markov chain. Elia [10] studies stability and the synthesis of a
stabilizing controller when the network is represented by an LTI system with
stochastic disturbances modeled as parallel, independent, multiplicative
fading channels.
Some related work in NCSs does use models of multihop networks. For instance,
work on consensus of multi-agent systems [11] typically studies how the
connectivity graph(s) provided by the network affects the convergence of the
system, without focusing on modeling the links. Robinson and Kumar [6] study
the optimal placement of a controller in a multihop network with i.i.d.
Bernoulli packet-dropping links, where the packet drop probability is known to
the controller. Gupta et al. [12] study how to optimally process and forward
sensor measurements at each node in a multihop network for optimal LQG
control, and analyze stability when packet drops on the links are modeled as
spatially-independent Bernoulli, spatially-independent Gilbert-Elliott, or
memoryless spatially-correlated processes (here, “spatially” means “with
respect to other links”). Varagnolo et al. [13] compare the performance of a
time-varying Kalman filter on a wireless TDMA mesh network under unicast
routing and constrained flooding. The network model describes the routing
topology and schedule of an implemented communication protocol, TSMP [14], but
it assumes that transmission successes on the links are spatially-independent
and memoryless. Both Gupta et al. [12] and Varagnolo et al. [13] are concerned
with estimation when packet drops occur on the sensing channel, and the
estimators do not need to know network parameters like the packet loss
probability.
### 1.2 Contributions
Our approach is a step toward using more sophisticated, multihop network
models and passing more information through the interface between the
controller and the network. Similar to Gupta et al. [12], we model the network
routing topology as a graph of independent links, where transmission success
on each link is described by a two-state Markov chain. The network model
consists of the routing topology and a global TDMA transmission schedule. Such
a minimalist network model captures the essence of how a network with bursty
links can have correlated packet deliveries [15], which are particularly bad
for control when they result in bursts of packet losses. Using this model, we
propose a network estimator to estimate, without loss of information, the
state of the network given the past packet deliveries (strictly speaking, we
obtain the probability distribution on the states of the network, not a single
point estimate). The network estimate is translated to a joint probability
distribution predicting the success of future packet deliveries, which is
passed through the network-controller interface so the controller can
compensate for unavoidable network outages. The network estimator can also be
used to notify a network manager when the network is broken and needs to be
reconfigured or reprovisioned, a direction for future research.
Section 2 describes our plant and network models. We propose two network
estimators, the Static Independent links, Hop-by-hop routing, Scheduled (SIHS)
network estimator and the Gilbert-Elliott Independent links, Hop-by-hop
routing, Scheduled (GEIHS) network estimator in Section 3. Next, we design a
finite-horizon, Future-Packet-Delivery-optimized (FPD) LQG controller to
utilize the packet delivery predictions provided by the network estimators,
presented in Section 4. Section 5 provides an example and simulations
demonstrating how the GEIHS network estimator combined with the FPD controller
can provide better performance than a classical LQG controller or a controller
assuming i.i.d. packet deliveries. Finally, Section 7 describes the
limitations of our approach and future work.
## 2 Problem Formulation
Figure 2: A control loop for plant ${\mathscr{P}}$ with the network on the
actuation channel. The network estimator $\hat{{\mathscr{N}}}$ passes packet
delivery predictions $f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}$ to the FPD
controller ${\mathscr{C}}$, with past packet delivery information obtained
from the network ${\mathscr{N}}$ over an acknowledgement (ACK) channel.
This paper studies an instance of the general system architecture depicted in
Figure 1, with a single control loop containing one sensor and one actuator.
One network estimator and one controller are placed at the sensor, and we
assume that an end-to-end acknowledgement (ACK) that the controller-to-
actuator packet is delivered is always received at the network estimator, as
shown in Figure 2. For simplicity, we assume that the plant dynamics are
significantly slower than the end-to-end packet delivery deadline, so that we
can ignore the delay introduced by the network. The general problem is to
jointly design a network estimator and controller that can optimally control
the plant using our proposed SIHS and GEIHS network models. In our problem
setup, the controller is only concerned with the past, present, and future
packet delivery sequence and not with the detailed behavior of the network,
nor can it affect the behavior of the network. Therefore, the network
estimation problem decouples from the control problem. The information passed
through the network-controller interface is the packet delivery sequence,
specifically the joint probability distribution describing the future packet
delivery predictions.
### 2.1 Plant and Network Models
The state dynamics of the plant ${\mathscr{P}}$ in Figure 2 is given by
$x_{k+1}=Ax_{k}+\nu_{k}Bu_{k}+w_{k}\quad,$ (1)
where $A\in{\mathbb{R}}^{\ell\times\ell}$, $B\in{\mathbb{R}}^{\ell\times m}$,
and $w_{k}$ are i.i.d. zero-mean Gaussian random variables with covariance
matrix $R_{w}\in{\mathbb{S}}_{+}^{\ell}$, where ${\mathbb{S}}_{+}^{\ell}$ is
the set of $\ell\times\ell$ positive semidefinite matrices. The initial state
$x_{0}$ is a zero-mean Gaussian random variable with covariance matrix
$R_{0}\in{\mathbb{S}}_{+}^{\ell}$ and is mutually independent of $w_{k}$. The
binary random variable $\nu_{k}$ indicates whether a packet from the
controller reaches the actuator ($\nu_{k}=1$) or not ($\nu_{k}=0$), and each
$\nu_{k}$ is independent of $x_{0}$ and $w_{k}$ (but the $\nu_{k}$’s are not
independent of each other).
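For concreteness, the dynamics (1) can be simulated directly. The following is a minimal sketch; the scalar plant and all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def simulate_plant(A, B, Rw, x0, controls, deliveries, rng):
    """Simulate x_{k+1} = A x_k + nu_k B u_k + w_k from (1).

    `deliveries` is the packet delivery sequence nu_k: a dropped control
    packet (nu_k = 0) means no input is applied at that step.
    """
    x = x0
    trajectory = [x0]
    for u, nu in zip(controls, deliveries):
        w = rng.multivariate_normal(np.zeros(A.shape[0]), Rw)
        x = A @ x + nu * (B @ u) + w
        trajectory.append(x)
    return np.array(trajectory)

rng = np.random.default_rng(0)
A = np.array([[1.1]])        # slightly unstable scalar plant (assumed)
B = np.array([[1.0]])
Rw = np.array([[0.01]])
controls = [np.array([-0.5])] * 10
deliveries = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]  # a bursty loss pattern
traj = simulate_plant(A, B, Rw, np.array([1.0]), controls, deliveries, rng)
```

Note how a burst of losses (the consecutive zeros in `deliveries`) leaves the unstable plant uncontrolled for several steps, which is exactly what the packet delivery predictions later let the controller anticipate.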
Let the discrete sampling times for the control system be indexed by $k$, but
let the discrete time for schedule time slots (described below) be indexed by
$t$. The time slot intervals are smaller than the sampling intervals. The time
slot when the control packet at sample time $k$ is generated is denoted
$t_{k}$, and the deadline for receiving the control packet at the receiver is
$t_{k}^{\prime}$. We assume that $t_{k}^{\prime}\leq t_{k+1}$ for all $k$.
Figure 3 illustrates the relationship between $t$ and $k$.
Figure 3: The packet containing the control input $u_{k}$ is generated right
before time slot $t_{k}$. The packet may be in transit through the network in
the shaded time slots, until right before time slot $t_{k}^{\prime}$. Thus,
time $t_{k}$ is aligned with the beginning of the time slot.
The model of the TDMA wireless mesh network (${\mathscr{N}}$ in Figure 2)
consists of a routing topology $G$, a link model describing how the
transmission success of a link evolves over time, and a fixed repeating
schedule $\mathbf{F}^{(T)}$. The SIHS network model and the GEIHS network
model only differ in the link model. Each of these components will be
described in detail below.
The routing topology is described by $G=({\mathcal{V}},{\mathcal{E}})$, a
connected directed acyclic graph with the set of vertices (nodes)
${\mathcal{V}}=\\{1,\dots,M\\}$ and the set of directed edges (links)
${\mathcal{E}}\subseteq\\{(i,j)\>:\>i,j\in{\mathcal{V}},\>i\neq j\\}$, where
the number of edges is denoted $E$. The source node is denoted $a$ and the
sink (destination) node is denoted $b$. Only the destination node has no
outgoing edges.
At any moment in time, each link in $G$ can either be up (a transmission
attempt succeeds) or down (a transmission attempt fails).
Thus, there are $2^{E}$ possible topology realizations
$\tilde{G}=({\mathcal{V}},\tilde{{\mathcal{E}}})$, where
$\tilde{{\mathcal{E}}}\subseteq{\mathcal{E}}$ represents the edges that are
up. (Symbols with a tilde, $\tilde{\cdot}$, denote values that can be taken
on by random variables and can be arguments of probability distribution
functions (pdfs).)
At time $t_{k}$, the actual state of the topology is one of the topology
realizations but it is not known to the network estimator. With some abuse of
terminology, we define $G^{{}_{(k)}}$ to be the random variable representing
the state of the topology at time $t_{k}$ (strictly speaking, $G^{{}_{(k)}}$
is a function mapping events to the set of all topology realizations, not to
the set of real numbers).
This paper considers the network under two link models, the static link model
and the Gilbert-Elliott (G-E) link model. Both network models assume all the
links in the network are independent.
The static link model assumes the links do not switch between being up and
down while packets are sent through the network. Therefore, the sequence of
topology realizations over time is constant. While not realistic, this model
leads to the simple network estimator of Section 3.1 and serves a pedagogical
purpose. The a priori transmission success probability of link $l=(i,j)$ is
$p_{l}$.
The G-E link model represents each link $l$ by the two-state Markov chain
shown in Figure 4. At each sample time $k$, a link in state 0 (down)
transitions to state 1 (up) with probability $p_{l}^{\mathrm{u}}$, and a link
from state 1 transitions to state 0 with probability
$p_{l}^{\mathrm{d}}$ (we could instead use a G-E link model that advances at
each time slot $t$, but it would complicate the exposition and notation
below). The steady-state probability of being in state 1,
which we use as the a priori probability of the link being up, is
$p_{l}=p_{l}^{\mathrm{u}}/(p_{l}^{\mathrm{u}}+p_{l}^{\mathrm{d}})\quad.$
Figure 4: Gilbert-Elliott link model
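The steady-state expression can be checked numerically by iterating the chain of Figure 4; the rates below are illustrative assumptions:

```python
def ge_steady_state(p_up, p_down, steps=200):
    """Iterate the two-state Gilbert-Elliott chain of Figure 4 and return
    the long-run probability that the link is up (state 1)."""
    p1 = 0.0  # start in state 0 (down) with certainty
    for _ in range(steps):
        # P(up next) = P(down now) * p_up + P(up now) * (1 - p_down)
        p1 = (1 - p1) * p_up + p1 * (1 - p_down)
    return p1

# The iteration recovers the steady state p_l = p_up / (p_up + p_down).
assert abs(ge_steady_state(0.3, 0.1) - 0.3 / 0.4) < 1e-9
```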
The fixed, repeating schedule of length $T$ is represented by a sequence of
matrices $\mathbf{F}^{(T)}=(F^{(1)},F^{(2)},\dots,F^{(T)})$, where the matrix
$F^{(((t-1)\bmod T)+1)}$ represents the links scheduled at time $t$. The matrix
$F^{(t)}\in\\{0,1\\}^{M\times M}$ is defined from the set
${\mathcal{F}}^{(t)}\subseteq{\mathcal{E}}$ containing the links scheduled for
transmission at time $t$. We assume that nodes can only unicast packets,
meaning that for all nodes $i$, if $(i,j)\in{\mathcal{F}}^{(t)}$ then for all
$v\neq j,(i,v)\not\in{\mathcal{F}}^{(t)}$. Furthermore, a node holds onto a
packet if the transmission fails and can retransmit the packet the next time
an outgoing link is scheduled (hop-by-hop routing). Thus, the matrix $F^{(t)}$
has entries
$F_{ij}^{(t)}=\begin{cases}1&\text{if $(i,j)\in{\mathcal{F}}^{(t)}$, or if $i=j$ and $\forall v\in{\mathcal{V}},(i,v)\not\in{\mathcal{F}}^{(t)}$}\\ 0&\text{otherwise.}\end{cases}$
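A sketch of how $F^{(t)}$ could be assembled from the scheduled link set ${\mathcal{F}}^{(t)}$; node indices are 0-based here, and the toy schedule is an assumption:

```python
import numpy as np

def schedule_matrix(M, scheduled_links):
    """Build F^(t) from the set of links scheduled at time t: entry (i, j)
    is 1 if (i, j) is scheduled, and the diagonal entry (i, i) is 1 when
    node i has no scheduled outgoing link (it holds its packet)."""
    F = np.zeros((M, M), dtype=int)
    senders = {i for (i, _) in scheduled_links}
    for i, j in scheduled_links:
        F[i, j] = 1
    for i in range(M):
        if i not in senders:
            F[i, i] = 1  # node holds its packet this slot
    return F

# Toy 3-node line a=0 -> 1 -> b=2; only link (0, 1) is scheduled this slot.
F = schedule_matrix(3, {(0, 1)})
```

Nodes 1 and 2 have no scheduled outgoing link in this slot, so their diagonal entries are 1: any packet they hold stays put.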
An exact description of the network consists of the sequence of topology
realizations over time and the schedule $\mathbf{F}^{(T)}$. Assuming a
topology realization $\tilde{G}$, the links that are scheduled and up at any
given time $t$ are represented by the matrix
$\tilde{F}^{(t;\tilde{G})}\in\\{0,1\\}^{M\times M}$, with entries
$\tilde{F}_{ij}^{(t;\tilde{G})}=\begin{cases}1&\text{if $(i,j)\in{\mathcal{F}}^{(t)}\cap\tilde{{\mathcal{E}}}$, or if $i=j$ and $\forall v\in{\mathcal{V}},(i,v)\not\in{\mathcal{F}}^{(t)}\cap\tilde{{\mathcal{E}}}$}\\ 0&\text{otherwise.}\end{cases}$ (2)
Define the matrix
$\tilde{F}^{(t,t^{\prime};\tilde{G})}=\tilde{F}^{(t;\tilde{G})}\tilde{F}^{(t+1;\tilde{G})}\dotsm\tilde{F}^{(t^{\prime};\tilde{G})}$,
such that entry $\tilde{F}_{ij}^{(t,t^{\prime};\tilde{G})}$ is 1 if a packet
at node $i$ at time $t$ will be at node $j$ at time $t^{\prime}$, and is 0
otherwise. Since the destination $b$ has no outgoing links, a packet sent from
the source $a$ at time $t$ reaches the destination $b$ at or before time
$t^{\prime}$ if and only if $\tilde{F}_{ab}^{(t,t^{\prime};\tilde{G})}=1$. To
simplify the notation, let the function $\delta_{\kappa}$ indicate whether the
packet delivery $\tilde{\nu}\in\\{0,1\\}$ is consistent with the topology
realization $\tilde{G}$, assuming the packet was generated at $t_{\kappa}$,
i.e.,
$\delta_{\kappa}(\tilde{\nu};\tilde{G})=\begin{cases}1&\text{if $\tilde{\nu}=\tilde{F}_{ab}^{(t_{\kappa},t_{\kappa}^{\prime};\tilde{G})}$}\\ 0&\text{otherwise.}\end{cases}$ (3)
The function assumes the fixed repeating schedule $\mathbf{F}^{(T)}$, the
packet generation time $t_{\kappa}$, the deadline $t_{\kappa}^{\prime}$, the
source $a$, and the destination $b$ are implicitly known.
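Since the destination has no outgoing links, $\tilde{F}_{ab}^{(t,t^{\prime};\tilde{G})}$ can be evaluated by simulating the packet hop by hop rather than by multiplying matrices. A sketch of this and of the indicator (3); the toy schedule and link sets are assumptions:

```python
def packet_reaches(t_start, t_deadline, schedule, up_links, src, dst):
    """Evaluate F~_ab^(t, t'; G~) by simulating the packet hop by hop: a
    scheduled transmission succeeds only if the link is up; otherwise the
    node holds the packet and may retry later (hop-by-hop routing)."""
    node = src
    T = len(schedule)            # the schedule repeats with period T
    for t in range(t_start, t_deadline):
        for (i, j) in schedule[t % T]:
            if i == node and (i, j) in up_links:
                node = j
                break
    return node == dst

def delta(nu, up_links, t_k, t_k_prime, schedule, src, dst):
    """The consistency indicator (3) for a single packet delivery."""
    return int(nu == packet_reaches(t_k, t_k_prime, schedule, up_links,
                                    src, dst))

# Toy network a=0 -> 1 -> b=2; slots alternate links (0,1) and (1,2).
schedule = [{(0, 1)}, {(1, 2)}]
assert delta(1, {(0, 1), (1, 2)}, 0, 2, schedule, 0, 2) == 1
assert delta(1, {(0, 1)}, 0, 2, schedule, 0, 2) == 0  # link (1,2) is down
```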
### 2.2 Network Estimators
As shown in Figure 2, at each sample time $k$ the network estimator
$\hat{{\mathscr{N}}}$ takes as input the previous packet delivery $\nu_{k-1}$,
estimates the topology realization using the network model and all past packet
deliveries, and outputs the joint probability distribution of future packet
deliveries $f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}$. For clarity in the following
exposition, let ${\mathscr{V}}_{\kappa}\in\\{0,1\\}$ be the value taken on by
the packet delivery random variable $\nu_{\kappa}$ at some past sample time
$\kappa$. Let the vector
$\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}}=[{\mathscr{V}}_{0},\dots,{\mathscr{V}}_{k-1}]$
denote the history of packet deliveries at sample time $k$, the values taken
on by the vector of random variables
$\bm{\nu}_{{}^{0}}^{{}_{k-1}}=[\nu_{0},\dots,\nu_{k-1}]$. Then,
$f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}})=\operatorname{{\mathbb{P}}}(\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}=\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}}|\bm{\nu}_{{}^{0}}^{{}_{k-1}}=\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}})$
(4)
is the prediction of the next $H$ packet deliveries, where
$\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}=[\nu_{k},\dots,\nu_{k+H-1}]$ is a vector of
random variables representing future packet deliveries and
$\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}}\in\\{0,1\\}^{H}$.
The SIHS and GEIHS network estimators only differ in the network models. The
parameters of the network models — topology $G$, schedule $\mathbf{F}^{(T)}$,
link probabilities $\\{p_{l}\\}_{l\in{\mathcal{E}}}$ or
$\\{p_{l}^{\mathrm{u}},p_{l}^{\mathrm{d}}\\}_{l\in{\mathcal{E}}}$, source $a$,
sink $b$, packet generation times $t_{k}$, and deadlines $t_{k}^{\prime}$ —
are known a priori to the network estimators and are left out of the
conditional probability expressions.
In Section 3, we will use the probability distribution on the topology
realizations (our network state estimate),
$\operatorname{{\mathbb{P}}}(G^{{}_{(k)}}=\tilde{G}|\bm{\nu}_{{}^{0}}^{{}_{k-1}}=\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}})\quad,$
to obtain $f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}$ from
$\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}}$ and the network model.
### 2.3 FPD Controller
The FPD controller (${\mathscr{C}}$ in Figure 2) optimizes the control signals
to the statistics of the future packet delivery sequence, derived from the
past packet delivery sequence. We choose the optimal control framework because
the cost function allows us to easily compare the FPD controller with other
controllers. The control policy operates on the information set
${\mathcal{I}}^{{}^{\mathscr{C}}}_{k}=\\{\bm{x}_{{}^{0}}^{{}_{k}},\bm{u}_{{}^{0}}^{{}_{k-1}},\bm{\nu}_{{}^{0}}^{{}_{k-1}}\\}\quad.$
(5)
The control policy minimizes the linear quadratic cost function
$\operatorname*{{\mathbb{E}}}\left[\scriptstyle
x_{N}^{T}Q_{0}x_{N}+\sum_{n=0}^{N-1}x_{n}^{T}Q_{1}x_{n}+\nu_{n}u_{n}^{T}Q_{2}u_{n}\displaystyle\right]\quad,$
(6)
where $Q_{0}$, $Q_{1}$, and $Q_{2}$ are positive definite weighting matrices
and $N$ is the finite horizon, to get the minimum cost
$J=\min_{{}_{u_{0},\dots,u_{N-1}}}\operatorname*{{\mathbb{E}}}\left[\scriptstyle
x_{N}^{T}Q_{0}x_{N}+\sum_{n=0}^{N-1}x_{n}^{T}Q_{1}x_{n}+\nu_{n}u_{n}^{T}Q_{2}u_{n}\displaystyle\right]\quad.$
Section 4 will show that the resulting architecture separates into a network
estimator and a controller which uses the pdf
$f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}$ supplied by the network estimator
($\hat{{\mathscr{N}}}$ in Figure 2) to find the control signals $u_{k}$.
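As a sanity check, the cost (6) can be evaluated on a single sampled trajectory. All numbers below are illustrative assumptions; note that the $\nu_{n}$ factor means a dropped control packet incurs no input penalty at that step:

```python
import numpy as np

def lq_cost(xs, us, nus, Q0, Q1, Q2):
    """Evaluate the quadratic cost (6) on one sampled trajectory:
    x_N' Q0 x_N + sum_n (x_n' Q1 x_n + nu_n u_n' Q2 u_n)."""
    N = len(us)
    cost = xs[N] @ Q0 @ xs[N]
    for n in range(N):
        cost += xs[n] @ Q1 @ xs[n] + nus[n] * (us[n] @ Q2 @ us[n])
    return float(cost)

# Scalar illustration: the second control packet is dropped (nu_1 = 0).
xs = [np.array([1.0]), np.array([0.5]), np.array([0.25])]
us = [np.array([-0.5]), np.array([-0.25])]
nus = [1, 0]
Q = np.array([[1.0]])
c = lq_cost(xs, us, nus, Q, Q, Q)
```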
## 3 Network Estimation and Packet Delivery Prediction
We will use recursive Bayesian estimation to estimate the state of the
network, and use the network state estimate to predict future packet
deliveries. Figure 5 is the graphical model / hidden Markov model [16]
describing our recursive estimation problem.
Figure 5: Graphical model describing the network estimation problem. $\nu_{k}$
is the measurement output variable at time $k$, and $G^{{}_{(k)}}$ is the
hidden state of the network.
### 3.1 SIHS Network Estimator
The steps in the SIHS network estimator are derived from (4). We introduce new
notation for conditional pdfs (i.e., $\alpha_{k},\beta_{k},Z_{k}$), which will
be used later to state the steps in the estimator compactly (a semicolon in a
conditional pdf separates the values being conditioned on from the remaining
arguments). First, express
$f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}})$ as
$\underbrace{\operatorname{{\mathbb{P}}}(\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}=\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}}|\bm{\nu}_{{}^{0}}^{{}_{k-1}}=\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}})}_{f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}})}=\sum_{\tilde{G}}\underbrace{\operatorname{{\mathbb{P}}}(\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}=\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}}|G^{{}_{(k-1)}}=\tilde{G})}_{\alpha_{k}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}};\tilde{G})}\cdot\underbrace{\operatorname{{\mathbb{P}}}(G^{{}_{(k-1)}}=\tilde{G}|\bm{\nu}_{{}^{0}}^{{}_{k-1}}=\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}})}_{\beta_{k}(\tilde{G})}\quad,$
where we use the relation
$\operatorname{{\mathbb{P}}}(\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}=\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}}|G^{{}_{(k-1)}}=\tilde{G},\bm{\nu}_{{}^{0}}^{{}_{k-1}}=\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}})=\operatorname{{\mathbb{P}}}(\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}=\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}}|G^{{}_{(k-1)}}=\tilde{G})\quad.$
This relation states that given the state of the network, future packet
deliveries are independent of past packet deliveries. The expression
$\operatorname{{\mathbb{P}}}(\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}=\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}}|G^{{}_{(k-1)}}=\tilde{G})$
indicates whether the future packet delivery sequence
$\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}}$ is consistent with the graph
realization $\tilde{G}$, meaning
$\operatorname{{\mathbb{P}}}(\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}=\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}}|G^{{}_{(k-1)}}=\tilde{G})=\prod_{h=0}^{H-1}\delta_{k+h}(\tilde{\nu}_{h};\tilde{G})\quad,$
where $\prod$ is the _and_ operator (sometimes denoted $\bigwedge$). The
network state estimate at sample time $k$ from past packet deliveries is
$\beta_{k}(\tilde{G})$ and is obtained from the network state estimate at
sample time $k-1$, since
$\underbrace{\operatorname{{\mathbb{P}}}(G^{{}_{(k-1)}}=\tilde{G}|\bm{\nu}_{{}^{0}}^{{}_{k-1}}=\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}})}_{\beta_{k}(\tilde{G})}=\frac{\overbrace{\operatorname{{\mathbb{P}}}(\nu_{k-1}={\mathscr{V}}_{k-1}|G^{{}_{(k-1)}}=\tilde{G})}^{\delta_{k-1}({\mathscr{V}}_{k-1};\tilde{G})}\cdot\overbrace{\operatorname{{\mathbb{P}}}(G^{{}_{(k-1)}}=\tilde{G}|\bm{\nu}_{{}^{0}}^{{}_{k-2}}=\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-2}})}^{\beta_{k-1}(\tilde{G})}}{\underbrace{\operatorname{{\mathbb{P}}}(\nu_{k-1}={\mathscr{V}}_{k-1}|\bm{\nu}_{{}^{0}}^{{}_{k-2}}=\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-2}})}_{Z_{k}}}\quad.$
(7)
Here,
$\operatorname{{\mathbb{P}}}(G^{{}_{(k-1)}}=\tilde{G}|\bm{\nu}_{{}^{0}}^{{}_{k-2}}=\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-2}})=\operatorname{{\mathbb{P}}}(G^{{}_{(k-2)}}=\tilde{G}|\bm{\nu}_{{}^{0}}^{{}_{k-2}}=\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-2}})=\beta_{k-1}(\tilde{G})$
for the static link model because
$G^{{}_{(k-1)}}=G^{{}_{(k-2)}}=G^{{}_{(0)}}$. Again, we used the independence
of future packet deliveries from past packet deliveries given the network
state,
$\operatorname{{\mathbb{P}}}(\nu_{k-1}={\mathscr{V}}_{k-1}|G^{{}_{(k-1)}}=\tilde{G},\bm{\nu}_{{}^{0}}^{{}_{k-2}}=\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-2}})=\operatorname{{\mathbb{P}}}(\nu_{k-1}={\mathscr{V}}_{k-1}|G^{{}_{(k-1)}}=\tilde{G})\quad.$
Note that
$\operatorname{{\mathbb{P}}}(\nu_{k-1}={\mathscr{V}}_{k-1}|G^{{}_{(k-1)}}=\tilde{G})$
can only be 0 or 1, indicating whether the packet delivery is consistent with
the graph realization. Finally,
$\operatorname{{\mathbb{P}}}(\nu_{k-1}={\mathscr{V}}_{k-1}|\bm{\nu}_{{}^{0}}^{{}_{k-2}}=\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-2}})$
is the same for all $\tilde{G}$, so it is treated as a normalization constant.
At sample time $k=0$, when no packets have been sent through the network,
$\beta_{0}(\tilde{G})=\operatorname{{\mathbb{P}}}(G^{{}_{(0)}}=\tilde{G})$,
which is expressed in (8d) below. This equation comes from the assumption that
all links in the network are independent.
To summarize, the SIHS Network Estimator and Packet Delivery Predictor is a
recursive Bayesian estimator where the measurement output step consists of
$\displaystyle
f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}})$
$\displaystyle=\sum_{\tilde{G}}\alpha_{k}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}};\tilde{G})\cdot\beta_{k}(\tilde{G})$
(8a) $\displaystyle\alpha_{k}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}};\tilde{G})$
$\displaystyle=\prod_{h=0}^{H-1}\delta_{k+h}(\tilde{\nu}_{h};\tilde{G})\quad,$
(8b) and the innovation step consists of $\displaystyle\beta_{k}(\tilde{G})$
$\displaystyle=\frac{\delta_{k-1}({\mathscr{V}}_{k-1};\tilde{G})\cdot\beta_{k-1}(\tilde{G})}{Z_{k}}$
(8c) $\displaystyle\beta_{0}(\tilde{G})$
$\displaystyle=\left(\prod_{l\in\tilde{{\mathcal{E}}}}p_{l}\right)\left(\prod_{l\in{\mathcal{E}}\backslash\tilde{{\mathcal{E}}}}1-p_{l}\right)\quad,$
(8d) where $\alpha_{k}$ and $\beta_{k}$ are functions, $Z_{k}$ is a
normalization constant such that $\sum_{\tilde{G}}\beta_{k}(\tilde{G})=1$, and
the functions $\delta_{k+h}$ and $\delta_{k-1}$ are defined by (3).
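The steps (8a)–(8d) can be sketched as a direct enumeration over the $2^{E}$ realizations. The single-link toy network and its $\delta$ below are assumptions for illustration:

```python
from itertools import product

def sihs_estimator(links, p, delta, history, k, H):
    """Sketch of the SIHS estimator (8): exhaustive Bayes over the 2^E
    topology realizations under the static link model.

    delta(kappa, nu, up) is the consistency indicator (3) evaluated on the
    realization whose set of up links is `up`.
    """
    realizations = [frozenset(l for l, bit in zip(links, bits) if bit)
                    for bits in product([0, 1], repeat=len(links))]
    # (8d): prior beta_0 from independent links.
    beta = {up: 1.0 for up in realizations}
    for up in realizations:
        for l in links:
            beta[up] *= p[l] if l in up else 1 - p[l]
    # (8c): innovation step, applied once per observed delivery.
    for kappa, nu in enumerate(history):
        unnorm = {up: delta(kappa, nu, up) * beta[up] for up in realizations}
        Z = sum(unnorm.values())
        beta = {up: v / Z for up, v in unnorm.items()}
    # (8a)-(8b): joint prediction of the next H deliveries.
    return {nus: sum(beta[up] * all(delta(k + h, nus[h], up)
                                    for h in range(H))
                     for up in realizations)
            for nus in product([0, 1], repeat=H)}

# Single-link toy network: delivery succeeds iff the link is up (assumption).
links = [("a", "b")]
delta = lambda kappa, nu, up: int(nu == (("a", "b") in up))
f = sihs_estimator(links, {("a", "b"): 0.8}, delta, history=[1], k=1, H=1)
```

With one observed success on a single static link, the posterior concentrates on the "link up" realization, so the predictor becomes certain of future deliveries; with more links and a real schedule, the same recursion mixes over all consistent realizations.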
### 3.2 GEIHS Network Estimator
For compact notation in the probability expressions below, we write
$\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}}$ in place of
$\bm{\nu}_{{}^{0}}^{{}_{k-1}}=\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}}$, and
write only the random variable, not its value ($\tilde{\cdot}$).
The derivation of the GEIHS network estimator is similar to the previous
derivation, except that the state of the network evolves with every sample
time $k$. Since all the links in the network are independent, the probability
that a given topology $\tilde{G}^{\prime}$ at sample time $k-1$ transitions to
a topology $\tilde{G}$ after one sample time is given by
$\Gamma(\tilde{G};\tilde{G}^{\prime})=\operatorname{{\mathbb{P}}}(G^{{}_{(k)}}|G^{{}_{(k-1)}})=\left(\prod_{l_{1}\in\tilde{{\mathcal{E}}}^{\prime}\cap\tilde{{\mathcal{E}}}}1-p_{l_{1}}^{\mathrm{d}}\right)\left(\prod_{l_{2}\in\tilde{{\mathcal{E}}}^{\prime}\backslash\tilde{{\mathcal{E}}}}p_{l_{2}}^{\mathrm{d}}\right)\left(\prod_{l_{3}\in\tilde{{\mathcal{E}}}\backslash\tilde{{\mathcal{E}}}^{\prime}}p_{l_{3}}^{\mathrm{u}}\right)\left(\prod_{l_{4}\in{\mathcal{E}}\backslash(\tilde{{\mathcal{E}}}^{\prime}\cup\tilde{{\mathcal{E}}})}1-p_{l_{4}}^{\mathrm{u}}\right)\quad.$
(9)
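A sketch of (9) as a product of per-link stay/flip factors; the link names and rates below are illustrative assumptions:

```python
def gamma(up_prev, up_next, links, p_up, p_down):
    """Topology transition probability (9): with independent G-E links,
    each link contributes one stay/flip factor based on its previous state."""
    pr = 1.0
    for l in links:
        was_up, is_up = l in up_prev, l in up_next
        if was_up and is_up:
            pr *= 1 - p_down[l]      # stayed up
        elif was_up:
            pr *= p_down[l]          # went down
        elif is_up:
            pr *= p_up[l]            # came up
        else:
            pr *= 1 - p_up[l]        # stayed down
    return pr

# One link going from up to down (rates are illustrative).
links = [("a", "b")]
p_up, p_down = {("a", "b"): 0.3}, {("a", "b"): 0.1}
g = gamma({("a", "b")}, set(), links, p_up, p_down)
assert abs(g - 0.1) < 1e-12
# The transition probabilities over all next states sum to 1:
g_stay = gamma({("a", "b")}, {("a", "b")}, links, p_up, p_down)
assert abs(g + g_stay - 1.0) < 1e-12
```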
First, express
$f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}})$ as
$\underbrace{\operatorname{{\mathbb{P}}}(\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}|\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}})}_{f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}})}=\sum_{G^{{}_{(k+H-1)}}}\underbrace{\operatorname{{\mathbb{P}}}(\nu_{k+H-1}|G^{{}_{(k+H-1)}})}_{\delta_{k+H-1}(\tilde{\nu}_{H-1};\tilde{G}_{H-1})}\cdot\underbrace{\operatorname{{\mathbb{P}}}(\bm{\nu}_{{}^{k}}^{{}_{k+H-2}},G^{{}_{(k+H-1)}}|\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}})}_{\alpha_{H-1|k-1}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-2}},\tilde{G}_{H-1})}\quad,$
where for $h=2,\dots,H-1$
$\underbrace{\operatorname{{\mathbb{P}}}(\bm{\nu}_{{}^{k}}^{{}_{k+h-1}},G^{{}_{(k+h)}}|\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}})}_{\alpha_{h|k-1}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{h-1}},\tilde{G}_{h})}=\sum_{G^{{}_{(k+h-1)}}}\underbrace{\operatorname{{\mathbb{P}}}(G^{{}_{(k+h)}}|G^{{}_{(k+h-1)}})}_{\Gamma(\tilde{G}_{h};\tilde{G}_{h-1})}\cdot\underbrace{\operatorname{{\mathbb{P}}}(\nu_{k+h-1}|G^{{}_{(k+h-1)}})}_{\delta_{k+h-1}(\tilde{\nu}_{h-1};\tilde{G}_{h-1})}\cdot\underbrace{\operatorname{{\mathbb{P}}}(\bm{\nu}_{{}^{k}}^{{}_{k+h-2}},G^{{}_{(k+h-1)}}|\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}})}_{\alpha_{h-1|k-1}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{h-2}},\tilde{G}_{h-1})}\quad.$
(10)
When $h=1$, replace
$\operatorname{{\mathbb{P}}}(\bm{\nu}_{{}^{k}}^{{}_{k+h-2}},G^{{}_{(k+h-1)}}|\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}})$
in (10) with
$\operatorname{{\mathbb{P}}}(G^{{}_{(k)}}|\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}})=\beta_{k|k-1}(\tilde{G})$.
The value $\beta_{k|k-1}(\tilde{G})$ comes from
$\underbrace{\operatorname{{\mathbb{P}}}(G^{{}_{(k)}}|\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}})}_{\beta_{k|k-1}(\tilde{G})}=\sum_{G^{{}_{(k-1)}}}\underbrace{\operatorname{{\mathbb{P}}}(G^{{}_{(k)}}|G^{{}_{(k-1)}})}_{\Gamma(\tilde{G};\tilde{G}^{\prime})}\cdot\underbrace{\operatorname{{\mathbb{P}}}(G^{{}_{(k-1)}}|\bm{{\mathscr{V}}}_{{}^{0}}^{{}_{k-1}})}_{\beta_{k-1|k-1}(\tilde{G}^{\prime})}\quad.$
The value $\beta_{k-1|k-1}(\tilde{G}^{\prime})$ comes from (7), with
$\beta_{k}$ replaced by $\beta_{k-1|k-1}$ and $\beta_{k-1}$ replaced by
$\beta_{k-1|k-2}$. Finally,
$\beta_{0|-1}(\tilde{G})=\operatorname{{\mathbb{P}}}(G^{{}_{(0)}})$, where all
links are independent and have link probabilities equal to their steady-state
probability of being in state 1, and is expressed in (11f) below.
To summarize, the GEIHS Network Estimator and Packet Delivery Predictor is a
recursive Bayesian estimator. The measurement output step consists of
$f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}})=\sum_{\tilde{G}_{H-1}}\delta_{k+H-1}(\tilde{\nu}_{H-1};\tilde{G}_{H-1})\cdot\alpha_{H-1|k-1}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-2}},\tilde{G}_{H-1})\quad,$
(11a) where the function $\alpha_{H-1|k-1}$ is obtained from the following
recursive equation for $h=2,\ldots,H-1$:
$\alpha_{h|k-1}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{h-1}},\tilde{G}_{h})=\sum_{\tilde{G}_{h-1}}\Gamma(\tilde{G}_{h};\tilde{G}_{h-1})\cdot\delta_{k+h-1}(\tilde{\nu}_{h-1};\tilde{G}_{h-1})\cdot\alpha_{h-1|k-1}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{h-2}},\tilde{G}_{h-1})\quad,$
(11b) with initial condition
$\alpha_{1|k-1}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{0}},\tilde{G}_{1})=\sum_{\tilde{G}}\Gamma(\tilde{G}_{1};\tilde{G})\cdot\delta_{k}(\tilde{\nu}_{0};\tilde{G})\cdot\beta_{k|k-1}(\tilde{G})\quad.$
(11c) The prediction and innovation steps consist of
$\displaystyle\beta_{k|k-1}(\tilde{G})$
$\displaystyle=\sum_{\tilde{G}^{\prime}}\Gamma(\tilde{G};\tilde{G}^{\prime})\cdot\beta_{k-1|k-1}(\tilde{G}^{\prime})$
(11d) $\displaystyle\beta_{k-1|k-1}(\tilde{G})$
$\displaystyle=\frac{\delta_{k-1}({\mathscr{V}}_{k-1};\tilde{G})\cdot\beta_{k-1|k-2}(\tilde{G})}{Z_{k-1}}$
(11e) $\displaystyle\beta_{0|-1}(\tilde{G})$
$\displaystyle=\left(\prod_{l\in\tilde{{\mathcal{E}}}}p_{l}\right)\left(\prod_{l\in{\mathcal{E}}\backslash\tilde{{\mathcal{E}}}}1-p_{l}\right)\quad,$
(11f) where $\alpha_{h|k-1}$, $\beta_{k-1|k-1}$, and $\beta_{k|k-1}$ are
functions, $Z_{k-1}$ is a normalization constant such that
$\sum_{\tilde{G}}\beta_{k-1|k-1}(\tilde{G})=1$, and the functions
$\delta_{\kappa}$ (for the different values of $\kappa$ above) and $\Gamma$
are defined by (3) and (9), respectively.
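The prediction step (11d) and innovation step (11e) can be sketched together as one belief update. The single-link transition table below is computed from illustrative rates $p^{\mathrm{u}}=0.3$, $p^{\mathrm{d}}=0.1$, which are assumptions:

```python
def geihs_beta_update(beta_prev, realizations, gamma, delta_obs):
    """One GEIHS belief update: prediction (11d), then innovation (11e).

    beta_prev maps each topology realization to P(G^(k-1) | past
    deliveries), gamma(g_next, g_prev) is the transition probability (9),
    and delta_obs(g) is the 0/1 consistency of the newest delivery with g.
    """
    # (11d): propagate the belief through the topology Markov chain.
    predicted = {g: sum(gamma(g, gp) * beta_prev[gp] for gp in realizations)
                 for g in realizations}
    # (11e): condition on the observed delivery and normalize by Z.
    unnorm = {g: delta_obs(g) * predicted[g] for g in realizations}
    Z = sum(unnorm.values())
    return {g: v / Z for g, v in unnorm.items()}

# Single G-E link; an observed delivery is consistent only with "up".
UP, DOWN = frozenset({("a", "b")}), frozenset()
realizations = [UP, DOWN]
trans = {(UP, UP): 0.9, (DOWN, UP): 0.1, (UP, DOWN): 0.3, (DOWN, DOWN): 0.7}
beta = geihs_beta_update({UP: 0.75, DOWN: 0.25}, realizations,
                         lambda g, gp: trans[(g, gp)],
                         lambda g: int(g == UP))
```

Unlike the static case, the prediction step keeps spreading belief mass across realizations at every sample time, which is why the GEIHS estimator can track a link that recovers or fails between packets.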
### 3.3 Packet Predictor Complexity
The network estimators are trying to estimate network parameters using
measurements collected at the border of the network, a general problem studied
in the field of network tomography [17] under various problem setups. One of
the greatest challenges in network tomography is getting good estimates with
low computational complexity estimators.
Our proposed network estimators are “optimal” with respect to our models in
the sense that there is no loss of information, but they are computationally
expensive.
###### Property 1.
The SIHS network estimator described by the set of equations (8) takes
$O(E2^{E})$ to initialize and $O(2^{H}2^{E})$ to update the network state
estimate and predictions at each step. The GEIHS network estimator described
by the set of equations (11) takes $O(E2^{2E})$ to initialize and
$O(2^{H}2^{2E})$ for each update step.
###### Proof.
Let $D=\max_{k}(t_{k}^{\prime}-t_{k})$. We assume that converting $\tilde{G}$ to
the set of links that are up, $\tilde{{\mathcal{E}}}$, takes constant time.
Also, one can simulate the path of a packet by looking up the scheduled and
successful link transmissions instead of multiplying matrices to evaluate
$\tilde{F}_{ab}^{(t_{\kappa},t_{\kappa}^{\prime};\tilde{G})}$, so computing
$\delta_{\kappa}$ for each graph $\tilde{G}$ only takes $O(D)$. The
computational complexities below assume that the pdfs can be represented by
matrices, and that multiplying an $\ell\times m$ matrix by an $m\times n$
matrix takes $O(\ell mn)$.
SIHS packet delivery predictor complexity:
Computing $\alpha_{k}$ in (8b) takes $O(D2^{H}2^{E})$ and computing
$\beta_{k}$ in (8c) takes $O(D2^{E})$, since there are $2^{E}$ graphs and
$2^{H}$ packet delivery prediction sequences. Computing
$f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}$ in (8a) takes $O(D2^{H}2^{E})$. The SIHS
packet delivery predictor update step is the aggregate of all these
computations and takes $O(D2^{H}2^{E})$.
The initialization step of the SIHS packet delivery predictor is just
computing $\beta_{0}$ in (8d), which takes $O(E2^{E})$.
GEIHS packet delivery predictor complexity:
Computing $\alpha_{h|k-1}$ in (11b) takes $O(D2^{E}+2^{E}2^{h}+2^{2E}2^{h})$
and computing $\alpha_{1|k-1}$ in (11c) takes $O(D2^{E}+2^{E+1}+2^{2E+1})$, so
computing all of them takes $O(DH2^{E}+2^{H}(2^{E}+2^{2E}))$, or just
$O(DH2^{E}+2^{H}2^{2E})$. Computing $\beta_{k|k-1}$ in (11d) takes
$O(2^{2E})$, and computing $\beta_{k-1|k-1}$ in (11e) takes $O(D2^{E})$.
Computing $f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}$ in (11a) takes
$O(D2^{E}+2^{H}2^{E})$. The GEIHS packet delivery predictor update step is the
aggregate of all these computations and takes $O(DH2^{E}+2^{H}2^{2E})$.
Computing $\Gamma$ in (9) takes $O(E2^{2E})$, and computing $\beta_{0|-1}$ in
(11f) takes $O(E2^{E})$. The initialization step of the GEIHS packet delivery
predictor is the aggregate of these computations and takes $O(E2^{2E})$.
If we assume that the deadline $D$ is short enough to be considered constant,
we get the computational complexities given in Property 1. ∎
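The complexity counts above all come from the same enumerate-and-weigh structure: iterate over the $2^{E}$ link-state graphs and, for the predictions, the $2^{H}$ candidate delivery sequences. The following minimal Python sketch shows that structure on a hypothetical two-link network; the `consistent` and `delivers` predicates stand in for $\delta_{k}$ and the scheduled-transmission simulation, and none of this reproduces the actual equations (8) or (11).

```python
from itertools import product

def update_belief(belief, observation, consistent):
    """Bayes step over all 2^E link-state graphs: zero out graphs that are
    inconsistent with the observed delivery, then renormalize."""
    new = {g: p * consistent(g, observation) for g, p in belief.items()}
    total = sum(new.values())
    return {g: p / total for g, p in new.items()}

def predict(belief, horizon, delivers):
    """pdf over the 2^H future delivery sequences, marginalizing over graphs;
    the nested loops are exactly the 2^H * 2^E factor in the update cost."""
    pdf = {}
    for seq in product([0, 1], repeat=horizon):
        pdf[seq] = sum(p for g, p in belief.items()
                       if tuple(delivers(g, h) for h in range(horizon)) == seq)
    return pdf

# Toy network: E = 2 links in series; a packet is delivered iff both are up.
E = 2
graphs = list(product([0, 1], repeat=E))       # the 2^E link-state graphs
belief = {g: 1 / len(graphs) for g in graphs}  # uniform prior
delivers = lambda g, h: int(all(g))            # links are static in this toy
consistent = lambda g, obs: int(delivers(g, 0) == obs)

belief = update_belief(belief, 1, consistent)  # a delivery was observed
pdf = predict(belief, horizon=2, delivers=delivers)
```

Note that several graphs map to the same delivery sequence (here all three graphs with a dead link yield the all-zero sequence), which is why the belief need not single out one topology to give sharp predictions.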
A good direction for future research is to find lower-complexity, “suboptimal”
network estimators for our problem setup, and compare them to our “optimal”
network estimators.
### 3.4 Discussion
Our network estimators can easily be extended to incorporate additional
observations besides past packet deliveries, such as the packet delay and
packet path traces. The latter can be obtained by recording the state of the
links that the packet has tried to traverse in the packet payload. The
function $\delta_{k-1}$ in (8c) and (11e) just needs to be replaced with
another function that returns 1 if the received observation is consistent
with a network topology $\tilde{G}$, and 0 otherwise. The advantage of using
more observations than the one bit of information provided by a packet
delivery is that it will help the GEIHS network estimator more quickly detect
changes in the network state. A more substantial extension of the GEIHS
network estimator would use additional observations provided by packets from
other flows (not from our controller) to help estimate the network state,
which could significantly decrease the time for the network estimator to
detect a change in the state of the network. This is non-trivial because the
network model would now have to account for queuing at nodes in the network,
which is inevitable with multiple flows.
Note that the network state probability distribution, $\beta_{k}(\tilde{G})$
in (8c) or $\beta_{k-1|k-1}(\tilde{G})$ in (11e), does not need to converge to
a probability distribution describing one topology realization to yield
precise packet predictions
$f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}})$,
where precise means there is one (or very few similar) high probability packet
delivery sequence(s) $\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}}$. Several topology
realizations $\tilde{G}$ may result in the same packet delivery sequence.
Also, note that the GEIHS network estimator performs better when the links in
the network are more bursty. Long bursts of packet losses from bursty links
result in poor control system performance, which is when the network estimator
would help the most.
## 4 FPD Controller
In this section, we derive the FPD controller using dynamic programming. Next,
we present two controllers for comparison with the FPD controller. These
comparative controllers assume particular statistical models (e.g., i.i.d.
Bernoulli) for the packet delivery sequence pdf which may not describe the
actual pdf, while the FPD controller allows for all packet delivery sequence
pdfs. We derive the LQG cost of using these controllers. Finally, we present
the computational complexity of the optimal controller.
### 4.1 Derivation of the FPD Controller
We first present the FPD controller and then present its derivation.
###### Theorem 4.1.
For a plant with state dynamics given by (1), the optimal control policy
operating on the information set (5) which minimizes the cost function (6)
results in an optimal control signal $u_{k}=-L_{k}x_{k}$, where
$L_{k}=\big{(}Q_{2}+B^{T}S_{k+1}\scriptstyle(\nu_{k}=1,\bm{\nu}_{{}^{0}}^{{}_{k-1}})\displaystyle
B\big{)}^{-1}B^{T}S_{k+1}\scriptstyle(\nu_{k}=1,\bm{\nu}_{{}^{0}}^{{}_{k-1}})\displaystyle
A$ (12)
and $S_{k}:\\{0,1\\}^{k}\mapsto{\mathbb{S}}_{+}^{\ell}$ and
$s_{k}:\\{0,1\\}^{k}\mapsto{\mathbb{R}}_{+}$ are the solutions to the cost-to-
go at time $k$, given by
$\displaystyle
S_{k}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{k-1}})\displaystyle={}$
$\displaystyle
Q_{1}+A^{T}\operatorname*{{\mathbb{E}}}\big{[}S_{k+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{k}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{k-1}}\big{]}A$
$\displaystyle-\operatorname{{\mathbb{P}}}(\nu_{k}=1|\bm{\nu}_{{}^{0}}^{{}_{k-1}})\bigg{[}A^{T}S_{k+1}\scriptstyle(\nu_{k}=1,\bm{\nu}_{{}^{0}}^{{}_{k-1}})\displaystyle
B\big{(}Q_{2}+B^{T}S_{k+1}\scriptstyle(\nu_{k}=1,\bm{\nu}_{{}^{0}}^{{}_{k-1}})\displaystyle
B\big{)}^{-1}B^{T}S_{k+1}\scriptstyle(\nu_{k}=1,\bm{\nu}_{{}^{0}}^{{}_{k-1}})\displaystyle
A\bigg{]}$ $\displaystyle
s_{k}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{k-1}})\displaystyle={}$
$\displaystyle\operatorname{trace}\left\\{\operatorname*{{\mathbb{E}}}\big{[}S_{k+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{k}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{k-1}}\big{]}R_{w}\right\\}+\operatorname*{{\mathbb{E}}}\big{[}s_{k+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{k}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{k-1}}\big{]}\quad.$
###### Proof.
The classical problem in Åström [18] is solved by reformulating the original
problem as a recursive minimization of the Bellman equation derived for every
time instant, beginning with $N$. At time $n$, we have the minimization
problem
$\displaystyle\quad\min_{{}_{u_{n},\ldots,u_{N-1}}}\;\operatorname*{{\mathbb{E}}}[x_{N}^{T}Q_{0}x_{N}+\sum_{i=n}^{N-1}x_{i}^{T}Q_{1}x_{i}+\nu_{i}u_{i}^{T}Q_{2}u_{i}]$
$\displaystyle={}$
$\displaystyle\operatorname*{{\mathbb{E}}}\left[\min_{{}_{u_{n},\ldots,u_{N-1}}}\operatorname*{{\mathbb{E}}}[x_{N}^{T}Q_{0}x_{N}+\sum_{i=n}^{N-1}x_{i}^{T}Q_{1}x_{i}+\nu_{i}u_{i}^{T}Q_{2}u_{i}|{\mathcal{I}}^{{}^{\mathscr{C}}}_{n}]\right]$
$\displaystyle={}$
$\displaystyle\operatorname*{{\mathbb{E}}}\left[\quad\;\min_{{}_{u_{n}}}\quad\operatorname*{{\mathbb{E}}}[x_{n}^{T}Q_{1}x_{n}+\nu_{n}u_{n}^{T}Q_{2}u_{n}+V_{n+1}|{\mathcal{I}}^{{}^{\mathscr{C}}}_{n}]\right]\quad,$
where $V_{n}$ is the cost-to-go (the Bellman value function) at time $n$, given by
$V_{n}=\min_{{}_{u_{n}}}\;\operatorname*{{\mathbb{E}}}\left[x_{n}^{T}Q_{1}x_{n}+\nu_{n}u_{n}^{T}Q_{2}u_{n}+V_{n+1}|{\mathcal{I}}^{{}^{\mathscr{C}}}_{n}\right]\quad.$
To solve the above nested minimization problem, we assume that the solution to
the functional is of the form
$V_{n}=x_{n}^{T}S_{n}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle
x_{n}+s_{n}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle$, where
$S_{n}$ and $s_{n}$ are functions of the past packet deliveries
$\bm{\nu}_{{}^{0}}^{{}_{n-1}}$ that return a positive semidefinite matrix and
a scalar, respectively. Note that neither $S_{n}$ nor $s_{n}$ is a function of
the applied control sequence $\bm{u}_{{}^{0}}^{{}_{n-1}}$. We prove this
supposition using induction. The initial condition at time $N$ is trivially
obtained as $V_{N}=x_{N}^{T}Q_{0}x_{N}$, with $S_{N}=Q_{0}$ and $s_{N}=0$. We
now assume that the functional at time $n+1$ has a solution of the desired
form, and attempt to derive this at time $n$. We have
$\displaystyle V_{n}={}$
$\displaystyle\min_{{}_{u_{n}}}\;\operatorname*{{\mathbb{E}}}\left[x_{n}^{T}Q_{1}x_{n}+\nu_{n}u_{n}^{T}Q_{2}u_{n}+x_{n+1}^{T}S_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle
x_{n+1}+s_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle|{\mathcal{I}}^{{}^{\mathscr{C}}}_{n}\right]$
$\displaystyle={}$
$\displaystyle\min_{{}_{u_{n}}}\;\operatorname*{{\mathbb{E}}}\Big{[}x_{n}^{T}\big{(}Q_{1}+A^{T}S_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle
A\big{)}x_{n}+\nu_{n}u_{n}^{T}\big{(}Q_{2}+B^{T}S_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle
B\big{)}u_{n}$
$\displaystyle\quad\quad+\nu_{n}x_{n}^{T}A^{T}S_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle
Bu_{n}+\nu_{n}u_{n}^{T}B^{T}S_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle
Ax_{n}+w_{n}^{T}S_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle
w_{n}+s_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle|{\mathcal{I}}^{{}^{\mathscr{C}}}_{n}\Big{]}$
$\displaystyle={}$
$\displaystyle\min_{{}_{u_{n}}}\;x_{n}^{T}\bigg{(}Q_{1}+A^{T}\operatorname*{{\mathbb{E}}}\big{[}S_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}A\bigg{)}x_{n}+\operatorname{trace}\bigg{\\{}\operatorname*{{\mathbb{E}}}\big{[}S_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}R_{w}\bigg{\\}}+\operatorname*{{\mathbb{E}}}[s_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}]$
$\displaystyle\quad\quad+\operatorname{{\mathbb{P}}}(\nu_{n}=1|\bm{\nu}_{{}^{0}}^{{}_{n-1}})\bigg{[}u_{n}^{T}\big{(}Q_{2}+B^{T}S_{n+1}\scriptstyle(\nu_{n}=1,\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle
B\big{)}u_{n}$
$\displaystyle\quad\quad+x_{n}^{T}A^{T}S_{n+1}\scriptstyle(\nu_{n}=1,\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle
Bu_{n}+u_{n}^{T}B^{T}S_{n+1}\scriptstyle(\nu_{n}=1,\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle
Ax_{n}\bigg{]}\quad.$ (13)
In the last equation above, the expectation of the terms preceded by $\nu_{n}$
requires the conditional probability
$\operatorname{{\mathbb{P}}}(\nu_{n}=1|\bm{\nu}_{{}^{0}}^{{}_{n-1}})$ and an
evaluation of $S_{n+1}$ with $\nu_{n}=1$. The corresponding terms with
$\nu_{n}=0$ vanish as they are multiplied by $\nu_{n}$. The control input at
sample time $n$ which minimizes the above expression is found to be
$u_{n}=-L_{n}x_{n}$, where the optimal control gain $L_{n}$ is given by (12),
with $k$ replaced by $n$. Substituting for $u_{n}$ in the functional $V_{n}$,
we get a solution to the functional of the desired form, with $S_{n}$ and
$s_{n}$ given by
$\displaystyle
S_{n}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle={}$
$\displaystyle
Q_{1}+A^{T}\operatorname*{{\mathbb{E}}}\big{[}S_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}A$
$\displaystyle-\operatorname{{\mathbb{P}}}(\nu_{n}=1|\bm{\nu}_{{}^{0}}^{{}_{n-1}})\bigg{[}A^{T}S_{n+1}\scriptstyle(\nu_{n}=1,\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle
B\big{(}Q_{2}+B^{T}S_{n+1}\scriptstyle(\nu_{n}=1,\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle
B\big{)}^{-1}B^{T}S_{n+1}\scriptstyle(\nu_{n}=1,\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle
A\bigg{]}$ (14a) $\displaystyle
s_{n}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle={}$
$\displaystyle\operatorname{trace}\left\\{\operatorname*{{\mathbb{E}}}\big{[}S_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}R_{w}\right\\}+\operatorname*{{\mathbb{E}}}\big{[}s_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}\quad.$
(14b)
Notice that $S_{n}$ and $s_{n}$ are functions of the variables
$\bm{\nu}_{{}^{0}}^{{}_{n-1}}$. When $n=k$, the current sample time, these
variables are known, and $S_{n}$ and $s_{n}$ are not random. But $S_{n+i}$ and
$s_{n+i}$, for values of $i\in\\{1,\ldots,N-n-1\\}$, are functions of the
variables $\bm{\nu}_{{}^{0}}^{{}_{n+i-1}}$, of which only the variables
$\bm{\nu}_{{}^{n}}^{{}_{n+i-1}}$ are random variables since they are unknown
to the controller at sample time $n=k$. Since the value of $S_{n+1}$ is
required at sample time $n$, we compute its conditional expectation as
$\operatorname*{{\mathbb{E}}}\big{[}S_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}=\operatorname{{\mathbb{P}}}(\nu_{n}=1|\bm{\nu}_{{}^{0}}^{{}_{n-1}})S_{n+1}\scriptstyle(\nu_{n}=1,\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle+\operatorname{{\mathbb{P}}}(\nu_{n}=0|\bm{\nu}_{{}^{0}}^{{}_{n-1}})S_{n+1}\scriptstyle(\nu_{n}=0,\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle\quad.$
(14c)
The above computation requires an evaluation of
$S_{n+i}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n+i-1}})\displaystyle$ through a
backward recursion for $i\in\\{1,\ldots,N-n-1\\}$ for all combinations of
$\bm{\nu}_{{}^{n+i}}^{{}_{N-2}}$. More explicitly, the expression at any time
$n+i$, for $i\in\\{N-n-1,\ldots,1\\}$, is given by
$\displaystyle\operatorname*{{\mathbb{E}}}\big{[}S_{n+i}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n+i-1}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}={}$
$\displaystyle
Q_{1}+A^{T}\operatorname*{{\mathbb{E}}}\big{[}S_{n+i+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n+i}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}A$
$\displaystyle-\sum_{\bm{\tilde{\nu}}_{{}^{0}}^{{}_{i-1}}\in\\{0,1\\}^{i}}\operatorname{{\mathbb{P}}}\big{(}\nu_{n+i}=1,\bm{\nu}_{{}^{n}}^{{}_{n+i-1}}=\bm{\tilde{\nu}}_{{}^{0}}^{i-1}|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{)}$
$\displaystyle\times
A^{T}S_{n+i+1}\scriptstyle(\nu_{n+i}=1,\bm{\nu}_{{}^{n}}^{{}_{n+i-1}}=\bm{\tilde{\nu}}_{{}^{0}}^{i-1},\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle
B$
$\displaystyle\times\big{(}Q_{2}+B^{T}S_{n+i+1}\scriptstyle(\nu_{n+i}=1,\bm{\nu}_{{}^{n}}^{{}_{n+i-1}}=\bm{\tilde{\nu}}_{{}^{0}}^{i-1},\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle
B\big{)}^{-1}$ $\displaystyle\times
B^{T}S_{n+i+1}\scriptstyle(\nu_{n+i}=1,\bm{\nu}_{{}^{n}}^{{}_{n+i-1}}=\bm{\tilde{\nu}}_{{}^{0}}^{i-1},\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle
A$
$\displaystyle\operatorname*{{\mathbb{E}}}\big{[}s_{n+i}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n+i-1}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}$
$\displaystyle=\operatorname{trace}\left\\{\operatorname*{{\mathbb{E}}}\big{[}S_{n+i+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n+i}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}R_{w}\right\\}+\operatorname*{{\mathbb{E}}}\big{[}s_{n+i+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n+i}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}\quad.$
Using the above expressions, we obtain the net cost to be
$J=\operatorname{trace}{S_{0}R_{0}}+\sum_{n=0}^{N-1}\operatorname{trace}\left\\{\operatorname*{{\mathbb{E}}}[S_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle]R_{w}\right\\}\quad.$
(15)
Notice that the control inputs $u_{n}$ are only applied to the plant and do
not influence the network or $\bm{\nu}_{{}^{0}}^{{}_{N-1}}$. Thus, the
architecture separates into a network estimator and controller, as shown in
Figure 2. ∎
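For concreteness, one backward step of this recursion can be sketched numerically. Everything below is a hypothetical stand-in: the plant matrices are arbitrary, `p_up` plays the role of $\operatorname{{\mathbb{P}}}(\nu_{n}=1|\bm{\nu}_{{}^{0}}^{{}_{n-1}})$, and `S1_up`, `S1_down` play the roles of $S_{n+1}(\nu_{n}=1,\cdot)$ and $S_{n+1}(\nu_{n}=0,\cdot)$ in (14c).

```python
import numpy as np

def gain(S_up, A, B, Q2):
    """Optimal gain (12): L = (Q2 + B' S B)^{-1} B' S A, with S evaluated
    at nu = 1, since the control only acts when the packet is delivered."""
    return np.linalg.solve(Q2 + B.T @ S_up @ B, B.T @ S_up @ A)

def riccati_step(S_up, S_down, p_up, A, B, Q1, Q2):
    """One backward step (14a), using the conditional expectation (14c):
    E[S_{n+1} | past] = p_up * S_up + (1 - p_up) * S_down."""
    ES = p_up * S_up + (1 - p_up) * S_down
    L = gain(S_up, A, B, Q2)
    return Q1 + A.T @ ES @ A - p_up * (A.T @ S_up @ B @ L)

# Hypothetical 2-state plant and next-step cost matrices for both branches.
A = np.array([[0.0, 1.5], [1.5, 0.0]])
B = np.eye(2)
Q1 = Q2 = np.eye(2)
S1_up = S1_down = 10 * np.eye(2)

S = riccati_step(S1_up, S1_down, p_up=0.9, A=A, B=B, Q1=Q1, Q2=Q2)
```

Using `np.linalg.solve` instead of forming the matrix inverse explicitly is the numerically stable way to evaluate (12).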
### 4.2 Comparative controllers
In this section, we compare the performance of the FPD controller to two
controllers that assume particular statistical models for the packet delivery
sequence pdf, the IID controller and the ON controller.
_IID Controller:_ The IID controller was described in Schenato et al. [7] and
assumes that the packet deliveries are i.i.d. Bernoulli with packet delivery
probability equal to the a priori probability of delivering a packet through
the network.777Using the stationary probability of each link under the G-E
link model to calculate the end-to-end probability of delivering a packet
through the network. This is our first comparative controller, where
$u_{k}=-L_{k}^{\textrm{IID}}x_{k}$ and the control gain $L_{k}^{\textrm{IID}}$
is given by
$L_{k}^{\textrm{IID}}=\big{(}Q_{2}+B^{T}S_{k+1}^{\textrm{IID}}B\big{)}^{-1}B^{T}S_{k+1}^{\textrm{IID}}A\quad.$
Here, $S_{k+1}^{\textrm{IID}}$ is the solution to the Riccati equation for the
control problem where the packet deliveries are assumed to be i.i.d.
Bernoulli. The backward recursion is initialized to
$S_{N}^{\textrm{IID}}=Q_{0}$ and is given by
$S_{k}^{\textrm{IID}}=Q_{1}+A^{T}S_{k+1}^{\textrm{IID}}A-\operatorname{{\mathbb{P}}}(\nu_{k}=1)A^{T}S_{k+1}^{\textrm{IID}}B\big{(}Q_{2}+B^{T}S_{k+1}^{\textrm{IID}}B\big{)}^{-1}B^{T}S_{k+1}^{\textrm{IID}}A\quad.$
_ON Controller:_ The ON controller assumes that the packets are always
delivered, or that the network is always online. This is our second
comparative controller, where $u_{k}=-L_{k}^{\textrm{ON}}x_{k}$ and the
control gain $L_{k}^{\textrm{ON}}$ is given by
$L_{k}^{\textrm{ON}}=\big{(}Q_{2}+B^{T}S_{k+1}^{\textrm{ON}}B\big{)}^{-1}B^{T}S_{k+1}^{\textrm{ON}}A\quad.$
Here, $S_{k+1}^{\textrm{ON}}$ is the solution to the Riccati equation for the
classical control problem which assumes no packet losses on the actuation
channel. The backward recursion is initialized to $S_{N}^{\textrm{ON}}=Q_{0}$
and is given by
$S_{k}^{\textrm{ON}}=Q_{1}+A^{T}S_{k+1}^{\textrm{ON}}A-A^{T}S_{k+1}^{\textrm{ON}}B\big{(}Q_{2}+B^{T}S_{k+1}^{\textrm{ON}}B\big{)}^{-1}B^{T}S_{k+1}^{\textrm{ON}}A\quad.$
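Both comparative recursions share one backward Riccati form, differing only in the assumed delivery probability: the ON controller is the special case $\operatorname{{\mathbb{P}}}(\nu_{k}=1)=1$. A sketch with hypothetical plant matrices:

```python
import numpy as np

def riccati_backward(A, B, Q0, Q1, Q2, N, p):
    """Backward recursion S_N = Q0, then
    S_k = Q1 + A'S A - p A'S B (Q2 + B'S B)^{-1} B'S A.
    p = P(nu = 1) gives the IID controller; p = 1 gives the ON controller."""
    S = Q0
    gains = []
    for _ in range(N):
        L = np.linalg.solve(Q2 + B.T @ S @ B, B.T @ S @ A)
        gains.append(L)
        S = Q1 + A.T @ S @ A - p * A.T @ S @ B @ L
    gains.reverse()          # gains[k] now corresponds to sample time k
    return S, gains

A = np.array([[0.0, 1.5], [1.5, 0.0]])   # hypothetical plant
B = np.eye(2)
Q0 = 10 * np.eye(2)
Q1 = Q2 = np.eye(2)

S_iid, L_iid = riccati_backward(A, B, Q0, Q1, Q2, N=12, p=0.8)
S_on, L_on = riccati_backward(A, B, Q0, Q1, Q2, N=12, p=1.0)
```

As expected, the cost-to-go under assumed losses dominates the loss-free one ($S_{k}^{\textrm{IID}}\succeq S_{k}^{\textrm{ON}}$), since a smaller $p$ subtracts a smaller positive-semidefinite term at each step.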
_Comparative Cost:_ The FPD controller is the most general form of a causal,
optimal LQG controller that takes into account the packet delivery sequence
pdf. It does not assume the packet delivery sequence pdf comes from a
particular statistical model. Approximating the actual packet delivery
sequence pdf with a pdf described by a particular statistical model, and then
computing the optimal control policy, will result in a suboptimal controller.
However, it may be less computationally expensive to obtain the control gains
for such a suboptimal controller. For example, the IID controller and the ON
controller are suboptimal controllers for networks like the one described in
Section 2.1, since they presume a statistical model that is mismatched to the
packet delivery sequence pdf obtained from the network model.
* Remark
The average LQG cost of using a controller with control gain
$L_{n}^{{\textrm{comp}}}$ is
$J=\operatorname{trace}{S_{0}^{\textrm{sopt}}R_{0}}+\sum_{n=0}^{N-1}\operatorname{trace}\left\\{\operatorname*{{\mathbb{E}}}[S_{n+1}^{\textrm{sopt}}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle]R_{w}\right\\}\quad,$
(16a)
where
$\displaystyle
S_{n}^{\textrm{sopt}}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle={}$
$\displaystyle
Q_{1}+A^{T}\operatorname*{{\mathbb{E}}}\big{[}S_{n+1}^{\textrm{sopt}}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}A+\operatorname{{\mathbb{P}}}(\nu_{n}=1|\bm{\nu}_{{}^{0}}^{{}_{n-1}})$
$\displaystyle\times\bigg{[}L_{n}^{{\textrm{comp}}^{T}}\big{(}Q_{2}+B^{T}S_{n+1}^{\textrm{sopt}}\scriptstyle(\nu_{n}=1,\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle
B\big{)}L_{n}^{{\textrm{comp}}}$
$\displaystyle-A^{T}S_{n+1}^{\textrm{sopt}}\scriptstyle(\nu_{n}=1,\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle
BL_{n}^{{\textrm{comp}}}-L_{n}^{{\textrm{comp}}^{T}}B^{T}S_{n+1}^{\textrm{sopt}}\scriptstyle(\nu_{n}=1,\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle
A\bigg{]}\quad,$ (16b)
and
$\operatorname*{{\mathbb{E}}}\big{[}S_{n+1}^{\textrm{sopt}}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}$
is computed in a similar manner to (14c). The control gain
$L_{n}^{{\textrm{comp}}}$ can be the gain of a comparative controller (e.g.,
$L_{n}^{\textrm{IID}}$ or $L_{n}^{\textrm{ON}}$) where the statistical model
for the packet delivery sequence is mismatched to the actual model. ∎
This can be seen from the proof of Theorem 4.1, if we substitute the control
input $u_{n}^{{\textrm{sopt}}}=-L_{n}^{{\textrm{comp}}}x_{n}$ into (13)
instead of minimizing the expression to find the optimal $u_{n}$. On
simplifying, we get the solution to the cost-to-go $V_{n}$ of the form
$x_{n}^{T}S_{n}^{\textrm{sopt}}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle
x_{n}+s_{n}^{\textrm{sopt}}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle$,
with $S_{n}^{\textrm{sopt}}$ given by (16b) and $s_{n}^{\textrm{sopt}}$ given
by
$s_{n}^{\textrm{sopt}}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n-1}})\displaystyle=\operatorname{trace}\left\\{\operatorname*{{\mathbb{E}}}\big{[}S_{n+1}^{\textrm{sopt}}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}R_{w}\right\\}+\operatorname*{{\mathbb{E}}}\big{[}s_{n+1}^{\textrm{sopt}}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{n-1}}\big{]}\quad.$
### 4.3 Algorithm to Compute Optimal Control Gain
At sample time $k$, we have $\bm{\nu}_{{}^{0}}^{{}_{k-1}}$. To compute $L_{k}$
given in (12), we need
$S_{k+1}\scriptstyle(\nu_{k}=1,\bm{\nu}_{{}^{0}}^{{}_{k-1}})\displaystyle$,
which can only be obtained through a backward recursion from $S_{N}$. This
requires knowledge of $\bm{\nu}_{{}^{k}}^{{}_{N-1}}$, which are unavailable at
sample time $k$. Thus, we must evaluate $\\{S_{k+1},\ldots,S_{N}\\}$ for every
possible sequence of arrivals $\bm{\nu}_{{}^{k}}^{{}_{N-1}}$. This algorithm
is described below.
1. 1.
Initialization:
$S_{N}\scriptstyle(\bm{\nu}_{{}^{k}}^{{}_{N-1}}=\bm{\tilde{\nu}}_{{}^{0}}^{{}_{N-k-1}},\bm{\nu}_{{}^{0}}^{{}_{k-1}})\displaystyle=Q_{0},$
$\forall\;\bm{\tilde{\nu}}_{{}^{0}}^{{}_{N-k-1}}\in\\{0,1\\}^{N-k}$.
2. 2.
for $n=N-1:-1:k+1$
* a)
Using (14c), compute
$\operatorname*{{\mathbb{E}}}\big{[}S_{n+1}\scriptstyle(\bm{\nu}_{{}^{0}}^{{}_{n}})\displaystyle|\bm{\nu}_{{}^{0}}^{{}_{k-1}},\bm{\tilde{\nu}}_{{}^{0}}^{{}_{n-k-1}}\big{]},$
$\forall\;\bm{\tilde{\nu}}_{{}^{0}}^{{}_{n-k-1}}\in\\{0,1\\}^{n-k}$.
* b)
Using (14a), compute
$S_{n}\scriptstyle(\bm{\nu}_{{}^{k}}^{{}_{n-1}}=\bm{\tilde{\nu}}_{{}^{0}}^{{}_{n-k-1}},\bm{\nu}_{{}^{0}}^{{}_{k-1}})\displaystyle,$
$\forall\;\bm{\tilde{\nu}}_{{}^{0}}^{{}_{n-k-1}}\in\\{0,1\\}^{n-k}$.
3. 3.
Compute $L_{k}$ using
$S_{k+1}\scriptstyle(\nu_{k}=1,\bm{\nu}_{{}^{0}}^{{}_{k-1}})\displaystyle$.
For $k=0$, the values $S_{0}$,
$\operatorname*{{\mathbb{E}}}[S_{1}\scriptstyle(\nu_{0})\displaystyle]$, and
the other values obtained above can be used to evaluate the cost function
according to (15).
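The algorithm above can be sketched directly. Everything below the function definition is a hypothetical example: the plant matrices are arbitrary, and the bursty predictor `p_up` stands in for the conditional probabilities $\operatorname{{\mathbb{P}}}(\nu_{n}=1|\bm{\nu}_{{}^{0}}^{{}_{n-1}})$ that the network estimator would supply.

```python
import numpy as np
from itertools import product

def fpd_gain(A, B, Q0, Q1, Q2, N, k, p_up):
    """FPD gain L_k per Section 4.3: evaluate S_n backwards from S_N = Q0
    for every possible future delivery sequence, then apply (12) to
    S_{k+1}(nu_k = 1, past)."""
    # S maps each candidate future (nu_k, ..., nu_{n-1}) to a cost matrix.
    S = {seq: Q0 for seq in product([0, 1], repeat=N - k)}
    for n in range(N - 1, k, -1):
        S_prev = {}
        for seq in product([0, 1], repeat=n - k):     # nu_k .. nu_{n-1}
            p = p_up(seq)                             # P(nu_n = 1 | history)
            ES = p * S[seq + (1,)] + (1 - p) * S[seq + (0,)]        # (14c)
            Su = S[seq + (1,)]
            L = np.linalg.solve(Q2 + B.T @ Su @ B, B.T @ Su @ A)
            S_prev[seq] = Q1 + A.T @ ES @ A - p * A.T @ Su @ B @ L  # (14a)
        S = S_prev
    Su = S[(1,)]                                      # S_{k+1}(nu_k = 1, past)
    return np.linalg.solve(Q2 + B.T @ Su @ B, B.T @ Su @ A)         # (12)

# Hypothetical bursty predictor: a delivery makes the next delivery likely.
p_up = lambda seq: 0.9 if (not seq or seq[-1] == 1) else 0.3

A = np.array([[0.0, 1.5], [1.5, 0.0]])
B = np.eye(2)
Q0 = 10 * np.eye(2)
Q1 = Q2 = np.eye(2)
L_k = fpd_gain(A, B, Q0, Q1, Q2, N=8, k=0, p_up=p_up)
```

The dictionary at level $n$ holds $2^{n-k}$ matrices, so the total work grows as $2^{N-k}$, which is exactly the exponential factor in Property 2.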
### 4.4 Computational Complexity of Optimal Control Gain
The FPD controller is optimal but computationally expensive, as it requires an
enumeration of all possible packet delivery sequences from the current sample
time until the end of the control horizon to calculate the optimal control
gain (12) at every sample time $k$.
###### Property 2.
The algorithm presented in Section 4.3 for computing the optimal control gain
for the FPD controller takes $O(q^{3}(N-k)2^{N-k})$ operations at each sample
time $k$, where $q=\max(\ell,m)$ and $\ell$ and $m$ are the dimensions of the
state and control vectors.
###### Proof.
The computational complexities below assume that multiplying an $\ell\times m$
matrix with an $m\times n$ matrix takes $O(\ell mn)$ operations, and that
inverting an $\ell\times\ell$ matrix takes $O(\ell^{3})$ operations.
For the computation of $L_{k}$ in (12), we need to run the algorithm presented
in Section 4.3. The steps within the for-loop (Step 2) of the algorithm
require matrix multiplications and inversions that take
$O((2\ell^{3}+6\ell^{2}m+2\ell^{2}+2\ell m^{2}+m^{3}+m^{2})2^{N-k})$
operations, or $O(q^{3}2^{N-k})$ operations if we let $q=\max(\ell,m)$. This
must be repeated $N-k-1$ times in the for-loop, so Step 2 takes
$O(q^{3}(N-k-1)2^{N-k})$.
Finally, Step 3 takes $O(4\ell^{2}m+\ell m^{2}+m^{3}+m^{2})$ operations for
the matrix multiplications and inversions, which simplifies to $O(q^{3})$.
Combining these results and simplifying yields the computational complexity
given in Property 2. ∎
For the SIHS network model, once the network state estimates from the SIHS
network estimator converge, the conditional probabilities
$f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}$ will not change and the computations can
be reused. But, for a network that evolves over time, like the GEIHS network
model, the computations cannot be reused, and the computational cost remains
high.
## 5 Examples and Simulations
Using the system architecture depicted in Figure 2, we will demonstrate the
GEIHS network estimator on a small mesh network and use the packet delivery
predictions in our FPD controller. Figure 6 depicts the routing topology and
short repeating schedule of the network. Packets are generated at the source
every 409 time slots (effectively, every $9+4K$ time slots where $K$ is a very
large integer, so we can assume slow system dynamics with respect to time
slots and ignore the delay introduced by the network), and the packet delivery
deadline is $t_{k}^{\prime}-t_{k}=9,\forall k$. The network estimator assumes
all links have $p^{\mathrm{u}}=0.0135$ and
$p^{\mathrm{d}}=0.0015$.
Figure 6: Example of a simple mesh network for network estimation (routing topology and schedule).
The packet delivery predictions from the network estimator are shown in Figure
7. Although the network estimator provides
$f_{\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}}(\bm{\tilde{\nu}}_{{}^{0}}^{{}_{H-1}})$, at
each sample time $k$ we plot the average prediction
$\operatorname*{{\mathbb{E}}}[\bm{\nu}_{{}^{k}}^{{}_{k+H-1}}]$. In this
example, all the links are up for $k\in\\{1,\ldots,4\\}$ and then link $(3,b)$
fails from $k=5$ onwards. After seeing a packet loss at $k=5$, the network
estimator revises its packet delivery predictions and now thinks there will be
a packet loss at $k=7$. The average prediction for the packet delivery at a
particular sample time tends toward 1 or 0 as the network estimator receives
more information (in the form of packet deliveries) about the new state of the
network.
Figure 7: Packet delivery predictions when the network in Figure 6 has all
links up and then link $(3,b)$ fails.
The prediction for $k=7$ (packet generated at schedule time slot 3) at $k=5$
is influenced by the packet delivery at $k=5$ (packet generated at schedule
time slot 1) because hop-by-hop routing allows the packets to traverse the
same links under some realizations of the underlying routing topology $G$.
Mesh networks with many interleaved paths allow packets generated at different
schedule time slots to provide information about each other’s deliveries,
provided the links in the network have some memory. As discussed in Section
3.4, since a packet delivery provides only one bit of information about the
network state, it may take several packet deliveries to get good predictions
after the network changes.
Figure 8: Plot of the different control signals and state vectors when using
the FPD controller, an IID controller, and an ON controller (see text for
details).
Now, consider a linear plant with the following parameters:
$\displaystyle A=\left[\begin{array}[]{cc}0&1.5\\ 1.5&0\end{array}\right],\quad B=\left[\begin{array}[]{cc}5&0\\ 0&0.2\end{array}\right],\quad R_{w}=\left[\begin{array}[]{cc}0.1&0\\ 0&0.1\end{array}\right],\quad R_{0}=\left[\begin{array}[]{cc}10&0\\ 0&10\end{array}\right],$
$\displaystyle Q_{1}=Q_{2}=\left[\begin{array}[]{cc}1&0\\ 0&1\end{array}\right],\quad Q_{0}=\left[\begin{array}[]{cc}10&0\\ 0&10\end{array}\right]\quad.$
The state transition matrix $A$ swaps the two components of the state and
scales them by $1.5$ at every sampling instant. The input matrix $B$ requires
the second component of the
control input to be larger in magnitude than the first component to have the
same effect on the respective component of the state. Also, the final state is
weighted more than the other states in the cost criterion. We compare the
three finite horizon LQG controllers discussed in Section 4, namely the FPD
controller, the IID controller, and the ON controller with their costs (15)
and (16a).
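A short sketch reproducing these parameters (with $Q_{0}$ the terminal weight, as in the proof of Theorem 4.1) and checking the structural claims about $A$ and $B$:

```python
import numpy as np

A = np.array([[0.0, 1.5], [1.5, 0.0]])   # swaps the components, scales by 1.5
B = np.array([[5.0, 0.0], [0.0, 0.2]])   # second input channel is 25x weaker
Rw = 0.1 * np.eye(2)                     # process noise covariance
R0 = 10 * np.eye(2)                      # initial state covariance
Q1 = Q2 = np.eye(2)
Q0 = 10 * np.eye(2)                      # terminal state weighted 10x

x = np.array([1.0, 0.0])
x_next = A @ x                           # the state is swapped and expanded
# To move the second state component as much as the first, the second
# control component must be 25 times larger, since (B u)_2 = 0.2 u_2.
```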
The controllers are connected to the plant at sample times $k=9,10,11$ through
the example network of Figure 6, under the scenario of Figure 7. Figure 8
shows the control signals
computed by the different controllers and the plant states when the control
signals are applied following the actual packet delivery sequence. From the
predictions at $k=8,9,10$ in Figure 7, we see that the FPD controller has
better knowledge of the packet delivery sequence than the other two
controllers. The FPD controller uses this knowledge to compute an optimal
control signal that outputs a large magnitude for the second component of
$u_{10}$, despite the high cost of this signal. The IID and ON controllers
believe the control packet is likely to be delivered at $k=11$ and choose,
instead, to output a smaller increase in the first component of $u_{11}$,
since this will have the same effect on the final state if the control packet
at $k=11$ is successfully delivered.
The FPD controller is better than the other controllers at driving the first
component of the state close to zero at the end of the control horizon,
$k=12$. Thus, the packet delivery predictions from the network estimator help
the FPD controller significantly lower its LQG cost, as shown in Table 1. The
costs reported here are obtained from Monte-Carlo simulations of the system,
averaged over 10,000 runs, but with the network state set to the one described
above.
Table 1: Simulated LQG Costs (10,000 runs) for the Example Described in Section 5

FPD Controller | IID Controller | ON Controller
---|---|---
681.68 | 1,008.2 | 1,158.9
## 6 Discussion on Network Model Selection
The ability of the network estimator to accurately predict packet deliveries
is dependent on the network model. A natural objection to the GEIHS network
model is that it assumes links are independent, and that the G-E link model
[15] does not capture the full behavior of a lossy and bursty wireless link.
Why not use one of the more sophisticated link models mentioned by Willig et
al. [15]? Why not use a network model that can capture correlation between the
links in the network? A good network model must be rich enough to capture the
relevant behavior of the actual network, but not have too many parameters that
are difficult to obtain.
In our problem setup, the relevant behavior is the packet delivery sequence of
the network. As mentioned in Section 3.4, the network state probability
distribution does not need to identify the exact network topology realization
to get precise packet delivery predictions. In this regard, the GEIHS network
model has too many states ($2^{E}$ states) and may be overmodeling the actual
network. However, the more relevant question is: Does the GEIHS network model
yield accurate packet delivery predictions, predictions that are close to the
actual future packet delivery sequence? Do the simplifications from assuming
link independence and using a G-E link model result in inaccurate packet
delivery predictions? These questions need further investigation, involving
real-world experiments.
Our GEIHS network model has as parameters the routing topology $G$, the
schedule $\mathbf{F}^{(T)}$, the G-E link transition probabilities
$\\{p_{l}^{\mathrm{u}},p_{l}^{\mathrm{d}}\\}_{l\in{\mathcal{E}}}$, the source
$a$, the sink $b$, the packet generation times $t_{k}$, and the deadlines
$t_{k}^{\prime}$. The most difficult parameters to obtain are the link
transition probabilities, which must be estimated by link estimators running
on the nodes and relayed to the GEIHS network estimator. Furthermore, on a
real network these parameters will change over time, albeit at a slower time
scale than the link state changes. The issue of how to obtain these parameters
is not addressed in this paper.
Despite its limitations, the GEIHS network model is a good basis for
comparisons when other network models for our problem setup are proposed in
the future. It also raises several related research questions and issues.
Are there classes of routing topologies where packet delivery statistics are
less sensitive to the parameters in our G-E link model $p_{l}^{\mathrm{u}}$
and $p_{l}^{\mathrm{d}}$? How do we build networks (e.g., select routing
topologies and schedules) that are “robust” to link modeling error and provide
good packet delivery statistics (e.g., low packet loss, low delay) for NCSs?
The latter part of the question, building networks with good packet delivery
statistics, is partially addressed by other works in the literature like
Soldati et al. [19], which studies the problem of scheduling a network to
optimize reliability given a routing topology and packet delivery deadline.
Another issue arises when we use a controller that reacts to estimates of the
network’s state. In our problem setup, if the network estimator gives wrong
(inaccurate) packet delivery predictions, the FPD controller can actually
perform _worse_ than the ON controller. How do we design FPD controllers that
are robust to inaccurate packet delivery predictions?
## 7 Conclusions
This paper proposes two network estimators based on simple network models to
characterize wireless mesh networks for NCSs. The goal is to obtain a better
abstraction of the network, and interface to the network, to present to the
controller and (future work) network manager. To get better performance in a
NCS, the network manager needs to _control and reconfigure_ the network to
reduce outages and the controller needs to _react to or compensate for_ the
network when there are unavoidable outages. We studied a specific NCS
architecture where the actuation channel was over a lossy wireless mesh
network and a network estimator provided packet delivery predictions for a
finite horizon, Future-Packet-Delivery-optimized LQG controller.
There are several directions for extending the basic problem setup in this
paper, including those mentioned in Sections 3.3, 3.4, and 6. For instance,
placing the network estimator(s) on the actuators in the general system
architecture depicted in Figure 1 is a more realistic setup but will introduce
a lossy channel between the network estimator(s) and the controller(s). Also,
one can study the use of packet delivery predictions in a receding horizon
controller rather than a finite horizon controller.
## References
* [1] Wireless Industrial Networking Alliance, “WINA website,” http://www.wina.org, 2010.
* [2] International Society of Automation, “ISA-SP100 wireless systems for automation website,” http://www.isa.org/isa100, 2010.
* [3] I. Chlamtac, M. Conti, and J. J. N. Liu, “Mobile ad hoc networking: Imperatives and challenges,” _Ad Hoc Networks_ , vol. 1, no. 1, pp. 13–64, 2003.
* [4] R. Bruno, M. Conti, and E. Gregori, “Mesh networks: commodity multihop ad hoc networks,” _IEEE Communications Magazine_ , vol. 43, no. 3, pp. 123–131, Mar. 2005.
* [5] J. P. Hespanha, P. Naghshtabrizi, and Y. Xu, “A survey of recent results in networked control systems,” _Proc. of the IEEE_ , vol. 95, pp. 138–162, 2007.
* [6] C. Robinson and P. Kumar, “Control over networks of unreliable links: Controller location and performance bounds,” in _Proc. of the 5th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt)_ , Apr. 2007, pp. 1–8.
* [7] L. Schenato, B. Sinopoli, M. Franceschetti, K. Poolla, and S. S. Sastry, “Foundations of control and estimation over lossy networks,” _Proc. of the IEEE_ , vol. 95, pp. 163–187, 2007.
* [8] H. Ishii, “$H_{\infty}$ control with limited communication and message losses,” _Systems & Control Letters_, vol. 57, no. 4, pp. 322–331, 2008.
* [9] P. Seiler and R. Sengupta, “An $H_{\infty}$ approach to networked control,” _IEEE Transactions on Automatic Control_ , vol. 50, pp. 356–364, 2005.
* [10] N. Elia, “Remote stabilization over fading channels,” _Systems & Control Letters_, vol. 54, no. 3, pp. 237–249, 2005.
* [11] R. Olfati-Saber, J. Fax, and R. Murray, “Consensus and cooperation in networked multi-agent systems,” _Proc. of the IEEE_ , vol. 95, no. 1, pp. 215–233, Jan. 2007.
* [12] V. Gupta, A. Dana, J. Hespanha, R. Murray, and B. Hassibi, “Data transmission over networks for estimation and control,” _IEEE Trans. on Automatic Control_ , vol. 54, no. 8, pp. 1807–1819, Aug. 2009.
* [13] D. Varagnolo, P. Chen, L. Schenato, and S. Sastry, “Performance analysis of different routing protocols in wireless sensor networks for real-time estimation,” in _Proc. of the 47th IEEE Conference on Decision and Control_ , Dec. 2008.
* [14] K. S. J. Pister and L. Doherty, “TSMP: Time synchronized mesh protocol,” in _Proc. of the IASTED International Symposium on Distributed Sensor Networks (DSN)_ , Nov. 2008.
* [15] A. Willig, M. Kubisch, C. Hoene, and A. Wolisz, “Measurements of a wireless link in an industrial environment using an IEEE 802.11-compliant physical layer,” _IEEE Trans. on Industrial Electronics_ , vol. 49, no. 6, pp. 1265–1282, Dec. 2002.
* [16] P. Smyth, D. Heckerman, and M. I. Jordan, “Probabilistic independence networks for hidden Markov probability models,” _Neural Comput._ , vol. 9, no. 2, pp. 227–269, 1997.
* [17] R. Castro, M. Coates, G. Liang, R. Nowak, and B. Yu, “Network tomography: Recent developments,” _Statistical Science_ , vol. 19, no. 3, pp. 499–517, Aug. 2004.
* [18] K. J. Åström, _Introduction to Stochastic Control Theory_. Academic Press, 1970, republished by Dover Publications, 2006.
* [19] P. Soldati, H. Zhang, Z. Zou, and M. Johansson, “Optimal routing and scheduling of deadline-constrained traffic over lossy networks,” in _Proc. of the IEEE Global Telecommunications Conference_ , Miami FL, USA, Dec. 2010.
arXiv:1103.5405 (arxiv-papers; submitted 2011-03-28 by Phoebus Chen). Authors: Phoebus Chen, Chithrupa Ramesh, Karl H. Johansson. License: Creative Commons Attribution 3.0, https://creativecommons.org/licenses/by/3.0/.

arXiv:1103.5498
# Linear response theory for random Schrödinger operators and noncommutative
integration
Nicolas Dombrowski Université de Cergy-Pontoise, CNRS UMR 8088, Laboratoire
AGM, Département de Mathématiques, Site de Saint-Martin, 2 avenue Adolphe
Chauvin, F-95302 Cergy-Pontoise, France ndombro@math.u-cergy.fr and François
Germinet Université de Cergy-Pontoise, CNRS UMR 8088, Laboratoire AGM,
Département de Mathématiques, Site de Saint-Martin, 2 avenue Adolphe Chauvin,
F-95302 Cergy-Pontoise, France germinet@math.u-cergy.fr
###### Abstract.
We consider an ergodic Schrödinger operator with magnetic field within the
non-interacting particle approximation. Justifying the linear response theory,
a rigorous derivation of a Kubo formula for the electric conductivity tensor
within this context can be found in a recent work of Bouclet, Germinet, Klein
and Schenker [BoGKS]. If the Fermi level falls into a region of localization,
the well-known Kubo-Středa formula for the quantum Hall conductivity at zero
temperature is recovered. In this review we go along the lines of [BoGKS] but
make a more systematic use of noncommutative ${\rm L}^{p}$-spaces, leading to
a somewhat more transparent proof.
2000 _Mathematics Subject Classification._ Primary 82B44; Secondary 47B80,
60H25
###### Contents
1. 1 Introduction
2. 2 Covariant measurable operators and noncommutative ${\rm L}^{p}$-spaces
1. 2.1 Observables
2. 2.2 Noncommutative integration
3. 2.3 Commutators of measurable covariant operators
4. 2.4 Differentiation
3. 3 The setting: Schrödinger operators and dynamics
1. 3.1 Magnetic Schrödinger operators and time-dependent operators
2. 3.2 Adding the randomness
4. 4 Linear response theory and Kubo formula
1. 4.1 Adiabatic switching of the electric field
2. 4.2 The current and the conductivity
3. 4.3 Computing the linear response: a Kubo formula for the conductivity
4. 4.4 The Kubo-Středa formula for the Hall conductivity
## 1\. Introduction
In [BoGKS] the authors consider an ergodic Schrödinger operator with magnetic
field, and give a controlled derivation of a Kubo formula for the electric
conductivity tensor, validating the linear response theory within the
noninteracting particle approximation. For an adiabatically switched electric
field, they then recover the expected expression for the quantum Hall
conductivity whenever the Fermi energy lies either in a region of localization
of the reference Hamiltonian or in a gap of the spectrum.
The aim of this paper is to provide a pedestrian derivation of [BoGKS]’s
result and to simplify their “mathematical apparatus” by resorting more
systematically to noncommutative ${\rm L}^{p}$-spaces. We also state results
for more general time dependent electric fields, so that AC-conductivity is
covered as well. That von Neumann algebra and noncommutative integration play
an important rôle in the context of random Schrödinger operators involved with
the quantum Hall effect goes back to Bellissard, e.g. [B, BES, SB1, SB2].
The electric conductivity tensor is usually expressed in terms of a “Kubo
formula,” derived via formal linear response theory. In the context of
disordered media, where Anderson localization is expected (or proved), the
electric conductivity has attracted a lot of interest, from several
perspectives. For time-reversal-invariant systems at zero temperature, the
vanishing of the direct conductivity is physically meaningful evidence of a
localized regime [FS, AG]. Another direction of interest is the connection
between the direct conductivity and the quantum Hall effect [ThKNN, St, B, Ku,
BES, AvSS, AG]. On the other hand, the behaviour of the alternating (AC)
conductivity at small frequencies within the region of localization is
dictated by the celebrated Mott formula [MD] (see [KLP, KLM, KM] for recent
important developments). Closely connected to conductivities, current-current
correlation functions have recently drawn a lot of attention as well (see [BH,
CGH] and references therein).
During the past two decades a few papers managed to shed some light on these
derivations from the mathematical point of view, e.g., [P, Ku, B, NB, AvSS,
BES, SB1, SB2, AG, Na, ES, CoJM, CoNP]. While a great amount of attention has
been devoted to the derivation of conductivities (in particular the quantum
Hall conductivity) from a Kubo formula, and to the study of these
conductivities, not much has been done concerning a controlled derivation of
the linear response and of the Kubo formula itself; only the recent papers
[SB2, ES, BoGKS, CoJM, CoNP] deal with this question.
In this note, the accent is put on the derivation of the linear response for
which we shall present the main elements of proof, along the lines of [BoGKS]
but using noncommutative integration. The required material is briefly
presented or recalled from [BoGKS]. Further details and extended proofs will
be found in [Do]. We start by describing the noncommutative
$\mathrm{L}^{p}$-spaces that are relevant in our context, and we state the
properties that we shall need (Section 2). In Section 3 we define magnetic
random Schrödinger operators and perturbations of these by time dependent
electric fields, but in a suitable gauge where the electric field is given by
a time-dependent vector potential. We review from [BoGKS] the main tools that
enter the derivation of the linear response, in particular the time evolution
propagators. In Section 4 we compute rigorously the linear response of the
system forced by a time dependent electric field. We do it along the lines of
[BoGKS] but within the framework of the noncommutative $\mathrm{L}^{p}$-spaces
presented in Section 2. The derivation is achieved in several steps. First we
set up the Liouville equation which describes the time evolution of the
density matrix under the action of a time-dependent electric field (Theorem
4.1). In a standard way, this evolution equation can be written as an integral
equation, the so-called Duhamel formula. Second, we compute the net current
per unit volume induced by the electric field and prove that it is
differentiable with respect to the electric field at zero field. This yields
in fair generality the desired Kubo formula for the electric conductivity
tensor, for any non zero adiabatic parameter (Theorem 4.6 and Corollary 4.7).
The adiabatic limit is then performed in order to compute the direct / ac
conductivity at zero temperature (Theorem 4.8, Corollary 4.9 and Remark 4.11).
In particular we recover the expected expression for the quantum Hall
conductivity, the Kubo-Středa formula, as in [B, BES]. At positive
temperature, we note that, while the existence of the electric conductivity
_measure_ can easily be derived from that Kubo formula [KM], proving that the
conductivity itself, i.e. its density, exists and is finite remains out of
reach.
_Acknowledgement._ We thank warmly Vladimir Georgescu for enlightening
discussions on noncommutative integration, as well as A. Klein for useful
comments.
## 2\. Covariant measurable operators and noncommutative ${\rm L}^{p}$-spaces
In this section we construct the noncommutative ${\rm L}^{p}$-spaces that are
relevant for our analysis. We first recall the underlying von Neumann algebra
of observables and equip it with the so-called “trace per unit volume”. We
refer to [D, Te] for this material. We shall skip some details and proofs, for
which we also refer to [Do].
### 2.1. Observables
Let $\mathcal{H}$ be a separable Hilbert space (in our context
$\mathcal{H}={\rm L}^{2}(\mathbb{R}^{d})$). Let $\mathcal{Z}$ be an abelian
locally compact group and $U=\\{U_{a}\\}_{a\in\mathcal{Z}}$ a unitary
projective representation of $\mathcal{Z}$ on $\mathcal{H}$, i.e.
* •
$\ U_{a}U_{b}=\xi(a,b)U_{a+b}$, where $\xi(a,b)\in\mathbb{C}$, $|\xi(a,b)|=1$;
* •
$\ U_{e}=1$;
Next we take a family of orthogonal projections
$\chi:=\\{\chi_{a}\\}_{a\in\mathcal{Z}}$ on $\mathcal{H}$, such that
$\chi_{a}\chi_{b}=0$ whenever $a\neq b$ and
$\sum_{a\in\mathcal{Z}}\chi_{a}=1$. Furthermore, one requires a covariance
(stability) relation of $\chi$ under $U$, i.e.
$U_{a}\chi_{b}U^{*}_{a}=\chi_{a+b}$.
Next to this Hilbertian structure (representing the “physical” space), we
consider a probability space $(\Omega,\mathcal{F},\mathbb{P})$ (representing
the presence of the disorder) that is ergodic under the action of a group
$\tau=\\{\tau_{a}\\}_{a\in\mathcal{Z}}$, that is,
* •
$\forall a\in\mathcal{Z},\ \tau_{a}:\Omega\rightarrow\Omega$ is a measure
preserving isomorphism;
* •
$\forall a,b\in\mathcal{Z},\ \tau_{a}\circ\tau_{b}=\tau_{a+b}$;
* •
$\tau_{e}=1$ where $e$ is the neutral element of $\mathcal{Z}$ and thus
$\tau_{a}^{-1}=\tau_{-a},\ \forall a\in\mathcal{Z}$;
* •
If $\mathcal{A}\in\mathcal{F}$ is invariant under $\tau$, then
$\mathbb{P}(\mathcal{A})=0$ or $1$.
With these two structures at hand we define the Hilbert space
$\tilde{\mathcal{H}}=\int^{\oplus}_{\Omega}\mathcal{H}\,\mathrm{d}\mathbb{P}(\omega):=\mathrm{L}^{2}(\Omega,\mathbb{P},\mathcal{H})\simeq\mathcal{H}\otimes\mathrm{L}^{2}(\Omega,\mathbb{P}),$
(2.1)
equipped with the inner product
$\langle\varphi,\psi\rangle_{\tilde{\mathcal{H}}}=\int_{\Omega}\langle\varphi(\omega),\psi(\omega)\rangle_{\mathcal{H}}\,\mathrm{d}\mathbb{P}(\omega),\
\forall\varphi,\psi\in\tilde{\mathcal{H}}.$ (2.2)
We are interested in bounded operators on $\tilde{\mathcal{H}}$ that are
decomposable elements $A=(A_{\omega})_{\omega\in\Omega}$, in the sense that
they commute with the diagonal ones. Measurable operators are defined as
decomposable operators such that for every measurable vector field
$\\{\varphi(\omega),\omega\in\Omega\\}$, the vector field
$\\{A(\omega)\varphi(\omega),\omega\in\Omega\\}$ is measurable too. Measurable
decomposable operators are called essentially bounded if
$\omega\mapsto\|A_{\omega}\|_{\mathcal{L}({\mathcal{H}})}$ is an element of
$L^{\infty}(\Omega,\mathbb{P})$. We set, for such $A$’s,
$\|{A}\|_{\mathcal{L}(\tilde{\mathcal{H}})}=\|A\|_{\infty}=\mathrm{ess-
sup}_{\Omega}\|A(\omega)\|,$ (2.3)
and define the following von Neumann algebra
$\mathcal{K}=L^{\infty}(\Omega,\mathbb{P},\mathcal{L}(\mathcal{H}))=\\{A:\Omega\rightarrow\mathcal{L}(\mathcal{H})\mbox{
measurable},\ \|A\|_{\infty}<\infty\\}.$ (2.4)
There exists an isometric isomorphism between $\mathcal{K}$ and the algebra of
decomposable operators in $\mathcal{L}(\tilde{\mathcal{H}})$.
We shall work with observables of $\mathcal{K}$ that satisfy the so-called
covariance property.
###### Definition 2.1.
$A\in\mathcal{K}$ is covariant iff
$U_{a}A(\omega)U^{\star}_{a}=A(\tau_{a}\omega)\ ,\forall
a\in\mathcal{Z},\,\forall\omega\in\Omega.$ (2.5)
We set
$\mathcal{K}_{\infty}=\\{A\in\mathcal{K},A\mbox{ is covariant}\\}.$ (2.6)
If $\tilde{U_{a}}:=U_{a}\otimes\tau(-a)$, with the slight abuse of notation
where we write $\tau$ for the action induced by $\tau$ on
$\mathrm{L}^{2}(\Omega,\mathbb{P})$, and
$\tilde{U}=(\tilde{U_{a}})_{a\in\mathcal{Z}}$, we note that
$\displaystyle\mathcal{K}_{\infty}$ $\displaystyle=$
$\displaystyle\\{A\in\mathcal{K},\,\forall
a\in\mathcal{Z},[A,\tilde{U}_{a}]=0\\}$ (2.7) $\displaystyle=$
$\displaystyle\mathcal{K}\cap(\tilde{{U}})^{\prime},$ (2.8)
so that $\mathcal{K}_{\infty}$ is again a von Neumann algebra.
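A finite toy version of the covariance relation (2.5) may make it concrete: on a discrete ring, conjugating an Anderson-type Hamiltonian by a unitary shift is the same as shifting the disorder configuration. The sketch below uses plain (non-magnetic) translations on $\mathbb{Z}/N\mathbb{Z}$, not the paper's continuum operators:

```python
import numpy as np

# Finite toy model on the ring Z/NZ: H_omega = discrete Laplacian + random
# potential, U_a = translation by a. A sketch of covariance (2.5) only.
rng = np.random.default_rng(0)
N, a = 8, 3
shift = np.roll(np.eye(N), 1, axis=0)      # e_k -> e_{k+1 (mod N)}
lap = 2 * np.eye(N) - shift - shift.T      # periodic discrete Laplacian
v = rng.uniform(-1.0, 1.0, N)              # disorder configuration omega
U = np.roll(np.eye(N), a, axis=0)          # unitary translation U_a

H_omega = lap + np.diag(v)
H_shifted = lap + np.diag(np.roll(v, a))   # H_{tau_a omega}
covariant = np.allclose(U @ H_omega @ U.T, H_shifted)
print(covariant)  # True: U_a H_omega U_a* = H_{tau_a omega}
```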
### 2.2. Noncommutative integration
The von Neumann algebra $\mathcal{K}_{\infty}$ is equipped with the faithful,
normal and semi-finite trace
$\mathcal{T}(A):=\mathbb{E}\\{\mathrm{tr}(\chi_{e}A(\omega)\chi_{e})\\},$
(2.9)
where “$\mathrm{tr}$” denotes the usual trace on the Hilbert space
$\mathcal{H}$. In the usual context of the Anderson model this is nothing but
the trace per unit volume: by the Birkhoff ergodic theorem, whenever
$\mathcal{T}(|A|)<\infty$, one has
$\mathcal{T}(A)=\lim_{|\Lambda_{L}|\to\infty}\dfrac{1}{|\Lambda_{L}|}\mathrm{tr}(\chi_{\Lambda_{L}}A_{\omega}\chi_{\Lambda_{L}}),$
(2.10)
where $\Lambda_{L}\subset\mathcal{Z}$ and
$\chi_{\Lambda_{L}}=\sum_{a\in\Lambda_{L}}\chi_{a}$. There is a natural
topology associated to von Neumann algebras equipped with such a trace. It is
defined by the following basis of neighborhoods:
$N(\varepsilon,\delta)=\\{A\in\mathcal{K}_{\infty},\,\exists
P\in\mathcal{K}^{proj}_{\infty},\|AP\|_{\infty}<\varepsilon\
,\mathcal{T}(P^{\perp})<\delta\\},$ (2.11)
where $\mathcal{K}^{proj}_{\infty}$ denotes the set of projectors of
$\mathcal{K}_{\infty}$. It is a well known fact that
$A\in
N(\varepsilon,\delta)\Longleftrightarrow\mathcal{T}(\chi_{]\varepsilon,\infty[}(|A|))\leq\delta.$
(2.12)
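As a numerical caricature of the trace per unit volume (2.9)-(2.10): for $A$ the multiplication operator by $v(x)^{2}$ with i.i.d. Uniform$(0,1)$ disorder, $\mathcal{T}(A)=\mathbb{E}[v_{0}^{2}]=1/3$, and the finite-volume averages converge by Birkhoff's theorem. A minimal sketch (a discrete toy, not the paper's continuum setting):

```python
import numpy as np

# T(A) = E{tr(chi_e A chi_e)} for A = multiplication by v(x)^2 with i.i.d.
# Uniform(0,1) disorder: T(A) = E[v_0^2] = 1/3, and the finite-volume trace
# per unit volume (2.10) converges to it as |Lambda_L| grows.
rng = np.random.default_rng(1)
L = 200_000                          # |Lambda_L|, one site per unit cell
v = rng.uniform(0.0, 1.0, L)
finite_volume_trace = np.mean(v**2)  # (1/|Lambda_L|) tr(chi A chi)
print(finite_volume_trace)           # close to 1/3
```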
As pointed out to us by V. Georgescu, this topology can also be generated by
the following Fréchet norm on $\mathcal{K}_{\infty}$ [Geo]:
$\|A\|_{\mathcal{T}}=\inf_{P\in\mathcal{K}^{proj}_{\infty}}\max\\{\|AP\|_{\infty},\mathcal{T}(P^{\perp})\\}.$
(2.13)
Let us denote by $\mathcal{M}(\mathcal{K}_{\infty})$ the set of all
$\mathcal{T}$-measurable operators, namely the completion of
$\mathcal{K}_{\infty}$ with respect to this topology. It is a well established
fact from noncommutative integration that
###### Theorem 2.2.
$\mathcal{M}(\mathcal{K}_{\infty})$ is a Hausdorff topological
$\ast$-algebra, in the sense that all the algebraic operations are continuous
for the $\mathcal{T}$-measure topology.
###### Definition 2.3.
A linear subspace $\mathcal{E}\subseteq\tilde{\mathcal{H}}$ is called
$\mathcal{T}$-dense if, for every $\delta>0$, there exists a projection
$P\in\mathcal{K}_{\infty}$ such that $P\tilde{\mathcal{H}}\subseteq\mathcal{E}$
and $\mathcal{T}(P^{\perp})\leq\delta.$
It turns out that any element $A$ of $\mathcal{M}(\mathcal{K}_{\infty})$ can
be represented as a (possibly unbounded) operator, which we shall still denote
by $A$, acting on $\tilde{\mathcal{H}}$ with a domain
$D(A)=\\{\varphi\in\tilde{\mathcal{H}},\ A\varphi\in\tilde{\mathcal{H}}\\}$
that is $\mathcal{T}$-densely defined. Then, adjoints, sums and products of
elements of $\mathcal{M}(\mathcal{K}_{\infty})$ are defined as usual adjoints,
sums and products of unbounded operators.
For any $0<p<\infty$, we set
$\mathrm{L}^{p}(\mathcal{K}_{\infty}):=\overline{\\{x\in\mathcal{K}_{\infty},\
\mathcal{T}(|x|^{p})<\infty\\}}^{\|\cdot\|_{p}}=\\{x\in\mathcal{M}(\mathcal{K}_{\infty}),\
\mathcal{T}(|x|^{p})<\infty\\},$ (2.14)
where the second equality is actually a theorem. For $p\geq 1$, the spaces
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$ are Banach spaces in which
$\mathrm{L}^{p,o}(\mathcal{K}_{\infty}):=\mathrm{L}^{p}(\mathcal{K}_{\infty})\cap\mathcal{K}_{\infty}$
are dense by definition. For $p=\infty$, in analogy with the commutative case,
we set $\mathrm{L}^{\infty}(\mathcal{K}_{\infty})=\mathcal{K}_{\infty}$.
Noncommutative Hölder inequalities hold: for any $0<p,q,r\leq\infty$ such that
$p^{-1}+q^{-1}=r^{-1}$, if $A\in\mathrm{L}^{p}(\mathcal{K}_{\infty})$ and
$B\in\mathrm{L}^{q}(\mathcal{K}_{\infty})$, then the product
$AB\in\mathcal{M}(\mathcal{K}_{\infty})$ belongs to
$\mathrm{L}^{r}(\mathcal{K}_{\infty})$ with
$\|AB\|_{r}\leq\|A\|_{p}\|B\|_{q}.$ (2.15)
In particular, for all $A\in\mathrm{L}^{\infty}(\mathcal{K}_{\infty})$ and
$B\in\mathrm{L}^{p}(\mathcal{K}_{\infty})$,
$\|AB\|_{p}\leq\|A\|_{\infty}\|B\|_{p}\mbox{ and
}\|BA\|_{p}\leq\|A\|_{\infty}\|B\|_{p}\,,$ (2.16)
so that the $\mathrm{L}^{p}(\mathcal{K}_{\infty})$-spaces are two-sided
$\mathcal{K}_{\infty}$-submodules of
$\mathcal{M}(\mathcal{K}_{\infty})$. As another consequence, bilinear forms
$\mathrm{L}^{p,o}(\mathcal{K}_{\infty})\times\mathrm{L}^{q,o}(\mathcal{K}_{\infty})\ni(A,B)\mapsto\mathcal{T}(AB)\in\mathbb{C}$
continuously extend to bilinear maps defined on
$\mathrm{L}^{p}(\mathcal{K}_{\infty})\times\mathrm{L}^{q}(\mathcal{K}_{\infty})$.
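In finite dimension the noncommutative Hölder inequality (2.15) and the trace pairing reduce to classical Schatten-norm inequalities, which can be spot-checked numerically (an illustration only; `schatten_norm` is a helper defined here, not taken from the text):

```python
import numpy as np

def schatten_norm(M, p):
    """Schatten p-norm: the l^p norm of the singular values of M, the
    finite-dimensional stand-in for ||A||_p = T(|A|^p)^{1/p}."""
    s = np.linalg.svd(M, compute_uv=False)
    return float((s**p).sum() ** (1.0 / p))

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
B = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))

p, q = 3.0, 1.5                   # here 1/p + 1/q = 1, so r = 1
r = 1.0 / (1.0 / p + 1.0 / q)
holder_ok = schatten_norm(A @ B, r) <= schatten_norm(A, p) * schatten_norm(B, q)
trace_pair = abs(np.trace(A @ B)) <= schatten_norm(A, p) * schatten_norm(B, q)
print(holder_ok, trace_pair)  # True True
```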
###### Lemma 2.4.
Let $A\in\mathrm{L}^{p}(\mathcal{K}_{\infty})$, $p\in[1,\infty[$ be given, and
suppose $\mathcal{T}(AB)=0$ for all
$B\in\mathrm{L}^{q}(\mathcal{K}_{\infty})$, $p^{-1}+q^{-1}=1$. Then $A=0$.
The case $p=2$ is of particular interest since
$\mathrm{L}^{2}(\mathcal{K}_{\infty})$ equipped with the sesquilinear form
$\langle A,B\rangle_{\mathrm{L}^{2}}=\mathcal{T}(A^{\ast}B)$ is a Hilbert
space. The corresponding norm reads
$\|A\|_{2}^{2}=\int_{\Omega}\mathrm{tr}(\chi_{e}A^{\ast}_{\omega}A_{\omega}\chi_{e})\mathrm{d}\mathbb{P}(\omega)=\int_{\Omega}\|A_{\omega}\chi_{e}\|_{2}^{2}\mathrm{d}\mathbb{P}(\omega).$
(2.17)
(Here $\|\cdot\|_{2}$ denotes the Hilbert-Schmidt norm.) From the case $p=2$,
we can derive the centrality of the trace. Indeed, by covariance and using the
centrality of the usual trace, it follows that
$\mathcal{T}(AB)=\mathcal{T}(BA)$ whenever $A,B\in\mathcal{K}_{\infty}$. By
density we get the following lemma.
###### Lemma 2.5.
Let $A\in\mathrm{L}^{p}(\mathcal{K}_{\infty})$ and
$B\in\mathrm{L}^{q}(\mathcal{K}_{\infty})$, $p^{-1}+q^{-1}=1$ be given. Then
$\mathcal{T}(AB)=\mathcal{T}(BA)$.
We shall also make use of the following observation.
###### Lemma 2.6.
Let $A\in\mathrm{L}^{p}(\mathcal{K}_{\infty})$ and $(B_{n})$ a sequence of
elements of $\mathcal{K}_{\infty}$ that converges strongly to
$B\in\mathcal{K}_{\infty}$. Then $AB_{n}$ converges to $AB$ in
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$.
### 2.3. Commutators of measurable covariant operators
Let $H$ be a decomposable (unbounded) operator affiliated to
$\mathcal{K}_{\infty}$ with domain $\mathcal{D}$, and
$A\in\mathcal{M}(\mathcal{K}_{\infty})$. In particular, $H$ need not be
$\mathcal{T}$-measurable, i.e. it need not lie in
$\mathcal{M}(\mathcal{K}_{\infty})$. If
there exists a $\mathcal{T}$-dense domain $\mathcal{D}^{\prime}$ such that $\
A\mathcal{D}^{\prime}\subset\mathcal{D}$, then $HA$ is well defined, and if in
addition the product is $\mathcal{T}$-measurable then we write
$HA\in\mathcal{M}(\mathcal{K}_{\infty})$. Similarly, if $\mathcal{D}$ is
$\mathcal{T}$-dense and $H\mathcal{D}\subset\mathcal{D}(A)$, then
$AH$ is well defined, and if in addition the product is
$\mathcal{T}$-measurable then we write
$AH\in\mathcal{M}(\mathcal{K}_{\infty})$.
###### Remark 2.7.
We define the following (generalized) commutators:
(i):
If $A\in\mathcal{M}\ (\mathcal{K}_{\infty})$ and $B\in\mathcal{K}_{\infty}$,
$[A,B]:=AB-BA\in\mathcal{M}(\mathcal{K}_{\infty}),\quad[B,A]:=-[A,B].$ (2.18)
(ii):
If $A\in\mathrm{L}^{p}(\mathcal{K}_{\infty})$,
$B\in\mathrm{L}^{q}(\mathcal{K}_{\infty})$, $p,q\geq 1$ such that
$p^{-1}+q^{-1}=1$, then
$[A,B]:=AB-BA\in\mathrm{L}^{1}(\mathcal{K}_{\infty}).$ (2.19)
###### Definition 2.8.
Let $H\,\eta\,\mathcal{K}_{\infty}$ (i.e. $H$ affiliated to $\mathcal{K}_{\infty}$).
If $A\in\mathcal{M}(\mathcal{K}_{\infty})$ is such that $HA$ and $AH$ are in
$\mathcal{M}(\mathcal{K}_{\infty})$, then
$[H,A]:=HA-AH\in\mathcal{M}(\mathcal{K}_{\infty})\,.$ (2.20)
We shall need the following observations.
###### Lemma 2.9.
1) For any
$A\in\mathrm{L}^{p}(\mathcal{K}_{\infty}),B\in\mathrm{L}^{q}(\mathcal{K}_{\infty})$,
$p,q\geq 1$, $p^{-1}+q^{-1}=1$, and $C\in\mathcal{K}_{\infty}$, we
have
$\mathcal{T}\left\\{[C,A]B\right\\}=\mathcal{T}\left\\{C[A,B]\right\\}.$
(2.21)
2) For any $A,B\in\mathcal{K}_{\infty}$ and
$C\in\mathrm{L}^{1}(\mathcal{K}_{\infty})$, we have
$\mathcal{T}\left\\{A[B,C]\right\\}=\mathcal{T}\left\\{[A,B]C\right\\}.$
(2.22)
3) Let $p,q\geq 1$ be such that $p^{-1}+q^{-1}=1$. For any
$A\in\mathrm{L}^{p}(\mathcal{K}_{\infty})$, resp.
$B\in\mathrm{L}^{q}(\mathcal{K}_{\infty})$, such that
$[H,A]\in\mathrm{L}^{p}(\mathcal{K}_{\infty})$, resp.
$[H,B]\in\mathrm{L}^{q}(\mathcal{K}_{\infty})$, we have
$\mathcal{T}\left\\{[H,A]{B}\right\\}=-\mathcal{T}\left\\{A[H,{B}]\right\\}.$
(2.23)
### 2.4. Differentiation
A $\ast$-derivation $\partial$ is a $\ast$-map defined on a dense subalgebra
of $\mathcal{K}_{\infty}$ such that:
* •
$\partial(AB)=\partial(A)B+A\partial(B)$
* •
$\partial(A+\lambda B)=\partial(A)+\lambda\partial(B)$
* •
$\partial(A^{\star})=\partial(A)^{\star}$
* •
$\ [\alpha_{a},\partial]=0$ in the sense that
$\alpha_{a}\circ\partial(A)=\partial\circ\alpha_{a}(A)\ \forall
a\in\mathcal{Z},\ \forall A\in\mathcal{K}_{\infty}$.
If $\partial_{1},...,\partial_{d}$ are $\ast$-derivations we define a
noncommutative gradient by $\nabla:=(\partial_{1},...,\partial_{d})$, densely
defined on $\mathcal{K}_{\infty}$. We define a noncommutative Sobolev space
$\mathcal{W}^{1,p}(\mathcal{K}_{\infty}):=\\{A\in\mathrm{L}^{p}(\mathcal{K}_{\infty}),\,\nabla
A\in\mathrm{L}^{p}(\mathcal{K}_{\infty})\\}.$ (2.24)
and a second space for $H\eta\mathcal{K}_{\infty}$,
$\mathcal{D}^{(0)}_{p}(H)=\left\\{A\in\mathrm{L}^{p}(\mathcal{K}_{\infty}),\;\;HA,AH\in\mathrm{L}^{p}(\mathcal{K}_{\infty})\right\\}.$
(2.25)
## 3\. The setting: Schrödinger operators and dynamics
In this section we describe our background operators and recall from [BoGKS]
the main properties we shall need in order to establish the Kubo formula, but
within the framework of noncommutative integration when relevant (i.e. in
Subsection 3.2).
### 3.1. Magnetic Schrödinger operators and time-dependent operators
Throughout this paper we shall consider Schrödinger operators of general form
$\displaystyle
H=H(\mathbf{A},V)=\left(-i\nabla-\mathbf{A}\right)^{2}+V\;\;\;\mathrm{on}\;\;\;\mathrm{L}^{2}(\mathbb{R}^{d}),$
(3.1)
where the magnetic potential $\mathbf{A}$ and the electric potential $V$
satisfy the Leinfelder-Simader conditions:
* •
$\mathbf{A}(x)\in\mathrm{L}^{4}_{\mathrm{loc}}(\mathbb{R}^{d};\mathbb{R}^{d})$
with
$\nabla\cdot\mathbf{A}(x)\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R}^{d})$.
* •
$V(x)=V_{+}(x)-V_{-}(x)$ with
$V_{\pm}(x)\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R}^{d})$, $V_{\pm}(x)\geq
0$, and $V_{-}(x)$ relatively bounded with respect to $\Delta$ with relative
bound $<1$, i.e., there are $0\leq\alpha<1$ and $\beta\geq 0$ such that
$\|V_{-}\psi\|\leq\alpha\|\Delta\psi\|+\beta\|\psi\|\quad\mbox{for all
$\psi\in\mathcal{D}(\Delta)$}.$ (3.2)
Leinfelder and Simader have shown that $H(\mathbf{A},V)$ is essentially self-
adjoint on $C_{\mathrm{c}}^{\infty}(\mathbb{R}^{d})$ [LS, Theorem 3]. It has
been checked in [BoGKS] that under these hypotheses $H({\mathbf{A}},V)$ is
bounded from below:
$H({\mathbf{A}},V)\geq-\,\frac{\beta}{(1-{\alpha})}=:-\gamma+1,\mbox{ so that
}H+\gamma\geq 1.$ (3.3)
We denote by $x_{j}$ the multiplication operator in
$\mathrm{L}^{2}(\mathbb{R}^{d})$ by the $j^{\rm th}$ coordinate $x_{j}$, and
$\mathbf{x}:=(x_{1},\cdots,x_{d})$. We want to implement the adiabatic
switching of a time dependent spatially uniform electric field
$\mathbf{E}_{\eta}(t)\cdot\mathbf{x}=\mathrm{e}^{\eta
t}\mathbf{E}(t)\cdot\mathbf{x}$ between time $t=-\infty$ and time $t=t_{0}$.
Here $\eta>0$ is the adiabatic parameter and we assume that
$\int_{-\infty}^{t_{0}}\mathrm{e}^{\eta t}|\mathbf{E}(t)|\mathrm{d}t<\infty.$
(3.4)
To do so we consider the time-dependent magnetic potential
$\mathbf{A}(t)=\mathbf{A}+\mathbf{F}_{\eta}(t)$, with
$\mathbf{F}_{\eta}(t)=\int_{-\infty}^{t}\mathbf{E}_{\eta}(s)\mathrm{d}s$. In
other terms, the dynamics is generated by the time-dependent magnetic operator
$\displaystyle
H(t)=({-i\nabla}-\mathbf{A}-\mathbf{F}_{\eta}(t))^{2}+V(x)=H(\mathbf{A}(t),V)\,,$
(3.5)
which is essentially self-adjoint on $C_{\mathrm{c}}^{\infty}(\mathbb{R}^{d})$
with domain $\mathcal{D}:=\mathcal{D}(H)=\mathcal{D}(H(t))$ for all
$t\in\mathbb{R}$. One has (see [BoGKS, Proposition 2.5])
$H(t)=H-2\mathbf{F}_{\eta}(t)\cdot{\mathbf{D}}(\mathbf{A})+\mathbf{F}_{\eta}(t)^{2}\;\;\mbox{on
$\mathcal{D}(H)$},$ (3.6)
where $\mathbf{D}=\mathbf{D}(\mathbf{A})$ is the closure of
$(-i\nabla-\mathbf{A})$ as an operator from $\mathrm{L}^{2}(\mathbb{R}^{d})$
to $\mathrm{L}^{2}(\mathbb{R}^{d};\mathbb{C}^{d})$ with domain
$C_{\mathrm{c}}^{\infty}(\mathbb{R}^{d})$. Each of its components
$\mathbf{D}_{j}=\mathbf{D}_{j}(\mathbf{A})=(-i\frac{\partial}{\partial
x_{j}}-\mathbf{A}_{j})$, $j=1,\ldots,d$, is essentially self-adjoint on
$C_{\mathrm{c}}^{\infty}(\mathbb{R}^{d})$.
To see that such a family of operators generates the dynamics of a quantum
particle in the presence of the time-dependent spatially uniform electric
field $\mathbf{E}_{\eta}(t)\cdot\mathbf{x}$, consider the gauge transformation
$[G(t)\psi](x):=\mathrm{e}^{i\mathbf{F}_{\eta}(t)\cdot x}\psi(x)\;,$ (3.7)
so that
$H(t)\ =\ G(t)\left[(-i\nabla-\mathbf{A})^{2}+V\right]G(t)^{*}\;.$ (3.8)
Then if $\psi(t)$ obeys the Schrödinger equation
$i\partial_{t}\psi(t)=H(t)\psi(t),$ (3.9)
one has, _formally_ ,
$i\partial_{t}G(t)^{\ast}\psi(t)=\left[(-i\nabla-\mathbf{A})^{2}+V+\mathbf{E}_{\eta}(t)\cdot
x\right]G(t)^{\ast}\psi(t).$ (3.10)
To summarize the action of the gauge transformation we recall the
###### Lemma 3.1.
[BoGKS, Lemma 2.6] Let $G(t)$ be as in (3.7). Then
$\displaystyle G(t)\mathcal{D}$ $\displaystyle=$ $\displaystyle\mathcal{D}\,,$
(3.11) $\displaystyle H(t)$ $\displaystyle=$ $\displaystyle G(t)HG(t)^{*}\,,$
(3.12) $\displaystyle\mathbf{D}(\mathbf{A}+\mathbf{F}_{\eta}(t))$
$\displaystyle=$
$\displaystyle\mathbf{D}(\mathbf{A})-\mathbf{F}_{\eta}(t)=G(t)\mathbf{D}(\mathbf{A})G(t)^{*}.$
(3.13)
Moreover, $i[H(t),x_{j}]=2\mathbf{D}_{j}(\mathbf{A}+\mathbf{F}_{\eta}(t))$ as
quadratic forms on $\mathcal{D}\cap\mathcal{D}(x_{j})$, $j=1,2,\dots,d$.
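The gauge identity (3.8)/(3.12) can be checked exactly in a finite lattice analogue, with the vector potential entering through Peierls phases on the hops. The conventions below are one possible choice for illustration, not the paper's continuum operators:

```python
import numpy as np

def lattice_H(A, V):
    """Open-chain 'magnetic' Hamiltonian: Peierls phase exp(iA) on each
    rightward hop plus a diagonal potential (one lattice convention)."""
    H = np.diag(V.astype(complex))
    for j in range(len(V) - 1):
        H[j + 1, j] = -np.exp(1j * A)    # hop j -> j+1
        H[j, j + 1] = -np.exp(-1j * A)   # Hermitian conjugate
    return H

rng = np.random.default_rng(3)
N, A, F = 10, 0.4, 0.7
V = rng.uniform(-1.0, 1.0, N)
G = np.diag(np.exp(1j * F * np.arange(N)))   # gauge transformation e^{iF.x}

gauged = G @ lattice_H(A, V) @ G.conj().T
same = np.allclose(gauged, lattice_H(A + F, V))
print(same)  # True: lattice analogue of H(A+F) = G H(A) G*
```

Conjugation by the diagonal phase multiplies the hop $j\to j+1$ by $\mathrm{e}^{iF}$, exactly shifting the vector potential by $F$, mirroring how $G(t)$ produces $H(t)$ in (3.8).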
The key observation is that the general theory of propagators with a time
dependent generator [Y, Theorem XIV.3.1] applies to $H(t)$. It thus yields the
existence of a two-parameter family $U(t,s)$ of unitary operators, jointly
strongly continuous in $t$ and $s$, that solves the Schrödinger equation:
$\displaystyle U(t,r)U(r,s)$ $\displaystyle=$ $\displaystyle U(t,s)$ (3.14)
$\displaystyle U(t,t)$ $\displaystyle=$ $\displaystyle I$ (3.15)
$\displaystyle U(t,s)\mathcal{D}$ $\displaystyle=$
$\displaystyle\mathcal{D}\,,$ (3.16) $\displaystyle i\partial_{t}U(t,s)\psi$
$\displaystyle=$ $\displaystyle H(t)U(t,s)\psi\;\;\mbox{for all
$\psi\in\mathcal{D}$}\,,$ (3.17) $\displaystyle i\partial_{s}U(t,s)\psi$
$\displaystyle=$ $\displaystyle-\,U(t,s)H(s)\psi\;\;\mbox{for all
$\psi\in\mathcal{D}$}\,.$ (3.18)
We refer to [BoGKS, Theorem 2.7] for other relevant properties.
To compute the linear response, we shall make use of the following “Duhamel
formula”. Let $U^{(0)}(t)=\mathrm{e}^{-itH}$. For all $\psi\in\mathcal{D}$ and
$t,s\in\mathbb{R}$ we have [BoGKS, Lemma 2.8]
$U(t,s)\psi=U^{(0)}(t-s)\psi+i\int_{s}^{t}U^{(0)}(t-r)(2\mathbf{F}_{\eta}(r)\cdot\mathbf{D}(\mathbf{A})-\mathbf{F}_{\eta}(r)^{2})U(r,s)\psi\
\mathrm{d}r\,.$ (3.19)
Moreover,
$\lim_{|\mathbf{E}|\rightarrow 0}U(t,s)=U^{(0)}(t-s)\;\;\mbox{strongly}\,.$
(3.20)
### 3.2. Adding the randomness
Let $(\Omega,\mathbb{P})$ be a probability space equipped with an ergodic
group $\\{\tau_{a};\ a\in\mathbb{Z}^{d}\\}$ of measure preserving
transformations. We study operator-valued maps $A\colon\Omega\ni\omega\mapsto
A_{\omega}$.
Throughout the rest of this paper we shall use the material of Section 2 with
$\mathcal{H}=\mathrm{L}^{2}(\mathbb{R}^{d})$ and $\mathcal{Z}=\mathbb{Z}^{d}$.
The projective representation of $\mathbb{Z}^{d}$ on $\mathcal{H}$ is given by
magnetic translations $(U(a)\psi)(x)=\mathrm{e}^{ia\cdot Sx}\psi(x-a)$, $S$
being a given $d\times d$ real matrix. The projection $\chi_{a}$ is the
characteristic function of the unit cube of $\mathbb{R}^{d}$ centered at
$a\in\mathbb{Z}^{d}$.
In our context natural $\ast$-derivations arise:
$\partial_{j}A:=i[x_{j},A],\;j=1,\cdots,d,\quad\nabla A=i[\mathbf{x},A].$
(3.21)
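The defining properties of a $\ast$-derivation from Subsection 2.4 (the Leibniz rule and compatibility with the adjoint) hold automatically for the commutator form (3.21); a finite-matrix check, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 5
X = np.diag(np.arange(N, dtype=float))   # position operator x_j (diagonal)
A = rng.standard_normal((N, N))
B = rng.standard_normal((N, N))

d = lambda M: 1j * (X @ M - M @ X)       # the derivation dA = i[x_j, A]

leibniz = np.allclose(d(A @ B), d(A) @ B + A @ d(B))   # d(AB) = d(A)B + A d(B)
star = np.allclose(d(A.conj().T), d(A).conj().T)       # d(A*) = d(A)*
print(leibniz, star)  # True True
```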
We shall now recall the material from [BoGKS, Section 4.3]. Proofs of
assertions are extensions of the arguments of [BoGKS] to the setting of
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$-spaces. We refer to [Do] for details.
We state the technical assumptions on our Hamiltonian of reference
$H_{\omega}$.
###### Assumption 3.2.
The ergodic Hamiltonian $\omega\mapsto H_{\omega}$ is a measurable map from
the probability space $(\Omega,\mathbb{P})$ to self-adjoint operators on
$\mathcal{H}$ such that
$H_{\omega}=H(\mathbf{A}_{\omega},V_{\omega})=\left(-i\nabla-\mathbf{A}_{\omega}\right)^{2}+V_{\omega}\;,$
(3.22)
almost surely, where $\mathbf{A}_{\omega}$ ($V_{\omega}$) are vector (scalar)
potential valued random variables which satisfy the Leinfelder-Simader
conditions (see Subsection 3.1) almost surely. It is furthermore assumed that
$H_{\omega}$ is covariant as in (2.5). We denote by $H$ the operator
$(H_{\omega})_{\omega\in\Omega}$ acting on $\tilde{\mathcal{H}}$.
As a consequence $\|f(H_{\omega})\|\leq\|f\|_{\infty}$ and
$f(H)\in\mathcal{K}_{\infty}$ for every bounded Borel function $f$ on the real
line. In particular $H$ is affiliated to $\mathcal{K}_{\infty}$. For
$\mathbb{P}$-a.e. $\omega$, let $U_{\omega}(t,s)$ be the unitary propagator
associated to $H_{\omega}$ and described in Subsection 3.1. Note that
$(U_{\omega}(t,s))_{\omega\in\Omega}\in\mathcal{K}_{\infty}$ (measurability in
$\omega$ follows by construction of $U_{\omega}(t,s)$, see [BoGKS]). For
$A\in\mathcal{M}(\mathcal{K}_{\infty})$ decomposable, let
$\displaystyle{\mathcal{U}}(t,s)(A):=\int_{\Omega}^{\oplus}U_{\omega}(t,s)A_{\omega}U_{\omega}(s,t)\,\mathrm{d}\mathbb{P}(\omega).$
(3.23)
Then ${\mathcal{U}}(t,s)$ extends to a linear operator on
$\mathcal{M}(\mathcal{K}_{\infty})$, leaving
$\mathcal{M}(\mathcal{K}_{\infty})$ and
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$, $p\in]0,\infty]$, invariant, with
$\displaystyle{\mathcal{U}}(t,r){\mathcal{U}}(r,s)$ $\displaystyle=$
$\displaystyle{\mathcal{U}}(t,s)\,,$ (3.24) $\displaystyle{\mathcal{U}}(t,t)$
$\displaystyle=$ $\displaystyle I\,,$ (3.25)
$\displaystyle\left\\{{\mathcal{U}}(t,s)(A)\right\\}^{\ast}$ $\displaystyle=$
$\displaystyle{\mathcal{U}}(t,s)(A^{\ast})\,.$ (3.26)
Moreover, ${\mathcal{U}}(t,s)$ is a unitary on
$\mathrm{L}^{2}(\mathcal{K}_{\infty})$ and an isometry on
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$, $p\in[1,\infty]$. In addition,
${\mathcal{U}}(t,s)$ is jointly strongly continuous in $t$ and $s$ on
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$, $p\in[1,\infty]$.
Pick $p>0$. Let $A\in\mathrm{L}^{p}(\mathcal{K}_{\infty})$ be such that
$H(r_{0})A$ and $AH(r_{0})$ are in $\mathrm{L}^{p}(\mathcal{K}_{\infty})$ for
some $r_{0}\in[-\infty,\infty)$. Then both maps
$r\mapsto{\mathcal{U}}(t,r)(A)\in\mathrm{L}^{p}(\mathcal{K}_{\infty})$ and
$t\mapsto{\mathcal{U}}(t,r)(A)\in\mathrm{L}^{p}(\mathcal{K}_{\infty})$ are
differentiable in $\mathrm{L}^{p}(\mathcal{K}_{\infty})$, with (recalling
Definition 2.8)
$\displaystyle i\partial_{r}\,{\mathcal{U}}(t,r)(A)$ $\displaystyle=$
$\displaystyle-{\mathcal{U}}(t,r)([H(r),A]).$ (3.27) $\displaystyle
i\partial_{t}\,{\mathcal{U}}(t,r)(A)$ $\displaystyle=$
$\displaystyle[H(t),{\mathcal{U}}(t,r)(A)].$ (3.28)
Moreover, for $t_{0}<\infty$ given, there exists $C=C(t_{0})<\infty$ such that
for all $t,r\leq t_{0}$,
$\displaystyle\|\left(H(t)+\gamma\right){\mathcal{U}}(t,r)(A)\|_{p}\leq
C\|(H(r)+\gamma)A\|_{p}\,,$ (3.29)
$\displaystyle\left\|[H(t),{\mathcal{U}}(t,r)(A)]\right\|_{p}\leq
C\left(\|(H(r)+\gamma)A\|_{p}+\|A(H(r)+\gamma)\|_{p}\right).$ (3.30)
We note that in order to apply the formulas above, in particular (3.27) and
(3.28), it is actually enough to check that $(H(r_{0})+\gamma)A$ and
$A(H(r_{0})+\gamma)$ are in $\mathrm{L}^{p}(\mathcal{K}_{\infty})$.
Whenever we want to keep track of the dependence of $U_{\omega}(t,s)$ on the
electric field $\mathbf{E}=\mathbf{E}_{\eta}(t)$, we shall write
$U_{\omega}(\mathbf{E},t,s)$. When $\mathbf{E}=0$, note that
$U_{\omega}(\mathbf{E}=0,t,s)=U^{(0)}_{\omega}(t-s):=\mathrm{e}^{-i(t-s)H_{\omega}}.$
(3.31)
For $A\in\mathcal{M}(\mathcal{K}_{\infty})$ decomposable, we let
$\mathcal{U}^{(0)}(r)(A):=\int_{\Omega}^{\oplus}U^{(0)}_{\omega}(r)A_{\omega}U^{(0)}_{\omega}(-r)\,\mathrm{d}\mathbb{P}(\omega).$
(3.32)
We still denote by $\mathcal{U}^{(0)}(r)(A)$ its extension to
$\mathcal{M}(\mathcal{K}_{\infty})$.
###### Proposition 3.3.
Let $p\geq 1$ be given. ${\mathcal{U}}^{(0)}(t)$ is a one-parameter group of
operators on $\mathcal{M}(\mathcal{K}_{\infty})$, leaving
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$ invariant. ${\mathcal{U}}^{(0)}(t)$ is
an isometry on $\mathrm{L}^{p}(\mathcal{K}_{\infty})$, and unitary if $p=2$.
It is strongly continuous on $\mathrm{L}^{p}(\mathcal{K}_{\infty})$. We
further denote by $\mathcal{L}_{p}$ the infinitesimal generator of
${\mathcal{U}}^{(0)}(t)$ in $\mathrm{L}^{p}(\mathcal{K}_{\infty})$:
$\mathcal{U}^{(0)}(t)=\mathrm{e}^{-it\mathcal{L}_{p}}\quad\mbox{for all
$t\in\mathbb{R}$}\,.$ (3.33)
The operator $\mathcal{L}_{p}$ is usually called the _Liouvillian_. Let
$\mathcal{D}^{(0)}_{p}=\left\\{A\in\mathrm{L}^{p}(\mathcal{K}_{\infty}),\;\;HA,AH\in\mathrm{L}^{p}(\mathcal{K}_{\infty})\right\\}.$
(3.34)
Then $\mathcal{D}^{(0)}_{p}$ is an operator core for $\mathcal{L}_{p}$ (note
that $\mathcal{L}_{2}$ is essentially self-adjoint on
$\mathcal{D}^{(0)}_{2}$), and
$\mathcal{L}_{p}(A)=[H,A]\quad\mbox{for all $A\in\mathcal{D}^{(0)}_{p}$}.$
(3.35)
Moreover, for every $B_{\omega}\in\mathcal{K}_{\infty}$ there exists a
sequence $B_{n,\omega}\in\mathcal{D}^{(0)}_{\infty}$ such that
$B_{n,\omega}\to B_{\omega}$ as a bounded and $\mathbb{P}$-a.e.-strong limit.
We finish this list of properties with the following lemma about gauge
transformations in spaces of measurable operators. The map
$\displaystyle\mathcal{G}(t)(A)=G(t)AG(t)^{*}\;,$ (3.36)
with
$G(t)=\mathrm{e}^{i\left(\int_{-\infty}^{t}\mathbf{E}_{\eta}(s)\,\mathrm{d}s\right)\cdot\mathbf{x}}$ as
in (3.7), is an isometry on $\mathrm{L}^{p}(\mathcal{K}_{\infty})$ for
$p\in]0,\infty]$.
###### Lemma 3.4.
For any $p\in]0,\infty]$, the map $\mathcal{G}(t)$ is strongly continuous on
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$, and
$\lim_{t\to-\infty}\mathcal{G}(t)=I\;\;\mbox{strongly}$ (3.37)
on $\mathrm{L}^{p}(\mathcal{K}_{\infty})$. Moreover, if
$A\in\mathcal{W}^{1,p}(\mathcal{K}_{\infty})$, then $\mathcal{G}(t)(A)$ is
continuously differentiable in $\mathrm{L}^{p}(\mathcal{K}_{\infty})$ with
$\partial_{t}\mathcal{G}(t)(A)={\mathbf{E}}_{\eta}(t)\cdot\nabla(\mathcal{G}(t)(A)).$
(3.38)
## 4\. Linear response theory and Kubo formula
### 4.1. Adiabatic switching of the electric field
We now fix an initial equilibrium state of the system, i.e., we specify a
density matrix ${\zeta}_{\omega}$ which is in equilibrium, so
$[H_{\omega},{\zeta}_{\omega}]=0$. For physical applications, we would
generally take ${\zeta}_{\omega}=f(H_{\omega})$ with $f$ the Fermi-Dirac
distribution at inverse temperature $\beta\in(0,\infty]$ and _Fermi energy_
$E_{F}\in\mathbb{R}$, i.e., $f(E)=\frac{1}{1+\mathrm{e}^{\beta(E-E_{F})}}$ if
$\beta<\infty$ and $f(E)=\chi_{(-\infty,E_{F}]}(E)$ if $\beta=\infty$;
explicitly
${\zeta}_{\omega}\ =\ \begin{cases}F^{(\beta,E_{F})}_{\omega}\ :=\
\frac{1}{1+\mathrm{e}^{\beta(H_{\omega}-E_{F})}}\,,&\beta<\infty\,,\\\
P^{(E_{F})}_{\omega}\ :=\
\chi_{(-\infty,E_{F}]}(H_{\omega})\,,&\beta=\infty\,.\end{cases}$ (4.1)
However we note that our analysis allows for fairly general functions $f$
[BoGKS]. We set
$\zeta=(\zeta_{\omega})_{\omega\in\Omega}\in\mathcal{K}_{\infty}$ but shall
also write $\zeta_{\omega}$ instead of $\zeta$. That $f$ is the Fermi-Dirac
distribution plays no role in the derivation of the linear response. However,
when computing the Hall conductivity itself (once the linear response has been
performed), we shall restrict our attention to the zero temperature case with
the _Fermi projection_ $P_{\omega}^{(E_{F})}$.
The system is described by the ergodic time dependent Hamiltonian
$H_{\omega}(t)$, as in (3.5). Assuming the system was in equilibrium at
$t=-\infty$ with the density matrix
$\varrho_{\omega}(-\infty)={\zeta}_{\omega}$, the time dependent density
matrix $\varrho_{\omega}(t)$ is the solution of the Cauchy problem for the
Liouville equation. Since we shall solve the evolution equation in
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$, we work with
$H(t)=(H_{\omega}(t))_{\omega\in\Omega}$, as in Assumption 3.2.
The electric field $\mathbf{E}_{\eta}(t)\cdot\mathbf{x}=\mathrm{e}^{\eta
t}\mathbf{E}(t)\cdot\mathbf{x}$ is switched on adiabatically between
$t=-\infty$ and $t=t_{0}$ (typically $t_{0}=0$). Depending on which
conductivity one is interested in, one may consider different forms for
$\mathbf{E}(t)$. In particular $\mathbf{E}(t)=\mathbf{E}$ leads to the direct
conductivity, while $\mathbf{E}(t)=\cos(\nu t)\mathbf{E}$ leads to the AC-
conductivity at frequency $\nu$ (the AC-conductivity may be better defined
using the form (4.25), as argued in [KLM]). The first case is relevant for
studying the quantum Hall effect (see Subsection 4.4), while the second enters
Mott’s formula [KLP, KLM].
We write
${\zeta}(t)=G(t){\zeta}G(t)^{*}=\mathcal{G}(t)({\zeta}),\quad\text{i.e.,}\quad{\zeta}(t)=f(H(t)).$
(4.2)
###### Theorem 4.1.
Let $\eta>0$ and assume that $\int_{-\infty}^{t}\mathrm{e}^{\eta
r}|\mathbf{E}(r)|\mathrm{d}r<\infty$ for all $t\in\mathbb{R}$. Let
$p\in[1,\infty[$. Assume that
$\zeta\in\mathcal{W}^{1,p}(\mathcal{K}_{\infty})$ and that
$\nabla\zeta\in\mathcal{D}_{p}^{(0)}$. The Cauchy problem
$\displaystyle\left\\{\begin{array}[]{l}i\partial_{t}\varrho(t)=[H(t),\varrho(t)]\\\
\lim_{t\to-\infty}\varrho(t)={\zeta}\end{array}\right.,$ (4.5)
has a unique solution in $\mathrm{L}^{p}(\mathcal{K}_{\infty})$, that is given
by
$\displaystyle\varrho(t)$ $\displaystyle=$
$\displaystyle\lim_{s\to-\infty}{\mathcal{U}}(t,s)\left({\zeta}\right)$ (4.6)
$\displaystyle=$
$\displaystyle\lim_{s\to-\infty}{\mathcal{U}}(t,s)\left({\zeta}(s)\right)$
(4.7) $\displaystyle=$
$\displaystyle{\zeta}(t)-\int_{-\infty}^{t}\mathrm{d}r\,\mathrm{e}^{\eta
r}{\mathcal{U}}(t,r)(\mathbf{E}(r)\cdot\nabla{\zeta}(r)).$ (4.8)
We also have
$\displaystyle\varrho(t)={\mathcal{U}}(t,s)(\varrho(s))\,,\;\;\|\varrho(t)\|_{p}=\|{\zeta}\|_{p}\,,$
(4.9)
for all $t,s$. Furthermore, $\varrho(t)$ is non-negative, and if ${\zeta}$ is
a projection, then so is $\varrho(t)$ for all $t$.
###### Remark 4.2.
If the initial state ${\zeta}$ is of the form (4.1), then the hypotheses of
Theorem 4.1 hold for any $p>0$, provided
${\zeta}_{\omega}=P^{(E_{F})}_{\omega}$ and $E_{F}$ lies in a region of
localization. This is true for suitable $\mathbf{A}_{\omega},V_{\omega}$ and
$E_{F}$, by the methods of, for example, [CH, W, GK1, GK2, GK3, BoGK, AENSS,
U, GrHK], for the models studied therein as well as in [CH, GK3]. The bound
$\mathbb{E}\||\mathbf{x}|\zeta_{\omega}\chi_{0}\|^{2}<\infty$ or equivalently
$\nabla\zeta\in\mathrm{L}^{2}(\mathcal{K}_{\infty})$ is actually sufficient
for our applications. For $p=1,2$, we refer to [BoGKS, Proposition 4.2] and
[BoGKS, Lemma 5.4] for the derivation of these hypotheses from known results.
###### Proof of Theorem 4.1.
Let us first define
$\varrho(t,s):={\mathcal{U}}(t,s)({\zeta}(s)).$ (4.10)
We get, as operators in $\mathcal{M}(\mathcal{K}_{\infty})$,
$\displaystyle\partial_{s}\varrho(t,s)$ $\displaystyle=$ $\displaystyle
i\mathcal{U}(t,s)\left([H(s),{\zeta}(s)]\right)+\mathcal{U}(t,s)\left({\mathbf{E}}_{\eta}(s)\cdot\nabla{\zeta}(s)\right)$
(4.11) $\displaystyle=$
$\displaystyle\mathcal{U}(t,s)\left({\mathbf{E}}_{\eta}(s)\cdot\nabla{\zeta}(s)\right)\,,$
where we used (3.27) and Lemma 3.4. As a consequence, with
$\mathbf{E}_{\eta}(r)=\mathrm{e}^{\eta r}\mathbf{E}(r)$,
$\varrho(t,t)-\varrho(t,s)=\int_{s}^{t}\mathrm{d}r\,\mathrm{e}^{\eta
r}{\mathcal{U}}(t,r)({\mathbf{E}}(r)\cdot\nabla{\zeta}(r)).$ (4.12)
Since $\|{\mathcal{U}}(t,r)({\mathbf{E}}(r)\cdot\nabla{\zeta}(r))\|_{p}\leq
c_{d}|{\mathbf{E}}(r)|\,\|\nabla{\zeta}\|_{p}<\infty$, the integral is
absolutely convergent by hypothesis on $\mathbf{E}_{\eta}(t)$, and the limit
as $s\to-\infty$ can be performed in $\mathrm{L}^{p}(\mathcal{K}_{\infty})$.
It yields the equality between (4.7) and (4.8). Equality of (4.6) and (4.7)
follows from Lemma 3.4 which gives
${\zeta}=\lim_{s\to-\infty}{\zeta}(s)\;\;\mbox{in
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$.}$ (4.13)
Since $\mathcal{U}(t,s)$ are isometries on
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$, it follows from (4.6) that
$\|\varrho(t)\|_{p}=\|{\zeta}\|_{p}$. We also get
$\varrho(t)=\varrho(t)^{\ast}$. Moreover, (4.6) with the limit in
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$ implies that $\varrho(t)$ is
nonnegative.
Furthermore, if ${\zeta}={\zeta}^{2}$ then $\varrho(t)$ can be seen to be a
projection as follows. Note that convergence in $\mathrm{L}^{p}$ implies
convergence in $\mathcal{M}(\mathcal{K}_{\infty})$, so that,
$\varrho(t)=\sideset{}{{}^{(\tau)}}{\lim}_{s\to-\infty}{\mathcal{U}}(t,s)\left({\zeta}\right)=\sideset{}{{}^{(\tau)}}{\lim}_{s\to-\infty}{\mathcal{U}}(t,s)\left({\zeta}\right){\mathcal{U}}(t,s)\left({\zeta}\right)\\\
=\left\\{\sideset{}{{}^{(\tau)}}{\lim}_{s\to-\infty}{\mathcal{U}}(t,s)\left({\zeta}\right)\right\\}\left\\{\sideset{}{{}^{(\tau)}}{\lim}_{s\to-\infty}{\mathcal{U}}(t,s)\left({\zeta}\right)\right\\}\
=\ \varrho(t)^{2}\,,$ (4.14)
where $\sideset{}{{}^{(\tau)}}{\lim}$ denotes the limit in the topological
algebra $\mathcal{M}(\mathcal{K}_{\infty})$.
To see that $\varrho(t)$ is a solution of (4.5) in
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$, we differentiate the expression (4.8)
using (3.28) and Lemma 3.4. We get
$\displaystyle i\partial_{t}\varrho(t)$ $\displaystyle=$
$\displaystyle-\int_{-\infty}^{t}\mathrm{d}r\,\mathrm{e}^{\eta
r}\left[H(t),{\mathcal{U}}(t,r)\left(\mathbf{E}(r)\cdot\nabla{\zeta}(r)\right)\right]$
(4.15) $\displaystyle=$
$\displaystyle-\left[H(t),\left\\{\int_{-\infty}^{t}\mathrm{d}r\,\mathrm{e}^{\eta
r}{\mathcal{U}}(t,r)\left(\mathbf{E}(r)\cdot\nabla{\zeta}(r)\right)\right\\}\right]$
$\displaystyle=$
$\displaystyle\left[H(t),\left\\{{\zeta}(t)-\int_{-\infty}^{t}\mathrm{d}r\,\mathrm{e}^{\eta
r}{\mathcal{U}}(t,r)\left(\mathbf{E}(r)\cdot\nabla{\zeta}(r)\right)\right\\}\right]$
$\displaystyle=$ $\displaystyle\left[H(t),\varrho(t)\right].$ (4.17)
The integral in (4.15) converges since, by (3.30),
$\|\left[H(t),{\mathcal{U}}(t,r)\left(\mathbf{E}(r)\cdot\nabla{\zeta}(r)\right)\right]\|_{p}\leq
2C\|(H+\gamma)(\mathbf{E}(r)\cdot\nabla{\zeta})\|_{p}.$ (4.18)
Then we justify going from (4.15) to the commutator form below it by inserting
a resolvent $(H(t)+\gamma)^{-1}$ and making use of (3.29).
It remains to show that the solution of (4.5) is unique in
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$. It suffices to show that if $\nu(t)$
is a solution of (4.5) with $\zeta=0$ then $\nu(t)=0$ for all $t$. We define
$\tilde{\nu}^{(s)}(t)=\mathcal{U}(s,t)(\nu(t))$ and proceed by duality. Since
$p\geq 1$, we pick $q$ such that $p^{-1}+q^{-1}=1$. If
$A\in\mathcal{D}_{q}^{(0)}$, we have, using Lemma 2.5,
$\displaystyle
i\partial_{t}\mathcal{T}\left\\{A\tilde{\nu}^{(s)}(t)\right\\}=i\partial_{t}\mathcal{T}\left\\{{\mathcal{U}}(t,s)(A){\nu}(t)\right\\}$
(4.19) $\displaystyle\quad=\mathcal{T}\left\\{[H(t),\mathcal{U}(t,s)(A)]\
{\nu}(t)\right\\}+\mathcal{T}\left\\{{\mathcal{U}}(t,s)(A)\mathcal{L}_{q}(t)(\nu(t))\right\\}$
$\displaystyle\quad=-\mathcal{T}\left\\{\mathcal{U}(t,s)(A)\mathcal{L}_{q}(t)({\nu}(t))\right\\}+\mathcal{T}\left\\{{\mathcal{U}}(t,s)(A)\
\mathcal{L}_{q}(t)(\nu(t))\right\\}=0\,.$
We conclude that for all $t$ and $A\in\mathcal{D}^{(0)}_{q}$ we have
$\mathcal{T}\left\\{A\tilde{\nu}^{(s)}(t)\right\\}=\mathcal{T}\left\\{A\tilde{\nu}^{(s)}(s)\right\\}=\mathcal{T}\left\\{A{\nu}(s)\right\\}.$
(4.20)
Thus $\tilde{\nu}^{(s)}(t)=\nu(s)$ by Lemma 2.4, that is,
$\nu(t)=\mathcal{U}(t,s)(\nu(s))$. Since by hypothesis
$\lim_{s\rightarrow-\infty}\nu(s)=0$, we obtain that $\nu(t)=0$ for all $t$. ∎
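To illustrate the structural assertions of Theorem 4.1 — the solution is obtained by unitary conjugation, preserves the norm, and stays a projection when ${\zeta}$ is one — one can integrate a finite-dimensional numerical caricature of the Cauchy problem (4.5). Everything in the sketch below (the matrix $H_{0}$ standing in for $H_{\omega}$, the “position” observable $X$, the switching profile) is our own toy assumption, not the paper’s continuum random setting:

```python
import numpy as np

# Toy finite-dimensional caricature of (4.5): H0 stands in for H_omega, X for the
# position observable, and E*exp(eta*t) is the adiabatically switched field.
rng = np.random.default_rng(0)
n, nfill, eta, E = 6, 3, 1.0, 0.1
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H0 = (A + A.conj().T) / 2.0                     # fixed Hermitian "Hamiltonian"
X = np.diag(np.arange(n, dtype=float))          # toy position observable

def H(t):
    # time-dependent Hamiltonian, field switched on as t runs from -infinity to 0
    return H0 + np.exp(eta * t) * E * X

_, v = np.linalg.eigh(H0)
zeta = v[:, :nfill] @ v[:, :nfill].conj().T     # "Fermi projection" onto the 3 lowest levels

def rhs(t, rho):
    # Liouville equation: i d(rho)/dt = [H(t), rho]
    return -1j * (H(t) @ rho - rho @ H(t))

steps = 2400
dt = 12.0 / steps
rho = zeta.copy()
for step in range(steps):                       # RK4 from t = -12 (essentially equilibrium) to t = 0
    t = -12.0 + step * dt
    k1 = rhs(t, rho)
    k2 = rhs(t + dt / 2, rho + dt / 2 * k1)
    k3 = rhs(t + dt / 2, rho + dt / 2 * k2)
    k4 = rhs(t + dt, rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Up to the RK4 discretization error, the evolved state at $t=0$ remains Hermitian and idempotent with constant trace, in line with (4.9) and the final assertion of Theorem 4.1.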
### 4.2. The current and the conductivity
The velocity operator $\mathbf{v}$ is defined as
$\mathbf{v}=\mathbf{v}(\mathbf{A})=2\mathbf{D}(\mathbf{A}),$ (4.21)
where $\mathbf{D}=\mathbf{D}(\mathbf{A})$ is defined below (3.6). Recall that
$\mathbf{v}=2(-i\nabla-\mathbf{A})=i[H,\mathbf{x}]$ on
$C_{\mathrm{c}}^{\infty}(\mathbb{R}^{d})$. We also set
$\mathbf{D}(t)=\mathbf{D}(\mathbf{A}+\mathbf{F}_{\eta}(t))$ as in (3.13), and
$\mathbf{v}(t)=2\mathbf{D}(t)$.
From now on $\varrho(t)$ will denote the unique solution to (4.5), given
explicitly in (4.8). If
$H(t)\varrho(t)\in\mathrm{L}^{p}(\mathcal{K}_{\infty})$ then clearly
$\mathbf{D}_{j}(t)\varrho(t)$ can be defined as well by
$\displaystyle\mathbf{D}_{j}(t)\varrho(t)=\left(\mathbf{D}_{j}(t)(H(t)+\gamma)^{-1}\right)\left((H(t)+\gamma)\varrho(t)\right),$
(4.22)
since $\mathbf{D}_{j}(t)(H(t)+\gamma)^{-1}\in\mathcal{K}_{\infty}$, and thus
$\mathbf{D}_{j}(t)\varrho(t)\in\mathrm{L}^{p}(\mathcal{K}_{\infty})$.
###### Definition 4.3.
Starting with a system in equilibrium in state ${\zeta}$, the net current (per
unit volume), $\mathbf{J}(\eta,\mathbf{E};{\zeta},t_{0})\in\mathbb{R}^{d}$,
generated by switching on an electric field $\mathbf{E}$ adiabatically at rate
$\eta>0$ between time $-\infty$ and time $t_{0}$, is defined as
$\mathbf{J}(\eta,\mathbf{E};{\zeta},t_{0})=\mathcal{T}\left(\mathbf{v}(t_{0})\varrho(t_{0})\right)-\mathcal{T}\left(\mathbf{v}{\zeta}\right).$
(4.23)
As is well known, the current is null at equilibrium:
###### Lemma 4.4.
One has $\mathcal{T}(\mathbf{D}_{j}\zeta)=0$ for all $j=1,\cdots,d$, and thus
$\mathcal{T}\left(\mathbf{v}\zeta\right)=0$.
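The identity in Lemma 4.4 can be motivated by the following formal computation, which ignores domain questions (the position operator $x_{j}$ is not affiliated to $\mathcal{K}_{\infty}$, so a rigorous proof requires regularizing $\mathbf{D}_{j}$, in the spirit of the proof of Theorem 4.8 below). Writing $2\mathbf{D}_{j}=i[H,x_{j}]$, as recalled above, and $\zeta=f(H)$,
$\mathcal{T}\left(2\mathbf{D}_{j}\zeta\right)=i\,\mathcal{T}\left(Hx_{j}f(H)\right)-i\,\mathcal{T}\left(x_{j}f(H)H\right)=0\,,$
by the centrality of $\mathcal{T}$ and the fact that $f(H)$ commutes with $H$.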
Throughout the rest of this paper, we shall assume that the electric field has
the form
$\mathbf{E}(t)=\mathcal{E}(t)\mathbf{E},$ (4.24)
where $\mathbf{E}\in\mathbb{C}^{d}$ gives the intensity of the electric field
in each direction, while $|\mathcal{E}(t)|=\mathcal{O}(1)$ modulates this
intensity as time varies. As pointed out above, the two cases of particular
interest are $\mathcal{E}(t)=1$ and $\mathcal{E}(t)=\cos(\nu t)$. We may
however, as in [KLM], use the more general form
$\mathcal{E}(t)=\int_{\mathbb{R}}\cos(\nu
t)\hat{\mathcal{E}}(\nu)\mathrm{d}\nu,$ (4.25)
for suitable $\hat{\mathcal{E}}(\nu)$ (see [KLM]).
It is useful to rewrite the current (4.23), using (4.8) and Lemma 4.4, as
$\displaystyle\mathbf{J}(\eta,\mathbf{E};{\zeta},t_{0})$ $\displaystyle=$
$\displaystyle\mathcal{T}\left\\{2\mathbf{D}(0)\left(\varrho(t_{0})-{\zeta}(t_{0})\right)\right\\}$
$\displaystyle=$
$\displaystyle-\mathcal{T}\left\\{2\int_{-\infty}^{t_{0}}\mathrm{d}r\,\mathrm{e}^{\eta
r}\mathbf{D}(0)\,{\mathcal{U}}(t_{0},r)\left(\mathbf{E}(r)\cdot\nabla{\zeta}(r)\right)\right\\}$
$\displaystyle=$
$\displaystyle-\mathcal{T}\left\\{2\int_{-\infty}^{t_{0}}\mathrm{d}r\,\mathrm{e}^{\eta
r}\mathcal{E}(r)\mathbf{D}(0)\,{\mathcal{U}}(t_{0},r)\left(\mathbf{E}\cdot\nabla{\zeta}(r)\right)\right\\}.$ (4.26)
The conductivity tensor $\sigma(\eta;{\zeta},t_{0})$ is defined as the
derivative of the function
$\mathbf{J}(\eta,\mathbf{E};{\zeta},t_{0})\colon\mathbb{R}^{d}\to\mathbb{R}^{d}$
at $\mathbf{E}=0$. Note that $\sigma(\eta;{\zeta},t_{0})$ is a $d\times d$
matrix $\left\\{\sigma_{jk}(\eta;{\zeta},t_{0})\right\\}$.
###### Definition 4.5.
For $\eta>0$ and $t_{0}\in\mathbb{R}$, the conductivity tensor
$\sigma(\eta;{\zeta},t_{0})$ is defined as
$\sigma(\eta;{\zeta},t_{0})=\partial_{\mathbf{E}}(\mathbf{J}(\eta,\mathbf{E};{\zeta},t_{0}))_{\mid\mathbf{E}=0}\,,$
(4.27)
if it exists. The conductivity tensor $\sigma({\zeta},t_{0})$ is defined by
$\sigma(\zeta,t_{0}):=\lim_{\eta\downarrow 0}\sigma(\eta;\zeta,t_{0})\,,$
(4.28)
whenever the limit exists.
### 4.3. Computing the linear response: a Kubo formula for the conductivity
The next theorem gives a “Kubo formula” for the conductivity at positive
adiabatic parameter.
###### Theorem 4.6.
Let $\eta>0$. Under the hypotheses of Theorem 4.1 for $p=1$, the current
$\mathbf{J}(\eta,\mathbf{E};\zeta,t_{0})$ is differentiable with respect to
$\mathbf{E}$ at $\mathbf{E}=0$, and the derivative $\sigma(\eta;\zeta,t_{0})$
is given by
$\displaystyle\sigma_{jk}(\eta;\zeta,t_{0})$ $\displaystyle=$
$\displaystyle\,-\mathcal{T}\left\\{2\int_{-\infty}^{t_{0}}\mathrm{d}r\,\mathrm{e}^{\eta
r}\mathcal{E}(r)\mathbf{D}_{j}\,\mathcal{U}^{(0)}(t_{0}-r)\left(\partial_{k}({\zeta})\right)\right\\}.$
(4.29)
The analogue of [BES, Eq. (41)] and [SB2, Theorem 1] then holds:
###### Corollary 4.7.
Assume that $\mathcal{E}(t)=\Re\mathrm{e}^{i\nu t}$, $\nu\in\mathbb{R}$. Then
the conductivity $\sigma_{jk}(\eta;\zeta;\nu;0)$ at frequency $\nu$ is given
by
$\displaystyle\sigma_{jk}(\eta;\zeta;\nu;0)$ $\displaystyle=$
$\displaystyle\,-\mathcal{T}\left\\{2\mathbf{D}_{j}\,(i\mathcal{L}_{1}+\eta+i\nu)^{-1}\left(\partial_{k}\zeta\right)\right\\}\,.$
(4.30)
###### Proof of Corollary 4.7.
Recall (4.1), in particular $\zeta=\zeta^{\frac{1}{2}}\zeta^{\frac{1}{2}}$.
It follows that $\sigma(\eta;\nu;\zeta;0)$ in (4.27) is real (for arbitrary
$\zeta=f(H)$ write $f=f_{+}-f_{-}$). As a consequence,
$\displaystyle\sigma(\eta;\nu;\zeta;0)=-\Re\mathcal{T}\left\\{2\int_{-\infty}^{t_{0}}\mathrm{d}r\,\mathrm{e}^{\eta
r}\mathrm{e}^{i\nu
r}\mathbf{D}_{j}\,\mathcal{U}^{(0)}(t_{0}-r)\left(\partial_{k}({\zeta})\right)\right\\}.$
(4.31)
Integrating over $r$ yields the result. ∎
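For completeness, the integration over $r$ amounts to the following formal computation (take $t_{0}=0$, recall from (3.33) that $\mathcal{U}^{(0)}(-r)=\mathrm{e}^{ir\mathcal{L}_{1}}$, and note that $\eta>0$ makes the integral convergent):
$\int_{-\infty}^{0}\mathrm{e}^{\eta r}\mathrm{e}^{i\nu r}\,\mathcal{U}^{(0)}(-r)\,\mathrm{d}r=\int_{-\infty}^{0}\mathrm{e}^{r(\eta+i\nu+i\mathcal{L}_{1})}\,\mathrm{d}r=\left(i\mathcal{L}_{1}+\eta+i\nu\right)^{-1}\,,$
which, applied to $\partial_{k}\zeta$, yields (4.30).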
###### Proof of Theorem 4.6.
For clarity, in this proof we display the argument $\mathbf{E}$ in all
functions which depend on $\mathbf{E}$. From the expression for the current
derived at the end of Subsection 4.2 and
$\mathbf{J}_{j}(\eta,0;\zeta,t_{0})=0$ (Lemma 4.4), we have
$\sigma_{jk}(\eta;\zeta,t_{0})=-\lim_{\mathbf{E}\to
0}2\mathcal{T}\left\\{\int_{-\infty}^{t_{0}}\mathrm{d}r\,\mathrm{e}^{\eta
r}\mathcal{E}(r)\mathbf{D}_{\mathbf{E},j}(0)\mathcal{U}(\mathbf{E},0,r)\left(\partial_{k}\zeta(\mathbf{E},r)\right)\right\\}.$
(4.32)
We first show that integration and the limit $\mathbf{E}\rightarrow 0$ can be
interchanged, giving
$\sigma_{jk}(\eta;\zeta,t_{0})\ =\
-2\int_{-\infty}^{t_{0}}\mathrm{d}r\,\mathrm{e}^{\eta
r}\mathcal{E}(r)\lim_{{E}\to
0}\mathcal{T}\left\\{\mathbf{D}_{j}({\mathbf{E}},0){\mathcal{U}}({\mathbf{E}},0,r)\left(\partial_{k}{\zeta}({\mathbf{E}},r)\right)\right\\}\,.$
(4.33)
The latter can easily be seen by inserting a resolvent $(H(t)+\gamma)^{-1}$
and making use of (3.29), the fact that
$H\nabla\zeta\in\mathrm{L}^{1}(\mathcal{K}_{\infty})$, the inequality
$|\mathcal{T}(A)|\leq\mathcal{T}(|A|)$, and dominated convergence.
Next, we note that for any $s$ we have
$\displaystyle\lim_{{E}\to 0}\mathcal{G}({\mathbf{E}},s)=I\;\;\mbox{strongly
in $\mathrm{L}^{1}(\mathcal{K}_{\infty})$}\,,$ (4.34)
which can be proven by an argument similar to the one used to prove Lemma 3.4.
Along the same lines, for $B\in\mathcal{K}_{\infty}$ we have
$\displaystyle\lim_{{E}\to
0}\mathcal{G}({\mathbf{E}},s)(B_{\omega})=B_{\omega}\;\;\mbox{strongly in
$\mathcal{H}$, with
$\|\mathcal{G}({\mathbf{E}},s)(B)\|_{\infty}=\|B\|_{\infty}$}\,.$ (4.35)
Recalling that
$\mathbf{D}_{j,\omega}({\mathbf{E}},0)=\mathbf{D}_{j,\omega}-\mathbf{F}_{j}(0)$
and that
$\|\partial_{k}{\zeta}({\mathbf{E}},r)\|_{1}=\|\partial_{k}{\zeta}\|_{1}<\infty$,
using Lemma 2.6,
$\displaystyle\lim_{{E}\to 0}$
$\displaystyle\mathcal{T}\left\\{\mathbf{D}_{j}({\mathbf{E}},0){\mathcal{U}}({\mathbf{E}},0,r)\left(\partial_{k}{\zeta}({\mathbf{E}},r)\right)\right\\}=\lim_{{\mathbf{E}}\to
0}\mathcal{T}\left\\{\mathbf{D}_{j}U({\mathbf{E}},0,r)(\partial_{k}{\zeta})U({\mathbf{E}},r,0)\right\\}$
$\displaystyle=\lim_{{\mathbf{E}}\to
0}\mathcal{T}\left\\{\mathbf{D}_{j}U({\mathbf{E}},0,r)(\partial_{k}{\zeta})U^{(0)}(r)\right\\},$
(4.36)
where we have inserted (and removed) the resolvents
$(H(\mathbf{E},r)+\gamma)^{-1}$ and $(H+\gamma)^{-1}$.
To proceed it is convenient to introduce a cutoff so that we can deal with
$\mathbf{D}_{j}$ as if it were in $\mathcal{K}_{\infty}$. Thus we pick
$f_{n}\in C^{\infty}_{c}(\mathbb{R})$, real valued, $|f_{n}|\leq 1$, $f_{n}=1$
on $[-n,n]$, so that $f_{n}(H)$ converges strongly to $1$. Using Lemma 2.6, we
have
$\displaystyle\mathcal{T}\left\\{\mathbf{D}_{j}U({\mathbf{E}},0,r)(\partial_{k}{\zeta})U^{(0)}(r)\right\\}=\lim_{n\to\infty}\mathcal{T}\left\\{f_{n}(H)\mathbf{D}_{j}U({\mathbf{E}},0,r)(\partial_{k}{\zeta})U^{(0)}(r)\right\\}$
$\displaystyle=\lim_{n\to\infty}\mathcal{T}\left\\{U({\mathbf{E}},0,r)\left((\partial_{k}{\zeta})(H+\gamma)\right)U^{(0)}(r)(H+\gamma)^{-1}f_{n}(H)\mathbf{D}_{j}\right\\}$
$\displaystyle=\mathcal{T}\left\\{U({\mathbf{E}},0,r)\left((\partial_{k}{\zeta})(H+\gamma)\right)\left(U^{(0)}(r)(H+\gamma)^{-1}\mathbf{D}_{j}\right)\right\\}\;,$
(4.37)
where we used the centrality of the trace, the fact that $(H+\gamma)^{-1}$
commutes with $U^{(0)}$, and then that
$(H+\gamma)^{-1}\mathbf{D}_{j}\in\mathcal{K}_{\infty}$ in order to remove the
limit $n\to\infty$. Finally, combining (4.36) and (4.37), we get
$\displaystyle\lim_{{E}\to 0}$
$\displaystyle\mathcal{T}\left\\{\mathbf{D}_{j}({\mathbf{E}},0){\mathcal{U}}({\mathbf{E}},0,r)\left(\partial_{k}{\zeta}({\mathbf{E}},r)\right)\right\\}$
(4.38) $\displaystyle=\ \mathcal{T}\left\\{U^{(0)}(-r)\
\left((\partial_{k}{\zeta})(H+\gamma)\right)U^{(0)}(r)(H+\gamma)^{-1}\mathbf{D}_{j}\right\\}$
$\displaystyle=\mathcal{T}\left\\{\mathbf{D}_{j}{\mathcal{U}}^{(0)}(-r)(\partial_{k}{\zeta})\right\\}.$
(4.39)
The Kubo formula (4.29) now follows from (4.33) and (4.39). ∎
### 4.4. The Kubo-Středa formula for the Hall conductivity
Following [BES, AG], we now recover the well-known Kubo-Středa formula for
the Hall conductivity at zero temperature (see however Remark 4.11 for the
AC-conductivity). To that aim we consider the case $\mathcal{E}(t)=1$ and
$t_{0}=0$. Recall Definition 4.5. We write
$\sigma_{j,k}^{(E_{F})}=\sigma_{j,k}(P^{(E_{F})},0)\;,\text{ and }\
\sigma_{j,k}^{(E_{F})}(\eta)=\sigma_{j,k}(\eta;P^{(E_{F})},0)\;.$ (4.40)
###### Theorem 4.8.
Take $\mathcal{E}(t)=1$ and $t_{0}=0$. If ${\zeta}=P^{(E_{F})}$ is a Fermi
projection satisfying the hypotheses of Theorem 4.1 with $p=2$, we have
$\displaystyle\sigma_{j,k}^{(E_{F})}=-i\mathcal{T}\left\\{P^{(E_{F})}\left[\partial_{j}P^{(E_{F})},\partial_{k}P^{(E_{F})}\right]\right\\},$
(4.41)
for all $j,k=1,2,\ldots,d$. As a consequence, the conductivity tensor is
antisymmetric; in particular the direct conductivity vanishes,
$\sigma_{jj}^{(E_{F})}=0$ for $j=1,2,\ldots,d$. Note that, if the system is
time-reversible, the off-diagonal elements are zero in the region of
localization, as expected.
###### Corollary 4.9.
Under the assumptions of Theorem 4.8, if $\mathbf{A}=0$ (no magnetic field),
we have $\sigma_{j,k}^{(E_{F})}=0$ for all $j,k=1,2,\ldots,d$.
We now state the following crucial lemma for computing the Kubo-Středa
formula, which already appears in [BES] (and then in [AG]).
###### Lemma 4.10.
Let $P\in\mathcal{K}_{\infty}$ be a projection such that
$\partial_{k}P\in\mathrm{L}^{p}(\mathcal{K}_{\infty})$. Then, as operators in
$\mathcal{M}(\mathcal{K}_{\infty})$ (and thus in
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$),
$\partial_{k}P=\left[P,[P,\partial_{k}P]\right].$ (4.42)
###### Proof.
Note that $\partial_{k}P=\partial_{k}P^{2}=P\partial_{k}P+(\partial_{k}P)P$ so
that multiplying left and right both sides by $P$ implies that
$P(\partial_{k}P)P=0$. We then have, in
$\mathrm{L}^{p}(\mathcal{K}_{\infty})$,
$\displaystyle\partial_{k}P$ $\displaystyle=$ $\displaystyle
P\partial_{k}P+(\partial_{k}P)P=P\partial_{k}P+(\partial_{k}P)P-2P(\partial_{k}P)P$
$\displaystyle=$ $\displaystyle P(\partial_{k}P)(1-P)+(1-P)(\partial_{k}P)P$
$\displaystyle=$ $\displaystyle\left[P,[P,\partial_{k}P]\right].$
∎
Note that Lemma 4.10 relies heavily on the fact that $P$ is a projection. We
shall apply it to the situation of zero temperature, i.e., when the initial
density matrix is the orthogonal projection $P^{(E_{F})}$. The argument would
not go through at positive temperature.
###### Proof of Theorem 4.8.
We again regularize the velocity $\mathbf{D}_{j,\omega}$ with a smooth
function $f_{n}\in\mathcal{C}^{\infty}_{c}(\mathbb{R})$, $|f_{n}|\leq 1$,
$f_{n}=1$ on $[-n,n]$, but this time we also require that $f_{n}=0$ outside
$[-n-1,n+1]$, so that $f_{n}\chi_{[-n-1,n+1]}=f_{n}$. Thus
$\mathbf{D}_{j}f_{n}(H)\in\mathrm{L}^{p,o}(\mathcal{K}_{\infty})$,
$0<p\leq\infty$. Moreover
$f_{n}(H)(2\mathbf{D}_{j})f_{n}(H)=f_{n}(H)P_{n}(2\mathbf{D}_{j})P_{n}f_{n}(H)=-f_{n}(H)\partial_{j}(P_{n}H)f_{n}(H)$
(4.43)
where $P_{n}=P_{n}^{2}=\chi_{[-n-1,n+1]}(H)$, so that $HP_{n}$ is a bounded
operator. We have, using the centrality of the trace $\mathcal{T}$, that
$\displaystyle\widetilde{\sigma}_{jk}^{(E_{F})}(r)$ $\displaystyle:=$
$\displaystyle-\mathcal{T}\left\\{2\mathbf{D}_{j,\omega}\mathcal{U}^{(0)}(-r)(\partial_{k}P^{(E_{F})})\right\\}$
(4.44) $\displaystyle=$
$\displaystyle-\lim_{n\to\infty}\mathcal{T}\left\\{\mathcal{U}^{(0)}(r)(f_{n}(H)2\mathbf{D}_{j,\omega}f_{n}(H))\partial_{k}P^{(E_{F})}\right\\}.$
(4.45)
Using Lemma 2.9 and applying Lemma 4.10 to $P=P^{(E_{F})}$, it follows
that
$\displaystyle\mathcal{T}\left\\{{\mathcal{U}^{(0)}}(r)(f_{n}(H)2\mathbf{D}_{j}f_{n}(H))\partial_{k}P^{(E_{F})}\right\\}$
$\displaystyle=$
$\displaystyle\mathcal{T}\left\\{{\mathcal{U}^{(0)}}(r)(f_{n}(H)2\mathbf{D}_{j}f_{n}(H))\left[P^{(E_{F})},\left[P^{(E_{F})},\partial_{k}P^{(E_{F})}\right]\right]\right\\}$
$\displaystyle=$
$\displaystyle\mathcal{T}\left\\{{\mathcal{U}^{(0)}}(r)\left(\left[P^{(E_{F})},\left[P^{(E_{F})},f_{n}(H)2\mathbf{D}_{j}f_{n}(H)\right]\right]\right)\partial_{k}P^{(E_{F})}\right\\},$
$\displaystyle=$
$\displaystyle-\mathcal{T}\left\\{{\mathcal{U}^{(0)}}(r)\left(\left[P^{(E_{F})},f_{n}(H)\left[P^{(E_{F})},\partial_{j}(HP_{n})\right]f_{n}(H)\right]\right)\partial_{k}P^{(E_{F})}\right\\},$
where we used that $P^{(E_{F})}$ commutes with $U^{(0)}$ and $f_{n}(H)$, and
(4.43). Now, as elements in $\mathcal{M}(\mathcal{K}_{\infty})$,
$\left[P^{(E_{F})},\partial_{j}HP_{n}\right]=\left[HP_{n},\partial_{j}P^{(E_{F})}\right].$
(4.47)
Since $[H,\partial_{j}P^{(E_{F})}]$ is well defined by hypothesis,
$f_{n}(H)\left[HP_{n},\partial_{j}P^{(E_{F})}\right]f_{n}(H)$ converges in
$\mathrm{L}^{p}$ to the latter as $n$ goes to infinity. Combining (4.45), the
computation above, and (4.47), we get, after taking $n\rightarrow\infty$,
$\widetilde{\sigma}_{jk}^{(E_{F})}(r)\ =\
-\mathcal{T}\left\\{{\mathcal{U}^{(0)}}(r)\left(\left[P^{(E_{F})},\left[H,\partial_{j}P^{(E_{F})}\right]\right]\right)\partial_{k}P^{(E_{F})}\right\\}\,.$
(4.48)
Next,
$\displaystyle\left[P^{(E_{F})},\left[H,\partial_{j}P^{(E_{F})}\right]\right]=\left[H,\left[P^{(E_{F})},\partial_{j}P^{(E_{F})}\right]\right],$
(4.49)
so that, recalling Proposition 3.3,
$\displaystyle\widetilde{\sigma}_{jk}^{(E_{F})}(r)$ $\displaystyle=$
$\displaystyle-\mathcal{T}\left\\{\mathcal{U}^{(0)}(r)\left(\left[H,\left[P^{(E_{F})},\partial_{j}P^{(E_{F})}\right]\right]\right)\partial_{k}P^{(E_{F})}\right\\}$
(4.50) $\displaystyle=$
$\displaystyle-\left\langle\mathrm{e}^{-ir\mathcal{L}}{\mathcal{L}_{2}}\left(\left[P^{(E_{F})},\partial_{j}P^{(E_{F})}\right]\right),\partial_{k}P^{(E_{F})}\right\rangle_{\mathrm{L}^{2}},$
where $\langle A,B\rangle_{\mathrm{L}^{2}}=\mathcal{T}(A^{\ast}B)$. Combining
(4.29), (4.44), and (4.50), we get
$\sigma_{jk}^{(E_{F})}(\eta)=-\left\langle
i\left(\mathcal{L}_{2}+i\eta\right)^{-1}{\mathcal{L}_{2}}\left(\left[P^{(E_{F})},\partial_{j}P^{(E_{F})}\right]\right),\partial_{k}P^{(E_{F})}\right\rangle_{\mathrm{L}^{2}}.$
(4.51)
It follows from the spectral theorem (applied to $\mathcal{L}_{2}$) that
$\lim_{\eta\to
0}\left(\mathcal{L}_{2}+i\eta\right)^{-1}{\mathcal{L}_{2}}=P_{(\mathrm{Ker}\,\mathcal{L}_{2})^{\perp}}\;\;\mbox{strongly
in $\mathrm{L}^{2}(\mathcal{K}_{\infty})$}\,,$ (4.52)
where $P_{(\mathrm{Ker}\,\mathcal{L}_{2})^{\perp}}$ is the orthogonal
projection onto ${(\mathrm{Ker}\,\mathcal{L}_{2})^{\perp}}$. Moreover, as in
[BoGKS] one can prove that
$\left[P^{(E_{F})},\partial_{j}P^{(E_{F})}\right]\in{(\mathrm{Ker}\,\mathcal{L}_{2})^{\perp}}\,.$
(4.53)
Combining (4.51), (4.52), (4.53), and Lemma 2.9, we get
$\sigma_{j,k}^{(E_{F})}=i\left\langle\left[P^{(E_{F})},\partial_{j}P^{(E_{F})}\right],\partial_{k}P^{(E_{F})}\right\rangle_{\mathrm{L}^{2}}=-i\mathcal{T}\left\\{P^{(E_{F})}\left[\partial_{j}P^{(E_{F})},\partial_{k}P^{(E_{F})}\right]\right\\}\,,$
which is just (4.41). ∎
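We note that the strong limit (4.52) used above is an instance of the spectral theorem combined with dominated convergence: with $g_{\eta}(\lambda)=\lambda/(\lambda+i\eta)$ one has
$g_{\eta}(\lambda)\to\chi_{\mathbb{R}\setminus\\{0\\}}(\lambda)\;\;\mbox{pointwise as $\eta\downarrow 0$},\qquad|g_{\eta}(\lambda)|\leq 1\,,$
so that, $\mathcal{L}_{2}$ being self-adjoint, $g_{\eta}(\mathcal{L}_{2})A\to P_{(\mathrm{Ker}\,\mathcal{L}_{2})^{\perp}}A$ in $\mathrm{L}^{2}(\mathcal{K}_{\infty})$ for every $A$.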
###### Remark 4.11.
If one is interested in the AC-conductivity, then the proof above is valid up
to (4.51). In particular, with $\mathcal{E}(t)=\Re\mathrm{e}^{i\nu t}$, one
obtains
$\sigma_{jk}^{(E_{F})}(\eta)=-\Re\left\langle
i\left(\mathcal{L}_{2}+\nu+i\eta\right)^{-1}{\mathcal{L}_{2}}\left(\left[P^{(E_{F})},\partial_{j}P^{(E_{F})}\right]\right),\partial_{k}P^{(E_{F})}\right\rangle_{\mathrm{L}^{2}}.$
(4.54)
The limit $\eta\to 0$ can still be performed as in [KLM, Corollary 3.4]. It is
the main achievement of [KLM] to be able to investigate the behaviour of this
limit as $\nu\to 0$ in connection with Mott’s formula.
## References
* [AENSS] Aizenman, M., Elgart, A., Naboko, S., Schenker, J.H., Stolz, G.: Moment Analysis for Localization in Random Schrödinger Operators. 2003 Preprint, math-ph/0308023.
* [AG] Aizenman, M., Graf, G.M.: Localization bounds for an electron gas, J. Phys. A: Math. Gen. 31, 6783-6806, (1998).
* [AvSS] Avron, J., Seiler, R., Simon, B.: Charge deficiency, charge transport and comparison of dimensions. Comm. Math. Phys. 159, 399-422 (1994).
* [B] Bellissard, J.: Ordinary quantum Hall effect and noncommutative cohomology. In Localization in disordered systems (Bad Schandau, 1986), pp. 61-74. Teubner-Texte Phys. 16, Teubner, 1988.
* [BES] Bellissard, J., van Elst, A., Schulz-Baldes, H.: The non commutative geometry of the quantum Hall effect. J. Math. Phys. 35, 5373-5451 (1994).
* [BH] Bellissard, J., Hislop, P.: Smoothness of correlations in the Anderson model at strong disorder. Ann. Henri Poincaré 8, 1-26 (2007).
* [BoGK] Bouclet, J.M., Germinet, F., Klein, A.: Sub-exponential decay of operator kernels for functions of generalized Schrödinger operators. Proc. Amer. Math. Soc. 132 , 2703-2712 (2004).
* [BoGKS] Bouclet, J.M., Germinet, F., Klein, A., Schenker, J.: Linear response theory for magnetic Schrödinger operators in disordered media. J. Funct. Anal. 226, 301-372 (2005).
* [CH] Combes, J.M., Hislop, P.D.: Landau Hamiltonians with random potentials: localization and the density of states. Commun. Math. Phys. 177, 603-629 (1996).
* [CGH] Combes, J.M., Germinet, F., Hislop, P.D.: Conductivity and current-current correlation measure. In preparation.
* [CoJM] Cornean, H.D., Jensen, A., Moldoveanu, V.: The Landauer-Büttiker formula and resonant quantum transport. Mathematical physics of quantum mechanics, 45-53, Lecture Notes in Phys., 690, Springer, Berlin, 2006.
* [CoNP] Cornean, H.D., Nenciu, G., Pedersen, T.: The Faraday effect revisited: general theory. J. Math. Phys. 47 (2006), no. 1, 013511, 23 pp.
* [D] Dixmier, J.: Les algèbres d’opérateurs dans l’espace Hilbertien (algèbres de von Neumann), Gauthier-Villars 1969 and Gabay 1996.
* [Do] N. Dombrowski. PhD Thesis. In preparation.
* [ES] Elgart, A. , Schlein, B.: Adiabatic charge transport and the Kubo formula for Landau Type Hamiltonians. Comm. Pure Appl. Math. 57, 590-615 (2004).
* [FS] Fröhlich, J., Spencer, T.: Absence of diffusion in the Anderson tight binding model for large disorder or low energy. Commun. Math. Phys. 88, 151-184 (1983).
* [Geo] Georgescu, V.: Private communication.
* [GK1] Germinet, F., Klein, A.: Bootstrap Multiscale Analysis and Localization in Random Media. Commun. Math. Phys. 222, 415-448 (2001).
* [GK2] Germinet, F., Klein, A.: Operator kernel estimates for functions of generalized Schrödinger operators. Proc. Amer. Math. Soc. 131, 911-920 (2003).
* [GK3] Germinet, F, Klein, A.: Explicit finite volume criteria for localization in continuous random media and applications. Geom. Funct. Anal. 13, 1201-1238 (2003).
* [GrHK] Ghribi, F., Hislop, P., Klopp, F.: Localization for Schrödinger operators with random vector potentials. Contemp. Math. To appear.
* [KLP] Kirsch, W., Lenoble, O., Pastur, L.: On the Mott formula for the ac conductivity and binary correlators in the strong localization regime of disordered systems. J. Phys. A 36, 12157-12180 (2003).
* [KLM] Klein, A., Lenoble, O., Müller, P.: On Mott’s formula for the AC-conductivity in the Anderson model. Annals of Math. To appear.
* [KM] Klein, A, Müller, P.: The conductivity measure for the Anderson model. In preparation.
* [Ku] Kunz, H.: The Quantum Hall Effect for Electrons in a Random Potential. Commun. Math. Phys. 112, 121-145 (1987).
* [LS] Leinfelder, H., Simader, C.G.: Schrödinger operators with singular magnetic potentials. Math. Z. 176, 1-19 (1981).
* [MD] Mott, N.F., Davis, E.A.: Electronic processes in non-crystalline materials. Oxford: Clarendon Press 1971.
* [NB] Nakamura, S., Bellissard, J.: Low Energy Bands do not Contribute to Quantum Hall Effect. Commun. Math. Phys. 131, 283-305 (1990).
* [Na] Nakano, F.: Absence of transport in Anderson localization. Rev. Math. Phys. 14, 375-407 (2002).
* [P] Pastur, L.: Spectral properties of disordered systems in the one-body approximation. Comm. Math. Phys. 75, 179-196 (1980).
* [PF] Pastur, L., Figotin, A.: Spectra of Random and Almost-Periodic Operators. Springer-Verlag, 1992.
* [SB1] Schulz-Baldes, H., Bellissard, J.: Anomalous transport: a mathematical framework. Rev. Math. Phys. 10, 1-46 (1998).
* [SB2] Schulz-Baldes, H., Bellissard, J.: A Kinetic Theory for Quantum Transport in Aperiodic Media. J. Statist. Phys. 91, 991-1026 (1998).
* [St] Str̆eda, P.: Theory of quantised Hall conductivity in two dimensions. J. Phys. C. 15, L717-L721 (1982).
* [Te] Terp, M.: $\mathrm{L}^{p}$ spaces associated with von Neumann algebras. Notes, Math. Institute, Copenhagen University 1981.
* [ThKNN] Thouless, D. J., Kohmoto, M., Nightingale, M. P., den Nijs, M.: Quantized Hall conductance in a two-dimensional periodic potential. Phys. Rev. Lett. 49, 405-408 (1982).
* [U] Ueki, N.: Wegner estimates and localization for Gaussian random potentials. Publ. Res. Inst. Math. Sci. 40, 29-90 (2004).
* [W] Wang, W.-M.: Microlocalization, percolation, and Anderson localization for the magnetic Schrödinger operator with a random potential. J. Funct. Anal. 146, 1-26 (1997).
* [Y] Yosida, K.: _Functional Analysis, 6th edition._ Springer-Verlag, 1980.
# Spin waves in the block checkerboard antiferromagnetic phase
Feng Lu1 and Xi Dai1 1 Beijing National Laboratory for Condensed Matter
Physics, and Institute of Physics, Chinese Academy of Sciences, Beijing
100190, China
###### Abstract
Motivated by the discovery of the new family of 122-type iron-based superconductors, we present theoretical results on the ground-state phase diagram, spin waves, and dynamic structure factor of the extended $J_{1}-J_{2}$ Heisenberg model. In the physically reasonable parameter region for $K_{2}Fe_{4}Se_{5}$, we find that the block checkerboard antiferromagnetic phase is stable. The spin waves in this phase comprise two acoustic and six optical branches, for which we obtain analytic expressions at the high-symmetry points. To facilitate comparison with future neutron scattering experiments, we discuss the saddle-point structure of the magnetic excitation spectrum and calculate the predicted inelastic neutron scattering pattern based on linear spin-wave theory.
###### pacs:
75.30.Ds, 74.25.Dw, 74.20.Mn
## I Introduction
The search for high-$T_{c}$ superconductivity has been one of the central topics in condensed matter physicsLee-P-A-1 . Following the discovery of the copper-based superconductors two decades agoBednorz-1 , a second class of high-transition-temperature superconductors has been reported in iron-based materialsKamihara-1 ; Chen-G-F-1 ; Lei-H-1 . The iron pnictides comprise four typical crystal structures: 1111-type ReFeAsO (Re represents rare earth elements)Kondo-1 , 122-type Ba(Ca)Fe2As2Wray-1 ; Xu-Gang-1 , 111-type LiFeAsBorisenko-1 and 11-type FeSeKhasanov-1 . The parent compounds of all structures except the 11 type have a stripe-like antiferromagnetic ground state. By substituting a few percent of O with F Kondo-1 or Ba with KWray-1 ; Xu-Gang-1 , the compounds enter the superconducting (SC) phase from the SDW phase below $T_{c}$. In addition to the above four crystal structures, a new family of iron-based superconducting materials with the 122-type crystal structure has recently been discovered, with transition temperatures $T_{c}$ as high as 33 K Guo-2 . However, these new ”122” compounds differ from the iron superconductors with the other structures in many aspectsMaziopa-1 . Firstly, the parent compound of the new ”122” material is proposed to be $K_{0.8}Fe_{1.6}Se_{2}$, with intrinsic $\sqrt{5}\times\sqrt{5}$ Fe-vacancy ordering determined by various experimentsWei-Bao-1 ; Wei-Bao-2 ; Wei-Bao-3 . Secondly, the ground state of $K_{0.8}Fe_{1.6}Se_{2}$ is a Mott insulator with block antiferromagnetic order, as observed in neutron diffraction experimentsWei-Bao-1 ; Wei-Bao-2 ; Wei-Bao-3 . By first-principles calculations, Yan et al.Y-Z-Lu-1 and Cao et al.Dai-J-H also found that the block antiferromagnetic order is the most stable ground state of $K_{0.8}Fe_{1.6}Se_{2}$.
To date, a number of studies have been carried out on the nature of superconductivity and the magnetic properties of these materials. The neutron scattering experiments by Bao et al.Wei-Bao-3 found the ground state of $K_{0.8}Fe_{1.6}Se_{2}$ to be block antiferromagnetic, with a magnetic moment of about $3.4\mu_{B}$. It has been proposed by several authors that the magnetic and superconducting instabilities are strongly coupled, and that the properties of the magnetic excitations, such as spin waves, play a crucial role in the superconductivity of this family of materials. Zhang et al.Zhang-G-M-1 even suggested that the superconducting pairing may be mediated by coherent spin-wave excitations in these materials.
In order to gain qualitative insight into the magnetic excitation properties of this system, we study the spin-wave spectrum using the Heisenberg model on the 2D square lattice with the $\sqrt{5}\times\sqrt{5}$ vacancy pattern. Four independent parameters are used in the model, corresponding to the nearest-neighbor and next-nearest-neighbor couplings between spins. We first obtain the ground-state phase diagram as a function of these parameters, based on which we calculate the spin-wave spectrum as well as the spin dynamic structure factor using the Holstein-Primakoff transformation. Our results demonstrate that the block checkerboard antiferromagnetic order is stable over a wide region of the phase diagram and that there are two acoustic and six optical spin-wave branches in this system, which can be measured by future neutron scattering experiments.
## II Model and Method
The simplest model that captures the essential physics in Fe-vacancies ordered
material $K_{2}Fe_{4}Se_{5}$ can be described by the extended $J_{1}-J_{2}$
Heisenberg model on a quasi-two-dimensional lattice Dai-J-H ; Y-Z-Lu-1 ,
$\displaystyle H$ $\displaystyle=$ $\displaystyle
J_{1}\sum_{i,\delta,\delta^{\prime}(>\delta)}\overrightarrow{S}_{i,\delta}\cdot\overrightarrow{S}_{i,\delta^{\prime}}$
(1)
$\displaystyle+J^{\prime}_{1}\sum_{i,\gamma,\delta,\delta^{\prime}}\overrightarrow{S}_{i,\delta}\cdot\overrightarrow{S}_{i+\gamma,\delta^{\prime}}$
$\displaystyle+J_{2}\sum_{i,\delta,\delta^{\prime\prime}(>\delta)}\overrightarrow{S}_{i,\delta}\cdot\overrightarrow{S}_{i,\delta^{\prime\prime}}$
$\displaystyle+J^{\prime}_{2}\sum_{i,\gamma,\delta,\delta^{\prime\prime}}\overrightarrow{S}_{i,\delta}\cdot\overrightarrow{S}_{i+\gamma,\delta^{\prime\prime}}$
Here, $\delta=1,2,3,4$ and $\gamma=1,2,3,4$; the first and second terms represent nearest-neighbor (n.n.) spin interactions within and between blocks, respectively, as shown in Fig.1. The third and fourth terms are second-neighbor (n.n.n.) spin interactions, taken to be independent of direction, within and between blocks. Here, $i$ is the block index and $\gamma$ labels the nearest-neighbor blocks of block $i$. $\delta^{\prime}$ ($\delta^{\prime\prime}$) denotes a site that is an n.n. (n.n.n.) site of site $\delta$. $J_{1}$ and $J^{\prime}_{1}$ ($J_{2}$ and $J^{\prime}_{2}$) denote the n.n. (n.n.n.) couplings within and between blocks, respectively, as illustrated in Fig. 1. We take $J_{2}$ as the energy unit.
Figure 1: (color online) Schematic diagram of the two-dimensional crystal and magnetic structure (a single layer of $K_{2}Fe_{4}Se_{5}$) and the corresponding Brillouin zone. On the left is the crystal structure and spin pattern of $K_{2}Fe_{4}Se_{5}$. The up-arrow (blue) and down-arrow (azure) atoms indicate Fe atoms with positive and negative magnetic moment, respectively. We show the considered model with nearest-neighbor couplings $J_{1},J_{1}^{\prime}$ and next-nearest-neighbor couplings $J_{2},J_{2}^{\prime}$. The couplings within each block are $J(J_{1},J_{2})$, and the couplings between blocks are $J^{\prime}(J_{1}^{\prime},J_{2}^{\prime})$. On the right is the positive quadrant of the square-lattice Brillouin zone showing the wave vectors $\Gamma=(0,0)$, $A=(\frac{2\pi}{\sqrt{10}a},0)$, $B=(\frac{2\pi}{\sqrt{10}a},\frac{2\pi}{\sqrt{10}a})$, $C=(0,\frac{2\pi}{\sqrt{10}a})$. The spin-wave dispersions are plotted along the high-symmetry path $\Gamma-A-B-\Gamma-C-B$. The purple solid line marks the magnetic unit cell. Fe vacancy sites are marked by open squares, and occupied Fe sites by solid circles, with blue or reseda color indicating spin up or spin down.
In order to understand this $J_{1}-J_{2}$ Heisenberg Hamiltonian, we depict the typical block spin ground state and the 2D Brillouin zone (BZ) in Fig.1. The q-vectors used to define the high-symmetry path for the spin waves are also depicted. Each magnetic unit cell contains eight Fe atoms and two Fe vacancies, the latter marked by open squares, as shown in Fig.1. The block checkerboard antiferromagnetic order was recently observed by neutron diffraction experiments on $K_{2}Fe_{4}Se_{5}$. A convenient way to understand the antiferromagnetic structure of $K_{2}Fe_{4}Se_{5}$ is to consider the four parallel magnetic moments in one block as a supermoment; the supermoments then form a simple chessboard nearest-neighbor antiferromagnetic order on a square lattice, as seen in Fig.1a. The distance between nearest-neighbor iron atoms is defined as $a$; the lattice constant of the magnetic unit cell is then $\sqrt{10}a$. Fig.1b shows the 2D BZ of the magnetic unit cell.
We use Holstein-Primakoff (HP) bosons to investigate the spin waves of the block checkerboard antiferromagnetic ground state. Linearized spin-wave theory is a standard procedure for calculating the spin-wave excitation spectrum and the zero-temperature dynamical structure factorAuerbach-1 ; D-X-Yao-1 . First, we replace the spin operators by HP bosons, as shown in Appendix A.
Using Holstein-Primakoff transformations, we can obtain the HP boson
Hamiltonian,
$\displaystyle H$ $\displaystyle=$
$\displaystyle\sum_{k}\psi_{k}^{\dagger}\mathcal{H}_{k}\psi_{k}+E_{0}-N\cdot
E_{\left(k=0\right)}$ (2)
Here, $\begin{smallmatrix}\psi_{k}^{\dagger}&=\end{smallmatrix}$
$\bigl{(}\begin{smallmatrix}a_{k1}^{\dagger}&a_{k2}^{\dagger}&a_{k3}^{\dagger}&a_{k4}^{\dagger}&b_{-k-1}&b_{-k-2}&b_{-k-3}&b_{-k-4}\end{smallmatrix}\bigr{)}$;
$E_{0}$ is the classical ground state energy for block checkerboard
antiferromagnetic order,
$E_{0}=8J_{1}NS^{2}-4J^{\prime}_{1}NS^{2}+4J_{2}NS^{2}-8J^{\prime}_{2}NS^{2}$.
The specific expression for $\mathcal{H}_{k}$ is given in Appendix B.
In real space, we define ’molecular orbitals’, which are combinations of the HP boson operators within one block:
$\displaystyle\begin{aligned}\alpha_{i1}&=\frac{1}{\sqrt{4}}\left(a_{i1}-a_{i2}+a_{i3}-a_{i4}\right)\\ \alpha_{i2}&=\frac{1}{\sqrt{4}}\left(a_{i1}+a_{i2}-a_{i3}-a_{i4}\right)\\ \alpha_{i3}&=\frac{1}{\sqrt{4}}\left(a_{i1}-a_{i2}-a_{i3}+a_{i4}\right)\\ \alpha_{i4}&=\frac{1}{\sqrt{4}}\left(a_{i1}+a_{i2}+a_{i3}+a_{i4}\right)\\ \beta_{i-1}&=\frac{1}{\sqrt{4}}\left(b_{i-1}-b_{i-2}+b_{i-3}-b_{i-4}\right)\\ \beta_{i-2}&=\frac{1}{\sqrt{4}}\left(b_{i-1}+b_{i-2}-b_{i-3}-b_{i-4}\right)\\ \beta_{i-3}&=\frac{1}{\sqrt{4}}\left(b_{i-1}-b_{i-2}-b_{i-3}+b_{i-4}\right)\\ \beta_{i-4}&=\frac{1}{\sqrt{4}}\left(b_{i-1}+b_{i-2}+b_{i-3}+b_{i-4}\right)\end{aligned}$
Here, the minus sign in the subscript labels the spin-down blocks. The corresponding physical picture for each ’molecular orbital’ is shown in Fig. 2.
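Since the change of basis above is an orthogonal $4\times 4$ transformation, the ’molecular orbital’ operators remain canonical bosons. A minimal numerical check (the matrix `U` is our transcription of the definitions above, with $1/\sqrt{4}=1/2$; the same matrix applies to the $b$ operators):

```python
import numpy as np

# Rows transcribe the 'molecular orbital' definitions in the text.
U = 0.5 * np.array([
    [1, -1,  1, -1],   # alpha_1
    [1,  1, -1, -1],   # alpha_2
    [1, -1, -1,  1],   # alpha_3
    [1,  1,  1,  1],   # alpha_4
])

# U is orthogonal, so the new operators obey the canonical bosonic
# commutation relations [alpha_m, alpha_n^dagger] = delta_mn.
assert np.allclose(U @ U.T, np.eye(4))
```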
Figure 2: (color online) The schematic diagram for the ’molecular orbital’ in
the $\Gamma$ point: (a) The deviation of spin in site 1 and 3 has the same
phase for the corresponding wave function; and the deviation of spin in site 2
and 4 has the same phase for the corresponding wave function. But the
different between the phase of wave function for site 1 and 2 is 180 degrees.
(b)The deviation of spin in site 1 and 2 has the same phase for the
corresponding the wave function; and the deviation of spin in site 3 and 4 has
the same phase for the corresponding wave function. But the different between
the phase of wave function for site 1 and 3 is 180 degrees. (c) The deviation
of spin in site 1 and 4 has the same phase for the corresponding wave
function; and the deviation of spin in site 2 and 3 has the same phase for the
corresponding wave function. But the different between the phase of wave
function for site 1 and 2 is 180 degrees. (d) The deviation of spin in site 1,
2, 3 and 4 all has the same phase for the corresponding wave function.
In the ’molecular orbital’ basis, the Hamiltonian becomes,
$\displaystyle H_{k}$ $\displaystyle=$
$\displaystyle\sum_{k}\psi_{k}^{o\dagger}\mathcal{H}^{orbital}_{k}\psi_{k}^{o}$
(3)
Here, $\begin{smallmatrix}\psi_{k}^{o\dagger}&=\end{smallmatrix}$
$\bigl{(}\begin{smallmatrix}\alpha_{k1}^{\dagger}&\alpha_{k2}^{\dagger}&\alpha_{k3}^{\dagger}&\alpha_{k4}^{\dagger}&\beta_{-k-1}&\beta_{-k-2}&\beta_{-k-3}&\beta_{-k-4}\end{smallmatrix}\bigr{)}$.
Interestingly, the matrix elements between different ’molecular orbitals’ within the same block vanish; we return to this Hamiltonian below. The specific expression for $\mathcal{H}^{orbital}_{k}$ is given in Appendix B.
Because the boson Hamiltonian is large, the spin-wave eigenvalues must in general be obtained by numerical diagonalization, which follows a standard procedureM-W-Xiao ; M-W-Xiao-1 ; M-W-Xiao-3 ; M-W-Xiao-4 ; M-W-Xiao-5 , as shown in Appendix C.
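The standard procedure can be sketched as follows: for a Hermitian bosonic Bogoliubov Hamiltonian in the basis $(a,b^{\dagger})$, the physical magnon frequencies are the positive eigenvalues of $\eta\mathcal{H}_{k}$ with $\eta=\mathrm{diag}(+1,\dots,+1,-1,\dots,-1)$. The sketch below uses an illustrative $2\times 2$ matrix rather than the actual Appendix B Hamiltonian, and the function name is ours:

```python
import numpy as np

def boson_frequencies(H):
    """Spin-wave frequencies of a Hermitian bosonic Bogoliubov
    Hamiltonian H (2n x 2n, basis (a_1..a_n, b_1^dag..b_n^dag)).
    The physical frequencies are the positive eigenvalues of eta @ H,
    where eta = diag(+1,...,+1,-1,...,-1); zero (Goldstone) modes
    are filtered out by the tolerance."""
    n = H.shape[0] // 2
    eta = np.diag([1.0] * n + [-1.0] * n)
    w = np.linalg.eigvals(eta @ H).real
    return np.sort(w[w > 1e-12])

# Toy 2x2 block [[h, d], [d, h]], as at the Gamma point: the positive
# eigenvalue of eta @ H is sqrt(h^2 - d^2).
h, d = 2.0, 1.0
H = np.array([[h, d], [d, h]])
print(boson_frequencies(H))  # ~ [1.732...], i.e. sqrt(3)
```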
To further understand the physical properties of this spin system, we obtain the analytical eigenvalues and eigenvectors at some special k points, such as $k=(0,0)$ and $k=(\frac{\pi}{\sqrt{10}a},\frac{\pi}{\sqrt{10}a})$. At the $\Gamma$ point, the ’molecular orbital’ Hamiltonian reduces to four $2\times 2$ block matrices $\bigl{(}\begin{smallmatrix}h_{m}&h_{m,m+4}\\ h_{m,m+4}&h_{m}\end{smallmatrix}\bigr{)}$ in $\alpha_{m}$ and $\beta_{-k-m}^{\dagger}$ ($m=1,2,3,4$), which indicates that the ’molecular orbitals’ are the intrinsic vibration modes at the $\Gamma$ point. The spin waves at the $\Gamma$ point are collective excitations of the same ’molecular orbital’ across different blocks.
From the $2\times 2$ block Hamiltonian, we obtain the eigenvalues at the $\Gamma$ point,
$\displaystyle\begin{aligned}\omega_{1}^{(0,0)}&=S\sqrt{(4J_{1}-J^{\prime}_{1}-2J^{\prime}_{2})^{2}-(J^{\prime}_{1}-2J^{\prime}_{2})^{2}}\\ \omega_{2}^{(0,0)}=\omega_{3}^{(0,0)}&=2S\sqrt{(J_{1}+J_{2}-J^{\prime}_{2})(J_{1}-J^{\prime}_{1}+J_{2}-J^{\prime}_{2})}\\ \omega_{4}^{(0,0)}&=0\end{aligned}$
There are four eigenvalues. The first is twofold degenerate, and its eigenvector is a combination of the first ’molecular orbital’ on the spin-up and spin-down block sites, as shown in Fig.2a. The second and third eigenvalues are degenerate with each other, and each is twofold degenerate; their eigenvectors are combinations of the second and third ’molecular orbitals’ on the spin-up and spin-down block sites, as shown in Fig.2b and Fig.2c, respectively. These are all optical branches, since they are gapped at the $\Gamma$ point. The final eigenvalue is also twofold degenerate. However, unlike the optical branches above, it is an acoustic branch and always vanishes at the $\Gamma$ point, as imposed by Goldstone’s theorem.
Similarly, we can analyze the physical properties at the $k=(\frac{\pi}{\sqrt{10}a},\frac{\pi}{\sqrt{10}a})$ point. From the boson Hamiltonian, we obtain the eigenvalues at this point,
$\displaystyle\begin{aligned}\omega_{1}^{(\pi,\pi)}&=-2SJ_{1}+S\sqrt{(2J_{1}-J^{\prime}_{1}-2J^{\prime}_{2})^{2}-{J^{\prime}_{1}}^{2}}\\ \omega_{2}^{(\pi,\pi)}&=2SJ_{1}+S\sqrt{(2J_{1}-J^{\prime}_{1}-2J^{\prime}_{2})^{2}-{J^{\prime}_{1}}^{2}}\\ \omega_{3}^{(\pi,\pi)}=\omega_{4}^{(\pi,\pi)}&=S\sqrt{(2J_{1}-J^{\prime}_{1}+2J_{2}-2J^{\prime}_{2})^{2}-4{J^{\prime}_{2}}^{2}-{J^{\prime}_{1}}^{2}}\end{aligned}$
There are again four eigenvalues, each twofold degenerate. Six of the eigenvalues belong to optical branches and two to acoustic branches. The third and fourth eigenvalues are always degenerate at the $k=(\frac{\pi}{\sqrt{10}a},\frac{\pi}{\sqrt{10}a})$ point.
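As a quick numerical check of these closed-form expressions, one can evaluate them for parameter set (i) of Sec. III ($J_{1}=-1$, $J_{1}^{\prime}=0.2$, $J_{2}=1$, $J_{2}^{\prime}=2.5$); the helper functions below are our illustrative transcriptions of the formulas above:

```python
from math import sqrt

def gamma_frequencies(J1, J1p, J2, J2p, S=1.0):
    """Closed-form spin-wave frequencies at the Gamma point."""
    w1 = S * sqrt((4*J1 - J1p - 2*J2p)**2 - (J1p - 2*J2p)**2)
    w23 = 2 * S * sqrt((J1 + J2 - J2p) * (J1 - J1p + J2 - J2p))
    return w1, w23, 0.0  # the third entry is the acoustic (Goldstone) mode

def m_frequencies(J1, J1p, J2, J2p, S=1.0):
    """Closed-form frequencies at k = (pi/sqrt(10)a, pi/sqrt(10)a)."""
    r = S * sqrt((2*J1 - J1p - 2*J2p)**2 - J1p**2)
    w34 = S * sqrt((2*J1 - J1p + 2*J2 - 2*J2p)**2 - 4*J2p**2 - J1p**2)
    return -2*S*J1 + r, 2*S*J1 + r, w34

# Parameter set (i) of Sec. III, in units of J2: J1=-1, J1'=0.2, J2'=2.5.
print(gamma_frequencies(-1, 0.2, 1, 2.5))  # ~ (7.85, 5.20, 0.0)
print(m_frequencies(-1, 0.2, 1, 2.5))      # ~ (9.20, 5.20, 1.41)
```

All arguments under the square roots are positive for this parameter set, consistent with a stable block checkerboard ground state.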
We now have five distinct eigenvalues for the spin waves at these special points, which can be used to fit experimental data and thereby extract the n.n. (n.n.n.) couplings within and between blocks, $J_{1}$, $J^{\prime}_{1}$ ($J_{2}$, $J^{\prime}_{2}$). Using these interaction parameters, we can then obtain the spin waves throughout the BZ by numerical diagonalization.
## III RESULTS AND DISCUSSION
In this section, we first present the phase diagram of the $J_{1}-J_{2}$ model. Then we discuss the spin waves and the spin dynamical structure factor in the block checkerboard antiferromagnetic phase.
Figure 3: (color online) The phase diagram for the
$J_{1}(J_{1}^{\prime})-J_{2}(J_{2}^{\prime})$ model. The phases are defined in
Ref. 17, among which the AFM1 phase is the block checkerboard
antiferromagnetic phase observed in the neutron diffraction experiment. The
blue/azure atoms indicate the Fe-atoms with positive/negative magnetic moment,
respectively. The magnetic configuration for AFM2, AFM3 and AFM4 is shown. (a)
The phase diagram for the interaction parameter
$J_{2}=1,J_{2}^{\prime}=J_{2}$. (b) The phase diagram for the interaction
parameter $J_{2}=1,J_{2}^{\prime}=2.5J_{2}$.
To investigate the phase diagram of the $J_{1}-J_{2}$ Heisenberg model, we use a stochastic Monte Carlo (MC) method to determine the ground state of the system. In the physically reasonable parameter region, the phase diagram of the $J_{1}-J_{2}$ Heisenberg model is given in Fig.3, plotted in the $J_{1}/J_{2}-J_{1}^{\prime}/J_{2}$ plane at fixed $J_{2}^{\prime}/J_{2}$. We obtain four stable phases in the case $J_{2}^{\prime}/J_{2}=1$. The first is the block checkerboard antiferromagnetic phase, denoted AFM1 in Fig.3, which has been observed experimentallyWei-Bao-1 ; Wei-Bao-2 . This phase is the one of interest for the study of iron-based superconductors. Obviously, when the coupling $J_{1}$ is negative, the spins favor a ferromagnetic configuration within the blocks; meanwhile, when the coupling $J_{1}^{\prime}$ is positive, the spins favor an antiferromagnetic configuration between nearest-neighbor blocks. Even when $J_{1}^{\prime}$ is negative but small in magnitude, the interactions $J_{1}$ and $J_{2}^{\prime}$ dominate and the block checkerboard antiferromagnetic phase remains stable. In the case $J_{2}^{\prime}/J_{2}=1$, the block checkerboard antiferromagnetic phase is stable when $J_{1}<0$ and $J_{1}^{\prime}>J_{1}$. In the region $J_{1}>0$ and $J_{1}>J_{1}^{\prime}$, the system favors the AFM2 phase, as seen in Fig. 3(a); in this phase, the antiferromagnetic order within the blocks arises from the antiferromagnetic coupling $J_{1}$. On the other hand, the spin system favors the AFM3 phase when $J_{1}^{\prime}<J_{1}<0$, which is mainly attributed to the dominant interaction $J_{1}^{\prime}$ in this parameter region. Similarly to AFM3, the system stays in the AFM4 phase in the region $0<J_{1}<J_{1}^{\prime}$, as seen in Fig. 3(a). To investigate the spin properties of this system concretely and compare the theoretical calculation with experimental results, we next change the parameter value from $J_{2}^{\prime}/J_{2}=1$ to $J_{2}^{\prime}/J_{2}=2.5$, for which the antiferromagnetic coupling $J_{2}^{\prime}$ is dominant. Different from the first case, there are only two phases, AFM1 and AFM2, in the phase diagram, separated by the line $J_{1}=0.5J_{1}^{\prime}$: below the line the phase is AFM1, otherwise it is AFM2. It is interesting to ask in which region the realistic parameters of these iron-based materials fall. From LDA calculations, Cao et al. suggested $J_{1}=-29$ meV, $J_{1}^{\prime}=10$ meV, $J_{2}=39$ meV and $J_{2}^{\prime}=95$ meVDai-J-H . This set of parameters falls in the block checkerboard antiferromagnetic phase in both Fig.3(a) and (b), implying that $K_{2}Fe_{4}Se_{5}$ should have block checkerboard antiferromagnetic order. This tells us that we only need to focus on the parameter region of the AFM1 phase.
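The phase boundaries quoted above can be encoded in a toy classifier (the function names and the "boundary" fallback are ours; only the criteria stated in this section are used), which confirms that the LDA parameters of Cao et al. fall in the AFM1 region:

```python
def phase_J2p_eq_J2(J1, J1p):
    """Phase regions quoted in the text for J2' = J2 (Fig. 3a),
    with couplings in units of J2."""
    if J1 < 0 and J1p > J1:
        return "AFM1"
    if J1 > 0 and J1 > J1p:
        return "AFM2"
    if J1p < J1 < 0:
        return "AFM3"
    if 0 < J1 < J1p:
        return "AFM4"
    return "boundary"

def phase_J2p_eq_2p5J2(J1, J1p):
    """For J2' = 2.5 J2 the AFM1/AFM2 boundary is J1 = 0.5 J1'."""
    return "AFM1" if J1 < 0.5 * J1p else "AFM2"

# LDA parameters of Cao et al. (meV), rescaled by J2 = 39 meV.
J1, J1p = -29 / 39, 10 / 39
print(phase_J2p_eq_J2(J1, J1p), phase_J2p_eq_2p5J2(J1, J1p))  # AFM1 AFM1
```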
Figure 4: (color online) Acoustic and optical spin-wave branches from linear spin-wave theory at T = 0, as functions of k along selected high-symmetry directions in the magnetic-unit-cell Brillouin zone. (i) $J_{1}=-1$, $J_{1}^{\prime}=0.2$, $J_{2}=1$ and $J_{2}^{\prime}=2.5$; (ii) $J_{1}=-1.5$, $J_{1}^{\prime}=0.2$, $J_{2}=1$ and $J_{2}^{\prime}=2.5$; (iii) $J_{1}=-1$, $J_{1}^{\prime}=1.5$, $J_{2}=1$ and $J_{2}^{\prime}=2.5$; (iv) $J_{1}=-1$, $J_{1}^{\prime}=0.2$, $J_{2}=0.2$ and $J_{2}^{\prime}=2.5$; (v) $J_{1}=-1$, $J_{1}^{\prime}=0.2$, $J_{2}=1$ and $J_{2}^{\prime}=1.5$. The x-axis corresponds to the k points along the selected directions in Fig.1b; the y-axis corresponds to the spin-wave energy, in units of $J_{2}=1$ except in case (iv).
First of all, we use the numerical diagonalization method to investigate the spin-wave dispersion relations along the high-symmetry directions in different situations. In the numerical calculation it is convenient to set $J_{2}=1$; by comparing with experiments, the actual energy scale of $J_{2}$ for a specific material can be deduced. Motivated by the first-principles parametersDai-J-H , we first study the parameter set (i) $J_{1}=-1$, $J_{1}^{\prime}=0.2$, $J_{2}=1$ and $J_{2}^{\prime}=2.5$. To study the influence of the interaction parameters on the spin-wave spectra, we also investigate four other parameter sets: (ii) $J_{1}=-1.5$, $J_{1}^{\prime}=0.2$, $J_{2}=1$ and $J_{2}^{\prime}=2.5$; (iii) $J_{1}=-1$, $J_{1}^{\prime}=1.5$, $J_{2}=1$ and $J_{2}^{\prime}=2.5$; (iv) $J_{1}=-1$, $J_{1}^{\prime}=0.2$, $J_{2}=0.2$ and $J_{2}^{\prime}=2.5$; (v) $J_{1}=-1$,
$J_{1}^{\prime}=0.2$, $J_{2}=1$ and $J_{2}^{\prime}=1.5$. In Fig. 4, we plot the spin-wave dispersions along the high-symmetry path $\Gamma-A-B-\Gamma-C-B$ in the BZ for the different interaction parameters. In all cases, there are one acoustic and three optical spin-wave branches (each twofold degenerate). For the acoustic branch, the spin-wave gap at the $\Gamma$ point is always zero, owing to Goldstone’s theorem. In the first case, the vibration mode of the acoustic branch is shown in Fig.2(d): it is the collective excitation of the fourth ’molecular orbital’ in one block with the fourth ’molecular orbital’ in the other blocks. The relative phase within a ’molecular orbital’ depends only on the momentum $k$ and is independent of the interaction parameters; the relative phase between the ’molecular orbitals’ of different blocks, however, depends on the specific interaction parameters. By contrast, the optical gaps at the $\Gamma$ point depend on the interaction parameters. As discussed in Sec. II, two of the three optical branches are degenerate at the $\Gamma$ point. Away from the $\Gamma$ point, the two degenerate optical branches split. For example, as k increases along the $\Gamma B$ direction, the optical branch of the second ’molecular orbital’ (Fig. 2b) hardly changes, whereas the optical branch of the third ’molecular orbital’ (Fig.2c) shows obvious dispersion, as can be clearly seen in Fig.4a. The splitting of the spin waves can be attributed to the fact that the vibration modes of different ’molecular orbitals’ are anisotropic and momentum dependent; different vibration modes therefore behave differently along different momentum directions. Similarly to the above two optical branches, the third optical branch is related to the vibration mode of the first ’molecular orbital’, and is an independent branch.
By varying the interaction parameters, we can investigate their influence on the spin waves, as shown in Fig.4. First, in all cases the acoustic branch vanishes at the $\Gamma$ point, and the branches are always twofold degenerate at the $\Gamma$ and $k=(\frac{\pi}{\sqrt{10}a},\frac{\pi}{\sqrt{10}a})$ points. In the following, we compare the spin-wave dispersion relations as the interaction parameters are changed. Comparing Fig.4 (i) with Fig.4 (ii), we find that as $|J_{1}|$ increases, the spin-wave energies increase at the various k points, especially for the acoustic branch at the $k=(\frac{\pi}{\sqrt{10}a},\frac{\pi}{\sqrt{10}a})$ point. From Fig.4 (i) and Fig.4 (iii), we observe that the amplitude of the second optical branch grows with increasing $J^{\prime}_{1}$; the spin-wave energies also change at the various k points, especially at the $k=(\frac{\pi}{\sqrt{10}a},\frac{\pi}{\sqrt{10}a})$ point. With increasing $J^{\prime}_{1}$, the dispersion of the second optical branch becomes more and more pronounced. Comparing Fig.4 (i) with Fig.4 (iv), we observe that the interval between the first and second optical branches becomes smaller with increasing $J_{2}$; the spin-wave energies are also changed at the various k points, especially at the $k=(\frac{\pi}{\sqrt{10}a},\frac{\pi}{\sqrt{10}a})$ point. As in all the above cases, the spin-wave dispersion also changes in intensity with increasing $J^{\prime}_{2}$; however, the interval between the first and second optical branches becomes larger, which is one of the most obvious features of increasing $J^{\prime}_{2}$, as seen from Fig.4 (i) and Fig.4 (v).
Figure 5: (color online) 3D spin-wave dispersions for the parameter set $J_{1}=-1$, $J_{1}^{\prime}=0.2$, $J_{2}=1$ and $J_{2}^{\prime}=2.5$ in the extended BZ of one block, as seen in Fig.1a. The energy is in units of $J_{2}$. (a) First optical branch. (b) Second optical branch. (c) Third optical branch. (d) Acoustic branch. The x-axis and y-axis correspond to the $3k_{x}-k_{y}$ and $k_{x}+3k_{y}$ directions, respectively.
Fig.5 shows the typical three-dimensional spin-wave spectrum in the extended BZ of one block for the first parameter set of Fig.4a; these plots provide a general qualitative overview. Regardless of the specific parameter values, common features of the spin-wave dispersion are the twofold degeneracy at the $\Gamma$ and $k=(\frac{\pi}{\sqrt{10}a},\frac{\pi}{\sqrt{10}a})$ points and the single zero branch at the $\Gamma$ point. One acoustic branch and three optical branches can also be seen, which is likewise a common feature of this system, with a fixed saddle-point structure. Generally speaking, one can determine the interaction parameters by comparing the spin-wave gaps at the $\Gamma$ and $k=(\frac{\pi}{\sqrt{10}a},\frac{\pi}{\sqrt{10}a})$ points with experimental data, and then plot the spin waves using that parameter set for comparison with inelastic neutron scattering experiments.
The neutron scattering cross section is proportional to the dynamic structure factor $S^{in}(k,\omega)$. To further guide neutron scattering experiments, we plot the expected neutron scattering intensity at different constant-frequency cuts in k-space. The zero-temperature dynamic structure factor can be calculated with Holstein-Primakoff bosons. In the linear spin-wave approximation, $S^{z}$ does not change the number of magnons and only contributes to the elastic part of the neutron scattering intensity, whereas $S^{x}(k)$ and $S^{y}(k)$ contribute to the inelastic intensity through single-magnon excitations. The spin dynamical structure factor associated with the spin waves is given by
$\displaystyle S^{in}(k,\omega)$ $\displaystyle=$ $\displaystyle
S\sum_{f}|\langle f|\sum_{m=\pm\\{1,2,3,4\\}}\xi_{km}\alpha_{m}^{\dagger}|0\rangle|^{2}\,\delta\left(\omega-\omega_{f}\right)$
(4)
Here $|0\rangle$ is the magnon vacuum state and $|f\rangle$ denotes a final
state of the spin system with excitation energy $\omega_{f}$. $\xi_{km}$ is
the $m$-th component of the eigenvector corresponding to
$\alpha_{m}^{\dagger}|0\rangle$ [22].
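For orthonormal final states $\alpha_{f}^{\dagger}|0\rangle$, Eq. (4) reduces to an intensity $S|\xi_{kf}|^{2}$ per branch. A minimal numerical sketch of this (with hypothetical branch energies and amplitudes standing in for the actual spin-wave solution at one k-point, and the delta function broadened to a Lorentzian of assumed width) is:

```python
import numpy as np

S = 1.0          # moment size (assumption: S = 1 for illustration)
GAMMA = 0.1      # Lorentzian broadening, a stand-in for instrumental resolution

def intensity(omega, omegas_f, xi_k):
    """Single-magnon intensity at one k-point, Eq. (4) with the delta
    function replaced by a Lorentzian of width GAMMA.
    omegas_f : excitation energies of the branches at this k
    xi_k     : corresponding structure-factor amplitudes xi_{k,m}
    """
    lor = (GAMMA / np.pi) / ((omega - omegas_f) ** 2 + GAMMA ** 2)
    return S * np.sum(np.abs(xi_k) ** 2 * lor)

# toy numbers standing in for the spin-wave solution at one k-point
omegas_f = np.array([0.5, 2.0, 3.5, 9.0])   # branch energies (units of J2)
xi_k = np.array([0.9, 0.3, 0.2, 0.1])       # hypothetical amplitudes

print(intensity(0.5, omegas_f, xi_k))  # dominated by the lowest branch
print(intensity(5.0, omegas_f, xi_k))  # far from all branches: small
```

Summing such intensities over a grid of k-points at fixed $\omega$ produces constant-energy maps of the kind shown in Fig. 6.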
Figure 6: (Color online) Constant-energy cuts (untwinned) of the dynamic
structure factor $S^{in}(k,\omega)$ for the parameters $J_{1}=-1$,
$J_{1}^{\prime}=0.2$, $J_{2}=1$ and $J_{2}^{\prime}=2.5$. The x-axis and
y-axis correspond to the $3k_{x}-k_{y}$ and $k_{x}+3k_{y}$ directions in the
range ($-\pi$,$\pi$), respectively. The constant-energy cuts from top to
bottom are 0.5$J_{2}$, 1.0$J_{2}$, 2.0$J_{2}$, 3.5$J_{2}$, 5.0$J_{2}$ and
9.0$J_{2}$.
Fig. 6 shows our predictions for the intensity of the dynamical structure
factor of the block checkerboard antiferromagnetic order (an untwinned case)
as a function of frequency. At low energy ($\omega=0.5$), four strong
square-arranged diffraction peaks are visible, which come from the acoustic
spin-wave branch. As the cut frequency increases ($\omega=1$), these four
peaks disperse outward, still within the range of the acoustic branch. At
intermediate frequency ($\omega=2$), the dynamic structure factor becomes a
chain of ring-shaped diffraction peaks, a combined result of the acoustic and
the third optical spin-wave branches; the intensity of the diffraction peaks
around the $\Gamma$ point, however, is very weak. At high frequency
($\omega=3.5$), the chain of rings evolves into nine strong circular
diffraction peaks, which come from the two optical spin-wave branches that are
degenerate at the $\Gamma$ point; the intensity around the $\Gamma$ point
remains very weak. In contrast to the low-energy situation, at higher
frequency ($\omega=4.5$) the nine strong circular peaks disperse inward rather
than outward; at the same time, the central peak around the $\Gamma$ point
weakens and eventually vanishes as the cut energy increases. The diffraction
peaks at this energy cut come from the two optical spin-wave branches that are
degenerate at the $\Gamma$ point. At the highest frequency ($\omega=9.0$),
four strong circular diffraction peaks are visible at the four corners of the
extended BZ for one block, and four weaker circular peaks appear around the
$\Gamma$ point and at the middles of the four boundaries; these come from the
first optical spin-wave branch. The results are similar for other parameter
sets with the same ground state, the main difference being that the energy
cuts must be adjusted to obtain similar spin dynamical factor patterns.
## IV Discussions and Conclusions
In conclusion, starting from the $J_{1}-J_{2}$ Heisenberg Hamiltonian, we have
obtained the magnetic ground-state phase diagram by the MQ approach and found
that the block checkerboard antiferromagnetic order is stable in a reasonable
region of physical parameters. We have then used spin-wave theory to
investigate the spin waves and the dynamic structure factor of the block
checkerboard antiferromagnetic state observed in the iron-based
superconductors. The spin-wave spectrum of this system contains two acoustic
branches and six optical branches, which at the $\Gamma$ point are
combinations of the 'molecular orbitals'. We have also discussed the
saddle-point structure of the magnetic excitation spectrum, which can be
measured by neutron scattering experiments, and given explicit analytical
expressions for the spin-wave dispersion at $\Gamma$ and at
$k=(\frac{\pi}{\sqrt{10}a},\frac{\pi}{\sqrt{10}a})$. By comparison with future
inelastic neutron scattering studies, the specific values of the interaction
parameters can be obtained. We have further calculated the predicted inelastic
neutron scattering pattern based on linear spin-wave theory, and studied the
specific influence of each interaction parameter on the spin waves and the
dynamic structure factor. Neutron scattering experiments on these spin waves,
and the behavior of the spin waves in the proximity of a quantum critical
point, deserve further attention.
## Appendix A Holstein-Primakoff Transformation
The Holstein-Primakoff transformation,
$\displaystyle\hat{S}_{i}^{+}$ $\displaystyle=$
$\displaystyle(\sqrt{2S-a_{i}^{\dagger}a_{i}})a_{i}\approx\sqrt{2S}a_{i}$
$\displaystyle\hat{S}_{i}^{-}$ $\displaystyle=$ $\displaystyle
a_{i}^{\dagger}(\sqrt{2S-a_{i}^{\dagger}a_{i}})\approx\sqrt{2S}a_{i}^{\dagger}$
$\displaystyle\hat{S}_{i}^{z}$ $\displaystyle=$
$\displaystyle\left(S-a_{i}^{\dagger}a_{i}\right)$ (5)
for $i=1,2,3,4$, i.e. the sites occupied by up spins; $S$ represents the size
of the magnetic moment of each iron atom,
and
$\displaystyle\hat{S}_{-i}^{+}$ $\displaystyle=$ $\displaystyle
b_{-i}^{\dagger}(\sqrt{2S-b_{-i}^{\dagger}b_{-i}})\approx\sqrt{2S}b_{-i}^{\dagger}$
$\displaystyle\hat{S}_{-i}^{-}$ $\displaystyle=$
$\displaystyle(\sqrt{2S-b_{-i}^{\dagger}b_{-i}})b_{-i}\approx\sqrt{2S}b_{-i}$
$\displaystyle\hat{S}_{-i}^{z}$ $\displaystyle=$
$\displaystyle\left(b_{-i}^{\dagger}b_{-i}-S\right)$ (6)
for $i=1,2,3,4$, i.e. the sites occupied by down spins.
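The linearized forms above are approximations, but the untruncated Holstein-Primakoff operators reproduce the spin algebra $[\hat{S}^{+},\hat{S}^{-}]=2\hat{S}^{z}$ exactly on the physical subspace $n\leq 2S$. A small numerical check of this (with $S=1$ chosen purely for illustration):

```python
import numpy as np

S = 1.0                      # spin size (assumption: S = 1 for the check)
dim = 4                      # truncated Fock space, occupation n = 0..3

# boson operators in the truncated number basis
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation operator
adag = a.T                                     # creation operator
n_op = adag @ a

# Holstein-Primakoff: S+ = sqrt(2S - n) a,  S- = a† sqrt(2S - n),  Sz = S - n
sq = np.diag(np.sqrt(np.clip(2 * S - np.arange(dim), 0, None)))
Sp = sq @ a
Sm = adag @ sq
Sz = S * np.eye(dim) - n_op

# spin algebra [S+, S-] = 2 Sz must hold on the physical subspace n <= 2S
comm = Sp @ Sm - Sm @ Sp
phys = slice(0, int(2 * S) + 1)
print(np.allclose(comm[phys, phys], 2 * Sz[phys, phys]))  # True
```

The linear spin-wave approximation keeps only the leading $\sqrt{2S}$ terms of these operators.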
The Fourier transform of the bosonic operators is
$\displaystyle a_{L}$ $\displaystyle=$ $\displaystyle
N^{-\frac{1}{2}}\Sigma_{k}e^{ik\cdot R_{L}}a_{k}$ $\displaystyle
a_{L}^{\dagger}$ $\displaystyle=$ $\displaystyle
N^{-\frac{1}{2}}\Sigma_{k}e^{-ik\cdot R_{L}}a_{k}^{\dagger}$ $\displaystyle
b_{L}$ $\displaystyle=$ $\displaystyle N^{-\frac{1}{2}}\Sigma_{k}e^{ik\cdot
R_{L}}b_{k}$ $\displaystyle b_{L}^{\dagger}$ $\displaystyle=$ $\displaystyle
N^{-\frac{1}{2}}\Sigma_{k}e^{-ik\cdot R_{L}}b_{k}^{\dagger}$ (7)
Here $N$ is the number of magnetic unit cells, $L$ is the site index, and $k$
is the momentum index.
## Appendix B Spin Wave Hamiltonian
We now write out the detailed expression of Eq. 2. Replacing the spin
operators by HP bosons, we obtain the HP boson Hamiltonian $\mathcal{H}_{k}$,
$\displaystyle\begin{smallmatrix}\left\\{\begin{array}[]{cccccccc}E_{1}&F_{k}&G_{k}&S_{k}&0&\left(B_{k}\right)^{*}&\left(C_{k}\right)^{*}&\left(A_{k}\right)^{*}\\\
\left(F_{k}\right)^{*}&E_{1}&S_{k}&T_{k}&B_{k}&0&\left(A_{k}\right)^{*}&\left(D_{k}\right)^{*}\\\
\left(G_{k}\right)^{*}&\left(S_{k}\right)^{*}&E_{1}&(F_{k})^{*}&C_{k}&A_{k}&0&B_{k}\\\
\left(S_{k}\right)^{*}&\left(T_{k}\right)^{*}&F_{k}&E_{1}&A_{k}&D_{k}&\left(B_{k}\right)^{*}&0\\\
0&(B_{k})^{*}&(C_{k})^{*}&(A_{k})^{*}&E_{1}&F_{k}&G_{k}&S_{k}\\\
B_{k}&0&(A_{k})^{*}&(D_{k})^{*}&(F_{k})^{*}&E_{1}&S_{k}&T_{k}\\\
C_{k}&A_{k}&0&B_{k}&(G_{k})^{*}&(S_{k})^{*}&E_{1}&\left(F_{k}\right)^{*}\\\
A_{k}&D_{k}&(B_{k})^{*}&0&(S_{k})^{*}&(T_{k})^{*}&F_{k}&E_{1}\end{array}\right\\}\end{smallmatrix}$
and
$\displaystyle\begin{smallmatrix}E_{1}&=&J_{1}+J^{\prime}_{1}-J_{2}+2J^{\prime}_{2}\\\
A_{k}&=&J^{\prime}_{2}Se^{-i\overrightarrow{k}\cdot\overrightarrow{x_{a}}}\\\
B_{k}&=&J^{\prime}_{2}Se^{i\overrightarrow{k}\cdot\overrightarrow{y_{a}}}\\\
C_{k}&=&J^{\prime}_{1}Se^{-i\overrightarrow{k}\cdot(0.5\overrightarrow{x_{a}}+0.5\overrightarrow{y_{a}})}\\\
D_{k}&=&J^{\prime}_{1}Se^{i\overrightarrow{k}\cdot(0.5\overrightarrow{x_{a}}-0.5\overrightarrow{y_{a}})}\\\
F_{k}&=&SJ_{1}e^{-i\overrightarrow{k}\cdot(0.5\overrightarrow{x_{a}}-0.5\overrightarrow{y_{a}})}\\\
G_{k}&=&SJ_{2}e^{-i\overrightarrow{k}\cdot\overrightarrow{x_{a}}}\\\
S_{k}&=&SJ_{1}e^{-i\overrightarrow{k}\cdot(0.5\overrightarrow{x_{a}}+0.5\overrightarrow{y_{a}})}\\\
T_{k}&=&SJ_{2}e^{-i\overrightarrow{k}\cdot\overrightarrow{y_{a}}}\\\
\overrightarrow{x_{a}}&=&\sqrt{1.6}a\overrightarrow{i}+\sqrt{0.4}a\overrightarrow{j}\\\
\overrightarrow{y_{a}}&=&-\sqrt{0.4}a\overrightarrow{i}+\sqrt{1.6}a\overrightarrow{j}\end{smallmatrix}$
In the ’molecular orbital’ basis, the Hamiltonian $\mathcal{H}^{orbital}_{k}$
becomes,
$\displaystyle\begin{smallmatrix}\frac{1}{4}\left\\{\begin{array}[]{cccccccc}E_{o}^{1}&0&0&0&A_{o}&B_{o}&C_{o}&D_{o}\\\
0&E_{o}^{2}&0&0&\left(B_{o}\right)^{*}&F_{o}&-D_{o}&G_{o}\\\
0&0&E_{o}^{3}&0&\left(C_{o}\right)^{*}&-D_{o}&P_{o}&S_{o}\\\
0&0&0&E_{o}^{4}&D_{o}&\left(G_{o}\right)^{*}&\left(S_{o}\right)^{*}&T_{o}\\\
A_{o}&B_{o}&C_{o}&D_{o}&E_{o}^{1}&0&0&0\\\
\left(B_{o}\right)^{*}&F_{o}&-D_{o}&G_{o}&0&E_{o}^{2}&0&0\\\
\left(C_{o}\right)^{*}&-D_{o}&P_{o}&S_{o}&0&0&E_{o}^{3}&0\\\
D_{o}&\left(G_{o}\right)^{*}&\left(S_{o}\right)^{*}&T_{o}&0&0&0&E_{o}^{4}\end{array}\right\\}\end{smallmatrix}$
and
$\displaystyle\begin{smallmatrix}E_{o}^{1}&=&-16J_{1}+4J^{\prime}_{1}+8J^{\prime}_{2}\\\
E_{o}^{2}&=&-8J_{1}+4J^{\prime}_{1}-8J_{2}+8J^{\prime}_{2}\\\
E_{o}^{3}&=&-8J_{1}+4J^{\prime}_{1}-8J_{2}+8J^{\prime}_{2}\\\
E_{o}^{4}&=&4J^{\prime}_{1}+8J^{\prime}_{2}\\\
A_{o}&=&[2J^{\prime}_{1}-4J^{\prime}_{2}][\cos(a_{o})+\cos(b_{o})]\\\
B_{o}&=&-2J^{\prime}_{1}[\sin(a_{o})+\sin(b_{o})]i+4J^{\prime}_{2}\sin(b_{o})i\\\
C_{o}&=&-2J^{\prime}_{1}[\sin(a_{o})-\sin(b_{o})]i+4J^{\prime}_{2}\sin(a_{o})i\\\
D_{o}&=&2J^{\prime}_{1}[\cos(a_{o})-\cos(b_{o})]\\\
F_{o}&=&-2J^{\prime}_{1}[\cos(a_{o})+\cos(b_{o})]-4J^{\prime}_{2}[\cos(a_{o})-\cos(b_{o})]\\\
G_{o}&=&2J^{\prime}_{1}[\sin(a_{o})-\sin(b_{o})]i+4J^{\prime}_{2}\sin(a_{o})i\\\
P_{o}&=&4J^{\prime}_{2}[\cos(a_{o})-\cos(b_{o})]-2J^{\prime}_{1}[\cos(a_{o})+\cos(b_{o})]\\\
S_{o}&=&4J^{\prime}_{2}\sin(b_{o})i+2J^{\prime}_{1}[\sin(a_{o})+\sin(b_{o})]i\\\
T_{o}&=&[2J^{\prime}_{1}+4J^{\prime}_{2}][\cos(a_{o})+\cos(b_{o})]\\\
a_{o}&=&1.5\overrightarrow{k}\cdot\overrightarrow{x_{a}}+0.5\overrightarrow{k}\cdot\overrightarrow{y_{a}}\\\
b_{o}&=&0.5\overrightarrow{k}\cdot\overrightarrow{x_{a}}-1.5\overrightarrow{k}\cdot\overrightarrow{y_{a}}\end{smallmatrix}$
The matrix elements between different 'molecular orbitals' within the same
block are zero.
## Appendix C Numerical Diagonalization Method
We can diagonalize the boson pairing Hamiltonian numerically. The matrix form
of the boson Hamiltonian is
$\displaystyle\psi^{\dagger}\hat{\mathbb{\mathcal{H}}}\psi$ $\displaystyle=$
$\displaystyle\left[a^{\dagger},b\right]\left[\begin{array}[]{cc}A&B\\\
B&A\end{array}\right]\left[\begin{array}[]{c}a\\\
b^{\dagger}\end{array}\right]$
Here,
$\psi^{\dagger}=\left(a^{\dagger},b\right)=\left(a_{1}^{\dagger},a_{2}^{\dagger},a_{3}^{\dagger},a_{4}^{\dagger},b_{-1},b_{-2},b_{-3},b_{-4}\right)$,
$A$ and $B$ are $4\times 4$ matrices; $a$ and $b$ are boson operators. The
operators satisfy the commutation relation
$\displaystyle\left[\psi_{i},\psi_{j}^{\dagger}\right]$ $\displaystyle=$
$\displaystyle\hat{I}_{-i,j}$
and
$\displaystyle\hat{I}_{-}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{cc}I&0\\\ 0&-I\end{array}\right)$
$I$ is a $4\times 4$ identity matrix.
The diagonalization problem amounts to finding a transformation $\hat{T}$
which makes $\hat{T}^{\dagger}\hat{\mathcal{H}}\hat{T}$ a diagonal matrix
$\hat{\Omega}$.
$\displaystyle\psi$ $\displaystyle=$
$\displaystyle\left[\begin{array}[]{c}a\\\
b^{\dagger}\end{array}\right]=\hat{T}\left[\begin{array}[]{c}\alpha\\\
\beta^{\dagger}\end{array}\right]=\hat{T}\varphi$
Here we require that $\alpha$ and $\beta$ are also sets of boson operators,
and
$\varphi^{\dagger}=\left(\alpha^{\dagger},\beta\right)=\left(\alpha_{1}^{\dagger},\alpha_{2}^{\dagger},\alpha_{3}^{\dagger},\alpha_{4}^{\dagger},\beta_{-1},\beta_{-2},\beta_{-3},\beta_{-4}\right)$.
Then,
$\displaystyle\left[\psi_{i},\psi_{j}^{\dagger}\right]$ $\displaystyle=$
$\displaystyle\hat{I}_{-i,j}=\sum_{i^{\prime},j^{\prime}}T_{i,i^{\prime}}I_{-i^{\prime},j^{\prime}}T_{j^{\prime},j}^{\dagger}$
we get
$\hat{T}\hat{I}_{-}\hat{T}^{\dagger}=\hat{I}_{-}$ (11)
Since $\hat{I}_{-}^{2}=I$, we know
$I=(\hat{I}_{-}\hat{T})(\hat{I}_{-}\hat{T}^{\dagger})$. Because
$(\hat{I}_{-}\hat{T})$ and $(\hat{I}_{-}\hat{T}^{\dagger})$ are inverses of
each other, we also have
$I=(\hat{I}_{-}\hat{T}^{\dagger})(\hat{I}_{-}\hat{T})$, and hence
$\hat{T}^{\dagger}\hat{I}_{-}\hat{T}=\hat{I}_{-}$.
We also require the final form of $\hat{\mathcal{H}}$ to be diagonal.
$\displaystyle\hat{\mathcal{H}}$ $\displaystyle=$
$\displaystyle\left[a^{\dagger},b\right]\left[\begin{array}[]{cc}A&B\\\
B&A\end{array}\right]\left[\begin{array}[]{c}a\\\
b^{\dagger}\end{array}\right]$ $\displaystyle=$
$\displaystyle\psi^{\dagger}\hat{\mathbb{\mathcal{H}}}\psi$ $\displaystyle=$
$\displaystyle\varphi^{\dagger}\left\\{\hat{T}^{\dagger}\left[\begin{array}[]{cc}A&B\\\
B&A\end{array}\right]\hat{T}\right\\}\varphi$ (19) $\displaystyle=$
$\displaystyle\left[\alpha^{\dagger},\beta\right]\left\\{\hat{T}^{\dagger}\left[\begin{array}[]{cc}A&B\\\
B&A\end{array}\right]\hat{T}\right\\}\left[\begin{array}[]{c}\alpha\\\
\beta^{\dagger}\end{array}\right]$ (24) $\displaystyle=$
$\displaystyle\left[\alpha^{\dagger},\beta\right]\left\\{\hat{\Omega}\right\\}\left[\begin{array}[]{c}\alpha\\\
\beta^{\dagger}\end{array}\right]$ $\displaystyle=$
$\displaystyle\varphi^{\dagger}\hat{\Omega}\varphi$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{4}\omega_{i}\alpha_{i}^{\dagger}\alpha_{i}+\omega_{-i}\beta_{-i}\beta_{-i}^{\dagger}$
(28)
Here
$\hat{\Omega}=\mathrm{diag}\left(\omega_{1},\cdots,\omega_{4},\omega_{-1},\cdots,\omega_{-4}\right)$
is a diagonal matrix. In other words, we seek the matrix $\hat{T}$ such that
$\hat{T}^{\dagger}\mathcal{\hat{H}}\hat{T}=\hat{\Omega}$ is diagonal.
Combining Eq.11 and Eq.28, we get,
$\displaystyle\hat{T}^{\dagger}\mathcal{\hat{H}}\hat{T}$ $\displaystyle=$
$\displaystyle\hat{\Omega}$
$\displaystyle\left[\hat{T}\hat{I}_{-}\right]\hat{T}^{\dagger}\mathcal{\hat{H}}\hat{T}$
$\displaystyle=$ $\displaystyle\left[\hat{T}\hat{I}_{-}\right]\hat{\Omega}$
$\displaystyle\left[\hat{T}\hat{I}_{-}\hat{T}^{\dagger}\right]\mathcal{\hat{H}}\hat{T}$
$\displaystyle=$ $\displaystyle\left[\hat{T}\hat{I}_{-}\right]\hat{\Omega}$
$\displaystyle\hat{I}_{-}\mathcal{\hat{H}}\hat{T}$ $\displaystyle=$
$\displaystyle\left[\hat{T}\hat{I}_{-}\right]\hat{\Omega}$
$\displaystyle(\hat{I}_{-}\hat{\mathcal{H}})\hat{T}$ $\displaystyle=$
$\displaystyle\hat{T}\left[\hat{I}_{-}\hat{\Omega}\right]$
$\displaystyle(\hat{I}_{-}\hat{\mathcal{H}})\hat{T}$ $\displaystyle=$
$\displaystyle\hat{T}\left[\hat{\lambda}\right]$ (29)
Here
$\hat{\lambda}=\hat{I}_{-}\hat{\Omega}=\mathrm{diag}\left(\omega_{1},\cdots,\omega_{4},-\omega_{-1},\cdots,-\omega_{-4}\right)$
is a diagonal matrix. In other words, to obtain
$\hat{T}^{\dagger}\mathcal{\hat{H}}\hat{T}=\hat{\Omega}$, we can solve the
eigenvalue problem
$(\hat{I}_{-}\hat{\mathcal{H}})\hat{T}=\hat{T}\hat{\lambda}$ of Eq. 29. The
strategy of J. L. van Hemmen [23, 24] is that the canonical transformation
$\hat{T}$ is fully determined by its $n(=8)$ columns
$\{x_{1},\ldots,x_{4},x_{-1},\ldots,x_{-4}\}$. We therefore reduce Eq. 29 to
an eigenvalue problem for these $n(=8)$ vectors $x_{i}$, $1\leq i\leq 8$.
Then
$\displaystyle\left(\hat{I}_{-}\hat{\mathcal{H}}\right)\chi$ $\displaystyle=$
$\displaystyle\lambda\chi$
where $\chi=x_{i}$, $1\leq i\leq 8$, and
$\lambda\in\left\\{\omega_{1},\omega_{2},\omega_{3},\omega_{4},-\omega_{-1},-\omega_{-2},-\omega_{-3},-\omega_{-4}\right\\}$.
Thus, diagonalizing the Hamiltonian is equivalent to solving a generalized
eigenvalue problem.
If we define the matrix $\hat{I}_{y}=\left[\begin{array}[]{cc}0&I\\\
I&0\end{array}\right]$, where $I$ is the $4\times 4$ identity matrix, it is
easy to prove that
$\displaystyle\hat{I}_{-}\mathcal{\hat{H}}$ $\displaystyle=$
$\displaystyle-\hat{I}_{y}^{-1}\hat{I}_{-}\mathcal{\hat{H}}\hat{I}_{y}$
We assume there is an eigenvalue $\lambda_{i}$ with eigenvector
$\chi_{i}=\left[\begin{array}[]{c}\mu_{1-4}\\\
\nu_{1-4}\end{array}\right]$ of the general eigenvalue problem,
$\displaystyle\hat{I}_{-}\hat{\mathcal{H}}\chi_{i}$ $\displaystyle=$
$\displaystyle\lambda_{i}\chi_{i}$
Then
$\chi^{\prime}_{i}=\hat{I}_{y}\cdot\chi_{i}=\left[\begin{array}[]{c}\nu_{1-4}\\\
\mu_{1-4}\end{array}\right]$ is also an eigenvector of
$\hat{I}_{-}\hat{\mathcal{H}}$, with corresponding eigenvalue
$\lambda^{\prime}_{i}=-\lambda_{i}$:
$\displaystyle\hat{I}_{-}\hat{\mathcal{H}}\chi_{i}$ $\displaystyle=$
$\displaystyle\lambda_{i}\chi_{i}$
$\displaystyle-\hat{I}_{y}^{-1}\hat{I}_{-}\hat{\mathcal{H}}\left[\hat{I}_{y}\chi_{i}\right]$
$\displaystyle=$ $\displaystyle\lambda_{i}\chi_{i}$
$\displaystyle\hat{I}_{y}\hat{I}_{y}^{-1}\hat{I}_{-}\hat{\mathcal{H}}\left[\hat{I}_{y}\chi_{i}\right]$
$\displaystyle=$ $\displaystyle-\hat{I}_{y}\lambda_{i}\chi_{i}$
$\displaystyle\hat{I}_{-}\hat{\mathcal{H}}\left[\hat{I}_{y}\chi_{i}\right]$
$\displaystyle=$ $\displaystyle-\lambda_{i}\hat{I}_{y}\chi_{i}$ $\displaystyle
I_{-}\hat{\mathcal{H}}\chi^{\prime}_{i}$ $\displaystyle=$
$\displaystyle-\lambda_{i}\chi^{\prime}_{i}$ $\displaystyle
I_{-}\hat{\mathcal{H}}\chi^{\prime}_{i}$ $\displaystyle=$
$\displaystyle\lambda^{\prime}_{i}\chi^{\prime}_{i}$
Therefore, the eigenvalues come in pairs. For convenience, we order the
eigenvalues by the relative size of
$\aleph_{i}=\mid\mu_{i}\mid^{2}-\mid\nu_{i}\mid^{2}$
($\aleph_{1}>\cdots>\aleph_{8}$); for equal $\aleph_{i}$, we order the
eigenvalues by their own relative size. If an eigenvalue $\lambda_{i}$ whose
eigenvector belongs to the first group
($\aleph_{1}>\aleph_{2}>\aleph_{3}>\aleph_{4}$) is negative, the ground state
is not stable. For the first four eigenvectors
$\mid\mu_{i}\mid^{2}-\mid\nu_{i}\mid^{2}=1$ ($i=1,2,3,4$) and for the last
four eigenvectors $\mid\mu_{i}\mid^{2}-\mid\nu_{i}\mid^{2}=-1$ ($i=5,6,7,8$).
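The procedure of this appendix can be sketched numerically: build a stable boson pairing Hamiltonian, diagonalize $\hat{I}_{-}\hat{\mathcal{H}}$ with an ordinary eigensolver, and verify that the eigenvalues are real and come in $\pm$ pairs. The $2\times 2$ blocks and couplings below are hypothetical stand-ins for the $4\times 4$ blocks of the text:

```python
import numpy as np

n = 2
A = np.array([[2.0, 0.3], [0.3, 2.0]])   # normal (diagonal) block, assumed
B = np.array([[0.5, 0.1], [0.1, 0.5]])   # pairing (anomalous) block, assumed

# H = psi† [[A, B], [B, A]] psi, and I_- = diag(I, -I)
H = np.block([[A, B], [B, A]])
I_minus = np.diag([1.0] * n + [-1.0] * n)

# stability of the ground state requires H to be positive definite
assert np.all(np.linalg.eigvalsh(H) > 0)

# reduce the Bogoliubov problem to an ordinary eigenproblem of (I_- H)
lam = np.sort(np.linalg.eigvals(I_minus @ H).real)

print(np.allclose(lam, -lam[::-1]))   # True: eigenvalues come in +/- pairs
print(lam[n:])                        # the physical magnon energies
```

The eigenvectors of $\hat{I}_{-}\hat{\mathcal{H}}$, normalized with respect to $\hat{I}_{-}$, then form the columns of the canonical transformation $\hat{T}$.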
## References
* (1) P. A. Lee, N. Nagaosa, and X. G. Wen, Rev. Mod. Phys. 78, 17 (2006).
* (2) J. G. Bednorz and K. A. Müller, Z. Phys. B 64, 189 (1986).
* (3) Y. Kamihara, T. Watanabe, M. Hirano, and H. Hosono, J. Am. Chem. Soc. 130, 3296 (2008).
* (4) G. F. Chen, Phys. Rev. Lett. 101, 057007 (2008).
* (5) Hechang Lei, Kefeng Wang, J. B. Warren, and C. Petrovic, arXiv:1102.2215.
* (6) Takeshi Kondo, A. F. Santander-Syro, O. Copie, Chang Liu, M. E. Tillman, E. D. Mun, J. Schmalian, S. L. Bud’ko, M. A. Tanatar, P. C. Canfield, and A. Kaminski, Phys. Rev. Lett., 101, 147003 (2008).
* (7) L. Wray, D. Qian, D. Hsieh, Y. Xia, L. Li, J. G. Checkelsky, A. Pasupathy, K. K. Gomes, C. V. Parker, A. V. Fedorov, G. F. Chen, J. L. Luo, A. Yazdani, N. P. Ong, N. L. Wang, and M. Z. Hasan, Phys. Rev. B, 78, 184508 (2008).
* (8) Gang Xu, Haijun Zhang, Xi Dai, and Zhong Fang, EPL, 84, 67015 (2008).
* (9) S. V. Borisenko, V. B. Zabolotnyy, D. V. Evtushinsky, T. K. Kim, I. V. Morozov, A. N. Yaresko, A. A. Kordyuk, G. Behr, A. Vasiliev, R. Follath, and B. Büchner, Phys. Rev. Lett. 105, 067002 (2010).
* (10) R. Khasanov, M. Bendele, A. Amato, K. Conder, H. Keller, H.-H. Klauss, H. Luetkens, and E. Pomjakushina, Phys. Rev. Lett. 104, 087004 (2010).
* (11) J. Guo et al., arXiv:1101.0092 (2011); Y. Kawasaki et al., arXiv:1101.0896 (2011); J. J. Ying et al., arXiv:1101.1234 (2011).
* (12) A. Krzton-Maziopa et al., arXiv:1012.3637 (2010). J. J. Ying et al., arXiv:1012.5552 (2010). A. F. Wang et al., arXiv:1012.5525 (2010) M. Fang et al., arXiv:1012.5236 (2010); H. Wang et al., arXiv:1101.0462 (2011).
* (13) Xun-Wang Yan, Miao Gao, Zhong-Yi Lu, and Tao Xiang, arXiv:1102.2215.
* (14) Wei Bao, G. N. Li, Q. Huang, G. F. Chen, J. B. He, M. A. Green, Y. Qiu, D. M. Wang, J. L. Luo, and M. M. Wu, arXiv:1102.3674.
* (15) Wei Bao, Q. Huang, G. F. Chen, M. A. Green, D. M. Wang, J. B. He, X. Q. Wang, and Y. Qiu, arXiv:1102.0830.
* (16) F. Ye, S. Chi, Wei Bao, X. F. Wang, J. J. Ying, X. H. Chen, H. D. Wang, C. H. Dong, and Minghu Fang, arXiv:1102.2282.
* (17) C. Cao and J. Dai (2011), arXiv:1102.1344.
* (18) V. Yu. Pomjakushin, E. V. Pomjakushina, A. Krzton-Maziopa, K. Conder, and Z. Shermadini, arXiv:1102.3380.
* (19) Yi-Zhuang You, Fang Yang, Su-Peng Kou, and Zheng-Yu Weng, arXiv:1102.3200.
* (20) G. M. Zhang, Z. Y. Lu, and T. Xiang, arXiv:1102.4575 (2011).
* (21) A. Auerbach, Phys. Rev. Lett., 72, 2931 (1994).
* (22) E. W. Carlson, D. X. Yao, and D. K. Campbell, Phys. Rev. B , 70, 064505 (2004); D. X. Yao, E. W. Carlson, and D. K. Campbell, Phys. Rev. Lett. 97, 017003 (2006).
* (23) Ming-wen Xiao, arXiv:0908.0787.
* (24) J.L. van Hemmen, Z. Physik B-Condensed Matter 38, 271-277 (1980).
* (25) Lu Huaixin and Zhang Yongde, International Journal of Theoretical Physics, 39, 2, (2000).
* (26) Constantino Tsallis, J. Math. Phys. 19, 277, (1978).
* (27) J. H. P. Colpa, Physica A 93, 327 (1978).
|
arxiv-papers
| 2011-03-29T01:12:25 |
2024-09-04T02:49:17.981266
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Feng Lu, and Xi Dai",
"submitter": "Feng Lu",
"url": "https://arxiv.org/abs/1103.5521"
}
|
1103.5522
|
# Getting a directed Hamilton cycle two times faster
Choongbum Lee Department of Mathematics, UCLA, Los Angeles, CA, 90095. Email:
choongbum.lee@gmail.com. Research supported in part by Samsung Scholarship.
Benny Sudakov Department of Mathematics, UCLA, Los Angeles, CA 90095. Email:
bsudakov@math.ucla.edu. Research supported in part by NSF grant DMS-1101185,
NSF CAREER award DMS-0812005 and by USA-Israeli BSF grant. Dan Vilenchik
Department of Mathematics, UCLA, Los Angeles, CA 90095. Email:
vilenchik@math.ucla.edu.
###### Abstract
Consider the random graph process where we start with an empty graph on $n$
vertices, and at time $t$, are given an edge $e_{t}$ chosen uniformly at
random among the edges which have not appeared so far. A classical result in
random graph theory asserts that $whp$ the graph becomes Hamiltonian at time
$(1/2+o(1))n\log n$. On the contrary, if all the edges were directed randomly,
then the graph has a directed Hamilton cycle $whp$ only at time $(1+o(1))n\log
n$. In this paper we further study the directed case, and ask whether it is
essential to have twice as many edges compared to the undirected case. More
precisely, we ask if at time $t$, instead of a random direction one is allowed
to choose the orientation of $e_{t}$, then whether it is possible or not to
make the resulting directed graph Hamiltonian at time earlier than $n\log n$.
The main result of our paper answers this question in the strongest possible
way, by asserting that one can orient the edges on-line so that $whp$, the
resulting graph has a directed Hamilton cycle exactly at the time at which the
underlying graph is Hamiltonian.
## 1 Introduction
The celebrated _random graph process_ , introduced by Erdős and Rényi [11] in
the 1960’s, begins with an empty graph on $n$ vertices, and in every round
$t=1,\ldots,m$ adds to the current graph a single new edge chosen uniformly at
random out of all missing edges. This distribution is commonly denoted as
$G_{n,m}$. An equivalent “static” way of defining $G_{n,m}$ would be: choose
$m$ edges uniformly at random out of all $\binom{n}{2}$ possible ones. One
advantage in studying the random graph process, rather than the static model,
is that it allows for a higher resolution analysis of the appearance of
monotone graph properties (a graph property is monotone if it is closed under
edge addition).
A _Hamilton cycle_ of a graph is a simple cycle that passes through every
vertex of the graph, and a graph containing a Hamilton cycle is called
_Hamiltonian_. Hamiltonicity is one of the most fundamental notions in graph
theory, and has been intensively studied in various contexts, including random
graphs. The earlier results on Hamiltonicity of random graphs were obtained by
Pósa [20], and Korshunov [17]. Improving on these results, Komlós and
Szemerédi [16] proved that if $m^{\prime}=\frac{1}{2}n\log
n+\frac{1}{2}\log\log n+c_{n}n$, then
$\lim_{n\rightarrow\infty}\mathbb{P}(G_{n,m^{\prime}}\,\textrm{is
Hamiltonian})=\left\\{\begin{array}[]{cll}&0&\textrm{if
}c_{n}\rightarrow-\infty\\\ &e^{-e^{-2c}}&\textrm{if }c_{n}\rightarrow c\\\
&1&\textrm{if }c_{n}\rightarrow\infty.\end{array}\right.$
One obvious necessary condition for the graph to be Hamiltonian is for the
minimum degree to be at least 2, and surprisingly, the probability of
$G_{n,m^{\prime}}$ having minimum degree two at time ${m^{\prime}}$ has the
same asymptotic behavior as the probability of it being Hamiltonian. Bollobás
[7] strengthened this observation by proving that $whp$ the random graph
process becomes Hamiltonian when the last vertex of degree one disappears.
Moreover, Bollobás, Fenner, and Frieze [8] described a polynomial time
algorithm which $whp$ finds a Hamilton cycle in random graphs.
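Bollobás's hitting-time statement is easy to observe in a small simulation (a sketch for illustration, not part of any proof here): for each run of the process we record the time at which the minimum degree reaches 2 and the time at which a Hamilton cycle first appears; the former is always a lower bound for the latter, and the two typically coincide.

```python
import itertools, random

def is_hamiltonian(adj, n):
    """Backtracking Hamilton-cycle test (fine for small n)."""
    def extend(path, used):
        if len(path) == n:
            return path[0] in adj[path[-1]]
        return any(extend(path + [w], used | {w})
                   for w in adj[path[-1]] if w not in used)
    return extend([0], {0})

def hitting_times(n, rng):
    """One run of the random graph process: return the time the minimum
    degree reaches 2 and the time the graph becomes Hamiltonian."""
    edges = list(itertools.combinations(range(n), 2))
    rng.shuffle(edges)
    adj = {v: set() for v in range(n)}
    t_deg2 = t_ham = None
    for t, (u, v) in enumerate(edges, 1):
        adj[u].add(v); adj[v].add(u)
        if t_deg2 is None and min(len(adj[w]) for w in adj) >= 2:
            t_deg2 = t
        if t_deg2 is not None and is_hamiltonian(adj, n):
            t_ham = t
            break
    return t_deg2, t_ham

rng = random.Random(0)
results = [hitting_times(8, rng) for _ in range(20)]
print(all(h >= d for d, h in results))                # True: min degree 2 is necessary
print(sum(h == d for d, h in results) / len(results)) # fraction where they coincide
```

The theorem asserts that the coincidence probability tends to 1 as $n\to\infty$.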
Hamiltonicity has been studied for directed graphs as well. Consider a random
directed graph process where at time $t$ a random directed edge is chosen
uniformly at random among all missing edges, and let $D_{n,m}$ be the graph
consisting of the first $m$ edges. Frieze [14] proved that for
$m^{\prime\prime}=n\log n+c_{n}n$, the probability of $D_{n,m^{\prime\prime}}$
containing a (directed) Hamilton cycle is
$\lim_{n\rightarrow\infty}\mathbb{P}(D_{n,m^{\prime\prime}}\,\textrm{is
Hamiltonian})=\left\\{\begin{array}[]{cll}&0&\textrm{if
}c_{n}\rightarrow-\infty\\\ &e^{-2e^{-c}}&\textrm{if }c_{n}\rightarrow c\\\
&1&\textrm{if }c_{n}\rightarrow\infty.\end{array}\right.$
Similar to the undirected case, this probability has the same asymptotic
behavior as the probability of the directed graph having minimum in-degree and
out-degree 1. In fact, Frieze proved [14] that when the last vertex to have
in- or out-degree less than one disappears, the graph has a Hamilton cycle
$whp$.
Hamiltonicity of various other random graph models has also been studied [21,
3]. One model which will be of particular interest to us is the $k$-in $k$-out
model, in which every vertex chooses $k$ in-neighbors and $k$ out-neighbors
uniformly at random and independently of the others. Improving on several
previous results, Cooper and Frieze [9] proved that a random graph in this
model is Hamiltonian $whp$ already when $k=2$ (which is best possible since it
is easy to see that a 1-in 1-out random graph is $whp$ not Hamiltonian).
### 1.1 Our Contribution
Bollobás [7], and Frieze’s [14] results introduced above suggest that the main
obstacle to Hamiltonicity of random graphs lies in “reaching” certain minimum
degree conditions. It is therefore natural to ask how the thresholds change if
we modify the random graph process so that we can somehow bypass this
obstacle.
We consider the following process suggested by Frieze [15] which has been
designed for this purpose. Starting from the empty graph, at time $t$, an
undirected edge $(u,v)$ is given uniformly at random out of all missing edges,
and a choice of its orientation ($u\to v$ or $v\to u$) is to be made at the
time of its arrival. In this process, one can attempt to accelerate the
appearance of monotone directed graph properties, or delay them, by applying
an appropriate on-line algorithm. It is important to stress that the process
is on-line in nature, namely, one cannot see any future edges at the current
round and is forced to make the choice based only on the edges seen so far. In
this paper, we investigate the property of containing a directed Hamilton
cycle by asking the question, “can one speed up the appearance of a directed
Hamilton cycle?”. The best we can hope for is to obtain a directed Hamilton
cycle at the time when the underlying graph has minimum degree 2. The
following result asserts that directed Hamiltonicity is in fact achievable
exactly at that time, and this answers the above question positively in the
strongest possible way.
###### Theorem 1.1.
Let $\mathcal{G}$ be a random (undirected) graph process that terminates when
the last vertex of degree one disappears. There exists an on-line algorithm
${\bf Orient}$ that orients the edges of $\mathcal{G}$, so that the resulting
directed graph is Hamiltonian $whp$.
Let us remark that $\mathcal{G}$ $whp$ contains $(1+o(1))n\log n/2$ edges, in
contrast with $(1+o(1))n\log n$ edges in the random directed graph model. Thus
the required number of random edges is reduced by half.
Our model is similar in spirit to the so called Achlioptas process. It is well
known that a giant connected component (i.e. a component of linear size)
appears in the random graph $G_{n,m}$ when $m=(1+o(1))n/2$. Inspired by the
celebrated “power of two choices” result [2], Achlioptas posed the following
question: Suppose that edges arrive in pairs, that is in round $t$ the pair of
edges $(e_{t},e^{\prime}_{t})$ chosen uniformly at random is given, and one is
allowed to pick an edge out of it for the graph (the other edge will be
discarded). Can one delay the appearance of the giant component? Bohman and
Frieze answered this question positively [4] by describing an algorithm whose
choice rule allows for the ratio $m/n\geq 0.53$, and this ratio has been
improved since [5]. Quite a few papers have thereafter studied various related
problems that arise in the above model [6, 13, 18, 22, 23]. As an example, in
[18], the authors studied the question, “How long can one delay the appearance
of a certain fixed subgraph?”.
One such paper which is closely related to our work is the recent work of
Krivelevich, Lubetzky, and Sudakov [19]. They studied the Achlioptas process
for Hamiltonicity, and proved that by exploiting the “power of two choices”,
one can construct a Hamilton cycle at time $(1+o(1))n\log n/4$, which is twice
as fast as in the random case. Both our result and this result suggest that
the “bottleneck” to Hamiltonicity of random graphs indeed lies in the minimum
degree, and thus these results can be understood in the context of
complementing the results of Bollobás [7], and Frieze [14].
### 1.2 Preliminaries
The paper is rather involved technically. One factor that contributes to this
is the fact that we are establishing the “hitting time” version of the
problem. That is, we determine the exact threshold for the appearance of a
Hamilton cycle. The analysis can be simplified if one only wishes to estimate
this threshold asymptotically (see concluding remarks). To make the current
analysis more approachable without risking any significant change to the
random model, we consider the following variant of the graph process, which we
call the random edge process : at time $t$, an edge is given as an ordered
pair of vertices $e_{t}=(v_{t},w_{t})$ chosen uniformly at random, with
repetition, from the set of all possible $n^{2}$ ordered pairs (note that this
model allows loops and repeated edges). In what follows, we use $G_{t}$ to
denote the graph induced by the first $t$ edges, and given the orientation of
each edge, use $D_{t}$ to denote the directed graph induced by the first $t$
edges. By $m_{*}$ we denote the time $t$ when the last vertex of degree one in
$G_{t}$ becomes a degree two vertex.
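As a rough sanity check on the scale of $m_{*}$ (a sketch assuming that loops count as steps of the process but are ignored in the degree count), one can simulate the random edge process directly; the ratio $m_{*}/(\frac{1}{2}n\log n)$ is of order 1, a little above it because of the $\log\log n$ correction in the minimum-degree-2 threshold.

```python
import math, random

def m_star(n, rng):
    """Time at which the last vertex of degree one disappears in the
    random edge process (ordered pairs drawn with repetition; loops are
    drawn but discarded for the degree count -- an assumption of this
    sketch)."""
    deg = [0] * n
    below = n          # vertices of degree < 2
    t = 0
    while below > 0:
        t += 1
        v, w = rng.randrange(n), rng.randrange(n)
        if v == w:
            continue   # loop: counts as a step, contributes no degree
        for x in (v, w):
            deg[x] += 1
            if deg[x] == 2:
                below -= 1
    return t

n = 2000
m = m_star(n, random.Random(1))
print(m / (0.5 * n * math.log(n)))   # of order 1, slightly above
```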
We will first prove that there exists an on-line algorithm ${\bf Orient}$
which $whp$ orients the edges of the graph $G_{m_{*}}$ so that the directed
graph $D_{m_{*}}$ is Hamiltonian, and then in Section 6 show how Theorem 1.1
can be recovered from this result.
### 1.3 Organization of the Paper
In the next section we describe the algorithm ${\bf Orient}$ that is used to
prove Theorem 1.1 (in the modified model). Then in Section 3 we outline the
proof of Theorem 1.1. Section 4 describes several properties that a typical
random edge process possesses. Using these properties we prove Theorem 1.1 in
Section 5. Then in Section 6, we show how to modify the algorithm ${\bf
Orient}$, in order to make it work for the original random graph process.
Notation. A _directed 1-factor_ is a directed graph in which every vertex has
in-degree and out-degree exactly 1, and a _1-factor_ of a directed graph is a
spanning subgraph which is a directed 1-factor. The function $\exp(x):=e^{x}$
is the exponential function. Throughout the paper $\log(\cdot)$ denotes the
natural logarithm. For the sake of clarity, we often omit floor and ceiling
signs whenever these are not crucial and make no attempts to optimize our
absolute constants. We also assume that the order $n$ of all graphs tends to
infinity and therefore is sufficiently large whenever necessary.
## 2 The Orientation Rule
In this section we describe the algorithm ${\bf Orient}$. Its input is the
edge process ${\bf e}=(e_{1},e_{2},\ldots,e_{m_{*}})$, and its output is an on-line orientation of each edge $e_{t}$. The algorithm proceeds in two steps. In
the first step, which consists of the first $2n\log\log n$ edges, the
algorithm builds a “core” which contains almost all the vertices, and whose
edges are distributed (almost) like a 6-in 6-out random graph. In the second
step, which contains all edges that follow, the remaining $o(n)$ non-core
vertices are taken care of, by being connected to the core in a way that will
guarantee $whp$ the existence of a directed Hamiltonian cycle.
### 2.1 Step I
Recall that each edge is given as an ordered pair $(v,w)$. For every vertex
$v$ we keep a count of the number of times that $v$ appears as the first
vertex. We update the set of _saturated_ vertices, which consists of the
vertices which appeared at least 12 times as the first vertex. Given the edge
$(v,w)$ at time $t$, if $v$ is not yet saturated, direct the edge $(v,w)$ alternatingly with respect to $v$, starting from an out-edge (by alternatingly we mean: if the last edge having $v$ as its first vertex was directed as an out-edge of $v$, then direct the current one as an in-edge of $v$, and vice versa; for the first such edge we choose the out direction). Otherwise,
if $v$ is saturated, then count the number of times that $w$ has appeared as a second vertex while the first vertex was already saturated, and direct the edge alternatingly with respect to $w$ according to this count, starting from an in-edge. This alternation process is independent of the previous one: even if $w$ appeared as a first vertex somewhere before, this count is kept separately.
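The Step I rule can be summarized in code. The following sketch is our own illustration (function name and data layout are ours); the saturation threshold 12 is kept as a parameter so the rule can be traced on tiny examples. As in the text, saturation is checked against appearances strictly before the current edge:

```python
def orient_step1(edges, threshold=12):
    """Orient the Step I edges by the alternation rule of Section 2.1.
    Returns the directed edges as (tail, head) pairs and the saturated set."""
    first_count = {}   # times v has appeared as a first vertex so far
    alt_first = {}     # next direction of v's first-vertex alternation
    alt_second = {}    # next direction of w's second-vertex alternation
    saturated = set()
    directed = []
    for v, w in edges:
        if v not in saturated:
            d = alt_first.get(v, 'out')            # start from an out-edge of v
            directed.append((v, w) if d == 'out' else (w, v))
            alt_first[v] = 'in' if d == 'out' else 'out'
            first_count[v] = first_count.get(v, 0) + 1
            if first_count[v] >= threshold:
                saturated.add(v)                   # saturated from now on
        else:
            d = alt_second.get(w, 'in')            # start from an in-edge of w
            directed.append((v, w) if d == 'in' else (w, v))
            alt_second[w] = 'out' if d == 'in' else 'in'
    return directed, saturated
```

On the toy input $[(0,1),(0,2),(0,3)]$ with threshold 2, vertex 0 saturates after its second appearance as a first vertex, so the third edge is oriented by the alternation of its second vertex.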
For a vertex $v\in V$, let the _first vertex degree_ of $v$ be the number of
times that $v$ appeared as a first vertex in Step I, and denote it as
$d_{1}(v)$. Let the _second vertex degree_ of $v$ be the number of times that
$v$ appeared in Step I as a second vertex of an edge whose first vertex is
already saturated, and denote it as $d_{2}(v)$. Note that the sum of the first
vertex degree and second vertex degree of $v$ is not necessarily equal to the
degree of $v$ in Step I as $v$ might appear as a second vertex of an edge
whose first vertex is not yet saturated. We will call such an edge a
_neglected edge_ of $v$.
### 2.2 Step II
Let $A$ be the set of saturated vertices at the end of Step I, and
$B=V\setminus A$. Call an edge an $A$-$B$ edge if one end point lies in $A$
and the other end point lies in $B$, and similarly define $A$-$A$ edges and
$B$-$B$ edges. Given an edge $e=(v,w)$ at time $t$, if $e$ is an $A$-$B$ edge,
and w.l.o.g. assume that $v\in B$ and $w\in A$, then direct $e$ alternatingly
with respect to $v$, where the alternation process of Step II continues the
one from Step I as follows:
1. 1.
If $v$ appeared as a first vertex in Step I at least once, then pick up where
the alternation process of $v$ as a first vertex in Step I stopped and
continue the alternation.
2. 2.
If $v$ did not appear as a first vertex in Step I but did appear as a second
vertex of an already saturated vertex, then pick up where the alternation
process of $v$ as a second vertex of a saturated vertex stopped in Step I and
continue the alternation.
3. 3.
If $v$ appeared in Step I but does not belong to the above two cases, then
consider the first neglected edge connected to $v$, and start the alternation
process from the opposite direction of this edge.
4. 4.
If none of the above, then start from an out edge.
Otherwise, if $e$ is an $A$-$A$ edge or a $B$-$B$ edge, orient it uniformly at
random. Note that unlike Step I, the order of vertices of the given edge does
not affect the orientation of the edge in Step II.
For a vertex $v\in B$, let the _$A$-$B$ degree_ of $v$ be the number of
$A$-$B$ edges incident to $v$ in Step II, and denote it as $d_{AB}(v)$. For
$v\in A$, let $d_{AB}(v)=0$.
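The four continuation cases reduce to a simple dispatch for the direction with which a $B$-vertex's Step II alternation starts. In this hypothetical sketch (names and data layout are ours), `alt_first` and `alt_second` map a vertex to the pending direction of its Step I alternations (absent if that alternation never started), and `neglected_dir` maps a vertex to the direction, relative to that vertex, of its first neglected edge:

```python
def step2_start_direction(v, alt_first, alt_second, neglected_dir):
    """Initial direction ('out' or 'in') of v's Step II alternation,
    following cases 1-4 of Section 2.2."""
    if v in alt_first:                 # case 1: v was a first vertex in Step I
        return alt_first[v]
    if v in alt_second:                # case 2: v was a second vertex of an
        return alt_second[v]           #         already saturated first vertex
    if v in neglected_dir:             # case 3: opposite direction of the
        return 'in' if neglected_dir[v] == 'out' else 'out'  # first neglected edge
    return 'out'                       # case 4: start from an out-edge
```

Recall that only $A$-$B$ edges are oriented this way; $A$-$A$ and $B$-$B$ edges are oriented uniformly at random.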
## 3 Proof Outline
Our approach builds on Frieze’s proof of the Hamiltonicity of the random
directed graph process [14] with some additional ideas. His proof consists of
two phases (the original proof consists of three phases, but for simplicity,
we describe it as two phases). We shall first describe these two phases of
Frieze’s proof, and then point out the modifications that are necessary to
accommodate our different setting. Let $m=(1+o(1))n\log n$ be the time at
which the random directed graph process has minimum in-degree and out-degree
1, and let $D_{n,m}$ be the directed graph at time $m$ (throughout this
section we say that random directed graphs have certain properties if they
have the properties $whp$).
### 3.1 Phase 1 : Find a small 1-factor
In Phase 1, a 1-factor of $D_{n,m}$ consisting of at most $O(\log n)$ cycles
is constructed. To this end, a subgraph $D_{5-in,5-out}$ of $D_{n,m}$ is
constructed which uses only a small number of the edges. Roughly speaking, for
each vertex, use its first 5 out-neighbors and 5 in-neighbors (if possible) to
construct $D_{5-in,5-out}$. Note that the resulting graph will be similar to a
random 5-in 5-out directed graph, but still different as some vertices will
only have 1 in-neighbor and 1 out-neighbor even at time $m$. Finally, viewing
$D_{5-in,5-out}$ as a bipartite graph $G^{\prime}(V\cup V^{*},E^{\prime})$,
where $V^{*}$ is a copy of $V$, and $\\{u,v^{*}\\}\in E^{\prime}$ iff $u\to v$
belongs to $D_{5-in,5-out}$, one proves that $G^{\prime}$ has a perfect
matching. It turns out that this matching can be viewed as a uniform random
permutation of the set of vertices $V$. A well known fact about such
permutations is that they $whp$ consist of at most $O(\log n)$ cycles.
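The well-known fact invoked here is that a uniform random permutation of $[n]$ has on average $H_{n}=\sum_{i=1}^{n}1/i\approx\log n$ cycles, so $O(\log n)$ cycles occur $whp$. This can be checked empirically with a short script (our own illustration):

```python
import random

def num_cycles(perm):
    """Number of cycles of the permutation i -> perm[i]."""
    seen = [False] * len(perm)
    cycles = 0
    for i in range(len(perm)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:         # walk the cycle containing i
                seen[j] = True
                j = perm[j]
    return cycles

# A uniform permutation of [1000]: H_1000 is about 7.5, so the observed
# cycle count is almost surely a small constant multiple of log n.
rng = random.Random(0)
p = list(range(1000))
rng.shuffle(p)
```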
### 3.2 Phase 2 : Combining the cycles into a Hamilton cycle
In Phase 2, the cycles of the 1-factor are combined into a Hamilton cycle. The
technical issue to overcome in this step is the fact that in order to
construct $D_{5-in,5-out}$, all of the edges were scanned, and now supposedly
we have no remaining random edges in the process to combine the cycles of the
1-factor. However, note that since $D_{5-in,5-out}$ consists of at most $10n$
edges, the majority of edges need not be exposed. More rigorously, let $LARGE$
be the vertices whose degree is $\Omega(\log n/\log\log n)$ at time
$t_{0}=2n\log n/3$ in the directed graph process. For the $LARGE$ vertices,
their 5 neighbors in $D_{5-in,5-out}$ will be determined solely by the edges up
to time $t_{0}$, leaving the remaining edges (edges after time $t_{0}$) of the
process unexposed. Two key properties used in Phase 2 are that $whp$, $(a)$
$|LARGE|=n-o(n^{1/2})$, and $(b)$ every cycle of the 1-factor contains many
$LARGE$ vertices. Note that by $(a)$, out of the remaining $n\log n/3$ edges,
all but an $o(1)$-fraction will connect two $LARGE$ vertices. Phase 2 can now be
summarized by the following theorem [14].
###### Theorem 3.1.
Let $V$ be a set of $n$ vertices and $L\subset V$ be a subset of size at least
$n-o(n^{1/2})$. Assume that $D$ is a directed 1-factor over $V$ consisting of
at most $O(\log n)$ cycles, and the vertices $V\setminus L$ are at distance at
least 10 away from each other in this graph.
If $(1/3-o(1))n\log n$ $L$-$L$ edges are given uniformly at random, then $whp$
the union of these edges and the graph $D$ contains a directed Hamilton cycle.
The proof of a slightly stronger version of Theorem 3.1 will be given in
Section 6.
### 3.3 Comparing with our setting
The main technical issue in this paper is to reprove Phase 1, namely, the
existence of a 1-factor with a small number of cycles. In [14], the fact that all vertices have the same distribution in $D_{5-in,5-out}$ led to an
argument showing the existence of a matching that translates into a uniform
random permutation. Our case is different because of the orientation rule. We
have different types of vertices each being oriented in a different way,
breaking the nice symmetry. The bulk of our technical work is spent in
resolving this technical issue.
Once this is done, that is, after obtaining the 1-factor, we introduce an analogue of $LARGE$, which we call “saturated”. As in Phase 2 described above, we prove that $whp$ $(a^{\prime})$ most of the vertices are
saturated, and $(b^{\prime})$ every cycle in the 1-factor contains many
saturated vertices. However, the naive approach results in a situation where
one cannot apply Theorem 3.1 ($(a^{\prime})$ and $(b^{\prime})$ are
quantitatively weaker than $(a)$ and $(b)$). Thus we develop an argument for “compressing” the vertices of a given cycle. This idea allows us to get rid of all the non-saturated vertices, leading to an auxiliary graph that contains only saturated vertices. Details will be given in Section 5.2. Once we apply the
compression argument, we can use Theorem 3.1 to finish the proof. Let us
mention that the compression argument can be applied after Phase 1 in [14] as
well to simplify the proof.
## 4 A Typical Random Process
The following well-known concentration result (see, for example [1, Corollary
A.1.14]) will be used several times in the proof. We denote by $Bi(n,p)$ the
binomial random variable with parameters $n$ and $p$.
###### Theorem 4.1.
(Chernoff’s inequality) If $X\sim Bi(n,p)$ and $\varepsilon>0$, then
$\mathbb{P}\big{(}|X-\mathbb{E}[X]|\geq\varepsilon\mathbb{E}[X]\big{)}\leq
e^{-\Omega_{\varepsilon}(\mathbb{E}[X])}.$
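For intuition, the probability on the left-hand side can be computed exactly for moderate $n$ and compared against an explicit exponential bound. The sketch below is our own; the constant $0.01$ in the usage example is an illustrative stand-in for the unspecified constant hidden in $\Omega_{\varepsilon}(\cdot)$:

```python
from math import comb, exp

def binom_tail(n, p, eps):
    """Exact P(|X - E X| >= eps * E X) for X ~ Bi(n, p), by direct summation."""
    mu = n * p
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1)
               if abs(k - mu) >= eps * mu)
```

For example, `binom_tail(1000, 0.5, 0.2)` is already below $e^{-0.01\cdot 500}$, and the tail only shrinks as the allowed deviation grows.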
### 4.1 Classifying Vertices
To analyze the algorithm it will be convenient to work with three sets of
vertices. The first is the set of saturated vertices at Step I. Throughout we
will use $A$ to denote this set. Let us now consider the non-saturated
vertices $B=V\setminus A$. Here we distinguish between two types. We say that
$v\in B$ blossoms if there are at least 12 edges of the form $\\{v,a\\}$ with $a\in A$ in Step II, and let $B_{1}$ be the collection of vertices which blossom. All the remaining vertices are called restricted, and their collection is denoted by $B_{2}$. Thus every vertex is either saturated (in $A$), blossoming (in $B_{1}$), or restricted (in $B_{2}$).
Furthermore, the set of restricted vertices has two important subclasses which
are determined by the first vertex degree $d_{1}(v)$, second vertex degree
$d_{2}(v)$, and $A$-$B$ degree $d_{AB}(v)$ defined in the previous section. We
say that a restricted vertex $v$ partially-blossoms if the sum of its first
vertex degree, second vertex degree, and $A$-$B$ degree is at least 2. Note
that since we stopped the process when the graph has minimum degree 2, every
vertex $v$ has degree at least 2. Thus, if the above mentioned sum is at most
1, then $v$ either has a neglected edge, or a $B$-$B$ edge connected to it. A
useful fact that we prove in Lemma 4.5 says that $whp$ all such vertices $v$
have one $A$-$B$ edge (thus $d_{AB}(v)=1$), and at least one neglected edge.
Thus, we call a restricted vertex $v$ that is not partially-blossomed, has exactly one $A$-$B$ edge, and has at least one neglected edge a bud.
### 4.2 Properties of a Typical Random Process
In this section we list several properties that hold $whp$ for random edge
processes. We will call an edge process typical if indeed the properties hold.
Let
$m_{1}=\frac{1}{2}n\log n+\frac{1}{2}n\log\log n-n\log\log\log n,\qquad
m_{2}=\frac{1}{2}n\log n+\frac{1}{2}n\log\log n+n\log\log\log n.$
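Numerically, the window $[m_{1},m_{2}]$ is narrow: both endpoints are $(\frac{1}{2}+o(1))n\log n$, while its width is only $2n\log\log\log n$. A quick sanity check of these formulas (our own sketch, valid for $n>e^{e}$ so that the triple logarithm is defined):

```python
from math import log

def m1(n):
    """Lower end of the whp window for the stopping time m_*."""
    return 0.5 * n * log(n) + 0.5 * n * log(log(n)) - n * log(log(log(n)))

def m2(n):
    """Upper end of the whp window for the stopping time m_*."""
    return 0.5 * n * log(n) + 0.5 * n * log(log(n)) + n * log(log(log(n)))
```

For $n=10^{6}$, both $m_{1}(n)$ and $m_{2}(n)$ lie between $7n$ and $10n$, while $\frac{1}{2}n\log n\approx 6.9n$.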
Note that for a fixed vertex $v$, the probability of an edge being incident to
$v$ is $\frac{2n-1}{n^{2}}=\frac{2}{n}-\frac{1}{n^{2}}$ (this is because in
our process, each edge is given by an ordered pair of vertices). However as it
turns out the small order term $\frac{1}{n^{2}}$ is always negligible for our
purpose, so we will use the probability $\frac{2}{n}$ for this event, and
remind the reader that the term $\frac{1}{n^{2}}$ is omitted. Recall that the
stopping time $m_{*}$ is the time at which the last vertex of degree one
becomes a degree two vertex and the process stops.
###### Claim 4.2.
Let $m_{*}$ be the stopping time of the random process. Then $whp$
$m_{1}\leq m_{*}\leq m_{2}.$
###### Proof.
For a fixed vertex $v$, the probability of an edge being incident to $v$ is
about $\frac{2}{n}$. Hence the probability of $v$ having degree at most 1 at
time $m_{2}$ is,
$\left(1-\frac{2}{n}\right)^{m_{2}}+{m_{2}\choose
1}\frac{2}{n}\cdot\left(1-\frac{2}{n}\right)^{m_{2}-1}\leq 3\log n\cdot
e^{-\log n-\log\log n-2\log\log\log n}=O\left(\frac{1}{n(\log\log
n)^{2}}\right).$
Thus by Markov’s inequality, $whp$ there is no vertex of degree at most 1
after $m_{2}$ edges. This shows that $m_{*}\leq m_{2}$. Similarly, the
expected number of vertices having degree at most 1 after seeing $m_{1}$ edges
is $\Omega((\log\log n)^{2})$, and by computing the second moment of the
number of vertices having degree at most 1, we can show that after $m_{1}$
edges $whp$ at least one such vertex exists. This shows that $m_{*}\geq m_{1}$.
The rest of the details are fairly standard and are omitted. ∎
Next we are going to list some properties regarding the different types of
vertices.
###### Claim 4.3.
The number of saturated vertices satisfies $whp$
$|A|\geq n\left(1-\frac{(\log\log n)^{12}}{\log^{2}n}\right).$
###### Proof.
For a fixed vertex $v$, the probability of $v$ occurring as the first vertex
of an edge is (exactly) $\frac{1}{n}$, and thus the probability of $v$ ending
up non-saturated at Step I is at most
$\displaystyle\sum_{k=0}^{11}{2n\log\log n\choose
k}\left(\frac{1}{n}\right)^{k}\cdot\left(1-\frac{1}{n}\right)^{2n\log\log
n-k}\leq\sum_{k=0}^{11}(2\log\log
n)^{k}\frac{1}{\log^{2}n}=O\left(\frac{(\log\log n)^{11}}{\log^{2}n}\right).$
The claim follows from Markov’s inequality. ∎
Our next goal is to prove that the restricted vertices consist only of
partially-blossomed and bud vertices. For that we need the following auxiliary
lemma.
###### Claim 4.4.
Let $E_{BB}$ be the collection of all $B$-$B$ edges (in Step II). The graph
$G_{m_{*}}\setminus E_{BB}$ has $whp$ minimum degree 2.
###### Proof.
If the graph $G_{m_{*}}\setminus E_{BB}$ has minimum degree less than 2 for
some edge process ${\bf e}$, then there exists a vertex $v$ which gets at most
one edge other than a $B$-$B$ edge, and at least one $B$-$B$ edge. By Claim
4.2, it suffices to prove that the graph $whp$ does not contain a vertex which
has at most one edge other than a $B$-$B$ edge at time $m_{1}$, and at least
one $B$-$B$ edge at time $m_{2}$. Let $\mathcal{A}_{v}$ be the event that $v$
is such a vertex. Let $\mathcal{BS}$ be the event that $|B|\leq\frac{(\log\log
n)^{12}}{\log^{2}n}n$ ($B$ is small), and note that
$\mathbb{P}(\mathcal{BS})=1-o(1)$ by Claim 4.3. Then we have
$\displaystyle\mathbb{P}(\text{$G_{m_{*}}\setminus E_{BB}$ has minimum degree
less than 2})=\mathbb{P}\left(\bigcup_{v\in V}{\mathcal{A}_{v}}\right)\leq
n\cdot\mathbb{P}\left(\mathcal{A}_{v}\cap\mathcal{BS}\right)+o(1).$ (1)
The event $\mathcal{A}_{v}$ is equivalent to the vertex $v$ receiving $k$
$B$-$B$ edges, for some $k>0$, and at most one edge other than a $B$-$B$ edge
at appropriate times. This event is contained in the event
$\mathcal{C}_{v}\cap\mathcal{D}_{v,k}$ where $\mathcal{C}_{v}$ is the event
“$v$ appears at most once in Step I”, and $\mathcal{D}_{v,k}$ is the event
“$d_{AB}(v)\leq 1$ by time $m_{1}$ and $v$ receives $k$ $B$-$B$ edges by time
$m_{2}$”. Therefore our next goal is to bound
$\displaystyle\mathbb{P}({\mathcal{C}_{v}}\cap{\mathcal{D}_{v,k}}\cap\mathcal{BS})=\mathbb{P}({\mathcal{C}_{v}}\cap\mathcal{BS})\cdot\mathbb{P}({\mathcal{D}_{v,k}}|{\mathcal{C}_{v}\cap\mathcal{BS}})\leq\mathbb{P}({\mathcal{C}_{v}})\cdot\mathbb{P}({\mathcal{D}_{v,k}}|{\mathcal{C}_{v}\cap\mathcal{BS}}).$
(2)
We can bound the probability of the event $\mathcal{C}_{v}$ by,
$\displaystyle\left(1-\frac{2}{n}\right)^{2n\log\log n}+{2n\log\log n\choose
1}\left(\frac{2}{n}\right)\cdot\left(1-\frac{2}{n}\right)^{2n\log\log
n-1}=O\left(\frac{\log\log n}{\log^{4}n}\right).$ (3)
To bound the event $\mathcal{D}_{v,k}$ which is “$d_{AB}(v)\leq 1$ at time
$m_{1}$ and $v$ receives $k$ $B$-$B$ edges by time $m_{2}$”, note that
$\mathcal{C}_{v}$ and $\mathcal{BS}$ are events which depend only on the first
$2n\log\log n$ edges (Step I edges). Therefore conditioning on these events does
not affect the distribution of edges in Step II (each edge is chosen uniformly
at random among all possible $n^{2}$ pairs). We only consider the case
$d_{AB}(v)=1$ (the case $d_{AB}(v)=0$ can be handled similarly, and turns out
to be dominated by the case $d_{AB}(v)=1$). Thus to bound the probability, we
choose $k+1$ edges among the $m_{2}-2n\log\log n$ Step II edges, let one of them be an $A$-$B$ edge and the other $k$ be $B$-$B$ edges incident to $v$. Moreover,
since $d_{AB}(v)\leq 1$ at time $m_{1}$, we know that at least
$m_{1}-2n\log\log n-k-1$ edges are not incident to $v$. Thus,
$\displaystyle\mathbb{P}$
$\displaystyle({\mathcal{D}_{v,k}}\,|\,{\mathcal{C}_{v}\cap\mathcal{BS}})$
$\displaystyle\leq{m_{2}-2n\log\log n\choose
k+1}\left(\frac{2}{n}\right)^{k+1}{k+1\choose
1}\frac{|A|}{n}\left(\frac{|B|}{n}\right)^{k}\left(1-\frac{2}{n}\right)^{m_{1}-2n\log\log
n-k-1}.$
By using the inequalities $1-x\leq e^{-x}$, $|A|\leq n$, and
${m_{2}-2n\log\log n\choose k+1}\leq m_{2}^{k+1}$, the probability above is
bounded by
$\displaystyle(k+1)m_{2}^{k+1}\left(\frac{2}{n}\right)^{k+1}\left(\frac{|B|}{n}\right)^{k}\exp\left(-\frac{2}{n}(m_{1}-2n\log\log
n-k-1)\right).$ (4)
Therefore by (2), (3), and (4),
$\displaystyle\mathbb{P}$
$\displaystyle({\mathcal{C}_{v}}\cap\mathcal{D}_{v,k}\cap\mathcal{BS})\leq$
$\displaystyle O\left(\frac{\log\log
n}{\log^{4}n}\right)(k+1)m_{2}^{k+1}\left(\frac{2}{n}\right)^{k+1}\left(\frac{|B|}{n}\right)^{k}\exp\left(-\frac{2}{n}(m_{1}-2n\log\log
n-k-1)\right).$
Plugging the bound $|B|\leq\frac{n(\log\log n)^{12}}{\log^{2}n}$ and $m_{2}\leq
n\log n$ in the latter, one obtains:
$O(k)\left(\frac{\log\log n}{\log^{3}n}\right)\left(\frac{2(\log\log
n)^{12}}{\log n}\right)^{k}\exp\left(-\frac{2}{n}(m_{1}-2n\log\log
n-k-1)\right).$
By the definition $m_{1}=\frac{1}{2}n\log n+\frac{1}{2}n\log\log
n-n\log\log\log n$, this further simplifies to
$O(k)\left(\frac{(\log\log n)^{3}}{n}\right)\left(\frac{2e^{2/n}(\log\log
n)^{12}}{\log n}\right)^{k}.$
Summing over all possible values of $k$,
$\displaystyle\sum_{k=1}^{\infty}\mathbb{P}({\mathcal{C}_{v}}\cap\mathcal{D}_{v,k}\cap\mathcal{BS})\leq\sum_{k=1}^{\infty}\frac{O(k)(\log\log
n)^{3}}{n}\left(\frac{4(\log\log n)^{12}}{\log n}\right)^{k}=o(n^{-1}).$
Going back to $(\ref{eq:eq1})$, we get that
$\mathbb{P}(\text{$G_{m_{*}}\setminus E_{BB}$ has minimum degree less than
2})=n\cdot o(n^{-1})+o(1)=o(1).$
Note that as mentioned in the beginning of this section, we used $\frac{2}{n}$
to estimate the probability of an edge being incident to a fixed vertex. This
probability is in fact $\frac{2}{n}-\frac{1}{n^{2}}$, but the term
$\frac{1}{n^{2}}$ will only affect the lower order estimates. ∎
###### Claim 4.5.
Every restricted vertex is $whp$ either partially-blossomed, or a bud.
###### Proof.
Assume there exists a restricted vertex $v$ which is not partially-blossomed
or a bud. Then by definition, the sum $d_{1}(v)+d_{2}(v)+d_{AB}(v)\leq 1$. The
possible values of the degrees $(d_{1}(v),d_{2}(v),d_{AB}(v))$ are
$(1,0,0),(0,1,0),(0,0,1)$, or $(0,0,0)$. Vertices which correspond to
$(0,0,1)$ will all be bud vertices $whp$ by Claim 4.4. It suffices to show
then that $whp$ there do not exist vertices which correspond to
$(1,0,0),(0,1,0)$, or $(0,0,0)$. Let $T$ be the collection of vertices which
have $d_{1}(v)+d_{2}(v)\leq 1$ and $d_{AB}(v)=0$ at time $m_{1}$. By Claim 4.2
it suffices to prove that $whp$ $T$ is empty. Let $\mathcal{BS}$ be the event
$|B|\leq\frac{(\log\log n)^{12}}{\log^{2}n}n$, and note that by Claim 4.3,
$\mathbb{P}(\mathcal{BS})=1-o(1)$. The event $\\{T\neq\emptyset\\}$ is the
same as $\cup_{v\in V}\\{v\in T\\}$, and thus by the union bound,
$\displaystyle\mathbb{P}(T\neq\emptyset)$ $\displaystyle\leq o(1)+\sum_{v\in
V}\mathbb{P}(\\{v\in T\\}\cap\mathcal{BS})$ $\displaystyle=o(1)+\sum_{v\in
V}\mathbb{P}\left(\\{d_{1}(v)+d_{2}(v)\leq
1\\}\cap\\{d_{AB}(v)=0\\}\cap\mathcal{BS}\right).$
By the definition of conditional probability, the second term on the right-hand side splits into,
$\displaystyle\sum_{v\in V}\mathbb{P}\left(\\{d_{1}(v)+d_{2}(v)\leq
1\\}\cap\mathcal{BS}\right)\cdot\mathbb{P}\left(d_{AB}(v)=0\,|\,\\{d_{1}(v)+d_{2}(v)\leq
1\\}\cap\mathcal{BS}\right)$ $\displaystyle\leq\sum_{v\in
V}\mathbb{P}\left(d_{1}(v)+d_{2}(v)\leq
1\right)\cdot\mathbb{P}\left(d_{AB}(v)=0\,|\,\\{d_{1}(v)+d_{2}(v)\leq
1\\}\cap\mathcal{BS}\right).$ (5)
The probability $\mathbb{P}(d_{1}(v)+d_{2}(v)\leq 1)$ can be bounded by
$\mathbb{P}(\\{d_{1}(v)\leq 1\\}\cap\\{d_{2}(v)\leq 1\\})$ which satisfies,
$\mathbb{P}(\\{d_{1}(v)\leq 1\\}\cap\\{d_{2}(v)\leq
1\\})=\mathbb{P}(d_{1}(v)\leq 1)\cdot\mathbb{P}(d_{2}(v)\leq
1\,|\,d_{1}(v)\leq 1).$
The term $\mathbb{P}(d_{1}(v)\leq 1)$ can be easily calculated as,
$\displaystyle\left(1-\frac{1}{n}\right)^{2n\log\log n}+{2n\log\log n\choose
1}\left(\frac{1}{n}\right)\cdot\left(1-\frac{1}{n}\right)^{2n\log\log
n-1}=O\left(\frac{\log\log n}{\log^{2}n}\right).$
To estimate $\mathbb{P}(d_{2}(v)\leq 1\,|\,d_{1}(v)\leq 1)$, expose the edges
of Step I as follows: First expose all the first vertices. Then expose the
second vertices whose first vertex is saturated ($d_{2}(v)$ is now determined
for every $v\in V$). The number of second-vertex-spots that are considered is
at least $2n\log\log n-12n$, and thus $\mathbb{P}(d_{2}(v)\leq 1|d_{1}(v)\leq
1)$ is at most
$\displaystyle\left(1-\frac{1}{n}\right)^{2n\log\log n-12n}+{2n\log\log
n\choose 1}\left(\frac{1}{n}\right)\cdot\left(1-\frac{1}{n}\right)^{2n\log\log
n-12n-1}=O\left(\frac{\log\log n}{\log^{2}n}\right).$
Thus as a crude bound, we have
$\mathbb{P}(d_{1}(v)+d_{2}(v)\leq 1)\leq\mathbb{P}(d_{1}(v)\leq
1)\cdot\mathbb{P}(d_{2}(v)\leq 1\,|\,d_{1}(v)\leq 1)=O\left(\frac{(\log\log
n)^{2}}{\log^{4}n}\right).$
Since $d_{1}(v)+d_{2}(v)\leq 1$ implies that $v\in B$, and $d_{AB}(v)$ depends only on the Step II edges (which are independent of $d_{1}(v)$, $d_{2}(v)$, and $\mathcal{BS}$), the second term on the right-hand side of equation (5), namely the probability $\mathbb{P}\left(d_{AB}(v)=0\,|\,\\{d_{1}(v)+d_{2}(v)\leq 1\\}\cap\mathcal{BS}\right)$, can be bounded by
$\displaystyle\left(1-2\frac{1}{n}\frac{|A|}{n}\right)^{m_{1}-2n\log\log
n}\leq\exp\left(-2(m_{1}-2n\log\log n)|A|/n^{2}\right)$ $\displaystyle\leq$
$\displaystyle\exp\left(-(\log n-3\log\log n-2\log\log\log
n)\left(1-\frac{(\log\log n)^{12}}{\log^{2}n}\right)\right)$
$\displaystyle\leq$ $\displaystyle\exp\left(-\log n+3\log\log n+2\log\log\log
n+o(1)\right)=O\left(\frac{(\log n)^{3}(\log\log n)^{2}}{n}\right).$
Therefore in (5),
$\displaystyle\mathbb{P}(T\neq\emptyset)$ $\displaystyle\leq o(1)+\sum_{v\in
V}O\left(\frac{(\log\log n)^{2}}{\log^{4}n}\right)O\left(\frac{(\log
n)^{3}(\log\log n)^{2}}{n}\right)$ $\displaystyle=o(1)+O\left(\frac{(\log\log
n)^{4}}{\log n}\right)=o(1).$
∎
###### Claim 4.6.
The following properties hold $whp$ for _restricted_ vertices:
1. (i)
There are at most $\log^{13}n$ such vertices,
2. (ii)
every two such vertices are at distance at least 3 from each other in $G_{m_{*}}$.
###### Proof.
Since being a restricted vertex is a monotone decreasing property, by Claim
4.2 it suffices to prove $(i)$ at time $m_{1}$. Recall that $B_{2}$ is the
collection of restricted vertices (a vertex is restricted if it is neither saturated nor blossoming).
First, condition on the whole outcome of Step I edges (first $2n\log\log n$
edges) and the event that $|B|\leq\frac{(\log\log n)^{12}}{\log^{2}n}n$. Then
the set $B$ is determined, and for a vertex $v\in B$, we can bound the
probability of the event $v\in B_{2}$ as follows:
$\displaystyle\mathbb{P}\left(v\in
B_{2}\right)\leq\sum_{\ell=0}^{11}{m_{2}\choose\ell}\left(\frac{2}{n}\right)^{\ell}\left(1-\frac{2|A|}{n^{2}}\right)^{m_{1}-2n\log\log
n-\ell}.$ (6)
Use the inequalities $m_{1}=\frac{1}{2}n\log n+\frac{1}{2}n\log\log n-n\log\log\log n\leq n\log n$, $m_{2}\leq n\log n$, $1-x\leq e^{-x}$, and
$|A|=n-|B|\geq n\left(1-\frac{(\log\log n)^{12}}{\log^{2}n}\right)$ to bound
the above by
$\sum_{\ell=0}^{11}(2\log n)^{\ell}\exp\left(-(\log n-3\log\log n-2\log\log\log
n-\ell)\left(1-\frac{(\log\log n)^{12}}{\log^{2}n}\right)\right).$
The sum is dominated by $\ell=11$, and this gives
$O\left(\log^{11}n\right)\exp\left(-\log n+3\log\log n+2\log\log\log
n+o(1)\right)\leq O\left(\frac{(\log\log n)^{2}\log^{14}n}{n}\right).$
Thus the expected size of $B_{2}$ given the Step I edges is
$\mathbb{E}[|B_{2}|\,|\,\textrm{Step I edges}]\leq|B|\cdot
O\left(\frac{(\log\log n)^{2}\log^{14}n}{n}\right)\leq O((\log\log
n)^{14}\log^{12}n).$
Since the assumptions on $A$ and $B$ hold $whp$ by Claim 4.3, we can use Markov's inequality to conclude that $whp$ there are at most $\log^{13}n$
vertices in $B_{2}$. Let us now prove $(ii)$.
For three distinct vertices $v_{1},v_{2}$ and $w$ in $V$, let
$\mathcal{A}(v_{1},v_{2},w)$ be the event that $w$ is a common neighbor of
$v_{1}$ and $v_{2}$. The probability of there being edges $(v_{1},w)$ (or
$(w,v_{1})$) and $(v_{2},w)$ (or $(w,v_{2})$) and $v_{1},v_{2}\in B_{2}$ can
be bounded by first choosing two time slots where $(v_{1},w)$ (or $(w,v_{1})$)
and $(v_{2},w)$ (or $(w,v_{2})$) will be placed, and then filling in the
remaining edges so that $v_{1},v_{2}\in B_{2}$. We will only bound the event
of there being edges $(v_{1},w)$ and $(w,v_{2})$ in the edge process (other
cases can be handled in a similar manner). The probability we would like to
bound is
$\mathbb{P}(\exists v_{1},v_{2},w,\exists 1\leq t_{1},t_{2}\leq
m_{2},e_{t_{1}}=(v_{1},w),e_{t_{2}}=(w,v_{2}),v_{1},v_{2}\in B_{2}).$
By the union bound this probability is at most
$\displaystyle\sum_{v_{1},v_{2},w\in
V}\sum_{t_{1},t_{2}=1}^{m_{2}}\mathbb{P}(e_{t_{1}}=(v_{1},w),e_{t_{2}}=(w,v_{2}),v_{1},v_{2}\in
B_{2})$ (7) $\displaystyle=$ $\displaystyle\sum_{v_{1},v_{2},w\in
V}\sum_{t_{1},t_{2}=1}^{m_{2}}\mathbb{P}(e_{t_{1}}=(v_{1},w),e_{t_{2}}=(w,v_{2}))\mathbb{P}(v_{1},v_{2}\in
B_{2}|e_{t_{1}}=(v_{1},w),e_{t_{2}}=(w,v_{2}))$ $\displaystyle\leq$
$\displaystyle\frac{1}{n}\sum_{t_{1},t_{2}=1}^{m_{2}}\mathbb{P}(v_{1},v_{2}\in
B_{2}|e_{t_{1}}=(v_{1},w),e_{t_{2}}=(w,v_{2})).$ (8)
To simplify the notation we abbreviate $\mathbb{P}(v_{1},v_{2}\in
B_{2}|e_{t_{1}}=(v_{1},w),e_{t_{2}}=(w,v_{2}))$ by $\mathbb{P}(v_{1},v_{2}\in
B_{2}|e_{t_{1}},e_{t_{2}})$. By using the independence of Step I and Step II
edges we have,
$\mathbb{P}(v_{1},v_{2}\in
B_{2}|e_{t_{1}},e_{t_{2}})=\mathbb{P}(v_{1},v_{2}\in
B|e_{t_{1}},e_{t_{2}})\mathbb{P}(v_{1},v_{2}\notin B_{1}|v_{1},v_{2}\in
B,e_{t_{1}},e_{t_{2}}).$
For fixed $t_{1}$ and $t_{2}$, we can bound $\mathbb{P}(v_{1},v_{2}\in B|e_{t_{1}},e_{t_{2}})$ by the probability of the event “$v_{1}$ and $v_{2}$ appear at most 22 times combined in Step I as a first vertex, other than at times $t_{1}$ and $t_{2}$”, which can be bounded as follows regardless of the values of $t_{1}$ and $t_{2}$,
$\displaystyle\sum_{k=0}^{22}{2n\log\log n\choose
k}\left(\frac{2}{n}\right)^{k}\cdot\left(1-\frac{2}{n}\right)^{2n\log\log
n-2-k}\leq\sum_{k=0}^{22}(4\log\log
n)^{k}\frac{O(1)}{\log^{4}n}=O\left(\frac{(\log\log
n)^{22}}{\log^{4}n}\right).$
To bound $\mathbb{P}(v_{1},v_{2}\notin B_{1}|v_{1},v_{2}\in
B,e_{t_{1}},e_{t_{2}})$, it suffices to bound $\mathbb{P}(v_{1},v_{2}\notin
B_{1}|v_{1},v_{2}\in B,e_{t_{1}},e_{t_{2}},\mathcal{BS})$, which can be
bounded by the probability of “$v_{1}$ and $v_{2}$ receive at most 22 $A$-$B$
edges combined in Step II other than at time $t_{1}$ and $t_{2}$”. Regardless
of the value of $t_{1}$ and $t_{2}$, this satisfies the bound,
$\sum_{\ell=0}^{22}{m_{2}\choose\ell}\left(\frac{4}{n}\right)^{\ell}\left(1-\frac{4}{n}\frac{|A|}{n}\right)^{m_{1}-2-2n\log\log
n-\ell}.$
Note that $\frac{4}{n}$ and $\frac{2}{n}$ in this equation should in fact involve terms of order $\frac{1}{n^{2}}$, but we omit them for simplicity since they do not affect the final asymptotics. By a similar
calculation to (6), this eventually can be bounded by
$O(\frac{\log^{29}n}{n^{2}})$. Thus we have
$\mathbb{P}(v_{1},v_{2}\in
B_{2}|e_{t_{1}},e_{t_{2}})=O\left(\frac{\log^{26}n}{n^{2}}\right),$
which by (8) and $m_{2}\leq n\log n$ gives,
$\mathbb{P}(\exists v_{1},v_{2},w,\exists 1\leq t_{1},t_{2}\leq
m_{2},e_{t_{1}}=(v_{1},w),e_{t_{2}}=(w,v_{2}),v_{1},v_{2}\in B_{2})\leq
O\left(\frac{\log^{28}n}{n}\right).$
Therefore by Markov’s inequality, $whp$ no such three vertices exist, which
implies that two vertices $v_{1},v_{2}\in B_{2}$ cannot be at distance two
from each other in $G_{m_{*}}$. Similarly, we can prove that $whp$ no two vertices $v_{1},v_{2}\in B_{2}$ are adjacent to each other, and hence $whp$ every two vertices $v_{1},v_{2}\in B_{2}$ are at distance at least three from each other. ∎
### 4.3 Configuration of the edge process
To prove that our algorithm succeeds $whp$, we first reveal some pieces of information about the edge process, which we call the “configuration” of the process. This information will allow us to determine whether the underlying edge process is typical or not. Then in the next section, using the remaining randomness, we will construct a Hamilton cycle.
In the beginning, rather than thinking of the edges as arriving one by one, we regard our edge process ${\bf e}=(e_{1},e_{2},\ldots,e_{m_{*}})$ as a collection of edges $e_{i}$, $i=1,\ldots,m_{*}$, neither of whose endpoints is known. We can decide to reveal certain information as necessary. Let us first reveal the
following.
1. 1.
For $t\leq 2n\log\log n$, reveal the first vertex of the $t$-th edge $e_{t}$.
If this vertex already appeared as the first vertex at least 12 times among
the edges $e_{1},\cdots,e_{t-1}$, then also reveal the second vertex.
Given this information, we can determine the saturated vertices, and hence we
know the sets $A$ and $B$. Therefore, it is possible to reveal the following
information.
1. 2.
For $t>2n\log\log n$, reveal all the vertices that belong to $B$.
The information we revealed determines the blossomed ($B_{1}$), and restricted
($B_{2}$) vertices. Thus we can further reveal the following information.
1. 3.
For $t\leq 2n\log\log n$, further reveal all the non-revealed vertices that
belong to $B_{2}$.
2. 4.
For every edge $e_{t}=(v_{t},w_{t})$ in which we already know that either
$v_{t}\in B_{2}$ or $w_{t}\in B_{2}$, also reveal the other vertex.
We define the _configuration_ of an edge process as the above four pieces of
information.
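Schematically, rules 1-4 above determine for each edge which of its endpoints are revealed. The following sketch is our own illustration; it takes the sets $B$ and $B_{2}$ as given and returns, per edge, a pair of revealed/hidden flags:

```python
def configuration(edges, step1_len, B, B2, threshold=12):
    """Per-edge flags (first endpoint revealed?, second endpoint revealed?)
    following reveal rules 1-4 of Section 4.3."""
    first_count = {}
    flags = []
    for t, (v, w) in enumerate(edges):
        if t < step1_len:
            # rule 1: the first vertex is revealed; the second one only
            # if the first is already saturated
            rv = True
            rw = first_count.get(v, 0) >= threshold
            first_count[v] = first_count.get(v, 0) + 1
            # rule 3: Step I vertices in B2 are revealed as well
            rv, rw = rv or v in B2, rw or w in B2
        else:
            # rule 2: endpoints in B are revealed
            rv, rw = v in B, w in B
        # rule 4: if a revealed endpoint lies in B2, reveal the other one
        if (rv and v in B2) or (rw and w in B2):
            rv = rw = True
        flags.append((rv, rw))
    return flags
```

Everything not flagged here remains non-revealed, and it is over these slots that the remaining randomness of Section 5 operates.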
We want to say that all the non-revealed vertices are uniformly distributed
over certain sets. But in order for this to be true, we must make sure that
the distribution of the non-revealed vertices is not affected by the fact that
we know the value of $m_{*}$ (some vertex reaches degree exactly 2 at time $m_{*}$, and a non-revealed vertex could, in principle, make this vertex reach degree 2 earlier than $m_{*}$). This is indeed the case, since the last vertex to
have degree 2 is necessarily a restricted vertex, and all the locations of the
restricted vertices are revealed. Thus the non-revealed vertices cannot change
the value of $m_{*}$. Therefore, once we condition on the configuration of an
edge process, the remaining vertices are distributed in the following way:
1. (i)
For $t\leq 2n\log\log n$, if the first vertex of the edge $e_{t}$ appeared fewer than 12 times among $e_{1},\cdots,e_{t-1}$, then its second vertex is either a known vertex in $B_{2}$ or a random vertex in $V\setminus B_{2}$.
2. (ii)
For $t>2n\log\log n$, if neither vertex of $e_{t}$ has been revealed, then $e_{t}$ consists of two random vertices of $A$. If only one of the vertices of $e_{t}$ is not revealed, then the revealed vertex is in $B$, and the non-revealed vertex is a random vertex of $A$.
###### Definition 4.7.
A configuration of an edge process is _typical_ if it satisfies the following.
(i)
The numbers of saturated and blossomed vertices satisfy $|A|\geq
n-\frac{(\log\log n)^{12}}{\log^{2}n}n$ and $|B_{1}|\leq\frac{(\log\log
n)^{12}}{\log^{2}n}n$, respectively.
(ii)
The number of restricted vertices satisfies $|B_{2}|\leq\log^{13}n$.
(iii)
Every vertex appears at least twice in the configuration even without
considering the $B$-$B$ edges.
(iv)
All the restricted vertices are either partially-blossomed or buds.
(v)
In the undirected graph induced by the edges both of whose endpoints are
revealed, every two restricted vertices $v_{1},v_{2}$ are at distance at
least 3 from each other.
(vi)
There are at least $\frac{1}{3}n\log n$ edges $e_{t}$ with $t>2n\log\log n$
neither of whose endpoints is yet revealed.
###### Lemma 4.8.
The random edge process has a typical configuration $whp$.
###### Proof.
The fact that the random edge process has $whp$ a configuration satisfying
$(i),(iii)$, and $(iv)$ follows from Claims 4.3, 4.4, and 4.5 respectively.
$(ii)$ and $(v)$ follow from Claim 4.6. To verify $(vi)$, note that by Claims
4.2 and 4.3, $whp$ there are at least $\frac{1}{2}n\log n-2n\log\log n$ edges
of Step II, and $|A|=(1-o(1))n$. Therefore the probability of a Step II edge
being an $A$-$A$ edge is $1-o(1)$, and the expected number of $A$-$A$ edges is
$(1/2-o(1))n\log n$. Then by Chernoff’s inequality, $whp$ there are at least
$\frac{1}{3}n\log n$ $A$-$A$ edges. These edges are the edges we are looking
for in $(vi)$. ∎
## 5 Finding a Hamilton Cycle
In the previous section, we established several useful properties of the
underlying graph $G_{m_{*}}$. In this section, we study the algorithm ${\bf
Orient}$ using these properties, and prove that conditioned on the edge
process having a typical configuration, the graph $D_{m_{*}}$ $whp$ contains a
Hamilton cycle (recall that $D_{m_{*}}$ is the graph formed by the edges of
the edge process, oriented according to ${\bf Orient}$). As described in Section 3,
the proof is constructive, in the sense that we describe how to find
such a cycle. The algorithm is similar to that used in [14], which we described
in some detail in Section 3. Let us briefly recall that it proceeds in two
stages:
1.
Find a 1-factor of $G$. If it contains more than $O(\log n)$ cycles, fail.
2.
Join the cycles into a Hamilton cycle.
The main challenge in our case is to prove that the first step of the
algorithm does not fail. Afterwards, we argue why we can apply Frieze’s
results for the remaining step.
### 5.1 Almost 5-in 5-out subgraph
Let $D_{5-in,5-out}$ be the following subgraph of $D_{m_{*}}$. For each vertex
$v$, assign a set of neighbors $\textrm{{OUT}}(v)$ and $\textrm{{IN}}(v)$,
where $\textrm{{OUT}}(v)$ are out-neighbors of $v$ and $\textrm{{IN}}(v)$ are
in-neighbors of $v$. For saturated and blossomed vertices, $\textrm{{OUT}}(v)$
and $\textrm{{IN}}(v)$ will be of size 5, and for restricted vertices, they
will be of size 1 (thus $D_{5-in,5-out}$ is not a 5-in 5-out directed graph
under the strict definition).
Let $E_{1}$ be the edges of Step I (first $2n\log\log n$ edges), and $E_{2}$
be the edges of Step II (remaining edges).
* •
If $v$ is saturated, then consider the first 12 appearances in $E_{1}$ of $v$
as a first vertex. Some of these edges might later be used as OUT or IN for
other vertices. Hence among these 12 appearances, consider only those whose
second vertex is not in $B_{2}$. By property (v) of Definition 4.7, there will
be at least 11 such second vertices for a typical configuration. Define
$\textrm{{OUT}}(v)$ as the first 5 vertices among them which were directed out
from $v$, and $\textrm{{IN}}(v)$ as the first 5 vertices among them which were
directed in to $v$ in ${\bf Orient}$.
* •
If $v$ blossoms, then consider the first 10 $A$-$B$ edges in $E_{2}$ connected
to $v$, and look at the other endpoints. Let $\textrm{{OUT}}(v)$ be the first
5 vertices which are an out-neighbor of $v$ and $\textrm{{IN}}(v)$ be the
first 5 vertices which are an in-neighbor of $v$.
A partially-blossomed vertex, by definition, has
$d_{1}(v)+d_{2}(v)+d_{AB}(v)\geq 2$, and must fall into one of the following
categories. $(i)$ $d_{1}(v)\geq 2$, $(ii)$ $d_{2}(v)\geq 2$, $(iii)$
$d_{AB}(v)\geq 2$, $(iv)$ $d_{1}(v)=1$, $d_{2}(v)=1$, $(v)$ $d_{1}(v)=1$,
$d_{AB}(v)=1$, and $(vi)$ $d_{1}(v)=0,d_{2}(v)=1$, $d_{AB}(v)=1$. If it falls
into several categories, then pick the first one among them.
* •
If $v$ partially-blossoms and $d_{1}(v)\geq 2$, consider the first two
appearances of $v$ in $E_{1}$ as a first vertex. The first is an out-edge and
the second is an in-edge (see Section 2.1).
* •
If $v$ partially-blossoms and $d_{2}(v)\geq 2$, consider the first two
appearances of $v$ in $E_{1}$ as a second vertex whose first vertex is
saturated. The first is an in-edge and the second is an out-edge (see Section
2.1).
* •
If $v$ partially-blossoms and $d_{AB}(v)\geq 2$, consider the first two
$A$-$B$ edges in $E_{2}$ incident to $v$. One of them is an out-edge and the
other is an in-edge. Note that unlike other cases, the actual order of in-edge
and out-edge will depend on the configuration. But since the configuration
contains all the positions at which $v$ appeared in the process, the choice of
in-edge or out-edge only depends on the configuration and not on the non-
revealed vertices (note that this is slightly different from the blossomed
vertices).
* •
If $v$ partially-blossoms and $d_{1}(v)=1$, $d_{2}(v)=1$, consider the first
appearance of $v$ in $E_{1}$ as a first vertex, and the first appearance of
$v$ in $E_{1}$ as a second vertex whose first vertex is saturated. The former
is an out-edge and the latter is an in-edge.
* •
If $v$ partially-blossoms and $d_{1}(v)=1$, $d_{AB}(v)=1$, consider the first
appearance of $v$ in $E_{1}$ as a first vertex, and the first $A$-$B$ edge
connected to $v$ in $E_{2}$. The former is an out-edge and the latter is an
in-edge (see rule 1 in Section 2.2).
* •
If $v$ partially-blossoms and $d_{1}(v)=0,d_{2}(v)=1$, $d_{AB}(v)=1$, consider
the first appearance of $v$ in $E_{1}$ as a second vertex whose first vertex
is saturated, and the first $A$-$B$ edge connected to $v$ in $E_{2}$. The
former is an in-edge and the latter is an out-edge (see rule 2 in Section
2.2). Thus we can construct $\textrm{{OUT}}(v)$ and $\textrm{{IN}}(v)$ of size
1 each, for all partially-blossomed vertices.
* •
If $v$ is a bud, then consider the first (and only) $A$-$B$ edge connected to
$v$. Let this edge be $e_{s}$. For a typical configuration, by property (iii)
of Definition 4.7, we know that $v$ has a neglected edge connected to it. Let
$e_{t}$ be the first neglected edge of $v$. By property (v) of Definition 4.7,
we know that the first vertex of the neglected edge is either in $A$ or
$B_{1}$. According to the direction of this edge, the direction of $e_{s}$
will be chosen as the opposite direction (see rule 3 in Section 2.2). As in
the partially-blossomed case with $d_{AB}(v)\geq 2$, the direction is solely
determined by the configuration. Thus we can construct $\textrm{{OUT}}(v)$ and
$\textrm{{IN}}(v)$ of size 1 each (which is already fixed once we fix the
configuration).
This in particular shows that $D_{m_{*}}$ has minimum in-degree and out-degree
at least 1, which is clearly a necessary condition for the graph to be
Hamiltonian. A crucial observation is that, once we condition on the random
edge process having a fixed typical configuration, we can determine exactly
which edges are going to be used to construct the graph $D_{5-in,5-out}$ just
by looking at the configuration.
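For illustration, the selection rule for a saturated vertex can be sketched in code (a hypothetical minimal sketch; the data layout and function name are ours: each appearance is recorded as a pair of the direction assigned by ${\bf Orient}$ and the second vertex).

```python
def out_in_for_saturated(appearances, B2):
    # Sketch of the selection rule for a saturated vertex v.
    # `appearances` lists, in order of arrival, the first 12 Step I edges
    # in which v occurs as the first vertex, as (direction, second_vertex)
    # pairs with direction "out" or "in".  Appearances whose second vertex
    # lies in B2 are discarded; OUT(v) is the first 5 remaining out-directed
    # second vertices, and IN(v) the first 5 in-directed ones.
    usable = [(d, w) for d, w in appearances[:12] if w not in B2]
    out = [w for d, w in usable if d == "out"][:5]
    inn = [w for d, w in usable if d == "in"][:5]
    return out, inn
```

For a typical configuration, at most one of the 12 appearances is discarded, so both lists indeed reach size 5.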
For a set $X$, let $RV(X)$ be an element chosen independently and uniformly at
random in the set (consider each appearance of $RV(X)$ as a new independent
copy).
###### Proposition 5.1.
Let $V^{\prime}=V\setminus B_{2}$. Conditioned on the edge process having a
typical configuration, $D_{5-in,5-out}$ has the following distribution.
(i)
If $v$ is saturated, then $\textrm{{OUT}}(v)$ and $\textrm{{IN}}(v)$ are a
union of 5 copies of $RV(V^{\prime})$.
(ii)
If $v$ blossoms, then $\textrm{{OUT}}(v)$ and $\textrm{{IN}}(v)$ are a union
of 5 copies of $RV(A)$.
###### Proof.
For a vertex $v\in V$, the configuration contains the information of the time
of arrival of the edges that will be used to construct the set
$\textrm{{OUT}}(v)$ and $\textrm{{IN}}(v)$.
If $v$ is a saturated vertex, then we even know which edges belong to
$\textrm{{OUT}}(v)$ and $\textrm{{IN}}(v)$ (if there are no $B_{2}$ vertices
connected to the first 12 appearances of $v$ as a first vertex, then the first
five odd appearances of $v$ as a first vertex will be used to construct
$\textrm{{OUT}}(v)$, and the first five even appearances of $v$ as a first
vertex will be used to construct $\textrm{{IN}}(v)$). Since the non-revealed
vertices are independent random vertices in $V^{\prime}$, we know that
$\textrm{{OUT}}(v)$ and $\textrm{{IN}}(v)$ of these vertices consist of 5
independent copies of $RV(V^{\prime})$.
If $v$ blossoms, then the analysis is similar to that of the saturated
vertices. However, even though the configuration contains the information of
which 10 edges will be used to construct $\textrm{{OUT}}(v)$ and
$\textrm{{IN}}(v)$, the decision of whether the odd edges or the even edges
will be used to construct $\textrm{{OUT}}(v)$ depends on the particular edge
process (this is determined by the orientation rule at Step I). Nevertheless, since
the other endpoints are independent identically distributed random vertices in
$A$, the distribution of $\textrm{{OUT}}(v)$ and $\textrm{{IN}}(v)$ is not
affected by the previous edges, and is always $RV(A)$ (this is analogous to
the fact that the distribution of the outcome of a coin flip does not depend
on whether the coin initially showed heads or tails). ∎
### 5.2 A small 1-factor
The main result that we are going to prove in this section is summarized in
the following proposition:
###### Proposition 5.2.
Conditioned on the random edge process having a typical configuration, there
exists $whp$ a 1-factor of $D_{5-in,5-out}$ containing at most $2\log n$
cycles, in which at least a $9/10$ proportion of the vertices of each cycle
are saturated.
Throughout this section, rather than vaguely conditioning on the process
having a typical configuration, we will consider a fixed typical configuration
${\bf c}$ and condition on the event that the edge process has configuration
${\bf c}$. Proposition 5.2 easily follows once we prove that such a 1-factor
exists $whp$ under this assumption. The reason we do this more precise
conditioning is to fix the sets $A,B,B_{1},B_{2}$ and the edges incident to
vertices of $B_{2}$ (note that these are determined solely by the
configuration). In our later analysis, it is crucial to have these fixed.
To prove Proposition 5.2, we represent the graph $D_{5-in,5-out}$ as a certain
bipartite graph in which a perfect matching corresponds to the desired
1-factor of the original graph $D_{m_{*}}$. Then using the edge distribution
of $D_{5-in,5-out}$ given in the previous section, we will show that the
bipartite graph $whp$ contains a perfect matching. The proof of Proposition
5.2 will be given at the end after a series of lemmas.
Define a new vertex set $V^{*}=\\{v^{*}|\,v\in V\\}$ as a copy of $V$, and for
sets $X\subset V$, use $X^{*}$ to denote the set of vertices in $V^{*}$
corresponding to $X$. Then, in order to find a 1-factor in $D_{5-in,5-out}$,
define an auxiliary bipartite graph ${\bf BIP}(V,V^{*})$ over the vertex set
$V\cup V^{*}$ whose edges are given as following: for every (directed) edge
$(u,v)$ of $D_{5-in,5-out}$, add the (undirected) edge $(u,v^{*})$ to ${\bf
BIP}$. Note that perfect matchings of ${\bf BIP}$ have a natural one-to-one
correspondence with 1-factors of $D_{5-in,5-out}$. Moreover, the edge
distribution of ${\bf BIP}$ easily follows from the edge distribution of
$D_{5-in,5-out}$. We will say that $D_{5-in,5-out}$ is the underlying directed
graph of ${\bf BIP}$. A permutation $\sigma$ of $V^{*}$ acts on ${\bf BIP}$ to
construct another bipartite graph which has edges $(v,\sigma(w^{*}))$ for all
edges $(v,w^{*})$ in ${\bf BIP}$.
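The passage from directed edges to ${\bf BIP}$, and back from a perfect matching to a 1-factor, can be made concrete with a small sketch (the encoding is ours and purely illustrative; the $V^{*}$-copy of a vertex $v$ is tagged as `('*', v)`).

```python
def to_bip(directed_edges):
    # Each directed edge (u, v) of the underlying directed graph becomes
    # the undirected edge {u, v*} of BIP.
    return [(u, ('*', v)) for u, v in directed_edges]

def matching_to_one_factor(matching):
    # A perfect matching of BIP, given as pairs (v, ('*', w)), is read as
    # the permutation v -> w; following the permutation decomposes the
    # vertex set into the cycles of the corresponding 1-factor.
    perm = {v: w for v, (_, w) in matching}
    seen, cycles = set(), []
    for start in perm:
        if start in seen:
            continue
        cycle, v = [], start
        while v not in seen:
            seen.add(v)
            cycle.append(v)
            v = perm[v]
        cycles.append(cycle)
    return cycles
```

In the degenerate case where every vertex has exactly one out-edge, the edge set of ${\bf BIP}$ is itself a perfect matching, and the cycles recovered are exactly the cycles of the 1-factor.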
Our plan is to find a perfect matching which is (almost) a uniform random
permutation, and show that this permutation has at most $O(\log n)$ cycles
(for a uniform random permutation this is a well-known result; see, e.g., [12]).
Since our distribution is not uniform, we will rely on the following lemma.
Its proof is rather technical, and to avoid distraction, it is given at the
end of this subsection.
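The cited fact about cycle counts is easy to check empirically (a sketch only, not part of the proof; for a uniform random permutation of an $n$-set the expected number of cycles is the harmonic number $H_{n}\approx\log n$).

```python
import math
import random

def cycle_count(perm):
    # Number of cycles of a permutation given as a list, where perm[i]
    # is the image of i.
    seen, count = set(), 0
    for i in range(len(perm)):
        if i in seen:
            continue
        count += 1
        while i not in seen:
            seen.add(i)
            i = perm[i]
    return count

def avg_cycles(n, trials, seed=0):
    # Average cycle count over `trials` uniform random permutations of
    # {0, ..., n-1}; the expectation is H_n ~ log n.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        p = list(range(n))
        rng.shuffle(p)
        total += cycle_count(p)
    return total / trials
```

With $n=1000$ and a few hundred trials the empirical average should land near $H_{1000}\approx 7.5$, comfortably below the $2\log n\approx 13.8$ threshold.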
###### Lemma 5.3.
Let $X$ be a subset of $V$. Assume that $whp$, (i) ${\bf BIP}$ contains a
perfect matching, (ii) every cycle of the underlying directed graph
$D_{5-in,5-out}$ contains at least one element from $X$, and (iii) the edge
distribution of ${\bf BIP}$ is invariant under arbitrary permutations of
$X^{*}$. Then $whp$, there exists a perfect matching which when considered as
a permutation contains at most $2\log n$ cycles.
The next set of lemmas establishes the fact that ${\bf BIP}$ satisfies all the
conditions we need in order to apply Lemma 5.3. First we prove that ${\bf
BIP}$ contains a perfect matching. We use the following version of the well-
known Hall’s theorem (see, e.g., [10]).
###### Theorem 5.4.
Let $\Gamma$ be a bipartite graph with vertex set $X\cup Y$ and $|X|=|Y|=n$.
If for all $X^{\prime}\subset X$ of size $|X^{\prime}|\leq n/2$,
$|N(X^{\prime})|\geq|X^{\prime}|$ and for all $Y^{\prime}\subset Y$ of size
$|Y^{\prime}|\leq n/2$, $|N(Y^{\prime})|\geq|Y^{\prime}|$, then $\Gamma$ contains a
perfect matching.
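For small instances, the existence of a perfect matching can of course also be checked directly with the standard augmenting-path (Kuhn) algorithm rather than via Hall's condition (a generic sketch, not specific to this paper).

```python
def has_perfect_matching(n, adj):
    # Kuhn's augmenting-path algorithm: adj[x] lists the neighbors in
    # Y = {0, ..., n-1} of each x in X = {0, ..., n-1}.  Returns True iff
    # the bipartite graph has a perfect matching.
    match = [-1] * n                       # match[y] = x matched to y

    def try_augment(x, visited):
        for y in adj[x]:
            if y in visited:
                continue
            visited.add(y)
            # y is free, or its current partner can be re-routed
            if match[y] == -1 or try_augment(match[y], visited):
                match[y] = x
                return True
        return False

    return all(try_augment(x, set()) for x in range(n))
```

The `all(...)` call short-circuits at the first vertex of $X$ that cannot be matched, which is exactly a Hall violator.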
###### Lemma 5.5.
The graph ${\bf BIP}$ contains a perfect matching $whp$.
###### Proof.
We will verify Hall’s condition for the graph ${\bf BIP}$ to prove the
existence of a perfect matching. Recall that ${\bf BIP}$ is a bipartite graph
over the vertex set $V\cup V^{*}$.
Let us show that every set $D\subset V$ of size $|D|\leq n/2$ satisfies
$|N(D)|\geq|D|$. This will be done in two steps. First, if $D\subset B_{2}$,
then this follows from the fact that the sets $\textrm{{OUT}}(v)$ are distinct
for all $v\in B_{2}$ (if they were not distinct, then there would be two
restricted vertices at distance 2 from each other, violating property (v)
of Definition 4.7). Second, we prove that for $D\subset V\setminus B_{2}$,
$\left|\,N(D)\cap(V^{*}\setminus N(B_{2}))\,\right|\geq|D|.$
It is easy to see that the above two facts prove our claim.
Let $D\subset V\setminus B_{2}$ be a set of size $k\leq n/2$. The
inequality $|N(D)\cap(V^{*}\setminus N(B_{2}))|<|D|$ can happen only if there
exists a set $N^{*}\subset V^{*}\setminus N(B_{2})$ such that $|N^{*}|<k$, and
for all $v\in D$ all the vertices of $\textrm{{OUT}}(v)$ belong to $N^{*}\cup
N(B_{2})$. Since $D\subset V\setminus B_{2}$, every vertex in $D$ has 5 random
neighbors distributed uniformly over some set of size $(1-o(1))n$, and thus
the probability of the above event happening is at most,
$k\binom{n}{k}^{2}\left(\frac{|N(B_{2})|+|N^{*}|}{(1-o(1))n}\right)^{5k}\leq\left(\frac{e^{2}n^{2}(\log^{13}n+k)^{5}}{k^{2}\cdot(1-o(1))n^{5}}\right)^{k}\leq\left(\frac{9(\log^{13}n+k)^{5}}{k^{2}n^{3}}\right)^{k}.$
For the range $9n/20\leq k\leq n/2$, we will use the following bound
$k\binom{n}{k}^{2}\left(\frac{\log^{13}n+k}{(1-o(1))n}\right)^{5k}\leq
2^{2n}\left(\frac{1+o(1)}{2}\right)^{9n/4}\leq 2^{-n/5}.$
Summing over all choices of $k$ we get,
$\displaystyle\sum_{k=1}^{n/2}k\binom{n}{k}^{2}\left(\frac{\log^{13}n+k}{(1-o(1))n}\right)^{5k}$
$\displaystyle\leq$
$\displaystyle\sum_{k=1}^{\log^{14}n}\left(\frac{9(\log^{13}n+k)^{5}}{k^{2}n^{3}}\right)^{k}+\sum_{k=\log^{14}n}^{9n/20}\left(\frac{9(\log^{13}n+k)^{5}}{k^{2}n^{3}}\right)^{k}+\sum_{k=9n/20}^{n/2}2^{-n/5}$
$\displaystyle\leq$
$\displaystyle\sum_{k=1}^{\log^{14}n}\left(\frac{10\log^{70}n}{n^{3}}\right)^{k}+\sum_{k=\log^{14}n}^{9n/20}\left(\frac{10k^{3}}{n^{3}}\right)^{k}+o(1)=o(1).$
This finishes the proof that $whp$ $|N(D)|\geq|D|$ for all $D\subset V$ of
size at most $n/2$. Similarly, for sets $D^{*}\subset V^{*}$ of size
$|D^{*}|\leq n/2$, using the sets $\textrm{{IN}}(v)$ instead of
$\textrm{{OUT}}(v)$ we can show that $whp$ $|N(D^{*})|\geq|D^{*}|$ in ${\bf
BIP}$. ∎
For restricted vertices $v$, the sets $\textrm{{OUT}}(v)$ and
$\textrm{{IN}}(v)$ are of size 1 and are already fixed since we fixed the
configuration. Thus the edges corresponding to these vertices will be in ${\bf
BIP}$. Let
$\hat{A}=A\setminus(\cup_{v\in B_{2}}\textrm{{OUT}}(v)),$
and let $\hat{A}^{*}$ be the corresponding set inside $V^{*}$ (note that
$\hat{A}$ and $\hat{A}^{*}$ are fixed sets). This set will be our set $X$ when
applying Lemma 5.3. We next prove that every cycle of $D_{5-in,5-out}$
contains vertices of $\hat{A}$.
###### Lemma 5.6.
$Whp$, every cycle $C$ of $D_{5-in,5-out}$ contains at least
$\left\lceil\frac{9}{10}|C|\right\rceil$ vertices of $\hat{A}$.
###### Proof.
Recall that by Proposition 5.1, for vertices $v\in V\setminus B_{2}$, the set
$\textrm{{OUT}}(v)$ and $\textrm{{IN}}(v)$ are uniformly distributed over
$V\setminus B_{2}$, or $A$. Therefore, for a vertex $w\in B_{2}$, the only
out-neighbor of $w$ is $\textrm{{OUT}}(w)$, and the only in-neighbor is
$\textrm{{IN}}(w)$ (note that they are both fixed since we fixed the
configuration). Also note that,
$|V\setminus\hat{A}|\leq|V\setminus
A|+|B_{2}|\leq|B_{1}|+2|B_{2}|\leq\frac{(\log\log
n)^{12}}{\log^{2}n}n+2\log^{13}n\leq\frac{n}{\log n}.$
We want to show that in the graph $D_{5-in,5-out}$, $whp$ every cycle of
length $k$ has at most $k/10$ points from $V\setminus\hat{A}$, for all
$k=1,\ldots,n$. Let us compute the expected number of cycles for which this
condition fails and show that it is $o(1)$. First choose $k$ vertices
$v_{1},v_{2},\cdots,v_{k}$ (ordered) and assume that $a$ of them are in
$B_{2}$. Then since we already know the (unique) out-neighbor and in-neighbor
for vertices in $B_{2}$, for the vertices $v_{1},\cdots,v_{k}$ to form a cycle
in that order, we must fix $3a$ positions ($a$ for the vertices in $B_{2}$,
and $2a$ for their in- and out-neighbors, by property $(v)$ of
Definition 4.7). Assume that among the remaining $k-3a$ vertices, $\ell$
vertices belong to $V\setminus(\hat{A}\cup B_{2})$. Then for there to be at
least $\lceil k/10\rceil$ vertices among $v_{1},\cdots,v_{k}$ not in
$\hat{A}$, we must have $3a+\ell\geq\lceil k/10\rceil$. There are at most
$3^{k}$ ways to assign one of the three types $\hat{A},B_{2}$, and
$V\setminus(\hat{A}\cup B_{2})$ to each of $v_{1},\cdots,v_{k}$. Therefore the
number of ways to choose $k$ vertices as above is at most
$3^{k}\cdot n^{k-\ell-3a}|V\setminus\hat{A}|^{\ell}|B_{2}|^{a}\leq 3^{k}\cdot
n^{k-\ell-3a}\left(\frac{n}{\log n}\right)^{\ell}\left(\log^{13}n\right)^{a}.$
There are $k-2a$ random edges which have to be present in order to make the
above $k$ vertices into a cycle. For all $i\leq k-1$, the pair
$(v_{i},v_{i+1})$ can become an edge either by
$v_{i+1}\in\textrm{{OUT}}(v_{i})$ or $v_{i}\in\textrm{{IN}}(v_{i+1})$ (and
also for the pair $(v_{1},v_{k})$). There are two ways to choose where the
edge $\\{v_{i},v_{i+1}\\}$ comes from, and if both $v_{i}$ and $v_{i+1}$ are
not in $B_{2}$, then $\\{v_{i},v_{i+1}\\}$ will become an edge with
probability at most $\frac{5}{(1-o(1))n}$. Therefore the probability of a
fixed $v_{1},\cdots,v_{k}$ chosen as above being a cycle is at most
$2^{k-2a}\left(\frac{5}{(1-o(1))n}\right)^{k-2a}$, and the expected number of
such cycles is at most
$\displaystyle 2^{k-2a}\left(\frac{5}{(1-o(1))n}\right)^{k-2a}\cdot 3^{k}\cdot
n^{k-\ell-3a}\left(\frac{n}{\log n}\right)^{\ell}\left(\log^{13}n\right)^{a}$
$\displaystyle\leq$
$\displaystyle\left(\frac{\log^{13}n}{n}\right)^{a}\cdot\left(\frac{1}{\log
n}\right)^{\ell}\cdot(30+o(1))^{k}$ $\displaystyle\leq$
$\displaystyle\left(\frac{\log^{13}n}{n}\right)^{a}\cdot\left(\frac{1}{\log
n}\right)^{\lceil
k/10\rceil-3a}\cdot(30+o(1))^{k}\leq\left(\frac{\log^{16}n}{n}\right)^{a}\cdot\left(\frac{40}{(\log
n)^{1/10}}\right)^{k}.$
where we used $3a+\ell\geq\lceil k/10\rceil$ for the second inequality.
Summing this over $0\leq\ell\leq k$ and $0\leq a\leq k$, we get
$\displaystyle\sum_{k=1}^{n}\sum_{\ell=0}^{k}\sum_{a=0}^{k}\left(\frac{\log^{16}n}{n}\right)^{a}\cdot\left(\frac{40}{(\log
n)^{1/10}}\right)^{k}=O\left(\sum_{k=1}^{n}(k+1)\left(\frac{40}{(\log
n)^{1/10}}\right)^{k}\right)=o(1),$
which proves our lemma.
∎
The following simple observation is the last ingredient of our proof.
###### Lemma 5.7.
The distribution of ${\bf BIP}$ is invariant under the action of an arbitrary
permutation of $\hat{A}^{*}$.
###### Proof.
This lemma follows from the following three facts about the distribution of
$D_{5-in,5-out}$. First, all the saturated vertices have the same distribution
of IN. Second, for the vertices $v\in V\setminus B_{2}$, the distribution of
OUT and IN is uniform over a set which contains all the saturated vertices
(for some vertices it is $V\setminus B_{2}$, and for others it is $A$). Third,
for the vertices $v\in B_{2}$, the set $\textrm{{OUT}}(v)$ lies outside
$\hat{A}$ by definition. Therefore, the action of an arbitrary permutation of
$\hat{A}^{*}$ does not affect the distribution of ${\bf BIP}$. ∎
Note that here it is important that we fixed the configuration beforehand, as
otherwise the set $\hat{A}^{*}$ would vary, and a statement such as Lemma 5.7
would not make sense.
By combining Lemmas 5.3, 5.5, 5.6, and 5.7, we obtain Proposition 5.2.
###### Proof of Proposition 5.2.
Lemmas 5.5, 5.6, and 5.7 show that the graph ${\bf BIP}$ has all the
properties required for the application of Lemma 5.3 (we use $X=\hat{A}$).
Thus we know that $whp$, $D_{5-in,5-out}$ has a 1-factor containing at most
$2\log n$ cycles, in which at least a $9/10$ proportion of the vertices of
each cycle are saturated (the second property follows from Lemma 5.6). ∎
We conclude this subsection with the proof of Lemma 5.3.
###### Proof of Lemma 5.3.
For simplicity of notation, we use the notation $\mathcal{B}$ for the random
bipartite graph ${\bf BIP}$. Note that both a 1-factor over the vertex set $V$
and a perfect matching of $(V,V^{*})$, can be considered as a permutation of
$V$. Throughout this proof we will not distinguish between these
interpretations and treat 1-factors and perfect matchings also as
permutations.
First, let $f$ be an arbitrary function which, for every bipartite graph
containing a perfect matching, outputs one fixed perfect matching in it.
Then, given a bipartite graph
$\Gamma$ over the vertex set $V\cup V^{*}$, let $\Phi$ be the random variable
$\Phi(\Gamma):=\tau^{-1}f(\tau\Gamma)$, where $\tau$ is a permutation of the
vertices $\hat{A}^{*}$ chosen uniformly at random. Since the distribution of
$\mathcal{B}$ and the distribution of $\tau\mathcal{B}$ are the same by
condition $(iii)$, for an arbitrary permutation $\sigma$ of $\hat{A}^{*}$,
$\Phi$ has the following property,
$\displaystyle\mathbb{P}(\Phi(\mathcal{B})=\phi)$
$\displaystyle=\mathbb{P}(\tau^{-1}f(\tau\mathcal{B})=\phi)\stackrel{{\scriptstyle(*)}}{{=}}\mathbb{P}((\tau\sigma)^{-1}f(\tau\sigma\mathcal{B})=\phi)$
$\displaystyle=\mathbb{P}(\tau^{-1}f(\tau\sigma\mathcal{B})=\sigma\phi)\stackrel{{\scriptstyle(*)}}{{=}}\mathbb{P}(\tau^{-1}f(\tau\mathcal{B})=\sigma\phi)=\mathbb{P}(\Phi(\mathcal{B})=\sigma\phi).$
(9)
In the $(*)$ steps, we used $(iii)$, and the fact that if $\tau$ is a uniform
random permutation of $\hat{A}^{*}$, then so is $\tau\sigma$, and therefore,
$\mathcal{B},\tau\mathcal{B}$, and $\tau\sigma\mathcal{B}$ all have identical
distribution.
Define a map $\Pi$ from the 1-factors over the vertex set $V$ to the 1-factors
over the vertex set $\hat{A}$ obtained by removing all the vertices that
belong to $V\setminus\hat{A}$ from every cycle. For example, a cycle of the
form $(x_{1}x_{2}y_{1}y_{2}x_{3}y_{3}x_{4})$ will become the cycle
$(x_{1}x_{2}x_{3}x_{4})$ when mapped by $\Pi$ (where
$x_{1},\ldots,x_{4}\in\hat{A}$, and $y_{1},y_{2},y_{3}\in V\setminus\hat{A}$).
Note that if every cycle of the original 1-factor contains at least one
element from $\hat{A}$, then the total number of cycles does not change after
applying the map $\Pi$. This observation combined with condition $(ii)$ implies that it
suffices to obtain a bound on the number of cycles after applying $\Pi$.
Let $\sigma,\rho$ be permutations of the vertex set $\hat{A}^{*}$. We claim
that for every 1-factor $\phi$ of the vertex set $V$, the equality
$\sigma\cdot\Pi(\phi)=\Pi(\sigma\cdot\phi)$ holds. This claim together with
(9) gives us,
$\displaystyle\mathbb{P}(\Pi(\Phi(\mathcal{B}))=\rho)$
$\displaystyle=\mathbb{P}(\Phi(\mathcal{B})\in\Pi^{-1}(\rho))\stackrel{{\scriptstyle(9)}}{{=}}\mathbb{P}(\sigma\Phi(\mathcal{B})\in\Pi^{-1}(\rho))=\mathbb{P}(\Pi(\sigma\Phi(\mathcal{B}))=\rho)$
$\displaystyle=\mathbb{P}(\sigma\cdot\Pi(\Phi(\mathcal{B}))=\rho)=\mathbb{P}(\Pi(\Phi(\mathcal{B}))=\sigma^{-1}\rho).$
Since $\sigma$ and $\rho$ were arbitrary permutations of the vertex set
$\hat{A}$, we can conclude that conditioned on there existing a perfect
matching, $\Pi(\Phi(\mathcal{B}))$ has a uniform distribution over the
permutations of $\hat{A}$. It is a well-known fact (see, e.g., [12]) that a
uniformly random permutation over a set of size $n$ has $whp$ at most $2\log
n$ cycles. Since $\mathcal{B}$ $whp$ contains a perfect matching by condition
$(i)$, it remains to verify the equality
$\sigma\cdot\Pi(\phi)=\Pi(\sigma\cdot\phi)$. Thus we conclude the proof by
proving this claim.
For a vertex $x\in\hat{A}$, assume that the cycle of $\phi$ which contains $x$
is of the form $(\cdots xy_{1}y_{2}\cdots y_{k}x_{+}\cdots)$ ($k\geq 0$) for
$y_{1},\ldots,y_{k}\in V\setminus\hat{A}$. Then by definition
$\Pi(\phi)(x)=x_{+}$, and thus $(\sigma\cdot\Pi(\phi))(x)=\sigma(x_{+})$. On
the other hand, since $\sigma$ only permutes $\hat{A}$ and fixes every other
element of $V$, we have $(\sigma\cdot\phi)(x)=\sigma(y_{1})=y_{1}$, and
$(\sigma\cdot\phi)(y_{i})=y_{i+1}$ for all $i\leq k-1$, and
$(\sigma\cdot\phi)(y_{k})=\sigma(x_{+})$. Therefore the cycle in
$\sigma\cdot\phi$ which contains $x$ will be of the form $(\cdots
xy_{1}y_{2}\cdots y_{k}\sigma(x_{+})\cdots)$ , and then by definition we have
$(\Pi(\sigma\cdot\phi))(x)=\sigma(x_{+})$. ∎
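The map $\Pi$ and the equivariance claim $\sigma\cdot\Pi(\phi)=\Pi(\sigma\cdot\phi)$ can be checked mechanically on the example from the proof (a sketch with our own encoding: permutations are dictionaries $v\mapsto\phi(v)$, and $\sigma$ fixes every vertex outside its domain).

```python
def project(phi, A_hat):
    # The map Pi: delete the vertices outside A_hat from every cycle of
    # the 1-factor phi, sending x in A_hat to the next A_hat-element on
    # its cycle.  Assumes every cycle of phi meets A_hat.
    pi = {}
    for x in phi:
        if x not in A_hat:
            continue
        y = phi[x]
        while y not in A_hat:
            y = phi[y]
        pi[x] = y
    return pi

def compose(sigma, phi):
    # (sigma . phi)(v) = sigma(phi(v)); sigma fixes vertices it omits.
    return {v: sigma.get(phi[v], phi[v]) for v in phi}
```

Running it on the cycle $(x_{1}x_{2}y_{1}y_{2}x_{3}y_{3}x_{4})$ of the text confirms both the projected cycle $(x_{1}x_{2}x_{3}x_{4})$ and the equivariance identity.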
### 5.3 Combining the cycles into a Hamilton cycle
Assume that as in the previous subsection, we started with a fixed typical
configuration ${\bf c}$, conditioned on the edge process having configuration
${\bf c}$, and found a 1-factor of $D_{5-in,5-out}$ by using Proposition 5.2.
Since this 1-factor only uses the edges which have been used to construct the
graph $D_{5-in,5-out}$, it is independent of the $A$-$A$ edges in Step II that
we did not reveal. Moreover, by the definition of a typical configuration,
there are at least $\frac{1}{3}n\log n$ such edges. Note that the algorithm
gives a random direction to these edges. So we may interpret this as receiving
$\frac{1}{3}n\log n$ randomly directed $A$-$A$ edges with repeated edges
allowed. Then the problem of finding a directed Hamilton cycle in $D_{m_{*}}$
can be reduced to the following problem.
Let $V$ be a given set and $A$ be a subset of size $(1-o(1))n$. Assume that we
are given a 1-factor over this vertex set, where at least a $9/10$ proportion of
each cycle lies in the set $A$. If we are given $\frac{1}{3}n\log n$
additional $A$-$A$ edges chosen uniformly at random, can we find a directed
Hamilton cycle?
To further simplify the problem, we remove the vertices $V\setminus A$ out of
the picture. Given a 1-factor over the vertex set $V$, mark in red, all the
vertices not in $A$. Pick any red vertex $v$, and assume that
$v_{-},v,v_{+}\in V$ appear in this order in some cycle of the given 1-factor.
If $v_{-}\neq v_{+}$, replace the three vertices $v_{-},v,v_{+}$ by a new
vertex $v^{\prime}$, where $v^{\prime}$ takes as in-neighbors the in-neighbors
of $v_{-}$, and as out-neighbors, the out-neighbors of $v_{+}$. We call the
above process a _compression_ of the three vertices $v_{-},v,v_{+}$. A
crucial property of compression is that every 1-factor of the compressed graph
corresponds to a 1-factor in the original graph (with the same number of
cycles). Since a directed Hamilton cycle is also a 1-factor, if we can find a
Hamilton cycle in the compressed graph, then we can also find one in the
original graph.
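A single compression step can be sketched as follows (our own illustrative encoding: a cycle is a list read cyclically, and the merged vertex $v^{\prime}$ is represented by the triple it replaces).

```python
def compress(cycle, red, v):
    # One compression step: `cycle` is a list read cyclically, `red` the
    # set of red vertices, and v a red vertex whose cycle has length >= 3
    # (so that v_- != v_+).  The triple v_-, v, v_+ is replaced by one
    # merged vertex, which is red iff v_- or v_+ was red.
    i = cycle.index(v)
    rot = cycle[i - 1:] + cycle[:i - 1]    # rotate so rot = [v_-, v, v_+, ...]
    vm, vp = rot[0], rot[2]
    merged = (vm, v, vp)
    new_red = (red - {v, vm, vp}) | ({merged} if (vm in red or vp in red) else set())
    return [merged] + rot[3:], new_red
```

Each step shortens the cycle by 2 and decreases the number of red vertices by exactly 1, matching the counting argument below.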
Now for each $v\in V\setminus A$, compress the three vertices $v_{-},v,v_{+}$
into a vertex $v^{\prime}$ and mark it red if and only if either $v_{-}$ or
$v_{+}$ is a red vertex. This process always decreases the number of red
vertices. Repeat it until there are no red vertices remaining, or
$v_{-}=v_{+}$ for all red vertices $v$. As long as there is no red vertex in a
cycle of length 2 at any point of the process, the latter will not happen.
Consider a cycle whose length was $k$ at the beginning. Since at least 9/10
proportion of each cycle comes from $A$ and every compression decreases the
number of vertices by 2, at any time there will be at least $(8/10)k$ non-red
vertices, and at most $(1/10)k$ red vertices remaining in the cycle. Thus if a
cycle has a red vertex, then its length will be at least 9, and this prevents
length 2 red cycles. So the compressing procedure will be over when all the
red vertices disappear. Note that since $|V\setminus A|=|B|=o(n)$, the number
of remaining vertices after the compression procedure is over is at least
$n-2|B|=(1-o(1))n$. As mentioned above, it suffices to find a Hamilton cycle
in the graph after the compression process is over.
Another important property of this procedure is related to the additional
$A$-$A$ edges that we are given. Assume that $v$ is the first red vertex that
we have compressed, where the vertices $v_{-},v,v_{+}$ appeared in this order
in some 1-factor. Further assume that $v_{-}$ and $v_{+}$ are not red
vertices. Then since the new vertex $v^{\prime}$ obtained from the compression
will take as out-neighbors the out-neighbors of $v_{+}$, and in-neighbors the
in-neighbors of $v_{-}$, we may assume that this vertex $v^{\prime}$ is a
vertex in $A$ from the perspective of the new $\frac{1}{3}n\log n$ edges that
will be given.
This observation shows that every pair of vertices of the compressed graph has
the same probability of being one of the new $\frac{1}{3}n\log n$ edges. Since
the number of vertices is reduced by $o(n)$, only $o(n\log n)$ of the new edges
will be lost because of the compression. Thus $whp$ we will be given
$(\frac{1}{3}-o(1))n\log n$ new uniform random edges of the compressed graph.
###### Theorem 5.8.
For a typical configuration ${\bf c}$, conditioned on the random edge process
having configuration ${\bf c}$, the directed graph $D_{m_{*}}$ $whp$ contains
a Hamilton cycle.
###### Proof.
By Proposition 5.2, there exists $whp$ a perfect matching of ${\bf BIP}$ which
corresponds to a 1-factor in $D_{m_{*}}$ consisting of at most $2\log n$
cycles. Also, at least a $9/10$ proportion of the vertices in each cycle lie in
$A$. After applying the compression argument discussed above, we
may assume that we are given a 1-factor over some vertex set of size
$(1-o(1))n$. Moreover, the random edge process contains at least
$(\frac{1}{3}-o(1))n\log n$ additional random directed edges (distributed
uniformly over that set). By Theorem 3.1 with $L$ being the whole vertex set,
we can conclude that $whp$ the compressed graph contains a directed Hamilton
cycle, and this in turn implies that $D_{m_{*}}$ contains a directed Hamilton
cycle. ∎
###### Corollary 5.9.
The directed graph $D_{m_{*}}$ $whp$ contains a Hamilton cycle.
###### Proof.
Let ${\bf e}$ be a random edge process. Let $D=D_{m_{*}}({\bf e})$ and
$\mathcal{HAM}$ be the collection of directed graphs that contain a directed
Hamilton cycle. For a configuration ${\bf c}$, denote by ${\bf e}\in{\bf c}$
the event that ${\bf e}$ has configuration ${\bf c}$. If ${\bf e}\in{\bf c}$
for some typical configuration ${\bf c}$, then we say that ${\bf e}$ is
typical.
By Theorem 5.8, we know that for any typical configuration ${\bf c}$,
$\mathbb{P}(D\notin\mathcal{HAM}|{\bf e}\in{\bf c})=o(1)$, from which we know
that $\mathbb{P}(\\{D\notin\mathcal{HAM}\\}\cap\\{\textrm{${\bf e}$ is
typical}\\})=o(1)$. On the other hand, by Lemma 4.8 we know that the
probability of an edge process having a non-typical configuration is $o(1)$.
Therefore $whp$, the directed graph $D$ is Hamiltonian. ∎
## 6 Going back to the original process
Recall that the distribution of the random edge process is slightly different
from that of the random graph process since it allows repeated edges and
loops. In fact, one can show that at time $m_{*}$, the edge process $whp$
contains at least $\Omega(\log^{2}n)$ repeated edges. Therefore, we cannot
simply condition on the event that the edge process does not contain any
repeated edges or loops to obtain our main theorem for random graph processes.
Our next theorem shows that there exists an on-line algorithm ${\bf
OrientPrime}$ which successfully orients the edges of the random graph
process.
###### Theorem 6.1.
There exists a randomized on-line algorithm ${\bf OrientPrime}$ which orients
the edges of the random graph process, so that the resulting directed graph is
Hamiltonian $whp$ at the time at which the underlying graph has minimum degree
2.
The algorithm ${\bf OrientPrime}$ will mainly follow ${\bf Orient}$ but with a
slight modification. Assume that we are given a random graph process (call it
the underlying process). Using this random graph process, we want to construct
an auxiliary process whose distribution is identical to the random edge
process. Let $t=1$ at the beginning and $a_{t}$ be the number of distinct
edges up to time $t$ in our auxiliary process (disregarding loops). Thus
$a_{1}=0$. At time $t$, with probability $(2a_{t}+n)/{n^{2}}$ we will produce
a redundant edge, and with probability $1-(2a_{t}+n)/{n^{2}}$, we will receive
an edge from the underlying random graph process. Once we have decided to produce a
redundant edge, with probability $2a_{t}/(2a_{t}+n)$ we choose uniformly at
random an edge out of the $a_{t}$ edges that have already appeared, and with
probability $n/(2a_{t}+n)$ we choose uniformly at random a loop. Let $e_{t}$ be
the edge produced at time $t$ (it is either a redundant edge, or an edge from
the underlying process), and choose its first vertex and second vertex
uniformly at random. One can easily check that the process
$(e_{1},e_{2},\cdots)$ has the same distribution as the random edge process.
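For concreteness, the coupling above can be sketched in code. This is an illustrative sketch only; the function name and the iterator interface for the underlying process are ours, not part of the formal argument.

```python
import random

def auxiliary_edge_process(underlying, n, num_steps):
    """Sketch of the coupling: turn a random graph process (an iterator
    yielding distinct non-loop edges as frozensets {u, v} on vertices
    0..n-1) into an auxiliary process that, like the random edge process,
    may emit repeated edges and loops."""
    out = []
    seen = []  # distinct (unordered, non-loop) edges produced so far
    for _ in range(num_steps):
        a = len(seen)
        if random.random() < (2 * a + n) / n**2:
            # produce a redundant edge
            if random.random() < 2 * a / (2 * a + n):
                e = random.choice(seen)       # repeat an already-seen edge
            else:
                e = frozenset((random.randrange(n),))   # a uniform loop
        else:
            e = next(underlying)              # fresh edge from the process
            seen.append(e)
        # orient: choose first and second vertex uniformly at random
        pair = sorted(e)
        if len(pair) == 2 and random.random() < 0.5:
            pair = pair[::-1]
        out.append(tuple(pair) if len(pair) == 2 else (pair[0], pair[0]))
    return out
```

A loop is emitted as a pair $(v,v)$; at time $t=1$ we have $a_{1}=0$, so a redundant edge (if produced) is necessarily a loop, matching the description above.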
In the algorithm ${\bf OrientPrime}$, we feed this new auxiliary process into
the algorithm ${\bf Orient}$ and orient the edges accordingly. Since the
distribution of the auxiliary process is the same as that of the random edge
process, ${\bf Orient}$ will give an orientation which $whp$ contains a
directed Hamilton cycle. However, what we seek is a Hamilton cycle with no
redundant edge. Thus in the edge process, whenever we see a redundant edge
that is a repeated edge (not a loop), we color it blue. In order to show that
${\bf OrientPrime}$ gives a Hamiltonian graph $whp$, it suffices to show that
we can find a Hamilton cycle in $D_{m_{*}}$ which does not contain a blue edge
(note that loops cannot be used in constructing a Hamilton cycle). We first
state two useful facts.
###### Claim 6.2.
$Whp$, there are no blue edges incident to $B$ used in constructing
$D_{5-in,5-out}$.
###### Proof.
The expected number of blue edges incident to $B$ in Step I used in
constructing $D_{5-in,5-out}$ can be computed by choosing two vertices $v$ and
$w$ and then computing the probability that $v\in B$, and $(v,w)$ or $(w,v)$
together appears twice among Step I edges. The probability that $v$ appears as
a first vertex exactly $i$ times is ${n\log\log n\choose
i}\left(\frac{1}{n}\right)^{i}\left(1-\frac{1}{n}\right)^{n\log\log n-i}$.
Condition on the event that $v$ appeared $i$ times as a first vertex for some
$i<12$ (and also reveal the $i$ positions in which $v$ appeared). We then
compute the probability that some two Step I edges are $(v,w)$ or $(w,v)$.
There are three events that we need to consider. First is the event that
$(v,w)$ appears twice, whose probability is ${i\choose
2}\left(\frac{1}{n}\right)^{2}$. Second is the event that $(v,w)$ appears once
and $(w,v)$ appears once, whose probability is at most ${n\log\log n\choose
1}\frac{1}{n(n-1)}\cdot{i\choose 1}\frac{1}{n}$. Third is the event that
$(w,v)$ appears twice, whose probability is at most ${n\log\log n\choose
2}\left(\frac{1}{n(n-1)}\right)^{2}$. Combining everything, we see that the
expected number of Step I blue edges incident to $B$ is at most,
$\displaystyle n^{2}\cdot\sum_{i=0}^{11}$ $\displaystyle{n\log\log n\choose
i}\left(\frac{1}{n}\right)^{i}\left(1-\frac{1}{n}\right)^{n\log\log
n-i}\times$ $\displaystyle\left({i\choose
2}\left(\frac{1}{n}\right)^{2}+{n\log\log n\choose
1}\left(\frac{1}{n(n-1)}\right){i\choose 1}\frac{1}{n}+{n\log\log n\choose
2}\left(\frac{1}{n(n-1)}\right)^{2}\right).$
The main term comes from $i=11$, and the third term in the final bracket.
Consequently, we can bound the expectation by
$\displaystyle(1+o(1))\cdot n^{2}\cdot{n\log\log n\choose
11}\left(\frac{1}{n}\right)^{11}\left(1-\frac{1}{n}\right)^{n\log\log
n-11}\cdot{n\log\log n\choose 2}\left(\frac{1}{n(n-1)}\right)^{2}=o(1).$
Next we compute the expected number of blue edges incident to
$B$ in Step II used in constructing $D_{5-in,5-out}$. Condition on the first
vertices of the Step I edges so that we can determine the sets $A$ and $B$. By
Claim 4.3, we may condition on the event $|B|=O(\frac{(\log\log
n)^{12}}{\log^{2}n})$. Fix a vertex $v\in B$, and expose all appearances of
$v$ in Step II, and note that only the first 10 appearances are relevant. By
Claim 4.4, it suffices to bound the probability of the event that there exists
a vertex $w\in A$ such that $(v,w)$ or $(w,v)$ appears twice among the at most
24 Step I edges where $v$ or $w$ is the first vertex, and the at most 10
Step II edges which we know will be used to construct the OUT and IN of
the vertex $v$. Therefore the expectation is
$\displaystyle|B|\cdot
n\cdot\left(\frac{34}{n}\right)^{2}=O\left(\frac{(\log\log
n)^{12}}{\log^{2}n}n^{2}\right)\cdot\left(\frac{34}{n}\right)^{2}=o(1).$
∎
###### Claim 6.3.
$Whp$, there are at most $\log n$ blue edges used in constructing
$D_{5-in,5-out}$.
###### Proof.
By Claim 6.2, we know that $whp$, all the blue edges used in constructing
$D_{5-in,5-out}$ are incident to $A$. Therefore it suffices to show that there
are at most $\log n$ blue edges among the Step I edges. The expected number of
such edges can be computed by choosing two vertices $v,w$ and computing the
probability that $(v,w)$ or $(w,v)$ appears twice. This is at most
$n^{2}\cdot{n\log\log n\choose 2}\left(\frac{2}{n^{2}}\right)^{2}=o(\log n).$
Consequently, the conclusion follows from Markov’s inequality. ∎
###### Claim 6.4.
$Whp$, each vertex is incident to at most one blue edge.
###### Proof.
It suffices to show that there do not exist three distinct vertices
$v,w_{1},w_{2}$ such that both $\\{v,w_{1}\\}$ and $\\{v,w_{2}\\}$ appear at
least twice. The probability of this event is at most
${n\choose 3}{m_{2}\choose 4}\cdot{4\choose
2}\left(\frac{2}{n^{2}}\right)^{4}=o(1).$
∎
Now assume that we found a 1-factor as in Section 5.2. By Claim 6.3, $whp$, it
contains at most $\log n$ blue edges. Then after performing the compression
process given in the beginning of Section 5.3, by Claim 6.2, the number of
blue edges remains the same as before. Therefore, if we can find a Hamilton
cycle in the compressed graph which does not use any of the blue edges, then
the original graph will also have a Hamilton cycle with no blue edges. Thus
our goal now is to combine the cycles into a Hamilton cycle without any blue
edges, by using the non-revealed $A$-$A$ edges.
In order to do this, we provide a proof of a slightly stronger form of Theorem
3.1 for $L=V$. In fact, it can be seen that when combined with the compression
argument, this special case of the theorem implies the theorem for general
$L$. Note that we have at least $\frac{n\log n}{3}$ non-revealed $A$-$A$ edges
remaining after finding the 1-factor described in the previous paragraph. Note
that these edges cannot create more blue edges in the 1-factor we previously
found, since all the $A$-$A$ edges used so far appear earlier in the process
than these non-revealed edges. We will find a Hamilton cycle in two more
phases. The strategy of our proof comes from that of Frieze [14]. In the first
phase, given a 1-factor consisting of at most $O(\log n)$ cycles, we use the
first half of the remaining non-revealed $A$-$A$ edges to combine some of the
cycles into a cycle of length $n-o(n)$. In this phase, we repeatedly combine
two cycles of the 1-factor until there exists a cycle of length $n-o(n)$.
###### Lemma 6.5.
$Whp$, there exists a 1-factor consisting of $O(\log n)$ cycles, one of which
is of length $n-o(n)$. Moreover, this 1-factor contains at most $O(\log n)$
blue edges.
###### Proof.
Condition on the conclusion of Claim 6.3. Then we are given a 1-factor
consisting of at most $c\log n$ cycles and containing at most $\log n$ blue
edges. Our goal is to modify this 1-factor into a 1-factor satisfying the
properties as in the statement. Consider the non-revealed random $A$-$A$ edges
we are given. Since we will use only the first half of these edges, we have at
least $\frac{n\log n}{6}$ random $A$-$A$ edges given uniformly among all
choices. Let $E_{N}$ be these edges. Partition $E_{N}$ as $E_{0}\cup
E_{1}\cup\cdots\cup E_{c\log n}$, where $E_{0}$ is the first half of edges,
$E_{1}$ is the next $\frac{1}{2c\log n}$ proportion of edges, $E_{2}$ is the
next $\frac{1}{2c\log n}$ proportion of edges, and so on. Thus
$|E_{0}|=\frac{1}{2}|E_{N}|$ and $|E_{1}|=\cdots=|E_{c\log n}|=\frac{1}{2c\log
n}|E_{N}|$. Since $|E_{0}|\geq\frac{n\log n}{12}$, by applying Chernoff’s
inequality and taking the union bound, we can see that $whp$, for every set of
vertices $X$ of size $|X|\geq\frac{n}{\log^{1/2}n}$, there exist at
least $\frac{1}{2}|X||V\setminus X|\frac{\log
n}{12n}\geq\frac{n\log^{1/2}n}{48}$ edges of $E_{0}$ between $X$ and
$V\setminus X$. Condition on this event.
Assume that the 1-factor currently does not contain a cycle of length at least
$n-\frac{2n}{\log^{1/2}n}$. Then we can partition the cycles into two sets so
that the number of vertices in the cycles belonging to each part is between
$\frac{n}{\log^{1/2}n}$ and $n-\frac{n}{\log^{1/2}n}$. Thus by the observation
above, there exist at least $\frac{n\log^{1/2}n}{48}$ edges of $E_{0}$ between
the two parts. Let $(v,w)$ be one such edge. Let $v^{+}$ be the vertex that
succeeds $v$ in the cycle of the 1-factor that contains $v$, and let $w^{-}$
be the vertex that precedes $w$ in the cycle of the 1-factor that contains
$w$. If $(w^{-},v^{+})\in E_{1}$, then the cycle containing $v$ and the cycle
containing $w$ can be combined into one cycle (see Figure 6). Therefore, each
edge in $E_{0}$ gives rise to some pair $e$ for which, if $e\in E_{1}$, then
some two cycles of the current 1-factor can be combined into one.
The probability of no such edge being present in $E_{1}$ is at most
$\left(1-\frac{1}{n^{2}}\cdot\frac{n\log^{1/2}n}{48}\right)^{|E_{1}|}\leq
e^{-\Big{(}\log^{1/2}n/(48n)\Big{)}\cdot\Big{(}|E_{N}|/(2c\log n)\Big{)}}\leq
e^{-\Omega(\log^{1/2}n)}.$
Therefore with probability $1-e^{-\Omega(\log^{1/2}n)}$, we can find an edge
in $E_{0}$ and an edge in $E_{1}$ which together will reduce the total number
of cycles in the 1-factor by one.
We can repeat the above using $E_{i}$ instead of $E_{1}$ in the $i$-th step.
Since the total number of cycles in the initial 1-factor is at most $c\log n$,
the process must terminate before we run out of edges. Therefore at some step,
we must have found a 1-factor that has at most $O(\log n)$ cycles, and
contains a cycle of length $n-o(n)$. It suffices to check that the estimate on
the number of blue edges holds. Indeed, every time we combine two cycles, we
use two additional edges which are not in the 1-factor, and therefore by the
time we are done, we would have added $O(\log n)$ edges to the initial
1-factor. Therefore even if all these edges were blue edges, we have $O(\log
n)$ blue edges in the 1-factor in the end. ∎
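The merging step in the proof above, splicing the cycles containing $v$ and $w$ via the $E_{0}$ edge $(v,w)$ and the $E_{1}$ edge $(w^{-},v^{+})$, can be sketched as follows. The list representation of a cycle and the function name are illustrative assumptions, not part of the formal argument.

```python
def combine_cycles(c1, c2, v, w):
    """Splice two directed cycles into one, given the edge (v, w) and the
    edge (w_minus, v_plus), where v_plus succeeds v in c1 and w_minus
    precedes w in c2.  Cycles are lists of vertices in cyclic order
    (the successor of c[k] is c[(k + 1) % len(c)])."""
    i, j = c1.index(v), c2.index(w)
    # walk c1 from v_plus around to v, jump along (v, w), walk c2 from w
    # around to w_minus; the edge (w_minus, v_plus) closes the new cycle
    return c1[i + 1:] + c1[:i + 1] + c2[j:] + c2[:j]
```

For example, merging the cycles $0\to 1\to 2\to 0$ and $3\to 4\to 5\to 3$ via the edges $(1,4)$ and $(3,2)$ produces a single cycle on all six vertices.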
Figure 1: Combining two cycles, and rotating a path.
Consider a 1-factor given by the previous lemma. In the second phase, we use
the other half of the remaining new random edges to prove that the long cycle
we just found can “absorb” the remaining cycles. Let
$P=(v_{0},\cdots,v_{\ell})$ be a path of a digraph. If there exist two edges
$(v_{\ell},v_{i+1})$ and $(v_{i},v_{j})$ for $1\leq i<\ell$ and
$i+1<j\leq\ell$, then we can _rotate_ the path $P$ using $v_{i}$ and $v_{j-1}$
as breaking points to obtain a new path
$(v_{0},v_{1},\cdots,v_{i},v_{j},v_{j+1},\cdots,v_{\ell},v_{i+1},v_{i+2},\cdots,v_{j-1})$
(see Figure 6). We call $v_{i}$ the intermediate point of this rotation. Note
that if the graph contains the edge $(v_{j-1},v_{0})$, then one can close the
path into a cycle. Our strategy is to repeatedly rotate the given path until
one can find such an edge and close the path (see Figure 6).
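The rotation operation just described can be sketched on a path stored as a list of vertices; this is a minimal illustration, assuming the two required edges $(v_{\ell},v_{i+1})$ and $(v_{i},v_{j})$ are present in the digraph.

```python
def rotate(path, i, j):
    """One rotation of a directed path path = [v_0, ..., v_ell], using
    v_i and v_{j-1} as breaking points.  Indices follow the text:
    1 <= i < len(path) - 1 and i + 1 < j <= len(path) - 1.
    Returns the new path, which ends at the new endpoint v_{j-1}."""
    # (v_0, ..., v_i, v_j, ..., v_ell, v_{i+1}, ..., v_{j-1})
    return path[:i + 1] + path[j:] + path[i + 1:j]
```

For instance, rotating the path $(v_{0},\ldots,v_{5})$ with $i=1$, $j=4$ yields $(v_{0},v_{1},v_{4},v_{5},v_{2},v_{3})$, with intermediate point $v_{1}$ and new endpoint $v_{3}$.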
Further note that the path obtained from $P$ by rotating it once as above can
be described as follows. Let $P_{1},P_{2},P_{3}$ be subpaths of $P$ obtained
by removing the edges $(v_{i},v_{i+1})$ and $(v_{j-1},v_{j})$. Then there
exists a permutation $\pi$ of the set $[3]$ such that the new path is the path
obtained by concatenating $P_{\pi(1)},P_{\pi(2)},P_{\pi(3)}$ (in order). More
generally, assume that we rotate the path $P$ in total $s$ times by using
distinct breaking points $v_{a_{1}},v_{a_{2}},\cdots,v_{a_{2s}}$. Let
$P_{1},\cdots,P_{2s+1}$ be the subpaths of $P$ obtained by removing the edges
$(v_{a_{j}},v_{a_{j}+1})$ for $1\leq j\leq 2s$. Then there exists a
permutation $\sigma$ of the set $[2s+1]$ such that the path we have in the end
is the path obtained by concatenating
$P_{\sigma(1)},P_{\sigma(2)},\cdots,P_{\sigma(2s+1)}$. We will use this fact
later. Note that it is crucial to have distinct breaking points here.
After finding a 1-factor described in Lemma 6.5, there are at least
$\frac{n\log n}{6}$ non-revealed $A$-$A$ edges that we can use. Let $E_{L}$ be
the last $\frac{n\log n}{6}$ of these edges, and reveal all the non-revealed
edges not in $E_{L}$. Note that there exists a positive constant $C$ such that
$whp$, the graph induced by the revealed edges before beginning this phase has
maximum degree at most $C\log n$ (this follows from Chernoff’s inequality and
the union bound). Condition on this event.
We will use the remaining edges $E_{L}$ in a slightly different way from how
we did in the previous phase, since in this phase it will be more important to
know whether a particular edge is present among the non-revealed edges. For an
ordered pair of vertices $e=(x,y)$, let the flip of $e$ be $r(e)=(y,x)$
(similarly define the flip of a set of pairs). Fix some pair $e=(x,y)$, and
suppose that we are interested in knowing whether $e\in E_{L}$ holds, and if
so, whether it is a blue edge. For each of
the non-revealed edges in $E_{L}$, ask if it is $e$ or $r(e)$. Since we know
how many times $e$ and $r(e)$ appeared among the already revealed edges, in
the end we not only know whether $e\in E_{L}$, but also whether it is a blue
edge. We call this procedure exposing the pair $e$, and say that $e$
has been exposed. Note that exposing the pair $e$ is symmetric
in the sense that even if we are looking only for the edge $e$, we seek
the existence of $r(e)$ as well, because we would like to determine
whether $e$ is blue at the same time. We can similarly define the
procedure of exposing a set of pairs, instead of a single pair. We would like
to carefully expose the edges in order to construct a Hamilton cycle without
blue edges.
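As a toy model of the exposure procedure, suppose $E_{L}$ is given as an ordered list of ordered pairs and that we already know how many times the unordered edge $\{x,y\}$ occurred among the revealed edges. Both the interface and the function name below are ours, for illustration only.

```python
def expose_pair(e, non_revealed, revealed_count):
    """Expose the ordered pair e = (x, y): scan the non-revealed edges of
    E_L only for e and its flip r(e) = (y, x), and combine with
    `revealed_count` (earlier revealed occurrences of the unordered edge
    {x, y}) to decide whether e occurs in E_L and, if so, whether that
    occurrence is blue, i.e. a repeat of {x, y}."""
    x, y = e
    for t, f in enumerate(non_revealed):
        if f == (x, y):
            # occurrences of {x, y}, in either orientation, before this one
            earlier = revealed_count + sum(
                1 for g in non_revealed[:t] if g in ((x, y), (y, x)))
            return True, earlier >= 1   # present; blue iff it is a repeat
    return False, False
```

Note that the scan inspects both $e$ and $r(e)$, reflecting the symmetry of the exposure procedure described above.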
Note that the expected number of times that $e$ or $r(e)$ appears in $E_{L}$
is $\frac{2}{n^{2}}\cdot\frac{n\log n}{6}=\frac{\log n}{3n}$. Thus if $S$ is
the set of exposed pairs at some point, we say that the outcome is typical if
the number of times that a pair belonging to $S$ appears in $E_{L}$ is at most
$\frac{|S|\log n}{n}$ (which is three times its expected value). While
exposing sets of pairs, we will maintain the outcome to be typical, since we
would like to know that there are enough non-revealed pairs remaining in
$E_{L}$. For a set $X$ of vertices, let $Q(X)$ be the set of ordered pairs
$(x_{1},x_{2})$ such that $x_{1}\in X$ or $x_{2}\in X$.
###### Lemma 6.6.
Let $X$ and $Y$ be sets of vertices of size at most $\frac{n}{32}$. Assume
that the set of exposed pairs so far is a subset of $Q(X)$ and the outcome is
typical. Further assume that a path $P$ from $v_{0}$ to $v_{\ell}$ of length
$\ell=n-o(n)$ is given for some $v_{0},v_{\ell}\notin X\cup Y$.
Then there exists a set $Z\subset V(P)$ disjoint from $Y$ of size at most
$|Z|\leq\frac{n}{\log n\cdot\log\log n}$ such that with probability at least
$1-o((\log n)^{-1})$, by further exposing only pairs that intersect $Z$ (thus
a subset of $Q(Z)$), one can find a cycle over the vertices of $P$.
Furthermore, the outcome of exposing these pairs is typical and no new blue
edges are added (thus the set of blue edges in the cycle is a subset of the
set of blue edges in $P$).
Informally, $Y$ is the set of ‘reserved’ vertices which we would like to keep
non-exposed for later use. The lemma asserts that we can close the given
path into a cycle by further exposing pairs that intersect some set $Z$ which
is disjoint from $Y$ and has relatively small cardinality.
###### Proof.
Denote the path as $P=(v_{0},v_{1},\cdots,v_{\ell})$. For a subset of vertices
$A=\\{v_{a_{1}},v_{a_{2}},\cdots,v_{a_{t}}\\}$, define
$A^{-}=\\{v_{a_{1}-1},v_{a_{2}-1},\cdots,v_{a_{t}-1}\\}$ and
$A^{+}=\\{v_{a_{1}+1},v_{a_{2}+1},\cdots,v_{a_{t}+1}\\}$ (if the index reaches
either $-1$ or $\ell+1$, then we remove the corresponding vertex from the
set).
Our strategy can be described as follows. We repeatedly rotate the path to
obtain new endpoints, and in each iteration select a set of vertices and expose
only pairs incident to these vertices (call these the involved
vertices). Thus a pair consisting of two non-involved vertices will remain
non-exposed. The set $Z$ will be the set of involved vertices, and our goal
will be to construct a cycle while keeping $Z$ small.
To keep track of the set of vertices that have been involved and the set of
endpoints that we obtained, we maintain two sets $T_{i}$ and $S_{i}$ for
$i\geq 0$, where $T_{0}=\\{v_{\ell}\\}$ and $S_{0}=X$. Informally, $T_{i}$
will be the set of endpoints that have not yet been involved, and $S_{i}$ will
be the set of involved vertices while obtaining the set $T_{i}$. For example,
suppose that we performed a rotation as in Figure 6 in the first round. We
will later see that in the process, we expose the neighbors of $v_{\ell}$ and
$v_{i}$ for this round of rotation to obtain a new endpoint $v_{j-1}$. Thus we
will add the vertices $v_{\ell}$ and $v_{i}$ to $S_{1}$ and $v_{j-1}$ to
$T_{1}$. It is crucial to maintain $T_{i}$ as a subset of the set of non-
involved vertices, since we will need to expose its neighbors in the next
round of rotation.
Let $Y_{0}=Y\cup\\{v_{0}\\}$. Throughout the rotation process, $T_{i}$ and
$S_{i}$ will satisfy the following properties:
1. (i)
for every $w\in T_{i}$, there exists a path of length $\ell$ from $v_{0}$ to
$w$ whose set of blue edges is a subset of that of $P$,
2. (ii)
the set of exposed pairs after the $i$-th step is a subset of $Q(S_{i})$,
3. (iii)
all the breaking points used in constructing the paths above belong to
$S_{i}\cup T_{i}$,
4. (iv)
$|T_{i}|=\left(\frac{\log n}{500}\right)^{2i}$ and $|S_{i}\setminus
S_{i-1}|\leq 2\left(\frac{\log n}{500}\right)^{2i-1}=\frac{1000}{\log
n}|T_{i}|$ (for $i\geq 1$),
5. (v)
$X\cup T_{i-1}\cup S_{i-1}\subset S_{i}$,
6. (vi)
$S_{i}$, $T_{i}$, and $Y_{0}$ are mutually disjoint, and
7. (vii)
the outcome at each iteration is typical.
Recall that $T_{0}=\\{v_{\ell}\\}$ and $S_{0}=X$, and note that the properties
above indeed hold for these sets. Since $S_{0}=X$, property (iv) in particular
implies that
$|S_{i}|\leq|X|+\sum_{a=1}^{i}\frac{1000}{\log
n}|T_{a}|\leq|X|+\frac{2000}{\log n}|T_{i}|.$
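The second inequality holds because the summands grow geometrically; as an added remark, by (iv) and since $\left(\frac{\log n}{500}\right)^{2}\geq 2$ for $n$ sufficiently large,

$\sum_{a=1}^{i}\frac{1000}{\log n}|T_{a}|=\frac{1000}{\log n}\sum_{a=1}^{i}\left(\frac{\log n}{500}\right)^{2a}\leq\frac{1000}{\log n}\cdot 2\left(\frac{\log n}{500}\right)^{2i}=\frac{2000}{\log n}|T_{i}|,$

because a geometric series with ratio at least $2$ is at most twice its largest term.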
Suppose that we have completed constructing the sets $T_{i}$ and $S_{i}$ for some
index $i$ so that $|T_{i}|\leq\frac{n}{(\log n)^{2}\log\log n}$. By (iv), we
have $|S_{i}|\leq|X|+\frac{2000}{\log n}|T_{i}|\leq(\frac{1}{32}+o(1))n$ and
$i=O(\frac{\log n}{\log\log n})$. We will show how to construct the sets
$T_{i+1}$ and $S_{i+1}$ from these sets.
By $|X|\leq\frac{n}{32}$, (ii), (iv) and (vii), we know that at any step of
the process the number of edges in $E_{L}$ that remain non-revealed is at
least
$|E_{L}|-\frac{|Q(S_{i})|\cdot\log n}{n}\geq\frac{n\log n}{6}-2|S_{i}|\log n\geq\frac{n\log n}{12}.$
Moreover, the number of non-exposed pairs remaining is at least
$n^{2}-|Q(S_{i})|\geq n^{2}-2n|S_{i}|\geq\frac{n^{2}}{2}.$
We will make use of the following three claims whose proof will be given
later.
###### Claim 6.7.
Assume that some pairs have been exposed and the outcome is typical. Once we
expose the remaining edges, the probability that there exists a vertex
incident to two new blue edges is at most $o((\log n)^{-2})$.
###### Claim 6.8.
Assume that some pairs have been exposed and the outcome is typical. Let $R$
be a set of pairs of size $|R|=\Omega(\frac{n}{\log\log n})$ disjoint from the
exposed pairs. Then with probability at least $1-o((\log n)^{-2})$, the number
of times a pair in $R$ appears among the non-revealed edges of $E_{L}$ is at
least $\frac{|R|\log n}{24n}$, and at most $\frac{|R|\log n}{2n}$.
###### Claim 6.9.
Assume that some pairs have been exposed and the outcome is typical. Then with
probability at least $1-o((\log n)^{-2})$, for every disjoint sets
$A_{1},A_{2}$ of vertices satisfying $|A_{1}|\leq\frac{n}{\log n\cdot(\log\log
n)^{1/2}}$ and $|A_{2}|=\frac{|A_{1}|\log n}{500}$, the number of edges
between $A_{1}$ and $A_{2}$ among the non-revealed edges of $E_{L}$ is at most
$\frac{|A_{1}|\log n}{100}$.
For each vertex $w\in T_{i}$, there exists a path $P_{w}$ of length $\ell$
from $v_{0}$ to $w$ satisfying (i). Let $P_{w,1}$ be the first half and
$P_{w,2}$ be the second half of $P_{w}$. Let $S_{i+1,0}=S_{i}\cup T_{i}$, and
$N=S_{i+1,0}\cup S_{i+1,0}^{-}\cup Y_{0}$ and
$Q_{1}=\\{(w,x^{+}):w\in T_{i},x\in V(P_{w,1})\setminus N\\}.$
We have $Q_{1}\subset Q(T_{i})$ and
$|Q_{1}|\geq|T_{i}|\cdot\left(\frac{\ell}{2}-2|S_{i}|-2|T_{i}|-|Y|-1\right)\geq\frac{n}{4}|T_{i}|.$
By (vi) and the definition of $N$, the pairs in $Q_{1}$ have both of their
endpoints not in $S_{i}$, thus have not been exposed yet. Now expose the set
$Q_{1}$. By Claim 6.8, we know that with probability at least $1-o((\log
n)^{-2})$, the outcome is typical, and the number of pairs in $Q_{1}$ that
appear in $E_{L}$ is at least
$\frac{|Q_{1}|\log n}{24n}\geq\frac{|T_{i}|\log n}{96}.$
Condition on this event. Note that if some pair $(w,x^{+})\in Q_{1}$ appears
in $E_{L}$ and is not a blue edge, then $x$ can serve as an intermediate point
in our next round of rotation. Since we have ensured that no breaking
point is used twice by avoiding the set $N$ (see properties (iii) and (v)), if there
is a non-blue edge of the form $(x,y^{+})$ for some $y\in P_{w,2}$, then we
can find a path of length $\ell$ from $v_{0}$ to $y$ satisfying (i) (see
Figure 6).
Let
$S_{i+1,1}=\\{x:(w,x^{+})\in Q_{1}\cap E_{L},(w,x^{+})\,\textrm{is not
blue}\\}.$
By Claim 6.7, with probability at least $1-o((\log n)^{-2})$, among the edges
in $Q_{1}\cap E_{L}$, the number of blue edges is at most $|T_{i}|$. Condition
on this event. Then the number of non-blue edges between $T_{i}$ and
$S_{i+1,1}^{+}$ is at least $\frac{|T_{i}|(\log n-1)}{96}>\frac{|T_{i}|\log
n}{100}$. By Claim 6.9, with probability at least $1-o((\log n)^{-2})$, we see
that $|S_{i+1,1}|\geq\frac{|T_{i}|\log n}{500}$. Redefine $S_{i+1,1}$ as an
arbitrary subset of it of size exactly $\frac{|T_{i}|\log n}{500}$. Note that
$S_{i+1,1}\cap N=\emptyset$. The vertices in $S_{i+1,1}$ will serve as
intermediate points of our rotation.
Now let
$Q_{2}=\\{(x,y^{+}):x\in S_{i+1,1},\,\,(w,x^{+})\in Q_{1}\cap
E_{L},\,\,\textrm{and}\,\,y\in V(P_{w,2})\setminus(N\cup S_{i+1,1}^{-})\\},$
and note that $Q_{2}\subset Q(S_{i+1,1})$. Further note that we are
subtracting $S_{i+1,1}^{-}$ from $V(P_{w,2})$ in the above definition. This is
to avoid having both a pair and its reverse in the set $Q_{2}$. Even though
the set $S_{i+1,1}$ was defined as a collection of vertices belonging to
$P_{w^{\prime},1}$ for various choices of $w^{\prime}$, it can still intersect
$P_{w,2}$ for some vertex $w$, since we are considering different paths for
different vertices. As before, none of the pairs in $Q_{2}$ has been
exposed yet, and we have $|Q_{2}|\geq\frac{n}{4}|S_{i+1,1}|$. Moreover, with
probability at least $1-o((\log n)^{-2})$, the number of pairs in $Q_{2}$ that
appear in $E_{L}$ which are not blue edges is at least $\frac{|S_{i+1,1}|\log
n}{100}$ and the outcome is typical. Let $T_{i+1,0}=\\{y:(x,y^{+})\in
Q_{2}\cap E_{L},(x,y^{+})\,\textrm{is not blue}\\}$. As above, with
probability at least $1-o((\log n)^{-2})$, we have
$|T_{i+1,0}|\geq\frac{|S_{i+1,1}|\log n}{500}$. Moreover, by the observation
above, for all the vertices $y\in T_{i+1,0}$, there exists a path of length
$\ell$ from $v_{0}$ to $y$ satisfying (i).
Let $T_{i+1}=T_{i+1,0}$ and $S_{i+1}=S_{i+1,0}\cup S_{i+1,1}$. Since
$|T_{i+1}|\geq\Big{(}\frac{\log n}{500}\Big{)}^{2}|T_{i}|\geq\Big{(}\frac{\log
n}{500}\Big{)}^{2(i+1)}$, we may redefine $T_{i+1}$ as an arbitrary subset of
it of size exactly $\Big{(}\frac{\log n}{500}\Big{)}^{2(i+1)}$. In the
previous paragraph we saw that (i) holds for $T_{i+1}$. Property (ii) holds
since the set of newly exposed pairs is $Q_{1}\cup Q_{2}\subset
Q(S_{i+1,0}\cup S_{i+1,1})=Q(S_{i+1})$. Properties (iii), (v), and (vi) can
easily be checked to hold. By Claim 6.8, the outcome is typical, and we have
(vii). For property (iv), the size of $T_{i+1}$ by definition satisfies the
bound, and the size of $S_{i+1}\setminus S_{i}$ is
$\displaystyle|S_{i+1}\setminus S_{i}|$ $\displaystyle\leq|S_{i+1,0}\setminus
S_{i}|+|S_{i+1,1}|$ $\displaystyle\leq|T_{i}|+\frac{|T_{i}|\log
n}{500}\leq\left(\frac{\log n}{500}\right)^{2i}\cdot\left(1+\frac{\log
n}{500}\right)\leq 2\left(\frac{\log n}{500}\right)^{2i+1}.$
Repeat the above until we reach a set $T_{t}$ of size $\frac{n}{(\log
n)^{2}\cdot\log\log n}\leq|T_{t}|\leq\frac{n}{(500)^{2}\log\log n}$. By (iv),
we have $t=O(\frac{\log n}{\log\log n})$ and $|S_{t}|\leq|X|+\frac{n}{125\log
n\cdot\log\log n}$. Redefine $T_{t}$ as an arbitrary subset of size exactly
$\frac{n}{(\log n)^{2}\cdot\log\log n}$. Note that redefining $T_{t}$ does
not change $S_{t}$, and thus we still have
$|S_{t}|\leq|X|+\frac{n}{125\log n\cdot\log\log n}$. We will repeat the
process above for the final time with the sets $S_{t}$ and $T_{t}$. This will
give $|T_{t+1}|=\frac{n}{(500)^{2}\log\log n}$ and $|S_{t+1}\setminus
S_{t}|\leq\frac{n}{125\log n\cdot\log\log n}$, from which it follows that
$|S_{t+1}|\leq\frac{2n}{125\log n\log\log n}$. Let $Q_{3}=\\{(v_{0},z):z\in
T_{t+1}\\}$ and expose $Q_{3}$ (note that the pairs in $Q_{3}$ have not yet
been exposed since $(T_{t+1}\cup\\{v_{0}\\})\cap S_{t+1}=\emptyset$, while the
set of exposed pairs is $Q(S_{t+1})$). Since
$|Q_{3}|=|T_{t+1}|=\Omega(\frac{n}{\log\log n})$, by Claims 6.7 and 6.8, with
probability at least $1-o((\log n)^{-2})$, we have a pair in $Q_{3}$ that
appears in $E_{L}$ as a non-blue edge. This gives a cycle over the vertices of
$P$ whose set of blue edges is a subset of that of $P$.
For the set $Z=(S_{t+1}\cup\\{v_{0}\\})\setminus X$, we see that the set of
exposed pairs is a subset of $Q(X\cup Z)$. Furthermore, since $Y_{0}$ and
$S_{t+1}$ are disjoint and $v_{0}\notin Y$, the sets $Y$ and $Z$ are disjoint
as well. By $t=O(\frac{\log n}{\log\log n})$, the total number of events
involved is $O(\frac{\log n}{\log\log n})$. Since each event holds with
probability at least $1-o((\log n)^{-2})$, by taking the union bound, we
obtain our set and cycle as claimed with probability at least $1-o((\log
n)^{-1})$. ∎
The proofs of Claims 6.7, 6.8, and 6.9 follow.
###### Proof of Claim 6.7.
Let $G^{\prime}$ be the graph induced by the edges that have been revealed
before the final phase (thus all the edges but $E_{L}$). It suffices to
compute the probability of the following events: (i) there exist
$v,w_{1},w_{2}\in V$ such that both $\\{v,w_{1}\\}$ and $\\{v,w_{2}\\}$
appear at least twice among the remaining edges, (ii) there exist
$v,w_{1},w_{2}\in V$ such that $\\{v,w_{1}\\}$ and $\\{v,w_{2}\\}$ were
already in $G^{\prime}$, and both appear at least once among the remaining
edges, and (iii) there exist $v,w_{1},w_{2}\in V$ such that $\\{v,w_{1}\\}$
was already in $G^{\prime}$, appears at least once among the remaining edges,
and $\\{v,w_{2}\\}$ appears at least twice among the remaining edges.
The probability of the first event happening is at most
$n^{3}\cdot{n\log n/6\choose 4}\cdot{4\choose
2}\left(\frac{2}{n^{2}}\right)^{4}=O\left(\frac{(\log n)^{4}}{n}\right).$
Recall that we conditioned on the event that each vertex has degree at most
$C\log n$ in the graph induced by the edges revealed before this phase.
Consequently, the probability of the second event happening is at most
$n\cdot{C\log n\choose 2}\cdot{n\log n/6\choose 2}{2\choose
1}\cdot\left(\frac{2}{n^{2}}\right)^{2}=O\left(\frac{(\log n)^{4}}{n}\right),$
and similarly, the probability of the third event happening is at most
$n^{2}\cdot{C\log n\choose 1}\cdot{n\log n/6\choose 3}{3\choose
1}\cdot\left(\frac{2}{n^{2}}\right)^{3}=O\left(\frac{(\log n)^{4}}{n}\right).$
Therefore we have our conclusion. ∎
###### Proof of Claim 6.8.
Recall that at any time of the process, the number of non-revealed edges in
$E_{L}$ is at least $\frac{n\log n}{12}$. The probability of a single
non-revealed edge of $E_{L}$ being in $R$ is at least $\frac{|R|}{n^{2}}$.
Therefore the expected number of times a pair in $R$ appears among the
non-revealed edges is at least
$\frac{|R|}{n^{2}}\cdot\frac{n\log n}{12}=\frac{|R|\log n}{12n}.$
On the other hand, recall that at any time of the process, the probability
that a non-revealed edge of $E_{L}$ is some fixed pair is at most
$\frac{2}{n^{2}}$, since the number of non-exposed pairs is at least
$\frac{n^{2}}{2}$. Therefore the expected number of times a pair in $R$ appears
among the non-revealed edges is at most
$\frac{2|R|}{n^{2}}\cdot\frac{n\log n}{6}=\frac{|R|\log n}{3n}.$
Since $|R|=\Omega(\frac{n}{\log\log n})$, the conclusion follows from
Chernoff’s inequality and the union bound. ∎
###### Proof of Claim 6.9.
Recall that at any time of the process, the probability that a non-revealed
edge of $E_{L}$ is $(v,w)$ or $(w,v)$ is at most $\frac{4}{n^{2}}$, since the
number of non-exposed pairs is at least $\frac{n^{2}}{2}$.
Let $k$ be a fixed integer satisfying $k\leq\frac{n}{\log n\cdot\log\log n}$.
Let $A_{1}$ be a set of vertices of size $k$ and $A_{2}$ be a set of vertices
of size $\frac{k\log n}{500}$ disjoint from $A_{1}$. The number of choices for
such sets is at most
$n^{k}{n\choose k\log n/500}\leq\left(n^{\frac{500}{\log
n}}\cdot\frac{500en}{k\log n}\right)^{k\log
n/500}\leq\left(\frac{e^{1000}n}{k\log n}\right)^{k\log n/500}.$
The probability of there being more than $\frac{k\log n}{100}$ edges between
$A_{1}$ and $A_{2}$ can be computed by first choosing $\frac{k\log n}{100}$
pairs between $A_{1}$ and $A_{2}$, and then computing the probability that
they all appear among the remaining edges. This is at most
$\displaystyle{k^{2}\log n/500\choose k\log n/100}\cdot\left(\frac{n\log
n}{3}\right)^{k\log n/100}\left(\frac{4}{n^{2}}\right)^{k\log n/100}$
$\displaystyle\leq$ $\displaystyle\left(\frac{ek}{5}\cdot\frac{n\log
n}{3}\cdot\frac{4}{n^{2}}\right)^{k\log n/100}$ $\displaystyle\leq$
$\displaystyle\left(\frac{4ek\log n}{15n}\right)^{k\log
n/100}\leq\left(\frac{k\log n}{n}\right)^{k\log n/100}.$
Thus by taking the union bound, we see that the probability of there being
such sets $A_{1}$ and $A_{2}$ is at most
$\sum_{k=1}^{n/(\log n\cdot\log\log n)}\left(\frac{e^{1000}n}{k\log
n}\right)^{k\log n/500}\cdot\left(\frac{k\log n}{n}\right)^{k\log
n/100}\leq\sum_{k=1}^{n/(\log n\cdot\log\log
n)}\left(\frac{e^{1000}k^{4}\log^{4}n}{n^{4}}\right)^{k\log n/500}.$
Since the summand is maximized at $k=1$ in the range $1\leq k\leq\frac{n}{\log n\cdot\log\log n}$, we see that the right-hand side above is $o((\log n)^{-2})$. ∎
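The displayed bounds above repeatedly use the standard estimate $\binom{n}{k}\leq(en/k)^{k}$. As a quick numerical sanity check (ours, not part of the paper), the inequality can be verified in logarithms, which avoids floating-point overflow for large $n$ and $k$:

```python
import math

def log_binom(n: int, k: int) -> float:
    """Natural log of the binomial coefficient C(n, k)."""
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def log_upper_bound(n: int, k: int) -> float:
    """Natural log of the standard estimate C(n, k) <= (e*n/k)**k."""
    return k * (1.0 + math.log(n / k))

# The inequality holds for all 1 <= k <= n; spot-check a grid of values.
for n in [10, 100, 1000, 10**6]:
    for k in range(1, n + 1, max(1, n // 7)):
        assert log_binom(n, k) <= log_upper_bound(n, k) + 1e-6
```

The slack of $10^{-6}$ only guards against floating-point rounding; the true margin is always at least of order $\ln\sqrt{2\pi k}$.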
We can now find a Hamilton cycle without any blue edges, and conclude the proof that ${\bf OrientPrime}$ succeeds $whp$.
###### Theorem 6.10.
There exists a Hamilton cycle with no blue edges $whp$.
###### Proof.
By Proposition 5.2, we can $whp$ find a 1-factor, which by Claims 6.3 and 6.4
contains at most $\log n$ blue edges that are vertex-disjoint. By Claim 6.2,
it suffices to find a Hamilton cycle after compressing the vertices in $B$
from the 1-factor, since $whp$ there are no blue edges incident to $B$. With
slight abuse of notation, we may assume that the compressed graph contains $n$
vertices, and that we are given at least $\frac{n\log n}{3}$ random edges over
this 1-factor. By Lemma 6.5, by using half of these random edges, we can find
a 1-factor consisting of cycles $C_{0},C_{1},\cdots,C_{t}$ so that
$|C_{0}|=n-o(n)$ and $t=O(\log n)$. Suppose that there are $k$ blue edges that
belong to the 1-factor, for some $k=O(\log n)$. We still have a set of at
least $\frac{n\log n}{6}$ non-revealed edges $E_{L}$ that we are going to use
in Lemma 6.6.
Let $X$ be a set which we will update throughout the process. Consider the
cycle $C_{1}$. If it contains a blue edge, then remove it from the cycle to
obtain a path $P_{1}$. Otherwise, remove an arbitrary edge from $C_{1}$ to
obtain $P_{1}=(w_{0},w_{1},\cdots,w_{a})$. Expose the set of pairs
$\\{(w_{a},x):x\in V(C_{0}),(w_{a},x)\textrm{ is not exposed}\\}$ which is of
size at least $|C_{0}|-|X|-2k=n-o(n)$. By Claims 6.7 and 6.8, with probability
at least $1-o((\log n)^{-2})$, the outcome is typical and there exists at
least one non-blue edge of the form $(w_{a},x)$ for some $x\in V(C_{0})$.
Condition on this event. Note that the set of exposed pairs is a subset of
$Q(\\{w_{a}\\})$, and that this gives a path $P$ over the vertices of $C_{0}$
and $P_{1}$, which starts at $w_{0}$ and ends at some vertex in $C_{1}$ (thus
$w_{a}$ is not an endpoint). Add $w_{a}$ to the set $X$, and let $Y_{1}$ be the
set of vertices incident to some blue edge that belongs to $C_{0}$ or $P_{1}$.
Note that $X$, $Y_{1}$ are disjoint, the set of exposed pairs is a subset of
$Q(X)$, and neither of the two endpoints of $P$ belong to $X\cup Y_{1}$. By
applying Lemma 6.6 with $X$ and $Y=Y_{1}$, with probability at least
$1-o((\log n)^{-1})$, we obtain a cycle that contains all the vertices of
$C_{0}$ and $C_{1}$. Moreover, the pairs we further exposed will be a subset
of $Q(Z_{1})$ for some set $Z_{1}$ of size at most $\frac{n}{\log
n\cdot\log\log n}$. Condition on this event and update $X$ as the union of
itself with $Z_{1}$. Note that by the definition of $Y_{1}$, $X$ does not
intersect any blue edge of the new cycle.
Repeat the above for cycles $C_{2},C_{3},\cdots,C_{t}$. At each step, the
success probability is $1-o((\log n)^{-1})$, and the size of $X$ increases by
at most $1+\frac{n}{\log n\cdot\log\log n}\leq\frac{2n}{\log n\cdot\log\log
n}$. Since $t=O(\log n)$, we can maintain $X$ at size $o(n)$, and thus the process above can indeed be repeated. In the end, by the union bound, with probability $1-o(1)$, we find a Hamilton cycle which has at most $k$ blue
edges. Let $Y$ be the vertices incident to the blue edges that belong to this
Hamilton cycle. Note that $|Y|\leq 2k$ and $X\cap Y=\emptyset$. Remove one of
the blue edges $(y,z)$ from the cycle to obtain a Hamilton path. Apply Lemma
6.6 with the sets $X$ and $Y\setminus\\{y,z\\}$ to obtain another Hamilton
cycle with fewer blue edges. Since the total number of blue edges is at most
$k=O(\log n)$, the blue edges are vertex-disjoint, and the probability of
success is at least $1-o((\log n)^{-1})$, after repeating this argument for
all the blue edges in the original cycle, we obtain a Hamilton cycle with no
blue edge. ∎
## 7 Concluding Remarks
In this paper we considered the following natural question. Consider a random
edge process where at each time $t$ a random edge $(u,v)$ arrives. We are to
give an on-line orientation to each edge at the time of its arrival. At what
time $t^{*}$ can one make the resulting directed graph Hamiltonian? The best
that one can hope for is to have a Hamilton cycle when the last vertex of
degree one disappears, and we prove that this is indeed achievable $whp$.
The main technical difficulty in the proof arose from the existence of bud
vertices. These were degree-two vertices that were adjacent to a saturated
vertex in the auxiliary graph $D_{5-in,5-out}$. Note that for our proof, we
used the method of deferred decisions, not exposing the end-points of certain
edges and leaving them as random variables. Bud vertices precluded us from
doing this naively and forced us to expose the end-point of some of the edges
which we wanted to keep unexposed (it is not difficult to show that without
exposing these endpoints, we cannot guarantee the bud vertices to have degree
at least 2). If one is willing to settle for an asymptotically tight upper
bound on $t^{*}$, then one can choose $t^{*}=(1+\varepsilon)n\log n/2$, and
then for $n=n(\varepsilon)$ sufficiently large there are no bud vertices.
Moreover, since for this range of $t^{*}$, the vertices will have
significantly larger degree, the orienting rule can also be simplified. While
not making the analysis “trivial” (i.e., an immediate consequence of the work
in [14]), this will considerably simplify the proof.
Acknowledgement. We are grateful to Alan Frieze for generously sharing this
problem with us, and we thank Igor Pak for reference [12]. We would also like
to thank the two referees for their valuable comments.
## References
* [1] N. Alon, J. Spencer, The Probabilistic Method, 2nd ed., Wiley, New York, 2000.
* [2] Y. Azar, A. Broder, A. Karlin, E. Upfal, Balanced allocations, SIAM Journal on Computing 29 (1999), 180–200.
* [3] J. Balogh, B. Bollobás, M. Krivelevich, T. Müller and M. Walters, Hamilton cycles in random geometric graphs, manuscript.
* [4] T. Bohman, A. Frieze, Avoiding a giant component, Random Structures and Algorithms 19 (2001), 75–85.
* [5] T. Bohman, A. Frieze, and N. Wormald, Avoidance of a giant component in half the edge set of a random graph, Random Structures and Algorithms 25 (2004), 432–449.
* [6] T. Bohman and D. Kravitz, Creating a giant component, Combinatorics, Probability, Computing 15 (2006), 489–511.
* [7] B. Bollobás, The evolution of sparse graphs, in “Graph theory and combinatorics proceedings, Cambridge Combinatorial Conference in Honour of Paul Erdős, 1984” (B. Bollobás, Ed.), 335–357.
* [8] B. Bollobás, T. Fenner, and A. Frieze, An algorithm for finding Hamilton cycles in random graphs, in “Proceedings, 17th Annual ACM Symposium on Theory of Computing, 1985”, 430–439.
* [9] C. Cooper and A. Frieze, Hamilton cycles in random graphs and directed graphs, Random Structures and Algorithms 16 (2000), 369–401.
* [10] R. Diestel, Graph theory, Volume 173 of Graduate Texts in Mathematics, Springer-Verlag, Berlin, 3rd edition, 2005.
* [11] P. Erdős and A. Rényi, On the evolution of random graphs, Publ. Math. Inst. Hung. Acad. Sci. 5A (1960), 17–61.
* [12] P. Erdős and P. Turán, On some problems of a statistical group-theory. I, Z. Wahrsch. Verw. 4 (1965), 175–186.
* [13] A. Flaxman, D. Gamarnik, and G. Sorkin, Embracing the giant component, Random Structures and Algorithms 27 (2005), 277–289.
* [14] A. Frieze, An algorithm for finding Hamilton cycles in random directed graphs, Journal of Algorithms, 9 (1988), 181–204.
* [15] A. Frieze, Personal communication.
* [16] J. Komlós and E. Szemerédi, Limit distribution for the existence of Hamilton cycles in random graphs, Discrete Math, 43 (1983), 55–63.
* [17] A. Korshunov, Solution of a problem of Erdős and Rényi on Hamilton cycles in non-oriented graphs, Soviet Math. Dokl., 17 (1976), 760–764.
* [18] M. Krivelevich, P. Loh, and B. Sudakov, Avoiding small subgraphs in Achlioptas processes, Random Structures and Algorithms 34 (2009), 165–195.
* [19] M. Krivelevich, E. Lubetzky, and B. Sudakov, Hamiltonicity thresholds in Achlioptas processes, Random Structures and Algorithms 37 (2010), 1–24.
* [20] L. Pósa, Hamiltonian circuits in random graphs, Discrete Math, 14 (1976), 359–364.
* [21] R. Robinson and N. C. Wormald, Almost all regular graphs are hamiltonian, Random Structures and Algorithms, 5 (1994), 363–374.
* [22] A. Sinclair and D. Vilenchik, Delaying Satisfiability for random 2SAT, APPROX-RANDOM (2010), 710–723.
* [23] J. Spencer and N. Wormald, Birth control for giants, Combinatorica 27 (2007), 587–628.
11institutetext: Purple Mountain Observatory, Chinese Academy of Sciences,
Nanjing 210008, China
qhtan@pmo.ac.cn 22institutetext: Graduate School of Chinese Academy of
Sciences, Beijing 100039, China 33institutetext: Center of Astrophysics,
Tianjin Normal University, Tianjin 300384, China
# 12CO, 13CO and C18O observations along the major axes of nearby bright
infrared galaxies
Q.H. Tan 112233 Yu Gao 11 Z.Y. Zhang 1122 X.Y. Xia 33
###### Abstract
We present simultaneous observations of 12CO, 13CO and C18O $J$=1$-$0 emission
in 11 nearby ($cz<$1000 km s-1) bright infrared galaxies. Both 12CO and 13CO
are detected in the centers of all galaxies, except for 13CO in NGC 3031. We
have also detected C18O, CS$J$=2$-$1, and HCO${}^{+}J$=1$-$0 emission in the
nuclear regions of M82 and M51. These are the first systematic extragalactic
detections of 12CO and its isotopes from the PMO 14m telescope.
We have conducted half-beam spacing mapping of M82 over an area of
$4^{\prime}\times 2.5^{\prime}$ and major axis mapping of NGC 3627, NGC 3628,
NGC 4631, and M51. The radial distributions of 12CO and 13CO in NGC 3627, NGC
3628, and M51 can be well fitted by an exponential profile. The 12CO/13CO
intensity ratio, ${\cal R}$, decreases monotonically with galactocentric
radius in all mapped sources. The average ${\cal R}$ in the center and disk of
the galaxies are 9.9$\pm$3.0 and 5.6$\pm$1.9 respectively, much lower than the
peculiar ${\cal R}$($\sim$24) found in the center of M82.
The intensity ratios of 13CO/C18O, 13CO/HCO+ and 13CO/CS (either ours or literature data) show little variation with galactocentric radius, in sharp contrast with the greatly varied ${\cal R}$. This supports the notion that the observed gradient in ${\cal R}$ could be the result of variations in the physical conditions across the disks. The H2 column density derived from C18O
shows that the Galactic standard conversion factor ($X$-factor) overestimates
the amount of the molecular gas in M82 by a factor of $\sim$2.5. These
observations suggest that the $X$-factor in active star-forming regions (i.e.,
nuclear regions) should be lower than that in normal star-forming disks, and
the gradient in ${\cal R}$ can be used to trace the variations of the
$X$-factor.
###### keywords:
galaxies: ISM— ISM: clouds — ISM: molecules — radio lines: ISM
## 1 Introduction
It is well known that molecules constitute a significant fraction of the
interstellar medium (ISM) and stars form from molecular clouds. However, the
most abundant molecule, H2, cannot be detected directly in typical cold (10-40
K) molecular cloud, owing to its lack of a permanent electric dipole moment.
The next most abundant molecule is 12CO, which is found to be an excellent
tracer of H2 due to the low excitation temperature ($\sim$10 K) and critical
density ($\sim$300 cm-3) (Evans 1999). Generally, the lowest transition
($J$=1$-$0) of the rotational lines of 12CO and its optically thin isotopic
variants (e.g., 13CO and C18O) can be used to estimate the column density
($N$(H2)) of molecular clouds in our Galaxy (Frerking et al. 1982; Young &
Scoville 1982; Wilson et al. 2009). However, the column density of molecular gas in galaxies has so far been difficult to obtain directly in this way, owing to beam dilution and the weakness of the CO isotope line emission. Accurate
determination of the H2 column densities from CO observations has therefore
been a longstanding challenge in external galaxies.
In our Galaxy, results from independent methods show a tight correlation
between the integrated 12CO line intensity $I_{\rm{}^{12}CO}$ and $N$(H2), and
the ratios of $N$(H2) to $I_{\rm{}^{12}CO}$ appear to be constant, with values of (1$-$5)$\times 10^{20}$ cm${}^{-2}$ (K km s${}^{-1}$)${}^{-1}$ across the Galaxy
(Bohlin et al. 1978; Dickman 1978; Hunter et al. 1997; Scoville et al. 1987;
Young & Scoville 1991). This constant value is denoted as the 12CO-to-H2
conversion factor, or the $X$-factor ($X\equiv N({\rm
H}_{2})/I_{\rm{}^{12}CO}$). It is beyond the scope of this paper to analyze in
great detail the origin of the empirical ’$X$-factor’ since many studies have
shown that the $X$-factor varies with different physical conditions and
environments, and is influenced by the CO abundance, the gas excitation and
the radiative transfer processes (Scoville & Solomon 1974). For example, the
amounts of molecular gas in the metal-poor Magellanic Clouds and M31 were
claimed to be underestimated by up to a factor of $\sim$10 if a standard
Galactic $X$-factor was adopted (Maloney & Black 1988; Allen & Lequeux 1993;
Israel 1997). Nevertheless, recent imaging of CO clouds reports similar
standard $X$-factor in metal-poor galaxies (Bolatto et al. 2008). Conversely,
the amounts of molecular gas in the ultraluminous infrared galaxies (ULIRGs)
might be overestimated by factors of $\sim$5 (Downes & Solomon 1998) using the
standard Galactic $X$-factor.
The $IRAS$ all-sky survey has revealed a large population of infrared bright
galaxies with bulk energy emission in the far-infrared (Soifer et al. 1987;
Sanders et al. 2003), which is mostly due to dust heating from starbursts (Sanders & Mirabel 1996). Numerous early CO observations of infrared galaxies found that these galaxies are rich in molecular gas (Young et al. 1995), and that there exists a correlation between CO and far-infrared luminosity (Solomon & Sage 1988; Young & Scoville 1991), though the correlation is non-linear (Gao &
Solomon 2004b) since $L_{\rm IR}/L_{\rm CO}$ correlates with $L_{\rm IR}$.
Moreover, recent studies on high-redshift star forming galaxies further
confirmed the validity of CO-IR luminosity correlation (Solomon & Vanden Bout
2005; Daddi et al. 2010).
Although many observational studies have aimed at mapping the molecular gas distribution/kinematics in nearby infrared bright galaxies with single-dish and/or interferometer telescopes (Braine et al. 1993; Young et al. 1995; Sakamoto et al. 1999; Nishiyama et al. 2001; Helfer et al. 2003; Leroy et al. 2009), these surveys are almost all based on observations of CO $J$=1$-$0 or $J$=2$-$1, and only limited systematic studies of CO isotope variants, such as 13CO, have been published so far (Paglione et al. 2001).
Owing to the large isotope abundance ratio [12CO]/[13CO]$\approx$30-70 across the Galaxy (Langer & Penzias 1990), the opacity of 13CO is much lower than that of 12CO, and thus 13CO is believed to be mostly optically thin and to trace the molecular gas column density adequately in most cases. Consequently, the variations in the intensity ratio of 12CO to 13CO as a function of galactocentric radius could give a reliable test of whether the $X$-factor varies systematically within galaxies. Studies of the Galactic $X$-factor have
revealed large variations in its value towards the Galactic nuclear region,
with an order of magnitude increase from the center to the outer disk
(Sodroski et al. 1995; Dahmen et al. 1998). In other galaxies, however,
Rickard & Blitz (1985) claimed that the value of integrated intensity ratio of
12CO to 13CO in the nucleus is on average a factor of two higher than that in
the disk. In fact, both Young & Sanders (1986) and Sage & Isbell (1991) found no clear evidence of a systematic difference in ${\cal R}$ within a galaxy, and suggested that the variations in ${\cal R}$ observed by Rickard & Blitz (1985) are likely caused by pointing errors. More recently, Paglione et al. (2001) carried out mapping surveys of 12CO and 13CO emission towards a large sample of nearby galaxies, finding that the same physical processes (e.g., the molecular gas kinetic temperature) may affect both ${\cal R}$ and the $X$-factor; moreover, the latter is also expected to decrease from the disks to the nuclear regions by factors of 2-5.
However, any study of the physical properties of molecular gas that relies on line intensity ratios can be affected by measurement errors arising from different beam sizes, uncertain beam efficiencies, pointing errors, and calibration differences between telescopes. The recently improved sensitivity and system performance of the Purple Mountain Observatory (PMO) 14m telescope at Delingha, China, allow us for the first time to systematically observe extragalactic sources simultaneously in 12CO, 13CO, and C18O. Simultaneous observations of the three CO isotope variants with the same telescope are better suited to obtaining well-calibrated line intensity ratios than observations carried out with different telescopes, tunings, and epochs.
In this paper, we present the results of simultaneous observations of 12CO,
13CO, and C18O along the major axes in nearby infrared bright galaxies. The
sample selection, observations, and data reduction are described in § 2; the
observed CO spectra, CO radial distributions, and position-velocity diagrams
are presented in § 3. These results together with the radial distributions of
molecular gas, CO isotopic ratio ${\cal R}$, and possible physical mechanisms
that could be responsible for the observed variations in ${\cal R}$ are
discussed in § 4. Finally a summary is presented in § 5. A stability analysis
of the PMO 14m telescope is presented in Appendix A.
## 2 Sample, Observations and Data Reduction
The galaxies in this study were selected from the Revised Bright Galaxy Sample
(RBGS, Sanders et al., 2003). We selected 11 galaxies based on the following
three criteria: (1) $f_{60\mu m}\geq$ 50 Jy or $f_{100\mu m}\geq$ 100 Jy. This
infrared flux cutoff was chosen to ensure both 12CO and 13CO could be detected
with reasonable integration time since it is well known that the infrared
luminosity of galaxies is correlated with the CO luminosity (e.g., Solomon &
Sage, 1988). (2) $cz\leq$ 1000 km s-1. This velocity limit was chosen due to
the limited tuning range of the SIS receiver with the PMO 14m telescope.
(3) 9h $\leq$ R.A. $\leq$ 15h and Decl. $\geq 0^{\circ}$, so that we can take full advantage of the Galactic dead time to observe galaxies in the northern sky. Some general properties of the sample galaxies are summarized in
Table 1.
Our observations were made between 2008 January and 2009 December using the PMO 14m millimeter telescope at Delingha. We used the 3mm SIS receiver
operated in double-sideband mode, which allowed for simultaneous observations
of three CO isotope variants, with 12CO in the upper sideband and 13CO and
C18O in the lower sideband. The Half Power Beam Width (HPBW) is $\sim$ 60′′,
and the main beam efficiency $\eta_{\rm mb}$ = 0.67. Typical system
temperatures during our runs were about 180-250 K. The FFT spectrometer was used as the back end, with a usable bandwidth of 1 GHz and a velocity resolution of
0.16 $\rm{km~{}s}^{-1}$ at 115GHz. Observations were done in position
switching mode and calibrated using the standard chopper wheel method. The
absolute pointing uncertainty was estimated to be $\sim$10′′ from continuum
observations of planets, and the pointing was checked every two hours by
taking spectra toward the standard source IRC+10216 throughout our
observations. Each galaxy was first observed at the center position, and then
along its major axis from the center to the outer disk positions separated by
half-beam size. Besides the CO observations, we also observed the dense
molecular gas tracers HCO+ and CS in the nuclear regions of galaxies.
The data were reduced using CLASS, which is part of the
GILDAS111http://iram.fr/IRAMFR/GILDAS/ software package. All scans were
individually inspected, and those with either strongly distorted baselines or
abnormal rms noise levels were discarded. Line-free channels which exhibited
positive or negative spikes more than 3 $\sigma$ above the rms noise were
blanked and substituted with values interpolated from the adjacent channels,
and then a linear baseline was subtracted from the ‘line-free’ spectrum. After
correcting each spectrum for main beam efficiency $\eta_{\rm mb}$, the
temperature scale of the spectra was converted to the main beam temperature
$T_{\rm mb}$ scale from the $T^{*}_{A}$. The spectra were then co-added,
weighted by the inverse of their rms noises, and the final spectral resolution
was smoothed to $\sim$20 $\rm{km~{}s}^{-1}$.
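The co-adding step described above can be sketched as follows. The weighting by the inverse of the per-scan rms noise follows the description in the text, but the function itself is our illustration, not the actual CLASS/GILDAS reduction script:

```python
import numpy as np

def coadd_scans(scans: np.ndarray, rms: np.ndarray) -> np.ndarray:
    """Average spectra, weighting each scan by the inverse of its rms noise.

    scans : (n_scans, n_channels) array of baseline-subtracted spectra
    rms   : (n_scans,) per-scan rms noise estimates
    """
    w = 1.0 / rms
    return (w[:, None] * scans).sum(axis=0) / w.sum()

# Example: two scans of a flat spectrum; the noisier scan gets less weight.
scans = np.array([[1.0, 1.0], [3.0, 3.0]])
rms = np.array([0.1, 0.3])          # second scan is 3x noisier
avg = coadd_scans(scans, rms)       # -> [1.5, 1.5]
```

(With inverse-variance weights, $w=1/\sigma^{2}$, the noisier scan would be down-weighted even further; the text specifies inverse-rms weights, so that is what the sketch uses.)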
Since this is the first time that the PMO 14m millimeter telescope has systematically observed galaxies other than our own, a relatively long observing time was devoted to each pointing to accumulate adequate integration time. Our observations are thus the first long integrations ever conducted with the PMO 14m telescope, and they offer an excellent opportunity to test the stability of the upgraded system (see Appendix A). The work presented here represents a total of $\sim$ 500 hours of CO observing time before the telescope system maintenance in the summer of 2009 and $\sim$ 200 hours after the upgrade, with $\sim 40\%$ of the data discarded owing to unstable spectral baselines and bad weather conditions.
## 3 Results and analysis
### 3.1 CO Isotopic Spectra
We detected 12CO emission from the centers of all 11 galaxies observed, of
which 10 were also detected in 13CO simultaneously. Four galaxies (NGC 3627,
NGC 3628, NGC 4631 and M51) were observed along the major axes, and 12CO
emission was detected at all 42 off-center positions along the major axes,
while 13CO was detected at 27 out of the 42 positions. For M82, the starburst
galaxy, 12CO emission was detected at all 47 positions that were mapped in the
central $4^{\prime}\times 2.5^{\prime}$ region, while 13CO was tentatively
detected at 15 positions. C18O emission was only detected at 13 points close
to the nuclear regions in M51 and M82.
Here we focus on presenting the 13CO and C18O spectra, compared with the 12CO spectra observed simultaneously at the same positions; 12CO spectra at positions where 13CO was not detected are not shown, since many previous 12CO observations of these galaxies are available in the literature (e.g., Young et al., 1995) and our 12CO observations show similar results and comparable spectra. The spectra of both 12CO and 13CO, as well as Spitzer IRAC 3.6 $\mu$m, 8 $\mu$m and MIPS 24 $\mu$m infrared composite color images showing the observed beam positions of the mapped galaxies, are shown in Fig.1. All three CO isotopic spectra at the positions with C18O detections are shown in Fig. 2.
### 3.2 Derived Parameters
The observational results and derived properties are summarized in Table 2.
The velocity-integrated 12CO (and isotopes) intensity, $I_{\rm CO}\equiv\int
T_{\rm mb}dv$, is obtained by integrating $T_{\rm mb}$ over the velocity range
of the CO line emission feature. Using the standard error formulae in Gao (1996)
(also in Matthews & Gao, 2001), the error in the integrated intensity is
$\Delta I=T_{\rm rms}\ \Delta v_{\rm FWZI}/[f\ (1-\Delta v_{\rm
FWZI}/W)]^{1/2}\ [{\rm K~{}km~{}s}^{-1}],$ (1)
where $T_{\rm rms}$ is the rms noise in the final spectrum in mK, $f=\Delta
v_{\rm FWZI}/\delta_{v}$, where $\Delta v_{\rm FWZI}$ is the linewidth of the
emission feature, $\delta_{v}$ is the channel bin spacing, and $W$ is the
entire velocity coverage of the spectrum in units of kilometers per second.
For non-detections (CO isotopes only), some spectra were further smoothed and found to be marginally detected with a signal-to-noise ratio of $\sim$2.5. Otherwise, 2$\sigma_{I}$ upper limits are given, estimated using the expected line width from the 12CO line detected at the same position.
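Equation (1) is straightforward to implement; the numerical values in the example below are hypothetical and only illustrate the formula (the paper quotes $T_{\rm rms}$ in mK, but we work in K so that $\Delta I$ comes out directly in K km s$^{-1}$):

```python
import math

def delta_I(t_rms: float, dv_fwzi: float, dv_chan: float, w: float) -> float:
    """Uncertainty in the integrated intensity, Eq. (1).

    t_rms   : rms noise of the final spectrum [K]
    dv_fwzi : full width at zero intensity of the line [km/s]
    dv_chan : channel spacing [km/s]
    w       : total velocity coverage of the spectrum [km/s]
    """
    f = dv_fwzi / dv_chan                 # number of channels across the line
    return t_rms * dv_fwzi / math.sqrt(f * (1.0 - dv_fwzi / w))

# Hypothetical example: 5 mK rms, 200 km/s line, 20 km/s channels, 1 GHz-wide
# spectrum covering 1000 km/s:
err = delta_I(0.005, 200.0, 20.0, 1000.0)   # ~0.35 K km/s
```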
The H2 column density and mass surface density for galaxies in our sample are
derived from the empirical relations (Nishiyama et al. 2001)
$N({\rm H}_{2})[{\rm cm}^{-2}]=2\times 10^{20}I_{\rm{}^{12}CO}[{\rm
K~{}km~{}s^{-1}}],$ (2)
and
$\Sigma({\rm H}_{2})[M_{\odot}{\rm pc}^{-2}]=3.2I_{\rm{}^{12}CO}[{\rm
K~{}km~{}s}^{-1}]{\rm cos}i,$ (3)
where cos$i$ corrects the mass to face-on, and a Galactic 12CO-to-H2 conversion factor $X$=2.0$\times 10^{20}$ cm${}^{-2}$ (K km s${}^{-1}$)${}^{-1}$ is adopted (Dame et al. 2001). It can be seen from Table 2 that the column density in M82 is typically an order of magnitude higher than that in the normal spiral galaxies (the rest of the sample).
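Equations (2) and (3) translate directly into code. The sketch below is ours, with the Dame et al. (2001) $X$-factor hard-coded; the numerical example reuses the central M51 12CO intensity quoted in Sect. 3.3.1:

```python
import math

# Galactic conversion factor [cm^-2 (K km/s)^-1], Dame et al. (2001)
X_GAL = 2.0e20

def h2_column_density(i_12co: float) -> float:
    """N(H2) [cm^-2] from the integrated 12CO intensity [K km/s], Eq. (2)."""
    return X_GAL * i_12co

def h2_surface_density(i_12co: float, incl_deg: float) -> float:
    """Face-on Sigma(H2) [Msun pc^-2] from I(12CO) [K km/s], Eq. (3)."""
    return 3.2 * i_12co * math.cos(math.radians(incl_deg))

n_h2 = h2_column_density(37.8)   # central I(12CO) of M51 -> 7.56e21 cm^-2
```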
Assuming that 13CO has the same excitation temperature as 12CO and the
molecular cloud is under LTE conditions, then we can calculate the average
13CO optical depth from
$\tau(^{13}{\rm CO})={\rm ln}[1-\frac{\int T^{*}_{R}(^{13}{\rm CO})dv}{\int
T^{*}_{R}(^{12}{\rm CO})dv}]^{-1},$ (4)
where $T^{*}_{R}$ should be corrected for filling factor, and we can only
estimate an average over all of the unresolved clouds in the beam. Using the
definition in Wilson et al. (2009), the H2 column density can be derived from
the 13CO line intensity as
$N({\rm H_{2}})(^{13}{\rm CO})=[\frac{\tau(^{13}{\rm
CO})}{1-e^{-\tau(^{13}{\rm CO})}}]2.25\times 10^{20}\frac{\int T_{\rm
mb}(^{13}{\rm CO})dv}{1-e^{-5.29/T_{\rm ex}}},$ (5)
where the 13CO abundance [13CO]/[H2] is 8$\times 10^{-5}$/60 (Frerking et al.
1982) and the excitation temperature, $T_{\rm ex}$, is taken to be the kinetic
temperature of the gas, $T_{\rm K}$. Thus, we can estimate $T_{\rm K}$ by
equating the column densities derived from both 12CO and 13CO. Both the
derived $\tau(^{13}{\rm CO})$ and $T_{\rm K}$ are listed in Table 2. Note that the LTE assumption is most likely invalid in the central regions of M82. Therefore, the extremely low optical depth of 13CO ($\sim$0.04) should be treated only as a lower limit, and the resulting kinetic temperature (between 70 and 120 K), about 3-4 times higher than that in normal galaxies, should be treated as an upper limit.
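A minimal implementation of equations (4) and (5), assuming, as in the text, that the measured intensity ratio ${\cal R}$ can stand in for the ratio of the filling-factor-corrected integrated temperatures:

```python
import math

def tau_13co(ratio: float) -> float:
    """Mean 13CO optical depth from R = I(12CO)/I(13CO), Eq. (4)."""
    return -math.log(1.0 - 1.0 / ratio)

def n_h2_from_13co(i_13co: float, ratio: float, t_ex: float) -> float:
    """N(H2) [cm^-2] from I(13CO) [K km/s] under LTE, Eq. (5).

    t_ex is the excitation temperature [K], taken equal to T_K in the text.
    """
    tau = tau_13co(ratio)
    return (tau / (1.0 - math.exp(-tau))) * 2.25e20 * i_13co \
        / (1.0 - math.exp(-5.29 / t_ex))

tau = tau_13co(10.0)   # R = 10 -> tau ~ 0.105
```

For optically thin gas ($\tau\ll 1$) the correction factor $\tau/(1-e^{-\tau})$ approaches unity, so Eq. (5) then reduces to a linear scaling of the 13CO intensity.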
Similarly, we estimated the optical depth of C18O adopting the same method as
equation (4), and derived the H2 column density from C18O intensity using
(Sato et al. 1994)
$N({\rm H_{2}})({\rm C}^{18}{\rm O})=[\frac{\tau({\rm C^{18}O})}{1-e^{-\tau({\rm C^{18}O})}}]1.57\times 10^{21}\frac{\int T_{\rm mb}({\rm C^{18}O})dv}{1-e^{-5.27/T_{\rm ex}}},$ (6)
where the abundance of [C18O]/[H2] is 1.7$\times 10^{-7}$ (Frerking et al.
1982) and $T_{\rm ex}$ is adopted from the value listed in Table 2. The
derived values of $\tau{\rm(C^{18}O)}$ and $N({\rm H_{2}})({\rm C}^{18}{\rm
O})$ for the 13 points of M82 and M51 detected in C18O are listed in Table 3.
The average optical depths of C18O in M82 and M51 are 0.02 and 0.05
respectively, both are about three times lower than that of 13CO. Therefore,
although the optical depth of 13CO is moderate ($\tau(^{13}{\rm
CO})\sim$0.3–0.4) in a few galaxies (e.g., NGC 3627 and NGC 4631; see Table
2), C18O is always optically thin in all cases here.
### 3.3 Distribution of CO Isotopes and Line Ratios
The distributions of 12CO and 13CO velocity-integrated intensities and their
ratio ${\cal R}$ as a function of galactocentric radius are shown in Fig. 3.
Note that none of these profiles have been corrected for inclination. Both the
12CO and 13CO emission line intensities show an exponential decrease with
radius, and the ratio ${\cal R}$ decreases from nucleus to the outer disk.
#### 3.3.1 Radial distributions of 12CO and 13CO
The most obvious trend shown in Fig. 3 is that the observed radial distribution of 13CO follows that of 12CO. The integrated line intensities usually peak in
the center of galaxies and fall monotonically toward larger galactocentric
radii. For the barred spiral galaxies NGC 3627 and NGC 4631, however, both
12CO and 13CO intensities peak at a radius of $\sim$ 1 kpc ($\sim$0.5’) rather than at the nuclei. The same feature was also reported by Paglione et al. (2001) for NGC 4631. Previous high resolution observations of barred
galaxies revealed that the molecular gas was concentrated in the central
regions, with secondary peaks at the bar ends due to the bar potential and the
viscosity of the gas (Regan et al. 1999; Sakamoto et al. 1999).
Similar to the stellar luminosity profiles, the observed 12CO radial
distributions in NGC 3627, NGC 3628 and M51 can also be well fitted by an exponential profile in $R$
$I(R)=I_{\rm 0}\ {\rm exp}\ (-R/R_{\rm 0}),$ (7)
where $R$ is the galactocentric distance, $R_{\rm 0}$ is the disk scale
length, and $I_{\rm 0}$ is the central integrated intensity. The solid curves in Fig. 3 show the least-squares fits to the data, excluding the center point since the nuclear gas could be a distinct component. The fits yield $I_{\rm 0}$ = 32.6 K km s-1 and $R_{\rm 0}$ = 3.5 kpc for the 12CO emission in NGC 3628, and $I_{\rm 0}$ = 37.8 K km s-1 with $R_{\rm 0}$ = 2.7 kpc and $I_{\rm 0}$ = 3.5 K km s-1 with $R_{\rm 0}$ = 3.8 kpc for the 12CO and 13CO emission in M51, respectively. In NGC 3627, the
scale lengths are 2.8 kpc and 3.9 kpc for the 12CO and 13CO emission,
respectively. However, a power law of the form $I=I_{\rm 0}\ R^{\alpha}$ is more suitable than an exponential fit for the 12CO distribution in NGC 4631; the fit gives $\alpha=-1.5$ and $I_{\rm 0}$ = 41.1 K km s-1. For the other three galaxies, a power law also fits the 12CO distribution almost equally well. The exponential 12CO scale lengths of 2-4 kpc correspond to $\sim$ 0.2 $R_{\rm 25}$, which is consistent with the results in Nishiyama et al. (2001). The distributions of 12CO and 13CO
emission along the axis with position angle of 90∘ in M82 are similar to those in the
other four galaxies. The distributions along the major axis are not shown
here, since the observations of M82 were carried out by mapping an area of
$4^{\prime}\times 2.5^{\prime}$.
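The exponential fit of equation (7) can be reproduced with a simple log-linear least-squares fit. The sketch below is ours (not the authors' fitting code) and uses synthetic, noiseless data generated from the M51 12CO parameters quoted above to show that the routine recovers $I_{\rm 0}$ and $R_{\rm 0}$:

```python
import numpy as np

def fit_exponential_profile(r, intensity):
    """Least-squares fit of I(R) = I0 * exp(-R / R0) via log-linearization.

    Returns (I0, R0). The central point should be excluded beforehand,
    as done in the text.
    """
    slope, intercept = np.polyfit(np.asarray(r), np.log(np.asarray(intensity)), 1)
    return float(np.exp(intercept)), float(-1.0 / slope)

# Synthetic check with the M51 12CO values (I0 = 37.8 K km/s, R0 = 2.7 kpc):
r = np.linspace(0.5, 8.0, 12)
i0, r0 = fit_exponential_profile(r, 37.8 * np.exp(-r / 2.7))
```

With real, noisy data one would additionally weight the points by their uncertainties, which the log-linearization above does not do.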
#### 3.3.2 The integrated line intensity ratios
The intensity ratio ${\cal R}$ ranges from 3.3$\pm$0.7 to 12.8$\pm$4.3 for all 36 points detected in both 12CO and 13CO emission in the normal spiral galaxies, with mean values of 9.9$\pm$3.0 and 5.6$\pm$1.9 for the central and disk regions, respectively. However, the average ${\cal R}$ in M82 is about 2.5 times higher than that in the nuclei of normal spiral galaxies. We use Equation (2) of Young & Sanders (1986) to calculate the mean value of ${\cal R}$ at each radius. The most prevalent trend is that ${\cal R}$ drops at larger radii in both the barred and non-barred spiral galaxies that we have mapped along the major axis (Fig. 3). We note that two points ($\sim 2.5^{\prime}$ away from the center) in M51 have significantly higher ratios ($\sim$9). These abnormally high values in the disks tend to occur when the telescope beam falls on the most active star-forming regions along the spiral arms.
The detection of C18O in M82 and M51 also allows us to estimate the intensity
ratio of 13CO to C18O, which ranges between 0.9$\pm$0.7 and 5.3$\pm$2.8 with a
mean value of 2.9$\pm$1.4. The ratios measured in the nucleus and disk of M51
are 3.7 and 2.6, respectively, in good agreement with the results of
Vila-Vilaró (2008), who reported the first detection of C18O emission in the
center of M51 with a 13CO/C18O ratio of 3.6. Similar 13CO/C18O ratios have been
found in some starburst galaxies (e.g., Sage et al., 1991; Aalto et al., 1995).
In addition, our detection of C18O emission in the off-center regions of M51
is, to date, the first such detection reported for this object.
Ongoing observations of the dense gas tracers HCO+ and CS toward the nuclear
regions of NGC 4736, M82, NGC 3628, and M51 have so far yielded detections only
in the latter three galaxies. Here we use these limited dense gas observations,
along with literature data, to help further analyze the CO isotopic results.
The intensity ratios of 13CO to HCO+ and 13CO to CS show little variation
between starburst and normal galaxies, with average values of 1.3$\pm$0.3 and
3.2$\pm$1.4, respectively. The observed integrated intensities and line ratios,
together with the literature data used, are listed in Table 3.
### 3.4 Kinematics of CO
Figure 4 shows the CO position-velocity ($P-V$) diagrams along the major axes
of the galaxies NGC 3627, NGC 3628, NGC 4631, and M51, as well as the $P-V$
diagrams at position angles of 0∘ and 90∘ in M82. It can be seen in Fig. 4
that $P-V$ diagrams along the major axes tend to show a gradual increase of
rotation velocity in the inner regions (rigid rotation) and a nearly constant
velocity in the outer regions (differential rotation).
For NGC 3627 and M51, the $P-V$ diagrams of 13CO are also shown in Fig. 4.
Clearly, 12CO and 13CO share essentially the same kinematics and distribution.
At each observed position, the line-intensity distribution and the velocity
range of the 12CO and 13CO emission are in good agreement, suggesting that the
line ratios derived from our observations at each position are reliable.
Figure 5 shows the CO rotation curve and the variation of line width along the
major axis. Using the mean velocities, together with the inclination, systemic
velocity, and position angle of the major axis listed in Table 1, and assuming
that the observed velocities reflect only the systemic motion of the whole
galaxy plus circular rotation, the rotation velocity $V_{\rm R}$ can be derived
via
$V_{\rm R}=(V_{\rm obs}-V_{\rm sys})/(\sin i\,\cos\theta)\,,$ (8)
where $V_{\rm obs}$ is the observed 12CO velocity, $V_{\rm sys}$ is the
systemic velocity of the galaxy, $i$ is the inclination angle, and $\theta$ is
the azimuth measured in the disk plane. The peak velocity, $V_{\rm max}$, in
the rotation curves of our sample galaxies ranges from 120 to 240 km s-1
($V_{\rm max}$ in spiral galaxies is usually between 150 and 300 km s-1;
Sparke & Gallagher, 2000).
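Eq. (8) can be applied as in the following sketch. The observed velocities are hypothetical stand-ins, while the systemic velocity and inclination are loosely modeled on the NGC 3627 entries in Table 1.

```python
import numpy as np

def rotation_velocity(v_obs, v_sys, incl_deg, theta_deg):
    """Eq. (8): V_R = (V_obs - V_sys) / (sin(i) * cos(theta)).

    v_obs and v_sys are in km/s; incl_deg is the inclination i and
    theta_deg the azimuth measured in the disk plane, both in degrees.
    """
    i = np.radians(incl_deg)
    theta = np.radians(theta_deg)
    return (v_obs - v_sys) / (np.sin(i) * np.cos(theta))

# Hypothetical mean velocities at points along the major axis (theta = 0);
# V_sys = 727 km/s and i = 63 deg are the Table 1 values for NGC 3627.
v_obs = np.array([850.0, 880.0, 900.0])   # km/s, stand-in measurements
v_r = rotation_velocity(v_obs, v_sys=727.0, incl_deg=63.0, theta_deg=0.0)
print(np.round(v_r, 1))  # deprojected rotation velocities in km/s
```

Along the major axis $\cos\theta=1$, so the correction reduces to dividing the velocity offset by $\sin i$; near the minor axis ($\theta\to 90^{\circ}$) the deprojection diverges, which is why rotation curves are extracted along the major axis.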
## 4 Discussion
### 4.1 Radial Distribution of Molecular Gas
In Section 3.3, we showed that the surface density of the molecular gas in the
galaxies of our sample can be fitted well by both exponential and power-law
functions. The $\chi^{2}$ values indicate that the exponential and power-law
fits are at roughly the 85% and 90% confidence levels, respectively. We are
therefore unable to distinguish clearly between an exponential and a power-law
radial distribution over the regions observed here, given the limited sampling
and low resolution. A similar conclusion was reached by both Young & Scoville
(1982) and Nishiyama et al. (2001). However, Scoville et al. (1983) considered
the exponential distribution more suitable than a power law for M51. The region
of M51 observed by Scoville et al. (1983) is more extended ($\sim 8.5^{\prime}$)
than the region ($\sim 7^{\prime}$) observed here, and thus the exponential
form seems better suited to describing the distribution of molecular gas in
the whole galaxy. In NGC 3627 and M51, the scale lengths $R_{\rm 0}$ of the
12CO profiles agree well with the $K$-band scale lengths of 3.5 and 3.2 kpc
(Regan et al. 2001), respectively. These results are in line with the finding,
first noted by Young & Scoville (1982), that the large-scale radial
distribution of the 12CO gas follows the light of the stellar disk. In fact,
some recent high-resolution observations also reveal that the 12CO profiles in
a majority of galaxies follow the two-component (bulge and disk) stellar
brightness distribution quite well (Regan et al. 2001). The 13CO profiles
detected in our observations agree well with those in Paglione et al. (2001).
In NGC 3627 and M51, the 13CO profiles follow exponential distributions similar
to those of 12CO. However, the 13CO detections in the other galaxies have too
few points of sufficiently high signal-to-noise to provide reliable information
on their distributions.
Comparing the radial distributions of the CO integrated intensities in Fig. 3
with the $P-V$ maps of the CO emission in Fig. 4, we find that the intensities
are deficient in the centers of NGC 3627, NGC 4631, and M51. However, the
centrally deficient molecular-ring feature is apparent in NGC 3627 and NGC
4631, but not in M51. Combining this with the variations in line width shown in
Fig. 5, we believe the decrement of CO intensity in the central region of M51
is likely a result of velocity dilution, as the line width is much larger in
the center than in the outer regions of the central disk. By contrast, the
molecular-ring features in NGC 3627 and NGC 4631, which show little variation
in line width within the bar region, are probably due either to orbital
disruption at the inner Lindblad resonance or to central holes where the gas
has been exhausted by star formation in the nuclei. Sakamoto et al. (1999)
modeled $P-V$ diagrams to analyze gas distributions, and found that a central
gas hole is easier to identify in $P-V$ diagrams than in integrated intensity
maps, thanks to the velocity information.
### 4.2 Variations in the Intensity Ratio of 12CO to 13CO
Previous studies of the variations of ${\cal R}$ in external galaxies have
pointed out that some mergers, LIRGs/ULIRGs, and the central regions of
circumnuclear starburst galaxies tend to have higher values of ${\cal R}$ than
normal spiral galaxies (Aalto et al. 1991, 1995; Casoli et al. 1992; Greve et
al. 2009; Sage & Isbell 1991), whose values are similar to those in giant
molecular clouds of our Galaxy. Besides the enhancement of 12C by
nucleosynthesis in Type II SNe of massive stars, the deficiency of 13CO caused
by isotope-selective photodissociation and the different distributions of 12CO
and 13CO have long been debated as alternative explanations for high ${\cal
R}$ values (Aalto et al. 1995; Casoli et al. 1992; Sage et al. 1991; Taniguchi
et al. 1999). However, the single-component non-LTE calculations of CO
excitation in Paglione et al. (2001) suggested that variations in kinetic
temperature, cloud column and volume density, as well as starburst superwinds,
may all contribute to the variation in ${\cal R}$. We here explore the possible
causes of the observed variations in ${\cal R}$.
#### 4.2.1 Possible causes of variations in ${\cal R}$
Our results show that ${\cal R}$ varies not only among the nuclei of various
types of galaxies, but also within galaxies between the nuclear regions and the
outer disks. The Galactic 12C/13C abundance ratio has been found to range from
$\sim$30 at 5 kpc from the Galactic center to $\sim$70 at 12 kpc (Langer &
Penzias 1990). However, this isotopic gradient is opposite to the gradient in
the intensity ratio ${\cal R}$ (Fig. 3). Therefore, the enhancement of 12C in
starburst regions is unlikely to be an appropriate explanation for the measured
high ${\cal R}$.
Some authors argue that selective dissociation of 13CO by the ultraviolet (UV)
field in massive star-forming regions can raise ${\cal R}$, since the rarer
isotopologue is less shielded from photodissociation (van Dishoeck & Black
1988). By the same logic, C18O should be even more readily dissociated by UV
photons than 13CO because of its lower abundance, so a positive correlation
between ${\cal R}$ and the 13CO/C18O intensity ratio would be expected if this
mechanism were at work. However, Fig. 6b shows a very marginal anti-correlation
between ${\cal R}$ and 13CO/C18O, with a correlation coefficient $R$=-0.34.
Contrary to the expectation, our results reveal a weak anti-correlation, so a
deficiency of 13CO caused by isotope-selective photodissociation can be ruled
out.
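The correlation test just described can be sketched with `numpy.corrcoef`. The ratio values below are hypothetical stand-ins, not the measured points behind Fig. 6b.

```python
import numpy as np

# Hypothetical integrated-intensity ratios at a handful of positions,
# illustrating the check for a 12CO/13CO vs 13CO/C18O correlation;
# these numbers are stand-ins, not the values plotted in Fig. 6b.
ratio_12_13 = np.array([21.9, 13.8, 10.4, 6.2, 9.9, 5.6])   # R = 12CO/13CO
ratio_13_18 = np.array([2.1, 2.9, 3.7, 4.4, 2.6, 5.3])      # 13CO/C18O

# Pearson correlation coefficient: a clearly positive value would support
# isotope-selective photodissociation, a negative one argues against it.
r = np.corrcoef(ratio_12_13, ratio_13_18)[0, 1]
print("correlation coefficient R = %.2f" % r)
```

For the real data a weighted fit that also accounts for the C18O upper limits would be more rigorous; the Pearson coefficient is just the simplest version of the test.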
In addition to the high ${\cal R}$ measured in M82, the integrated
$J$=2-1/$J$=1-0 line ratio was found to range between 1.0 and 1.4 (Mao et al.
2000), revealing the existence of highly excited molecular gas. The PDR model
of Mao et al. (2000) suggested that the bulk of the 12CO emission arises from a
warm, diffuse interclump medium, whereas the 13CO emission originates in denser
cores. In warm PDRs, the optical depth of the 12CO emission from the envelope
gas can decrease to a moderate value of $\tau\sim$1, resulting in
$\tau(^{13}{\rm CO})\ll 1$. Moreover, the large concentrations of molecular gas
in the nuclear starburst with high $T_{\rm K}$ can be excited by strong UV
emission from young massive stars, by shocks and turbulence caused by supernova
remnants, and by cosmic rays (Pineda et al. 2008; Zhang et al. 2010). The most
recent $Herschel$ observations of M82 also suggested that turbulence from
stellar winds and supernovae may dominate the heating of the warm molecular gas
in the central region (Panuzzo et al. 2010). This suggests that nonthermal
motions produced by the stellar superwinds can broaden the 12CO lines and thus
enhance the 12CO emission, as more photons from deeper in the clouds are
allowed to escape. Furthermore, the significantly high values of ${\cal R}$
observed in the spiral arms of M51 also demonstrate that 12CO emission can be
enhanced in active star-forming regions compared with inter-arm regions.
#### 4.2.2 $X$-factor and dense gas ratio in extreme starbursts
Both theoretical and observational investigations have revealed that 12CO
emission is influenced by the intrinsic properties of molecular clouds (e.g.,
Pineda et al., 2008; Goodman et al., 2009; Shetty et al., 2011). In the
magneto-hydrodynamic models of Shetty et al. (2011), the 12CO integrated
intensity is found not to be an accurate measure of the amount of molecular
gas, and even 13CO may not be an ideal tracer in some regions. In this case,
the much lower opacity of C18O gives far more reliable constraints on the H2
column density than the optically thick 12CO. Comparing the H2 column densities
derived from 12CO with those from C18O listed in Tables 2 and 3, we find that
the amount of molecular gas estimated with the standard Galactic $X$-factor is
consistent with that derived from C18O in M51, whereas it is overestimated in
M82 by a factor of 2.5, equal to the ratio of ${\cal R}$ between M82 and M51.
Consequently, our results confirm that the $X$-factor adopted in active
starburst regions should be lower than that in normal star-forming regions, and
that the gradient in ${\cal R}$ can trace the variations in the $X$-factor.
However, surveys of C18O in a larger sample are required to confirm the
relation between the variations in ${\cal R}$ and the $X$-factor found in this
study.
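The $X$-factor bookkeeping above can be sketched as follows. The linear scaling $N({\rm H}_2)=X\,I(^{12}{\rm CO})$ with the standard Galactic $X=2\times 10^{20}$ cm-2 (K km s-1)-1 reproduces the M82 central entry of Table 2 (449.2 $\times 10^{20}$ cm-2); the factor-2.5 correction is the one argued for in the text, and the sketch is illustrative rather than the paper's full derivation.

```python
# Sketch of the X-factor scaling: the H2 column density derived from 12CO
# is linear in the adopted X-factor, so a gradient in the 12CO/13CO ratio
# maps directly onto a correction of X. Values are from Table 2; the
# correction factor is the 2.5 argued for in the text.
X_GALACTIC = 2.0e20          # cm^-2 (K km/s)^-1, standard Galactic value

def n_h2_from_co(i_co, x_factor=X_GALACTIC):
    """N(H2) = X * I(12CO), with I in K km/s."""
    return x_factor * i_co

i_co_m82 = 224.61            # K km/s, central 12CO intensity of M82 (Table 2)
n_std = n_h2_from_co(i_co_m82)

# If R in M82 is ~2.5x the M51 value, the same scaling implies the Galactic
# X-factor overestimates N(H2) in M82 by that factor:
ratio_R = 2.5
n_corr = n_std / ratio_R
print("standard X:  N(H2) = %.1e cm^-2" % n_std)
print("corrected:   N(H2) = %.1e cm^-2" % n_corr)
```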
The average ratio of 13CO to C18O ($\sim$2.9$\pm$1.4) derived from our
observations of M51 and M82 indicates that a portion of the 13CO emission has a
moderate optical depth, since the 13CO/C18O ratio should be $\sim$7 if both
lines are optically thin (Penzias 1981). This result is in line with the
two-type cloud model of Aalto et al. (1995), in which a large fraction of the
13CO emission originates from a denser gas component. Previous surveys of dense
molecular gas have provided support for the presence of such dense gas (e.g.,
Sage et al., 1990; Nguyen et al., 1992; Gao & Solomon, 2004a). In addition, our
detections of HCO+ and CS in M82 and M51 are consistent with those of Sage et
al. (1990) and Naylor et al. (2010). The ratios of 13CO to HCO+ and 13CO to CS
in the (U)LIRGs NGC 3256, NGC 6240, and Arp 220 (Casoli et al. 1992; Greve et
al. 2009) are found to be lower than those in the nuclear regions of normal
spirals and M82 (Fig. 6), which probably indicates that the dense gas fraction
is higher in these (U)LIRGs, since 13CO can be considered a relatively reliable
tracer of the total molecular gas owing to its low abundance. This result
agrees well with the conclusion of Gao & Solomon (2004b) that galaxies with
high star formation efficiency tend to have higher dense gas fractions.
Moreover, the ratios ${\cal R}$ measured in NGC 3256, NGC 6240, and Arp 220 are
found to be much higher than that in M82 (Fig. 6), likely implying that the
warm diffuse gas from which the bulk of the 12CO emission arises is further
enhanced in these extreme starbursts.
Summarizing the above analysis, the systematic gradient in ${\cal R}$ can be
explained by variations in the physical conditions of the molecular gas. The
standard Galactic $X$-factor overestimates the amount of molecular gas in M82
by a factor of 2.5, and the variations in the $X$-factor are well traced by the
gradient in ${\cal R}$. Nevertheless, additional observations of both 12CO and
13CO lines at $J\geq 2$, as well as more C18O lines, are required to better
constrain the physical conditions of the molecular gas in external galaxies.
## 5 Summary
We simultaneously observed the 12CO, 13CO, and C18O emission lines in 11 nearby
infrared-brightest galaxies, of which four (NGC 3627, NGC 3628, NGC 4631, and
M51) were mapped with half-beam spacing along the major axes, while M82 was
fully mapped over an area of $4^{\prime}\times 2.5^{\prime}$. These are the
first systematic extragalactic observations with the PMO 14m telescope, and the
main results are summarized as follows:
1. We detected 12CO emission towards 99 of the positions observed, with 13CO
seen towards 51 of these. C18O was detected at 13 positions close to the
nuclear regions of M51 and M82; the detections at the off-center positions in
M51 are the first reported for that galaxy.
2. In the four galaxies mapped along their major axes, the 13CO line intensity
decreases from the center to the outer disk, similarly to 12CO. In NGC 3627,
NGC 3628, and M51, the radial distributions of both 12CO and 13CO can be well
fitted by an exponential function, whereas the 12CO distribution in NGC 4631 is
better fitted by a power law. The scale length of the 12CO emission is about
0.2 $R_{25}$, with a mean value of $\sim$3 kpc. Moreover, the 12CO scale
lengths in NGC 3627 and M51 are in good agreement with the optical scale
lengths.
3. The peak velocities of the 12CO rotation curves range from 120 to 240 km
s-1, and the 12CO line widths tend to decrease with radius from the center to
the outer disk in all mapped galaxies. At all observed positions, the line
intensities and profiles of 12CO and 13CO are in good agreement, as expected
for simultaneous 12CO and 13CO observations. Thus, reliable line intensity
ratios ${\cal R}$ can be obtained.
4. A decreasing trend of ${\cal R}$ with radius from the center to the outer
disk is found in the mapped galaxies. ${\cal R}$ varies from 3.3$\pm$0.7 to
24.8$\pm$2.5 among the positions with both 12CO and 13CO detected. The average
values of ${\cal R}$ are 9.9$\pm$3.0 and 5.6$\pm$1.9 in the center and disk
regions of the normal spiral galaxies, respectively.
5. The high ${\cal R}$ measured in M82 is likely caused by enhanced 12CO
emission from deeper in the clouds, with broad 12CO lines produced by stellar
winds and supernovae. The low 13CO/C18O values ($\sim$2.8$\pm$1.2) found in M82
support the suggestion that a considerable fraction of the 13CO emission
originates in a denser gas component. Comparing the 13CO/HCO+ and 13CO/CS
ratios in normal galaxies with those in (U)LIRGs, the lower values found in
(U)LIRGs agree with the notion that galaxies with high SFEs tend to have higher
dense gas fractions.
6. Compared with the H2 column density derived from C18O, the standard Galactic
$X$-factor is found to overestimate the amount of molecular gas in M82 by a
factor of $\sim$2.5. This confirms the assertion that a lower $X$-factor should
be adopted in active starburst regions than in normal star-forming disks;
moreover, the gradient in ${\cal R}$ can be used reliably to trace variations
of the $X$-factor.
###### Acknowledgements.
We thank the staff of Qinghai station for their continuous help. We are
grateful to Thomas Greve and the anonymous referee for helpful comments. This work
was funded by NSF of China (Distinguished Young Scholars #10425313, grants
#10833006 and #10621303) and Chinese Academy of Sciences’ Hundred Talent
Program.
## References
* Aalto et al. (1991) Aalto, S., Black, J. H., Johansson, L. E. B., & Booth, R. S. 1991, A&A, 249, 323
* Aalto et al. (1995) Aalto, S., Booth, R. S., Black, J. H., & Johansson, L. E. B. 1995, A&A, 300, 369
* Allen & Lequeux (1993) Allen, R. J., & Lequeux, J. 1993, ApJ, 410, L15
* Bohlin et al. (1978) Bohlin, R. C., Savage, B. D., & Drake, J. F. 1978, ApJ, 224, 132
* Bolatto et al. (2008) Bolatto, A. D., Leroy, A. K., Rosolowsky, E., et al. 2008, ApJ, 686, 948
* Braine et al. (1993) Braine, J., Combes, F., Casoli, F., et al. 1993, A&AS, 97, 887
* Casoli et al. (1992) Casoli, F., Dupraz, C., & Combes, F. 1992, A&A, 264, 49
* Daddi et al. (2010) Daddi, E., Elbaz, D., Walter, F., et al. 2010, ApJ, 714, 118
* Dame et al. (2001) Dame, T. M., Hartmann, D., & Thaddeus, P. 2001, ApJ, 547, 792
* Dahmen et al. (1998) Dahmen, G., Hüttemeister, S., Wilson, T. L., & Mauersberger, R. 1998, A&A, 331, 959
* Dickman (1978) Dickman, R. L. 1978, ApJS, 37, 407
* Downes & Solomon (1998) Downes, D., & Solomon, P. M. 1998, ApJ, 507, 615
* Evans (1999) Evans, N. J., II 1999, ARA&A, 37, 311
* Frerking et al. (1982) Frerking, M. A., Langer, W. D.,& Wilson, R. W. 1982, ApJ, 262, 590
* Gao (1996) Gao, Y. 1996, PhD. thesis, State Univ. New York, Stony Brook
* Gao & Solomon (2004a) Gao, Y., & Solomon, P. M. 2004a, ApJS, 152, 63
* Gao & Solomon (2004b) Gao, Y., & Solomon, P. M. 2004b, ApJ, 606, 271
* Goodman et al. (2009) Goodman, A. A, Pineda, J. E., & Schnee, S. L. 2009, ApJ, 692, 91
* Greve et al. (2009) Greve, T. R., Papadopoulos, P. P. P., Gao, Y., & Radford, S. J. E. 2009, ApJ, 692, 1432
* Helfer et al. (2003) Helfer, T. T, Thornley, M. D., Regan, M. W., et al. 2003, ApJS, 145, 259
* Henkel et al. (1993) Henkel, C., Mauersberger, R., Wiklind, T., et al. 1993, A&A, 268, L17
* Hunter et al. (1997) Hunter, S. D., Bertsch, D. L., Catelli, J. R., et al. 1997, ApJ, 481, 205
* Israel (1997) Israel, F. P. 1997, A&A, 328, 471
* Langer & Penzias (1990) Langer, W. D., & Penzias, A.A. 1990, ApJ, 357, 477
* Leroy et al. (2009) Leroy, A. K., Walter, F., Bigiel, F., et al. 2009, AJ, 137, 4670
* Maloney & Black (1988) Maloney, P., & Black, J. H. 1988, ApJ, 325, 389
* Mao et al. (2000) Mao, R. Q., Henkel, C., Schulz, A., et al. 2000, A&A, 358, 433
* Matthews & Gao (2001) Matthews, L. D., & Gao, Y. 2001, ApJ, 549, L191
* Naylor et al. (2010) Naylor, B. J., Bradford, C. M., Aguirre, J. E., et al. 2010, ApJ, 722, 668
* Nishiyama et al. (2001) Nishiyama, K., Nakai, N., & Kuno, N. 2001, PASJ, 53, 757
* Nguyen et al. (1992) Nguyen, Q. -Rieu, Jackson, J. M., Henkel, C., et al. 1992, ApJ, 399, 521
* Paglione et al. (2001) Paglione, T. A. D., Wall, W. F., Young, J. S., et al. 2001, ApJS, 135, 183
* Panuzzo et al. (2010) Panuzzo, P., Rangwala, N., Rykala, A., et al. 2010, A&A, 518, L37
* Penzias (1981) Penzias, A. A. 1981, ApJ, 249, 518
* Pineda et al. (2008) Pineda, J. E., Caselli, P., & Goodman, A. A. 2008, ApJ, 679, 481
* Regan et al. (1999) Regan, M. W., Sheth, K., & Vogel, S. N. 1999, ApJ, 526, 97
* Regan et al. (2001) Regan, M. W., Thornley, M. D., Tamara, T. H., et al. 2001, ApJ, 561, 218
* Rickard & Blitz (1985) Rickard, L. J., & Blitz, L. 1985, ApJ, 292, L57
* Sage et al. (1990) Sage, L. J., Shore, S. N., & Solomon, P. M. 1990, ApJ, 351, 422
* Sage & Isbell (1991) Sage, L. J., & Isbell, D. W. 1991, A&A, 247, 320
* Sage et al. (1991) Sage, L. J., Mauersberger, R., & Henkel, C. 1991, A&A, 249, 31
* Sakamoto et al. (1999) Sakamoto, K., Okumura, S. K., Ishizuki, S., & Scoville, N. Z. 1999, ApJS, 124, 403
* Sanders & Mirabel (1996) Sanders, D. B., & Mirabel, I. F. 1996, ARA&A, 34, 749
* Sanders et al. (2003) Sanders, D. B., Mazzarella, J. M., Kim, D.-C., et al. 2003, AJ, 126, 1607
* Sato et al. (1994) Sato, F., Mizuno, A., Nagahama, T., et al. 1994, ApJ, 435, 279
* Scoville & Solomon (1974) Scoville, N. Z., & Solomon, P. M. 1974, ApJ, 187, L67
* Scoville et al. (1983) Scoville, N., & Young, J. S. 1983, ApJ, 265, 148
* Scoville et al. (1987) Scoville, N., Yun, M. S., Clemens, D. P., et al. 1987, ApJS, 63, 821
* Shetty et al. (2011) Shetty, R., Glover, S. C., Dullemond, C., et al. 2011, MNRAS
* Sodroski et al. (1995) Sodroski, T. J., Odegard, N., Dwek, E., et al. 1995, ApJ, 452, 262
* Soifer et al. (1987) Soifer, B. T., Houck, J. R., & Neugebauer, G. 1987, ARA&A, 25, 187
* Solomon & Sage (1988) Solomon, P. M., & Sage, L. J. 1988, ApJ, 334, 613
* Solomon & Vanden Bout (2005) Solomon, P. M., & Vanden Bout, P. A. 2005, ARA&A, 43, 677
* Sparke & Gallagher (2000) Sparke, L. S., & Gallagher, J. S., III 2000, Galaxies in the Universe: An Introduction (1st ed.; Cambridge: University Press)
* Taniguchi et al. (1999) Taniguchi, Y., Ohyama, Y., & Sanders, D. B. 1999, ApJ, 522, 214
* van Dishoeck & Black (1988) van Dishoeck, E. F., & Black, J. H. 1988, ApJ, 334, 771
* Vila-Vilaró (2008) Vila-Vilaró, B. 2008, PASJ, 60, 1231
* Wilson et al. (2009) Wilson, T. L., Rohlfs, K., & Hüttemeister, S., 2009, Tools of Radio Astronomy (5th ed.; Berlin:Springer)
* Young & Scoville (1982) Young, J. S., & Scoville, N. Z. 1982, ApJ, 258, 467
* Young & Sanders (1986) Young, J. S., & Sanders, D. B. 1986, ApJ, 302, 680
* Young & Scoville (1991) Young, J. S., & Scoville, N. Z. 1991, ARA&A, 29, 581
* Young et al. (1995) Young, J. S., Xie, S., Tacconi, L., et al. 1995, ApJS, 98, 219
* Zhang et al. (2010) Zhang, Z. Y., Gao, Y., & Wang, J. Z. 2010, ScChG, 53, 1357
Figure 1a: Spectra of 12CO ($thick\ lines$) and 13CO ($thin\ lines$) from the
central region of M82 with 13CO tentatively detected. All spectra are on the
$T_{\rm mb}$ scale and binned to a velocity resolution of $\sim$20 km s-1 (for
some weak 13CO emission positions, the spectra are further smoothed to
$\sim$40 km s-1 for display). 12CO spectra are divided by 20 for comparison
purposes. The offset from the center position is indicated in each box. A
linear baseline has been subtracted using the line-free portions of each
spectrum. M82 was mapped in the $4^{\prime}\times 2.5^{\prime}$ central
region. The top panel shows the observed positions (crosses) and 12CO contours
(starting at 10 K km s-1 and increasing in steps of 40 K km s-1) overlaid on an
infrared image taken by Spitzer (8.0 $\mu$m ($red$), 5.8 $\mu$m ($green$), 3.6
$\mu$m ($blue$)).
Figure 1b: Same as Figure 1a, but for NGC 3627. The spectra were measured
along the major axis ($PA$=176∘) of the galactic disk. The left panel shows
the half-beam-spacing observations overlaid on a Spitzer infrared image
(24.0 $\mu$m ($red$), 8.0 $\mu$m ($green$), 3.6 $\mu$m ($blue$)). The size of
the HPBW ($\sim$60′′) is represented by a circle.
Figure 1c: Same as Figure 1b, but for NGC 3628. The spectra were measured
along the major axis ($PA$=103∘) of the galactic disk.
Figure 1d: Same as Figure 1b, but for NGC 4631. The spectra were measured
along the major axis ($PA$=88∘) of the galactic disk.
Figure 1e: Same as Figure 1b, but for M51. The spectra were measured along the
major axis ($PA$=0∘) of the galactic disk.
Figure 2: Spectra of 12CO, 13CO, and C18O obtained at the same positions in
the galaxies M82 and M51. The 12CO spectra are divided by 30 (M82) and 15
(M51) for display, while the 13CO spectra are divided by 3. The M51 disk
spectra represent the average emission over the disk region excluding the
center (0,0).
Figure 3: Radial distributions of the 12CO and 13CO integrated intensities and
their ratios at each position along the major axes (for M82, the distribution
along the axis with position angle 90∘ is shown). Error bars are 1 $\sigma$
statistical uncertainties based on the measured rms noise in each spectrum.
Upper limits (2$\sigma$) are denoted with downward arrows for positions where
13CO was not detected; the corresponding lower limits of the line ratio
${\cal R}$ are denoted with upward arrows. The solid lines for NGC 3627, NGC
3628, and M51 represent the exponential fits to the radial distributions of
the mean 12CO and 13CO intensities, while that for NGC 4631 represents the
power-law fit to the 12CO emission.
Figure 4: Position-velocity diagrams of the 12CO and 13CO (the latter only for
NGC 3627 and M51) emission along the major axes of the galaxies. For M82, the
$P-V$ diagrams at position angles of 0∘ and 90∘ are shown. All spectra were
smoothed to a velocity resolution of $\sim$20 km s-1. The dashed contour in
each 13CO panel represents the lowest 12CO contour, for comparison.
Figure 5: Rotation velocity derived from the 12CO emission line using Eq. (8),
and line width measured at each position along the major axis. The symbols are
the same as in Fig. 3.
Figure 6: The relationships for the normal spiral and starburst galaxies
between the 12CO/13CO integrated intensity ratio and a) 12CO/C18O, b)
13CO/C18O, c) 13CO/HCO+, and d) 13CO/CS. The black symbols represent the
emission of 12CO, 13CO, and C18O in M82 and M51 (including some new detections
in IC 342 and NGC 6949), and of HCO+ and CS in M82, NGC 3628, and M51. Lower
limits (2$\sigma$) of the ratios are denoted with right-pointing arrows for
positions where C18O was not detected. The triangles represent data taken from
the literature (see Table 4): the red triangles are NGC 3256, NGC 6240, and
Arp 220; the blue triangles are NGC 253, NGC 1808, NGC 2146, NGC 4826, and
Circinus. Note that all the ratios have been corrected for the different beam
sizes.
Table 1: Source List and Galaxy Properties
Source | Alias | R.A. | Decl. | $V$ | $i$ | P.A. | Type | $D_{25}$ | $D$ | $d$
---|---|---|---|---|---|---|---|---|---|---
| | (J2000.0) | (J2000.0) | (km s-1) | (deg) | (deg) | | (arcmin) | (Mpc) | (kpc)
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11)
NGC 2903 | UGC 5079 | 09 32 10.5 | +21 30 05.0 | 556 | 61 | 17 | SAB(rs)bc, H II | 12.6$\times$6.0 | 7.96 | 1.9
NGC 3031 | M 81 | 09 55 33.6 | +69 03 56.0 | $-$34 | 58 | 157 | SA(s)ab, LINER/Sy1.8 | 26.9$\times$14.1 | 1.00 | 0.3
NGC 3034 | M 82 | 09 55 49.6 | +69 40 41.0 | 203 | 80a | 65a | I0, Sbrst/H II | 11.2$\times$4.3 | 2.90 | 1.3
NGC 3521 | UGC 6150 | 11 05 49.2 | $-$00 02 15.0 | 805 | 58 | 164 | SAB(rs)bc, H II/LINER | 11.0$\times$5.1 | 11.47 | 2.6
NGC 3627 | M 66 | 11 20 15.3 | +12 59 32.0 | 727 | 63 | 176 | SAB(s)b, LINER/Sy2 | 9.1$\times$4.2 | 10.41 | 2.6
NGC 3628 | UGC 6350 | 11 20 16.2 | +13 35 22.0 | 843 | 89a | 103a | SAb pec sp, H II/LINER | 14.8$\times$3.0 | 12.07 | 3.1
NGC 4631 | UGC 7865 | 12 42 07.1 | +32 32 33.0 | 606 | 85a | 88a | SB(s)d | 15.5$\times$2.7 | 8.67 | 2.5
NGC 4736 | M 94 | 12 50 52.9 | +41 07 15.0 | 308 | 35 | 100 | (R)SA(r)ab, Sy2/LINER | 11.2$\times$9.1 | 4.40 | 1.4
NGC 5055 | M 63 | 13 15 49.5 | +42 01 39.0 | 504 | 55 | 105 | SA(rs)bc, H II/LINER | 12.6$\times$7.2 | 7.21 | 2.3
NGC 5194 | M 51a | 13 29 53.5 | +47 11 42.0 | 463 | 20a | 0 | SA(s)bc pec, H II/Sy2.5 | 11.2$\times$6.9 | 6.62 | 2.2
NGC 5457 | M 101 | 14 03 09.0 | +54 21 24.0 | 241 | 27 | 40 | SAB(rs)cd | 28.8$\times$26.9 | 3.45 | 1.4
aFrom Young et al. (1995). Cols. (1) and (2): Galaxy name. Cols. (3) and (4):
Adopted tracking center. Units of right ascension are hours, minutes, and
seconds, and units of declination are degrees, arcminutes, and arcseconds.
Col. (5): Heliocentric velocity drawn from the literature and NED. Cols. (6)
and (7): Inclination ($i$) and position angle (P.A.) from Helfer et al. (2003),
except where noted. Col. (8): Morphological type and nuclear classification
from the NED database. Col. (9): Major- and minor-axis diameters from the NED
database. Col. (10): Luminosity distance, calculated assuming $H_{0}$ = 70 km
s-1 Mpc-1. Col. (11): Linear scale of 60′′ at distance $D$, taken from the NED
database.
Table 2: Observed and Derived Properties of 12CO and 13CO
| | | 12CO | | 13CO | | | |
---|---|---|---|---|---|---|---|---|---
Source | $\Delta\alpha^{a}$ | $\Delta\delta^{a}$ | $I_{\rm{}^{12}CO}\pm\sigma_{I}^{b}$ | $V_{\rm mean}^{c}$ | $\Delta V^{c}$ | | $I_{\rm{}^{13}CO}\pm\sigma_{I}^{b}$ | $V_{\rm mean}^{c}$ | $\Delta V^{c}$ | ${\cal R}^{d}$ | $\tau$(${}^{13}{\rm CO})^{e}$ | $N({\rm H}_{2})^{f}$ | $T_{k}^{g}$
| (arcmin) | (arcmin) | (K km s-1) | (km s-1) | (km s-1) | | (K km s-1) | (km s-1) | (km s-1) | | | $(10^{20}\ {\rm cm}^{-2})$ | (K)
NGC 2903 | 0.00 | 0.00 | 24.89$\pm$0.92 | 548 | 155 | | 1.95$\pm$0.58 | 591 | 151 | 12.8$\pm$4.3 | 0.08 | 49.8 | 55
NGC 3031 | 0.00 | 0.00 | 0.66$\pm$0.28 | -38 | 127 | | $<0.68$ | … | … | $>$1.0 | … | 1.3 | …
M82 | 0.00 | 0.00 | 224.61$\pm$1.51 | 159 | 138 | | 10.25$\pm$0.82 | 126 | 123 | 21.9$\pm$1.9 | 0.05 | 449.2 | 98
| 0.00 | 0.50 | 163.0$\pm$1.62 | 180 | 162 | | 11.80$\pm$1.06 | 153 | 167 | 13.8$\pm$1.4 | 0.08 | 328.8 | 66
| 0.00 | -0.50 | 113.4$\pm$1.12 | 150 | 115 | | 5.59$\pm$0.90 | 137 | 150 | 20.3$\pm$3.5 | 0.05 | 230.0 | 110
| 0.50 | 0.00 | 344.77$\pm$2.04 | 214 | 184 | | 13.91$\pm$1.31 | 230 | 194 | 24.8$\pm$2.5 | 0.04 | 689.5 | 111
| 0.50 | 0.50 | 278.07$\pm$1.53 | 233 | 183 | | 11.37$\pm$0.97 | 238 | 180 | 24.5$\pm$2.2 | 0.04 | 556.1 | 110
| 0.50 | -0.50 | 134.00$\pm$1.50 | 169 | 125 | | 5.61$\pm$1.10 | 162 | 87 | 23.9$\pm$4.9 | 0.04 | 269.8 | 110
| 1.00 | 0.00 | 228.83$\pm$1.66 | 242 | 180 | | 11.29$\pm$1.25 | 265 | 142 | 20.3$\pm$2.4 | 0.05 | 457.7 | 90
| 1.00 | 0.50 | 218.00$\pm$1.69 | 268 | 162 | | 6.38$\pm$0.83 | 313 | 110 | 34.2$\pm$4.7 | 0.03 | 435.8 | 120
| 1.50 | 0.50 | 92.34$\pm$2.04 | 255 | 164 | | 6.99$\pm$1.01 | 264 | 215 | 13.2$\pm$2.2 | 0.08 | 184.7 | 57
| 1.50 | -0.50 | 48.95$\pm$1.74 | 173 | 94 | | 3.64$\pm$1.36 | 158 | 82 | 13.4$\pm$5.5 | 0.08 | 97.9 | 58
| 2.50 | 0.00 | 48.64$\pm$2.44 | 219 | 147 | | 1.94$\pm$0.96 | 158 | 46 | 25.1$\pm$13.7 | 0.04 | 97.3 | 113
| -0.50 | 0.00 | 75.91$\pm$1.88 | 144 | 112 | | 6.78$\pm$2.83 | 114 | 100 | 11.2$\pm$5.0 | 0.09 | 151.8 | 48
| -0.50 | -0.50 | 45.75$\pm$1.66 | 140 | 110 | | 5.75$\pm$1.33 | 226 | 250 | 7.9$\pm$2.1 | 0.13 | 91.5 | 32
| -1.00 | 0.00 | 47.85$\pm$1.96 | 135 | 104 | | 2.52$\pm$1.49 | 141 | 44 | 18.9$\pm$12.0 | 0.05 | 95.7 | 84
| -1.50 | 0.00 | 47.40$\pm$3.86 | 162 | 147 | | 5.81$\pm$2.16 | 183 | 111 | 8.2$\pm$3.7 | 0.13 | 94.8 | 33
NGC 3521 | 0.00 | 0.00 | 22.00$\pm$0.37 | 770 | 231 | | 2.05$\pm$0.66 | 770 | 210 | 10.7$\pm$3.7 | 0.10 | 44.0 | 45
NGC 3627 | 0.00 | 0.00 | 17.24$\pm$0.87 | 769 | 179 | | 1.66$\pm$0.49 | 866 | 200 | 10.4$\pm$3.6 | 0.10 | 34.5 | 44
| 0.00 | 0.50 | 25.26$\pm$0.83 | 653 | 157 | | 4.06$\pm$0.98 | 625 | 163 | 6.2$\pm$1.7 | 0.18 | 50.5 | 24
| 0.00 | 1.00 | 13.88$\pm$0.38 | 633 | 177 | | 2.28$\pm$0.51 | 597 | 138 | 6.1$\pm$1.5 | 0.18 | 27.8 | 23
| 0.00 | 1.50 | 8.55$\pm$0.52 | 554 | 58 | | 1.46$\pm$0.48 | 580 | 75 | 5.9$\pm$2.3 | 0.19 | 17.1 | 22
| 0.00 | 2.00 | 6.90$\pm$0.50 | 583 | 75 | | 1.79$\pm$0.35 | 527 | 74 | 3.9$\pm$1.0 | 0.30 | 13.8 | 13
| 0.00 | -0.50 | 22.25$\pm$1.96 | 827 | 115 | | 2.87$\pm$0.69 | 829 | 145 | 7.7$\pm$2.5 | 0.14 | 44.5 | 31
| 0.00 | -1.00 | 8.96$\pm$0.53 | 851 | 185 | | 1.78$\pm$0.35 | 885 | 91 | 5.0$\pm$1.3 | 0.22 | 17.9 | 18
| 0.00 | -1.50 | 6.82$\pm$0.31 | 872 | 56 | | 1.96$\pm$0.41 | 877 | 90 | 3.5$\pm$0.9 | 0.34 | 13.6 | 11
| 0.00 | -2.00 | 5.90$\pm$0.42 | 886 | 66 | | 1.02$\pm$0.40 | 902 | 43 | 5.1$\pm$2.5 | 0.19 | 11.8 | 22
NGC 3628 | 0.00 | 0.00 | 35.80$\pm$0.67 | 823 | 232 | | 3.22$\pm$0.50 | 839 | 211 | 11.1$\pm$1.9 | 0.09 | 71.6 | 47
| 0.49 | -0.11 | 28.95$\pm$0.52 | 865 | 204 | | $<$2.52 | … | … | $>$11.5 | … | 57.9 | …
| 0.98 | -0.22 | 9.95$\pm$0.77 | 962 | 150 | | 2.52$\pm$0.60 | 986 | 110 | 3.9$\pm$1.2 | 0.29 | 19.9 | 13
| 1.96 | -0.44 | 3.77$\pm$0.92 | 998 | 88 | | $<$1.76 | … | … | $>$2.1 | … | 7.5 | …
| 2.94 | -0.66 | 2.13$\pm$0.64 | 1030 | 72 | | $<$1.18 | … | … | $>$1.8 | … | 4.3 | …
| -0.49 | 0.11 | 14.06$\pm$0.46 | 751 | 177 | | 3.07$\pm$0.49 | 745 | 206 | 4.6$\pm$0.9 | 0.25 | 28.1 | 16
| -0.98 | 0.22 | 16.16$\pm$1.47 | 738 | 113 | | $<$2.04 | … | … | $>$7.9 | … | 32.3 | …
| -1.96 | 0.44 | 6.45$\pm$0.81 | 658 | 130 | | 1.35$\pm$0.48 | 675 | 83 | 4.8$\pm$2.3 | 0.23 | 12.9 | 17
| -2.94 | 0.66 | 2.95$\pm$1.11 | 636 | 64 | | $<$2.22 | … | … | $>$1.3 | … | 5.9 | …
NGC 4631 | 0.00 | 0.00 | 17.36$\pm$0.62 | 623 | 144 | | 1.65$\pm$0.46 | 606 | 146 | 10.5$\pm$3.3 | 0.10 | 34.7 | 44
| 0.51 | 0.02 | 23.77$\pm$0.24 | 674 | 93 | | 2.53$\pm$0.46 | 677 | 80 | 9.4$\pm$1.8 | 0.11 | 47.5 | 39
| 1.00 | 0.03 | 13.82$\pm$0.67 | 673 | 152 | | 1.48$\pm$0.42 | 697 | 198 | 9.3$\pm$3.0 | 0.11 | 27.6 | 39
| 1.50 | 0.06 | 4.98$\pm$0.26 | 726 | 102 | | 0.71$\pm$0.27 | 704 | 108 | 7.0$\pm$3.0 | 0.15 | 9.9 | 28
| 2.00 | 0.06 | 4.07$\pm$0.25 | 715 | 112 | | 1.25$\pm$0.21 | 731 | 148 | 3.3$\pm$0.7 | 0.37 | 8.1 | 10
| 2.50 | 0.11 | 3.29$\pm$0.19 | 718 | 90 | | $<$0.52 | … | … | $>$6.3 | … | 6.6 | …
| 2.99 | 0.22 | 2.48$\pm$0.33 | 739 | 70 | | $<$0.80 | … | … | $>$3.1 | … | 5.0 | …
| -0.51 | -0.02 | 19.50$\pm$0.27 | 571 | 105 | | 1.76$\pm$0.31 | 528 | 141 | 11.1$\pm$2.1 | 0.09 | 39.0 | 47
| -1.00 | -0.03 | 11.40$\pm$0.44 | 590 | 146 | | 1.77$\pm$0.47 | 600 | 140 | 6.4$\pm$1.9 | 0.17 | 22.8 | 25
| -1.50 | -0.06 | 3.41$\pm$0.23 | 524 | 99 | | $<$0.83 | … | … | $>$4.0 | … | 6.8 | …
| -2.00 | -0.07 | 1.89$\pm$0.16 | 529 | 108 | | $<$0.52 | … | … | $>$3.6 | … | 3.8 | …
| -2.50 | -0.11 | 1.96$\pm$0.14 | 535 | 83 | | $<$0.40 | … | … | $>$4.9 | … | 3.9 | …
| -2.99 | -0.22 | 2.20$\pm$0.31 | 526 | 76 | | $<$0.90 | … | … | $>$2.4 | … | 4.4 | …
| -3.50 | -0.38 | 2.86$\pm$0.19 | 521 | 51 | | $<$1.10 | … | … | $>$2.6 | … | 5.7 | …
| -3.68 | -0.30 | 3.68$\pm$0.30 | 510 | 43 | | 0.71$\pm$0.25 | 486 | 34 | 5.2$\pm$2.2 | 0.21 | 7.4 | 19
NGC 4736 | 0.00 | 0.00 | 8.01$\pm$0.70 | 340 | 153 | | 1.12$\pm$0.29 | 391 | 114 | 7.2$\pm$2.5 | 0.15 | 16.0 | 28
NGC 5055 | 0.00 | 0.00 | 16.15$\pm$1.05 | 527 | 214 | | 2.09$\pm$0.86 | 527 | 275 | 7.7$\pm$3.7 | 0.14 | 32.3 | 31
M51 | 0.00 | 0.00 | 33.35$\pm$0.39 | 475 | 103 | | 3.86$\pm$0.35 | 472 | 87 | 8.6$\pm$0.9 | 0.12 | 66.7 | 35
| 0.00 | 0.50 | 28.90$\pm$0.42 | 434 | 80 | | 3.01$\pm$0.48 | 425 | 37 | 9.6$\pm$1.7 | 0.11 | 57.8 | 40
| 0.00 | 1.00 | 17.45$\pm$0.56 | 410 | 30 | | 2.43$\pm$0.41 | 408 | 23 | 7.2$\pm$1.4 | 0.15 | 34.9 | 29
| 0.00 | 1.50 | 10.35$\pm$0.50 | 406 | 30 | | 1.26$\pm$0.41 | 409 | 41 | 8.2$\pm$3.1 | 0.13 | 20.7 | 34
| 0.00 | 2.00 | 9.70$\pm$0.45 | 401 | 32 | | 1.98$\pm$0.59 | 406 | 26 | 4.9$\pm$1.7 | 0.23 | 19.4 | 18
| 0.00 | 2.50 | 6.90$\pm$0.37 | 398 | 29 | | 0.80$\pm$0.29 | 396 | 37 | 8.6$\pm$3.5 | 0.12 | 13.8 | 36
| 0.00 | 3.00 | 2.62$\pm$0.56 | 388 | 20 | | $<$1.00 | … | … | $>$2.6 | … | 5.2 | …
| 0.00 | -0.50 | 27.35$\pm$0.53 | 513 | 62 | | 2.28$\pm$0.49 | 520 | 58 | 12.0$\pm$2.8 | 0.09 | 54.7 | 51
| 0.00 | -1.00 | 16.25$\pm$0.55 | 534 | 49 | | 1.90$\pm$0.44 | 536 | 42 | 8.5$\pm$2.3 | 0.12 | 32.5 | 35
| 0.00 | -1.50 | 6.55$\pm$0.23 | 534 | 51 | | 0.86$\pm$0.22 | 563 | 53 | 7.6$\pm$2.2 | 0.14 | 13.1 | 31
| 0.00 | -2.00 | 7.90$\pm$0.37 | 538 | 36 | | 1.64$\pm$0.38 | 539 | 43 | 4.8$\pm$1.3 | 0.23 | 15.8 | 17
| 0.00 | -2.50 | 5.77$\pm$0.33 | 534 | 46 | | $<$0.68 | … | … | $>$8.5 | … | 11.5 | …
| 0.00 | -3.00 | 2.28$\pm$0.35 | 553 | 36 | | $<$0.62 | … | … | $>$3.7 | … | 4.6 | …
M51 diskh | … | … | 11.85$\pm$0.30 | 462 | 151 | | 1.73$\pm$0.37 | 411 | 20 | 6.8$\pm$1.6 | 0.16 | 23.7 | 27
NGC 5457 | 0.00 | 0.00 | 9.82$\pm$0.51 | 255 | 72 | | 1.97$\pm$0.49 | 277 | 128 | 5.0$\pm$1.5 | 0.22 | 19.6 | 18
(a) Offset from the nucleus position listed in Table 1, in units of arcminutes.
(b) The measured integrated intensities and associated uncertainties, calculated using the prescription explained in the text. For non-detections, a 2$\sigma_{I}$ upper limit is given.
(c) Velocities and line widths are Gaussian fit values, or else are calculated from the moment for non-Gaussian lines.
(d) The ratio of 12CO to 13CO integrated intensities. The errors are based on the statistical uncertainties of the integrated intensities, propagated through the error-transfer formula $\sigma({\cal R}_{12/13})=([\frac{\sigma(I_{12})}{I_{13}}]^{2}+[\frac{\sigma(I_{13})\times{\cal R}_{12/13}}{I_{13}}]^{2})^{1/2}$.
(e) The average optical depth of the 13CO emission line, calculated from eq. (4).
(f) H2 column density calculated from the 12CO data by adopting the Galactic standard $X$-factor, $2.0\times 10^{20}$ cm${}^{-2}\,{\rm K}^{-1}\,{\rm km}^{-1}\,{\rm s}$.
(g) The gas kinetic temperature calculated under the assumption of LTE conditions by equating the column densities derived from 12CO and 13CO (see text § 3.2).
(h) "M51 disk" represents the average intensity over the disk region excluding the nucleus.
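The error-transfer formula in note (d) and the $X$-factor conversion in note (f) are straightforward to evaluate. The Python sketch below is illustrative only — the function names are ours, and the sample values are taken from the NGC 3521 center row above:

```python
import math

X_FACTOR = 2.0e20  # Galactic standard X-factor, cm^-2 (K km/s)^-1 (note f)

def ratio_with_error(i12, sig12, i13, sig13):
    """Line ratio R = I(12CO)/I(13CO) with its uncertainty from the
    error-transfer formula of note (d)."""
    r = i12 / i13
    sigma_r = math.sqrt((sig12 / i13) ** 2 + (sig13 * r / i13) ** 2)
    return r, sigma_r

def h2_column_density(i12):
    """N(H2) in cm^-2 from the 12CO integrated intensity in K km/s (note f)."""
    return X_FACTOR * i12

# NGC 3521 center: I(12CO) = 22.00 +/- 0.37, I(13CO) = 2.05 +/- 0.66 (Table 2)
r, sigma_r = ratio_with_error(22.00, 0.37, 2.05, 0.66)
```

For the NGC 3521 values this gives ${\cal R}_{12/13}\approx 10.7\pm 3.5$ and $N({\rm H_2})\approx 4.4\times 10^{21}$ cm$^{-2}$, close to the tabulated 10.7$\pm$3.7 and 44.0$\times 10^{20}$ cm$^{-2}$; the small difference in the error presumably reflects rounding of the tabulated inputs.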
Table 3: Observed and Derived Properties of C18O
Source | $\Delta\alpha$ | $\Delta\delta$ | $I_{\rm C^{18}O}\pm\sigma_{I}$ | $V_{\rm mean}$ | $\Delta V$ | $\tau(C^{18}O)^{a}$ | $N(\rm H_{2})^{b}$
---|---|---|---|---|---|---|---
Source | arcmin | arcmin | (K km s-1) | (km s-1) | (km s-1) | | $(10^{21}\ {\rm cm}^{-2})$
M82 | 0.0 | 0.0 | 3.81$\pm$0.90 | 140 | 121 | 0.017 | 18.6
| 0.0 | 0.5 | $<$4.2 | … | … | … | …
| 0.0 | -0.5 | 2.55$\pm$1.27 | 224 | 176 | 0.022 | 14.0
| 0.5 | 0.0 | 3.80$\pm$0.90 | 145 | 93 | 0.011 | 25.2
| 0.5 | 0.5 | 3.30$\pm$0.76 | 289 | 135 | 0.012 | 20.5
| 0.5 | -0.5 | $<$4.1 | … | … | … | …
| 1.0 | 0.0 | 4.19$\pm$1.20 | 284 | 236 | 0.018 | 20.6
| 1.0 | 0.5 | 3.14$\pm$1.84 | 256 | 174 | 0.015 | 14.8
M51 | 0.0 | 0.0 | 1.04$\pm$0.35 | 466 | 176 | 0.032 | 7.4
| 0.0 | 0.5 | 0.92$\pm$0.49 | 413 | 154 | 0.032 | 5.7
| 0.0 | 1.0 | 0.45$\pm$0.16 | 407 | 23 | 0.026 | 4.6
| 0.0 | 1.5 | 0.99$\pm$0.54 | 393 | 79 | 0.10 | 2.4
| 0.0 | 2.0 | $<$1.17 | … | … | … | …
| 0.0 | 2.5 | 0.88$\pm$0.37 | 369 | 58 | 0.14 | 1.6
| 0.0 | -0.5 | 0.95$\pm$0.17 | 519 | 37 | 0.035 | 4.3
| 0.0 | -1.0 | 0.55$\pm$0.13 | 544 | 35 | 0.034 | 3.6
| 0.0 | -1.5 | $<$0.68 | … | … | … | …
| 0.0 | -2.0 | $<$0.84 | … | … | … | …
(a) The average optical depth of the C18O emission line, calculated from an equation analogous to eq. (4).
(b) H2 column density derived from C18O, calculated from eq. (6).
Table 4: Line Intensity Ratios
Source | $\Delta\alpha$ | $\Delta\delta$ | $I_{\rm{}^{12}CO}/I_{\rm C^{18}O}$ | $I_{\rm{}^{13}CO}/I_{\rm C^{18}O}$ | $I_{\rm{}^{13}CO}/I_{\rm HCO^{+}}$ | $I_{\rm{}^{13}CO}/I_{\rm CS}$ | References
---|---|---|---|---|---|---|---
| arcmin | arcmin | | | | |
M82 | 0.0 | 0.0 | 53.5$\pm$13.0 | 2.7$\pm$0.9 | 1.4$\pm$0.2 | 2.1$\pm$0.5 | This work
| 0.0 | 0.5 | $>$35.3 | $>$2.8 | 0.9$\pm$0.1 | … | This work
| 0.0 | -0.5 | 40.5$\pm$20.6 | 2.2$\pm$1.4 | 0.5$\pm$0.1 | … | This work
| 0.5 | 0.0 | 82.3$\pm$19.5 | 3.7$\pm$1.2 | 1.2$\pm$0.2 | 3.0$\pm$0.9 | This work
| 0.5 | 0.5 | 76.5$\pm$18.0 | 3.4$\pm$1.1 | 0.9$\pm$0.1 | 4.0$\pm$1.4 | This work
| 0.5 | -0.5 | $>$29.8 | $>$1.4 | 1.6$\pm$0.7 | … | This work
| 1.0 | 0.0 | 49.5$\pm$14.5 | 2.7$\pm$1.1 | 1.8$\pm$0.4 | 3.3$\pm$1.0 | This work
| 1.0 | 0.5 | 63.2$\pm$37.5 | 2.0$\pm$1.4 | 1.9$\pm$0.4 | … | This work
NGC 3628 | 0.0 | 0.0 | … | … | 0.5$\pm$0.1 | … | This work
NGC 4631 | 0.0 | 0.0 | $>$22.8 | $>$2.4 | … | … | This work
M51 | 0.0 | 0.0 | 29.1$\pm$10.3 | 3.7$\pm$1.6 | 0.9$\pm$0.2 | 2.6$\pm$0.9 | This work
| 0.0 | 0.5 | 28.6$\pm$15.8 | 3.3$\pm$2.3 | 1.8$\pm$0.7 | 2.4$\pm$0.8 | This work
| 0.0 | 1.0 | 34.9$\pm$13.7 | 5.3$\pm$2.8 | 1.3$\pm$0.7 | … | This work
| 0.0 | 1.5 | 9.4$\pm$5.6 | 1.3$\pm$1.1 | … | … | This work
| 0.0 | 2.0 | $>$7.6 | $>$1.7 | … | … | This work
| 0.0 | 2.5 | 7.1$\pm$3.4 | 0.9$\pm$0.7 | … | … | This work
| 0.0 | -0.5 | 26.0$\pm$5.2 | 2.4$\pm$0.9 | 1.5$\pm$0.5 | 5.1$\pm$2.1 | This work
| 0.0 | -1.0 | 27.0$\pm$7.1 | 3.5$\pm$1.6 | … | … | This work
| 0.0 | -1.5 | $>$8.7 | $>$1.3 | … | … | This work
| 0.0 | -2.0 | $>$8.6 | $>$2.0 | … | … | This work
M51 disk | … | … | 16.4$\pm$7.4 | 2.6$\pm$1.7 | … | … | This work
NGC 253 | 0.0 | 0.0 | 61.0$\pm$6.2 | 4.9$\pm$0.7 | 1.8$\pm$0.1 | 2.5$\pm$0.4 | 1,2,3
NGC 891 | 0.0 | 0.0 | … | … | … | 4.5$\pm$1.4 | 3,4
NGC 1068 | 0.0 | 0.0 | … | … | … | 1.7$\pm$0.5 | 3,5
NGC 1808 | 0.0 | 0.0 | 48.9$\pm$5.4 | 3.3$\pm$0.6 | … | … | 6,7
NGC 2146 | 0.0 | 0.0 | 32.7$\pm$9.1 | 3.0$\pm$1.1 | … | … | 6,7
NGC 2903 | 0.0 | 0.0 | … | … | … | 4.5$\pm$1.5 | 3,4
NGC 3256 | 0.0 | 0.0 | 91.6$\pm$28.1 | 2.9$\pm$1.5 | 0.8$\pm$0.3 | 1.8$\pm$1.1 | 8
NGC 4736 | 0.0 | 0.0 | … | … | … | 4.5$\pm$1.5 | 3,4
NGC 4826 | 0.0 | 0.0 | 19.0$\pm$2.7 | 4.1$\pm$1.1 | … | … | 6,7
NGC 5457 | 0.0 | 0.0 | … | … | … | 2.9$\pm$0.8 | 3,4
NGC 6240 | 0.0 | 0.0 | … | … | 0.2$\pm$0.1 | … | 9
Maffei2 | 0.0 | 0.0 | … | … | … | 5.2$\pm$0.8 | 3,4
IC 342 | 0.0 | 0.0 | 35.1$\pm$3.0 | 3.9$\pm$0.3 | … | 2.1$\pm$0.4 | 1,3,4
Circinus | 0.0 | 0.0 | 54.4$\pm$5.4 | 5.7$\pm$0.7 | … | … | 6,7
Arp 220 | 0.0 | 0.0 | 49.4$\pm$19.9 | 1.0$\pm$0.4 | 0.3$\pm$0.1 | 0.6$\pm$0.3 | 9
All the ratios of integrated intensity have been corrected for the different
beamsizes. (1) Sage et al. (1991); (2) Henkel et al. (1993); (3) Sage et al.
(1990); (4) Sage & Isbell (1991); (5) Young & Sanders (1986); (6) Aalto et al.
(1991); (7) Aalto et al. (1995); (8) Casoli et al. (1992); (9) Greve et al.
(2009).
## Appendix A The Stability of DLH Radio Telescope
Figure 7 is an Allan variance plot, which is often the definitive way to measure stability but requires an enormous amount of observing time. Three main contributions appear in an Allan plot: white noise, 1/$f$ noise, and low-frequency drift noise. The upper panel of Fig. 7 shows how the squared RMS noise of the 12CO spectra, at a velocity resolution of $\sim$0.16 km s-1, varies with the on-source integration time, while the lower panel shows the relative deviation from the radiometer equation. The radiometer equation, which sets the limiting sensitivity of the spectrometer, is given by
$\frac{\Delta T_{\rm rms}}{T_{\rm sys}}=\frac{K}{\sqrt{\Delta\nu\tau}}\,,$
where $T_{\rm sys}$ is the system temperature, $\tau$ is the sum of the integration times on the source and on the off position, $K$ is a factor accounting for the data-taking procedure, and $\Delta T_{\rm rms}$ is the rms noise temperature of the spectra for a given frequency resolution $\Delta\nu$.
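For a given target sensitivity, the radiometer equation can be inverted to estimate the required integration time. A minimal sketch follows; the system temperature, $K$ factor, and resolution in the example are illustrative assumptions, not the DLH telescope's actual parameters:

```python
import math

def rms_noise(t_sys, k_factor, delta_nu, tau):
    """Radiometer equation: Delta T_rms = K * T_sys / sqrt(delta_nu * tau)."""
    return k_factor * t_sys / math.sqrt(delta_nu * tau)

def required_time(t_sys, k_factor, delta_nu, target_rms):
    """Invert the radiometer equation for the total integration time tau (s)."""
    return (k_factor * t_sys / target_rms) ** 2 / delta_nu

# Illustrative numbers: T_sys = 250 K, K = 1, 1 MHz resolution, 0.02 K target rms
tau = required_time(250.0, 1.0, 1.0e6, 0.02)  # -> 156.25 s
```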
Figures 7a and 7b show the data obtained during the observing semesters of 2008-2009 and 2009-2010, respectively. Figure 7a shows that the Allan plot begins to deviate from the radiometer equation after about 10 minutes of on-source integration, and the relative error grows to 80% after 100 minutes; in Figure 7b, by contrast, the relative error grows to only 10% for the same integration time. Both the stability and the sensitivity of the telescope were therefore greatly improved by the system maintenance in the summer of 2009. The average noise level in units of $T^{*}_{A}$ is 0.020 K after 100 minutes of on-source integration and 0.018 K after 200 minutes, at a velocity resolution of $\sim$10 km s-1. The plot can thus be used to estimate the observing time required for a given target sensitivity. An important caveat is that this Allan variance plot is based on raw data: we did not reject spectra with distorted baselines or abnormal rms noise levels. Consequently, even better sensitivity could be obtained by co-adding only those spectra whose rms noise levels are consistent with the normal value. This analysis suggests that the telescope can detect emission even weaker than what has been detected here, since the effective integration time of most of our observations has not reached this limit.
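For reference, the two-sample Allan variance that underlies such a plot can be computed as follows. This is a sketch of the generic estimator under our reading of the method, not the authors' analysis code; for pure white noise it decreases as the inverse of the averaging length, consistent with the slope of $-1$ expected from the radiometer equation.

```python
def allan_variance(x, m):
    """Two-sample Allan variance of the series x at averaging length m:
    average the data in contiguous blocks of m samples, then take half
    the mean squared difference of adjacent block averages."""
    n = len(x) // m
    if n < 2:
        raise ValueError("need at least two full blocks")
    blocks = [sum(x[i * m:(i + 1) * m]) / m for i in range(n)]
    diffs = [(blocks[i + 1] - blocks[i]) ** 2 for i in range(n - 1)]
    return 0.5 * sum(diffs) / len(diffs)
```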
Figure 7: The Allan variance and the relative error from the radiometer equation as functions of on-source integration time (see Appendix A). In the upper panel of each figure, the dashed line shows the expectation from the radiometer equation, with a slope of -1. In the lower panel, each point represents the relative error between the measured value and that expected from the radiometer equation. (a) Data from the 2008-2009 observing semester. (b) Data from the 2009-2010 observing semester.
(arXiv:1103.5540; submitted 2011-03-29 by Qinghua Tan; authors Qinghua Tan, Yu Gao, Zhiyu Zhang, Xiaoyang Xia; CC BY 3.0)

arXiv:1103.5789
# On the Capacity of the $K$-User Cyclic Gaussian Interference Channel
Lei Zhou and Wei Yu
Department of Electrical and Computer Engineering,
University of Toronto, Toronto, Ontario M5S 3G4, Canada
emails: {zhoulei, weiyu}@comm.utoronto.ca
###### Abstract
This paper studies the capacity region of a $K$-user cyclic Gaussian
interference channel, where the $k$th user interferes with only the $(k-1)$th
user (mod $K$) in the network. Inspired by the work of Etkin, Tse and Wang,
which derived a capacity region outer bound for the two-user Gaussian
interference channel and proved that a simple Han-Kobayashi power splitting
scheme can achieve to within one bit of the capacity region for all values of
channel parameters, this paper shows that a similar strategy also achieves the
capacity region for the $K$-user cyclic interference channel to within a
constant gap in the weak interference regime. Specifically, a compact
representation of the Han-Kobayashi achievable rate region using Fourier-
Motzkin elimination is first derived, a capacity region outer bound is then
established. It is shown that the Etkin-Tse-Wang power splitting strategy
gives a constant gap of at most two bits (or one bit per dimension) in the
weak interference regime. Finally, the capacity result of the $K$-user cyclic
Gaussian interference channel in the strong interference regime is also given.
## I Introduction
The interference channel models a communication scenario where several
mutually interfering transmitter-receiver pairs share the same physical
medium. The interference channel is a useful model for practical wireless
networks. The capacity region of the interference channel, however, has not
been completely characterized, even for the two-user Gaussian case.
The largest achievable rate region for the two-user interference channel is
due to a Han-Kobayashi strategy [1], where each transmitter splits its
transmit signal into a common and a private part. The achievable rate region
is the convex hull of the union of achievable rates where each receiver
decodes the common messages from both transmitters plus the private message
intended for itself. Recently, Chong et al. [2] obtained an equivalent
achievable rate region but in a simpler form by applying the Fourier-Motzkin
algorithm together with a time-sharing technique to Han and Kobayashi's
original rate region. The optimality of the Han-Kobayashi region for the two-
user Gaussian interference channel is still an open problem in general, except
in the strong interference regime where transmission with common information
only is shown to achieve the capacity region [1, 3, 4], and in a noisy
interference regime where transmission with private information only is shown
to be sum-capacity achieving [5, 6, 7].
In a recent breakthrough, Etkin, Tse and Wang [8] showed that the Han-
Kobayashi scheme can in fact achieve to within one bit of the capacity region
for the two-user Gaussian interference channel for all channel parameters.
Their key insight was that the interference-to-noise ratio (INR) of the
private message should be chosen to be as close to $1$ as possible in the Han-
Kobayashi scheme. They also found a new capacity region outer bound using a
genie-aided technique.
The Etkin-Tse-Wang result applies only to the two-user interference channel. Practical communication systems often have more than two transmitter-receiver pairs, yet extending the one-bit result of Etkin, Tse and Wang beyond the two-user case is by no means trivial. This is because when more than two users are involved, the Han-Kobayashi private-common superposition coding strategy becomes exceedingly complicated. It is conceivable that multiple common messages may be needed at each transmitter, each intended to be decoded by an arbitrary subset of receivers, making the optimization of the resulting rate region difficult. Further, superposition coding itself may not be adequate: interference-alignment coding schemes [9] have been shown to enlarge the achievable rate region and to achieve to within a constant gap of the capacity of many-to-one and one-to-many interference channels [10].
In the context of $K$-user Gaussian interference channels, sum-capacity
results are available in the noisy interference regime [5, 11]. Annapureddy et
al. [5] obtained the sum capacity for the symmetric three-user Gaussian
interference channel, the one-to-many and the many-to-one Gaussian
interference channels under the noisy interference criterion. Shang et al.
[11] studied the fully connected $K$-user Gaussian interference channel and
showed that treating interference as noise at the receiver is sum-capacity
achieving when the transmit power and the cross channel gains are sufficiently
weak to satisfy a certain criterion. In addition, much work has also been
carried out on the generalized degrees of freedom (as defined in [8]) of the
$K$-user interference channel and its variations [9, 12, 13].
Figure 1: The circular array handoff model
Instead of treating the general $K$-user interference channel, this paper
focuses on a cyclic Gaussian interference channel model, where the $k$th user
interferes with only the $(k-1)$th user. In this case, each transmitter
interferes with only one other receiver, and each receiver suffers
interference from only one other transmitter, thereby avoiding the
difficulties mentioned earlier. For the $K$-user cyclic interference channel,
Etkin, Tse and Wang's coding strategy remains a natural one. In our
previous work [14], we showed that such a strategy achieves the sum capacity
for a symmetric channel to within two bits. The main objective of this paper
is to show that this strategy also achieves to within two bits of the capacity
region for the general cyclic interference channel in the weak interference
regime. This paper contains an outline of the main results. Detailed proofs
can be found in [15].
The cyclic interference channel model is motivated by the so-called modified
Wyner model, which describes the soft handoff scenario of a cellular network
[16]. The original Wyner model [17] assumes that all cells are arranged in a
linear array with the base-stations located at the center of each cell, and
where intercell interference comes from only the two adjacent cells. In the
modified Wyner model [16] cells are arranged in a circular array as shown in
Fig. 1. The mobile terminals are located along the circular array. If one
assumes that the mobiles always communicate with the intended base-station to
its left (or right), while only suffering from interference due to the base-
station to its right (or left), one arrives at the $K$-user cyclic Gaussian
interference channel studied in this paper. The modified Wyner model has been
extensively studied in the literature [16, 18, 19], but often either with
interference treated as noise or with the assumption of full base station
cooperation. This paper studies the modified Wyner model without base station
cooperation, in which case the soft handoff problem becomes that of a cyclic
interference channel.
The main results of this paper are as follows. For the $K$-user cyclic Gaussian interference channel in the weak interference regime, one can achieve to within two bits of the capacity region using Etkin, Tse and Wang's power-splitting scheme. The capacity region in the strong interference regime is also given; there, transmission with common messages only achieves the capacity region.
A key part of the development involves a Fourier-Motzkin elimination procedure
on the achievable rate region of the $K$-user cyclic interference channel. To
deal with the large number of inequality constraints, an induction proof needs
to be used. It is shown that as compared to the two-user case, where the rate
region is defined by constraints on the individual rate $R_{i}$, the sum rate
$R_{1}+R_{2}$, and the sum rate plus an individual rate $2R_{i}+R_{j}$ ($i\neq
j$), the achievable rate region for the $K$-user cyclic interference channel
is defined by an additional set of constraints on the sum rate of any
arbitrary $l$ adjacent users, where $2\leq l<K$. These four types of rate
constraints completely characterize the Han-Kobayashi region for the $K$-user
cyclic interference channel. They give rise to a total of $K^{2}+1$
constraints.
## II Channel Model
Figure 2: Cyclic Gaussian interference channel
The $K$-user cyclic Gaussian interference channel was first introduced in [14].
It consists of $K$ transmitter-receiver pairs as shown in Fig. 2. Each
transmitter communicates with its intended receiver while causing interference
to only one neighboring receiver. Each receiver receives a signal intended for
it and an interference signal from only one neighboring sender plus the
additive white Gaussian noise (AWGN). As shown in Fig. 2, $X_{1},X_{2},\cdots
X_{K}$ and $Y_{1},Y_{2},\cdots Y_{K}$ are complex-valued input and output
signals, respectively, and $Z_{i}\thicksim\mathcal{CN}(0,\sigma^{2})$ is the
independent and identically distributed (i.i.d.) circularly symmetric Gaussian
noise at receiver $i$. The input-output model can be written as
$Y_{1}=h_{1,1}X_{1}+h_{2,1}X_{2}+Z_{1},$
$Y_{2}=h_{2,2}X_{2}+h_{3,2}X_{3}+Z_{2},$
$\vdots$
$Y_{K}=h_{K,K}X_{K}+h_{1,K}X_{1}+Z_{K},$ (1)
where each $X_{i}$ has a power constraint $P_{i}$ associated with it, i.e.,
$\mathbb{E}\left[|X_{i}|^{2}\right]\leq P_{i}$. Here, $h_{i,j}$ is the
complex-valued channel gain from transmitter $i$ to receiver $j$.
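As a sanity check of the model, the input-output relation (1) can be simulated directly. The sketch below is ours (the function and its list-based interface are not from the paper); the noise $Z_{k}\sim\mathcal{CN}(0,\sigma^{2})$ is drawn with variance $\sigma^{2}/2$ per real component:

```python
import math
import random

def cyclic_channel_output(x, h_direct, h_cross, sigma):
    """One channel use of eq. (1): receiver k observes
    Y_k = h_{k,k} X_k + h_{k+1,k} X_{k+1} + Z_k (indices mod K),
    with circularly symmetric Gaussian noise Z_k ~ CN(0, sigma^2)."""
    K = len(x)
    y = []
    for k in range(K):
        z = complex(random.gauss(0.0, sigma / math.sqrt(2)),
                    random.gauss(0.0, sigma / math.sqrt(2)))
        y.append(h_direct[k] * x[k] + h_cross[k] * x[(k + 1) % K] + z)
    return y
```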
The encoding-decoding procedure is described as follows. Transmitter $i$ maps
a message $m_{i}\in\\{1,2,\cdots,2^{nR_{i}}\\}$ into a length $n$ codeword
$X_{i}^{n}$ that belongs to a codebook $\mathcal{C}_{i}^{n}$, i.e.
$X_{i}^{n}=f_{i}^{n}(m_{i})$, where $f_{i}^{n}(.)$ represents the encoding
function of user $i$, $i=1,2,\cdots,K$. Codeword $X_{i}^{n}$ is then sent over
a block of $n$ time instances. From the received sequence $Y_{i}^{n}$,
receiver $i$ obtains an estimate $\hat{m}_{i}$ of the transmit message $m_{i}$
using a decoding function $g_{i}^{n}(.)$, i.e.
$\hat{m}_{i}=g_{i}^{n}(Y_{i}^{n})$. The average probability of error is
defined as $P_{e}^{n}=\mathbb{E}\left[\textrm{Pr}(\cup(\hat{m}_{i}\neq
m_{i}))\right]$. A rate tuple $(R_{1},R_{2},\cdots,R_{K})$ is said to be
achievable if for any $\epsilon>0$, there exists a family of codebooks
$\mathcal{C}_{i}^{n}$, encoding functions $f_{i}^{n}(.)$, and decoding
functions $g_{i}^{n}(.)$, $i=1,2,\cdots,K$, such that $P_{e}^{n}<\epsilon$ for
a sufficiently large $n$. The capacity region is the collection of all
achievable rate tuples.
Define the signal-to-noise and interference-to-noise ratios for each user as follows (note that the definition of $\mathsf{INR}$ here is slightly different from that of Etkin, Tse and Wang [8]):
$\mathsf{SNR}_{i}=\frac{|h_{i,i}|^{2}P_{i}}{\sigma^{2}},\quad\mathsf{INR}_{i}=\frac{|h_{i,i-1}|^{2}P_{i}}{\sigma^{2}},\;i=1,2,\cdots,K.$
(2)
The $K$-user cyclic Gaussian interference channel is said to be in the weak
interference regime if
$\mathsf{INR}_{i}\leq\mathsf{SNR}_{i},\quad\forall i=1,2,\cdots,K,$ (3)

and in the strong interference regime if
$\mathsf{INR}_{i}\geq\mathsf{SNR}_{i},\quad\forall i=1,2,\cdots,K.$ (4)
Otherwise, it is said to be in the mixed interference regime, which has
$2^{K}-2$ possible combinations.
Throughout this paper, modulo arithmetic is implicitly used on the user
indices, e.g., $K+1=1$ and $1-1=K$. Note that when $K=2$, the cyclic channel
reduces to the conventional two-user interference channel.
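Conditions (3) and (4) reduce to element-wise comparisons of the $\mathsf{SNR}_{i}$ and $\mathsf{INR}_{i}$ lists. A small illustrative sketch; since the weak and strong regimes overlap when $\mathsf{INR}_{i}=\mathsf{SNR}_{i}$ for all $i$, equality is classified as weak here:

```python
def interference_regime(snr, inr):
    """Classify the K-user cyclic channel per (3)-(4): weak if
    INR_i <= SNR_i for all i, strong if INR_i >= SNR_i for all i,
    otherwise mixed (2^K - 2 possible combinations)."""
    if all(i <= s for s, i in zip(snr, inr)):
        return "weak"
    if all(i >= s for s, i in zip(snr, inr)):
        return "strong"
    return "mixed"
```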
## III Within Two Bits of the Capacity Region in the Weak Interference Regime
In the two-user case, the shape of the Han-Kobayashi achievable rate region is
the union of polyhedrons (each corresponding to a fixed input distribution)
with boundaries defined by rate constraints on $R_{1}$, $R_{2}$,
$R_{1}+R_{2}$, and on $2R_{1}+R_{2}$ and $2R_{2}+R_{1}$, respectively. To
extend Etkin, Tse and Wang’s result to the general case, one needs to find a
similar rate region characterization for the general $K$-user cyclic
interference channel first.
A key feature of the cyclic Gaussian interference channel model is that each
transmitter sends a signal to its intended receiver while causing interference
to only one of its neighboring receivers; meanwhile, each receiver receives
the intended signal plus the interfering signal from only one of its
neighboring transmitters. Using this fact, and with the help of the Fourier-Motzkin
elimination algorithm, we show in this section that the achievable rate region
of the $K$-user cyclic Gaussian interference channel is the union of
polyhedrons with boundaries defined by rate constraints on the individual
rates $R_{i}$, the sum rate $R_{sum}$, the sum rate plus an individual rate
$R_{sum}+R_{i}$ ($i=1,2,\cdots,K$), and the sum rate for arbitrary $l$
adjacent users ($2\leq l<K$). This last rate constraint on arbitrary $l$
adjacent users’ rates is new as compared with the two-user case.
The preceding characterization, together with outer bounds to be proved later in this section, allows us to show that the capacity region of the $K$-user cyclic Gaussian interference channel can be achieved to within a constant gap using Etkin, Tse and Wang's power-splitting strategy in the weak interference regime. However, instead of the one-bit result obtained for
the two-user interference channel [8], this section shows that without time-
sharing, one can achieve to within two bits of the capacity region for the
$K$-user cyclic Gaussian interference channel in the weak interference regime.
The strong interference regime is treated in the next section.
### III-A Achievable Rate Region
###### Theorem 1.
Let $\mathcal{P}$ denote the set of probability distributions $P(\cdot)$ that
factor as
$P(q,w_{1},x_{1},w_{2},x_{2},\cdots,w_{K},x_{K})=p(q)\,p(x_{1},w_{1}|q)\,p(x_{2},w_{2}|q)\cdots p(x_{K},w_{K}|q).$ (5)

For a fixed $P\in\mathcal{P}$, let $\mathcal{R}_{\mathrm{HK}}^{(K)}(P)$ be the set of all rate tuples $(R_{1},R_{2},\cdots,R_{K})$ satisfying

$0\leq R_{i}\leq\min\{d_{i},\,a_{i}+e_{i-1}\},$ (6)

$\sum_{j=m}^{m+l-1}R_{j}\leq\min\left\{g_{m}+\sum_{j=m+1}^{m+l-2}e_{j}+a_{m+l-1},\ \sum_{j=m-1}^{m+l-2}e_{j}+a_{m+l-1}\right\},$ (7)

$R_{sum}=\sum_{j=1}^{K}R_{j}\leq\min\left\{\sum_{j=1}^{K}e_{j},\,r_{1},r_{2},\cdots,r_{K}\right\},$ (8)

$\sum_{j=1}^{K}R_{j}+R_{i}\leq a_{i}+g_{i}+\sum_{j=1,j\neq i}^{K}e_{j},$ (9)

where $a_{i},d_{i},e_{i},g_{i}$ and $r_{i}$ are defined as follows:

$a_{i}=I(Y_{i};X_{i}|W_{i},W_{i+1},Q),$ (10)
$d_{i}=I(Y_{i};X_{i}|W_{i+1},Q),$ (11)
$e_{i}=I(Y_{i};W_{i+1},X_{i}|W_{i},Q),$ (12)
$g_{i}=I(Y_{i};W_{i+1},X_{i}|Q),$ (13)
$r_{i}=a_{i-1}+g_{i}+\sum_{j=1,j\neq i,i-1}^{K}e_{j},$ (14)

and the ranges of the indices are $i,m=1,2,\cdots,K$ in (6) and (9), and $l=2,3,\cdots,K-1$ in (7). Define

$\mathcal{R}_{\mathrm{HK}}^{(K)}=\bigcup_{P\in\mathcal{P}}\mathcal{R}_{\mathrm{HK}}^{(K)}(P).$ (15)
Then $\mathcal{R}_{\mathrm{HK}}^{(K)}$ is an achievable rate region for the
$K$-user cyclic interference channel.
###### Proof:
The achievability of this rate region can be proved by the Fourier-Motzkin algorithm together with an induction step. The proof follows Kobayashi and Han's strategy [20] of eliminating one common message at each step. Details are
available in [15]. ∎
In the above achievable rate region, (6) is the constraint on the achievable
rate of an individual user, (7) is the constraint on the achievable sum rate
for any $l$ adjacent users ($2\leq l<K$), (8) is the constraint on the
achievable sum rate of all $K$ users, and (9) is the constraint on the
achievable sum rate for all $K$ users plus a repeated user.
From (6) to (9), there are a total of $K+K(K-2)+1+K=K^{2}+1$ constraints.
Together they describe the shape of the achievable rate region under a fixed
input distribution. The quadratic growth in the number of constraints as a
function of $K$ makes the Fourier-Motzkin elimination of the Han-Kobayashi
region quite complex. An induction argument is needed to handle the large
number of constraints.
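The count $K+K(K-2)+1+K=K^{2}+1$ can be verified mechanically; a small sketch (the function name is ours):

```python
def hk_constraint_count(K):
    """Count the rate constraints of Theorem 1 for the K-user cyclic channel:
    K individual-rate constraints (6), K*(K-2) adjacent-sum constraints (7)
    (K starting points m, lengths l = 2..K-1), one full sum-rate constraint (8),
    and K sum-plus-individual constraints (9) -- K^2 + 1 in total."""
    individual = K
    adjacent = K * (K - 2)
    full_sum = 1
    sum_plus_one = K
    return individual + adjacent + full_sum + sum_plus_one
```

For $K=2$ this gives the five constraints of the two-user region discussed next.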
As an example, for the two-user Gaussian interference channel, there are
$2^{2}+1=5$ rate constraints, corresponding to those on $R_{1}$, $R_{2}$,
$R_{1}+R_{2}$, $2R_{1}+R_{2}$ and $2R_{2}+R_{1}$, as in [1, 20, 2, 8].
Specifically, substituting $K=2$ in Theorem 1 gives us the following
achievable rate region:
$0\leq R_{1}\leq\min\{d_{1},\,a_{1}+e_{2}\},$ (16)
$0\leq R_{2}\leq\min\{d_{2},\,a_{2}+e_{1}\},$ (17)
$R_{1}+R_{2}\leq\min\{e_{1}+e_{2},\,a_{1}+g_{2},\,a_{2}+g_{1}\},$ (18)
$2R_{1}+R_{2}\leq a_{1}+g_{1}+e_{2},$ (19)
$2R_{2}+R_{1}\leq a_{2}+g_{2}+e_{1},$ (20)
which is exactly Theorem D of [20].
### III-B Capacity Region Outer Bound
###### Theorem 2.
For the $K$-user cyclic Gaussian interference channel in the weak interference
regime, the capacity region is included in the following set of rate tuples
$(R_{1},R_{2},\cdots,R_{K})$:
$R_{i}\leq\lambda_{i},$ (21)

$\sum_{j=m}^{m+l-1}R_{j}\leq\min\left\{\gamma_{m}+\sum_{j=m+1}^{m+l-2}\alpha_{j}+\beta_{m+l-1},\ \mu_{m}+\sum_{j=m}^{m+l-2}\alpha_{j}+\beta_{m+l-1}\right\},$ (22)

$\sum_{j=1}^{K}R_{j}\leq\min\left\{\sum_{j=1}^{K}\alpha_{j},\,\rho_{1},\rho_{2},\cdots,\rho_{K}\right\},$ (23)

$\sum_{j=1}^{K}R_{j}+R_{i}\leq\beta_{i}+\gamma_{i}+\sum_{j=1,j\neq i}^{K}\alpha_{j},$ (24)

where the ranges of the indices $i$, $m$, $l$ are as defined in Theorem 1, and

$\alpha_{i}=\log\left(1+\mathsf{INR}_{i+1}+\frac{\mathsf{SNR}_{i}}{1+\mathsf{INR}_{i}}\right),$ (25)
$\beta_{i}=\log\left(\frac{1+\mathsf{SNR}_{i}}{1+\mathsf{INR}_{i}}\right),$ (26)
$\gamma_{i}=\log\left(1+\mathsf{INR}_{i+1}+\mathsf{SNR}_{i}\right),$ (27)
$\lambda_{i}=\log(1+\mathsf{SNR}_{i}),$ (28)
$\mu_{i}=\log(1+\mathsf{INR}_{i}),$ (29)
$\rho_{i}=\beta_{i-1}+\gamma_{i}+\sum_{j=1,j\neq i,i-1}^{K}\alpha_{j}.$ (30)
###### Proof:
Genie-aided bounding techniques are used to prove the theorem. See [15] for
details. ∎
### III-C Capacity Region to Within Two Bits
###### Theorem 3.
For the $K$-user cyclic Gaussian interference channel in the weak interference
regime, the fixed power-splitting strategy of Etkin, Tse and Wang achieves to
within two bits of the capacity region. (If a rate tuple
$(R_{1},R_{2},\cdots,R_{K})$ is achievable and
$(R_{1}+k,R_{2}+k,\cdots,R_{K}+k)$ is outside the capacity region, then
$(R_{1},R_{2},\cdots,R_{K})$ is said to be within $k$ bits of the capacity
region.)
###### Proof:
Applying Etkin, Tse and Wang's power-splitting strategy (i.e.,
$\mathsf{INR}_{ip}=\min(\mathsf{INR}_{i},1)$) to Theorem 1, the parameters
$a_{i},d_{i},e_{i},g_{i}$ can be easily calculated as follows:
$\displaystyle a_{i}=\log\left(2+\mathsf{SNR}_{ip}\right)-1,$ (31)
$\displaystyle d_{i}=\log\left(2+\mathsf{SNR}_{i}\right)-1,$ (32)
$\displaystyle e_{i}=\log\left(1+\mathsf{INR}_{i+1}+\mathsf{SNR}_{ip}\right)-1,$ (33)
$\displaystyle g_{i}=\log\left(1+\mathsf{INR}_{i+1}+\mathsf{SNR}_{i}\right)-1.$ (34)
To prove that the achievable rate region described by the above
$a_{i},d_{i},e_{i},g_{i}$ is within two bits of the outer bound in Theorem 2,
we need to show that each of the rate constraints in (6)-(9) is within two
bits of its corresponding outer bound in (21)-(24), i.e., that the following
inequalities hold for all $i$, $m$, $l$ in the ranges defined in Theorem 1:
$\displaystyle\delta_{R_{i}}<2,$ (35)
$\displaystyle\delta_{R_{m}+\cdots+R_{m+l-1}}<2l,$ (36)
$\displaystyle\delta_{R_{sum}}<2K,$ (37)
$\displaystyle\delta_{R_{sum}+R_{i}}<2(K+1),$ (38)
where $\delta_{(\cdot)}$ is the difference between the achievable rate in
Theorem 1 and its corresponding outer bound in Theorem 2. A complete proof can
be found in [15]. ∎
## IV Capacity Region in the Strong Interference Regime
The results so far in the paper pertain only to the weak interference regime,
where $\mathsf{SNR}_{i}\geq\mathsf{INR}_{i}$, $\forall i$. In the strong
interference regime, where $\mathsf{SNR}_{i}\leq\mathsf{INR}_{i}$, $\forall
i$, the capacity result in [1, 4] for the two-user Gaussian interference
channel extends readily to the $K$-user cyclic case.
###### Theorem 4.
For the $K$-user cyclic Gaussian interference channel in the strong
interference regime, the capacity region is given by the set of
$(R_{1},R_{2},\cdots,R_{K})$ such that
$\left\\{\begin{array}[]{l}R_{i}\leq\log(1+\mathsf{SNR}_{i})\\\
R_{i}+R_{i+1}\leq\log(1+\mathsf{SNR}_{i}+\mathsf{INR}_{i+1}),\end{array}\right.$
(39)
for $i=1,2,\cdots,K$. In the very strong interference regime where
$\mathsf{INR}_{i}\geq(1+\mathsf{SNR}_{i-1})\mathsf{SNR}_{i},\forall i$, the
capacity region is the set of $(R_{1},R_{2},\cdots,R_{K})$ with
$R_{i}\leq\log(1+\mathsf{SNR}_{i}),\;\;i=1,2,\cdots,K.$ (40)
###### Proof:
Achievability: It is easy to see that (39) is in fact the intersection of the
capacity regions of $K$ multiple-access channels:
$\bigcap_{i=1}^{K}\left\\{(R_{i},R_{i+1})\left|\begin{array}[]{l}R_{i}\leq\log(1+\mathsf{SNR}_{i})\\\
R_{i+1}\leq\log(1+\mathsf{INR}_{i+1})\\\
R_{i}+R_{i+1}\leq\log(1+\mathsf{SNR}_{i}+\mathsf{INR}_{i+1}).\end{array}\right.\right\\}.$
(41)
Each of these regions is the capacity region of a multiple-access channel with
$W_{i}^{n}$ and $W_{i+1}^{n}$ as inputs and $Y_{i}^{n}$ as output (with
$U_{i}^{n}=U_{i+1}^{n}=\emptyset$). Therefore, the rate region (39) can be
achieved by setting all the input signals to be common messages. This
completes the achievability part.
Converse: The converse proof follows the idea of [4]. The key ingredient is to
show that, for the genie-aided Gaussian interference channel defined below, in
the strong interference regime, whenever a rate tuple
$(R_{1},R_{2},\cdots,R_{K})$ is achievable, i.e., $X_{i}^{n}$ is decodable at
receiver $i$, then $X_{i}^{n}$ must also be decodable at receiver $i-1$,
$i=1,2,\cdots,K$.
The genie-aided Gaussian interference channel is defined by the Gaussian
interference channel (see Fig. 2) with genie $X_{i+2}^{n}$ given to receiver
$i$. The capacity region of the $K$-user cyclic Gaussian interference channel
must lie inside the capacity region of the genie-aided channel.
Assume that a rate tuple $(R_{1},R_{2},\cdots,R_{K})$ is achievable for the
$K$-user cyclic Gaussian interference channel. In this case, after $X_{i}^{n}$
is decoded, with the knowledge of the genie $X_{i+2}^{n}$, receiver $i$ can
construct the following signal:
$\displaystyle\widetilde{Y}_{i}^{n}=\frac{h_{i+1,i+1}}{h_{i+1,i}}(Y_{i}^{n}-h_{i,i}X_{i}^{n})+h_{i+2,i+1}X_{i+2}^{n}=h_{i+1,i+1}X_{i+1}^{n}+h_{i+2,i+1}X_{i+2}^{n}+\frac{h_{i+1,i+1}}{h_{i+1,i}}Z_{i}^{n},$
which contains the signal component of $Y_{i+1}^{n}$ but with less noise since
$|h_{i+1,i}|\geq|h_{i+1,i+1}|$ in the strong interference regime. Now, since
$X_{i+1}^{n}$ is decodable at receiver $i+1$, it must also be decodable at
receiver $i$ using the constructed $\widetilde{Y}_{i}^{n}$. Therefore,
$X_{i}^{n}$ and $X_{i+1}^{n}$ are both decodable at receiver $i$. As a result,
the achievable rate region of $(R_{i},R_{i+1})$ is bounded by the capacity
region of the multiple-access channel $(X_{i}^{n},X_{i+1}^{n},Y_{i}^{n})$,
which is shown in (41). Since (41) reduces to (39) in the strong interference
regime, we have shown that (39) is an outer bound on the capacity region of
the $K$-user cyclic Gaussian interference channel in the strong interference
regime. This
completes the converse proof.
In the very strong interference regime where
$\mathsf{INR}_{i}\geq(1+\mathsf{SNR}_{i-1})\mathsf{SNR}_{i},\forall i$, it is
easy to verify that the second constraint in (39) is no longer active. This
results in the capacity region (40). ∎
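The inactivity claim is easy to confirm: under the very strong interference condition, $1+\mathsf{SNR}_{i}+\mathsf{INR}_{i+1}\geq(1+\mathsf{SNR}_{i})(1+\mathsf{SNR}_{i+1})$, so the single-user rates in (40) automatically satisfy the pairwise sum constraint in (39). A small randomized numerical check (the SNR ranges are arbitrary, illustrative choices):

```python
import math
import random

random.seed(0)
for _ in range(1000):
    snr_i  = random.uniform(0.1, 100.0)   # SNR_i  (linear scale)
    snr_i1 = random.uniform(0.1, 100.0)   # SNR_{i+1}
    # Very strong interference: INR_{i+1} >= (1 + SNR_i) * SNR_{i+1}.
    inr_i1 = (1.0 + snr_i) * snr_i1 * random.uniform(1.0, 10.0)
    lhs = math.log2(1 + snr_i) + math.log2(1 + snr_i1)   # single-user rates (40)
    rhs = math.log2(1 + snr_i + inr_i1)                  # sum constraint in (39)
    assert lhs <= rhs + 1e-12
print("the pairwise sum-rate constraint in (39) is never active")
```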
## V Concluding Remarks
This paper studies the capacity region and coding strategies for the $K$-user
cyclic Gaussian interference channel in the weak and strong interference
regimes. An achievable rate region based on the Han-Kobayashi power-splitting
strategy is first derived; a corresponding capacity region outer bound is then
obtained using genie-aided bounding techniques. This paper shows that in the
weak interference regime, the power-splitting strategy of Etkin, Tse and Wang
achieves to within two bits of the capacity region. The capacity result for
the $K$-user cyclic Gaussian interference channel in the strong interference
regime is a straightforward extension of the corresponding two-user case.
However, in the mixed interference regime, although the constant gap result
may well continue to hold, the proof becomes considerably more complicated, as
different mixed scenarios need to be enumerated and the corresponding outer
bounds derived.
## References
* [1] T. S. Han and K. Kobayashi, “A new achievable rate region for the interference channel,” _IEEE Trans. Inf. Theory_ , vol. 27, no. 1, pp. 49–60, Jan. 1981.
* [2] H. Chong, M. Motani, H. Garg, and H. El Gamal, “On the Han-Kobayashi region for the interference channel,” _IEEE Trans. Inf. Theory_ , vol. 54, no. 7, pp. 3188–3195, Jul. 2008.
* [3] A. B. Carleial, “Interference channels,” _IEEE Trans. Inf. Theory_ , vol. 24, no. 1, pp. 60–70, Jan. 1978.
* [4] H. Sato, “The capacity of the Gaussian interference channel under strong interference,” _IEEE Trans. Inf. Theory_ , vol. 27, no. 6, pp. 786–788, Nov. 1981.
* [5] V. S. Annapureddy and V. Veeravalli, “Gaussian interference networks: Sum capacity in the low interference regime and new outer bounds on the capacity region,” _IEEE Trans. Inf. Theory_ , vol. 55, no. 7, pp. 3032–3035, Jul. 2009.
* [6] A. S. Motahari and A. K. Khandani, “Capacity bounds for the Gaussian interference channel,” _IEEE Trans. Inf. Theory_ , vol. 55, no. 2, pp. 620–643, Feb. 2009.
* [7] X. Shang, G. Kramer, and B. Chen, “A new outer bound and the noisy-interference sum-rate capacity for Gaussian interference channels,” _IEEE Trans. Inf. Theory_ , vol. 55, no. 2, pp. 689–699, Feb. 2009.
* [8] R. Etkin, D. N. C. Tse, and H. Wang, “Gaussian interference channel capacity to within one bit,” _IEEE Trans. Inf. Theory_ , vol. 54, no. 12, pp. 5534–5562, Dec. 2008.
* [9] V. R. Cadambe and S. A. Jafar, “Interference alignment and the degrees of freedom for the K user interference channel,” _IEEE Trans. Inf. Theory_ , vol. 54, no. 8, pp. 3425–3441, Aug. 2008.
* [10] G. Bresler, A. Parekh, and D. N. C. Tse, “The approximate capacity of the many-to-one and one-to-many Gaussian interference channels,” _IEEE Trans. Inf. Theory_ , vol. 56, no. 9, pp. 4566–4592, Sep. 2010.
* [11] X. Shang, G. Kramer, and B. Chen, “New outer bounds on the capacity region of Gaussian interference channels,” in _Proc. IEEE Int. Symp. Inf. Theory (ISIT)_ , Jul. 2008, pp. 245–249.
* [12] S. A. Jafar and S. Vishwanath, “Generalized degrees of freedom of the symmetric Gaussian K user interference channel,” _IEEE Trans. Inf. Theory_ , vol. 56, no. 7, pp. 3297–3303, Jul. 2010.
* [13] V. R. Cadambe and S. A. Jafar, “Interference alignment and a noisy interference regime for many-to-one interference channels,” _Submitted to IEEE Trans. Inf. Theory_ , Dec. 2009. [Online]. Available: http://arxiv.org/pdf/0912.3029
* [14] L. Zhou and W. Yu, “On the symmetric capacity of the k-user symmetric cyclic Gaussian interference channel,” in _Proc. IEEE Conf. Inf. Sciences and Systems (CISS)_ , Mar. 2010, pp. 1–6.
* [15] ——, “On the capacity of the $K$-user cyclic Gaussian interference channel,” _Submitted to IEEE Trans. Inf. Theory_ , Oct. 2010. [Online]. Available: http://arxiv.org/abs/1010.1044
* [16] O. Somekh, B. M. Zaidel, and S. Shamai, “Sum rate characterization of joint multiple cell-site processing,” _IEEE Trans. Inf. Theory_ , vol. 53, no. 12, pp. 4473–4497, Dec. 2007.
* [17] A. D. Wyner, “Shannon-theoretic approach to a Gaussian cellular multiple-access channel,” _IEEE Trans. Inf. Theory_ , vol. 40, no. 6, pp. 1713–1727, Nov. 1994.
* [18] Y. Liang and A. Goldsmith, “Symmetric rate capacity of cellular systems with cooperative base stations,” in _Proc. Global Telecommun. Conf. (Globecom)_ , Nov. 2006, pp. 1–5.
* [19] J. Sheng, D. N. C. Tse, J. Hou, J. B. Soriaga, and R. Padovani, “Multi-cell downlink capacity with coordinated processing,” in _Proc. Inf. Theory and App. (ITA)_ , Jan. 2007, pp. 1–5.
* [20] K. Kobayashi and T. S. Han, “A further consideration on the HK and the CMG regions for the interference channel,” in _Proc. Inf. Theory and App. (ITA)_ , Jan. 2007.
# Seasonal and regional characterization of horizontal stirring in the global
ocean
Ismael Hernández-Carrasco,1 Cristóbal López,1∗
Emilio Hernández-García1 and Antonio Turiel,2
1Instituto de Física Interdisciplinar y Sistemas Complejos (CSIC-UIB)
07122 Palma de Mallorca, Spain
2Institut de Ciències del Mar, CSIC
Passeig Marítim de la Barceloneta 37-49, 08003 Barcelona, Spain
∗To whom correspondence should be addressed; E-mail: clopez@ifisc.uib.es
###### Abstract
Recent work on Lagrangian descriptors has shown that Lyapunov Exponents can be
applied to observed or simulated data to characterize the horizontal stirring
and transport properties of the oceanic flow. However, a more detailed
analysis of regional dependence and seasonal variability was still lacking. In
this paper, we analyze the near-surface velocity field obtained from the Ocean
general circulation model For the Earth Simulator (OFES) using Finite-Size
Lyapunov Exponents (FSLE). We have characterized regional and seasonal
variability. Our results show that horizontal stirring, as measured by FSLEs,
varies seasonally, with maximum values in Summer. FSLEs also vary strongly
depending on the region: we have first characterized the stirring
properties of Northern and Southern Hemispheres, then the main oceanic basins
and currents. We have finally studied the relation between averages of FSLE
and some Eulerian descriptors such as Eddy Kinetic Energy (EKE) and vorticity
($\omega$) over the different regions.
## 1 Introduction
A detailed knowledge of the transport, dispersion, stirring and mixing
mechanisms of water masses across the global ocean is of crucial interest to
fully understand, for example, heat and tracer budgets, or the role of oceans
in climate regulation. There has recently been intense activity in the study of
these processes from a Lagrangian perspective. Some works have addressed their
global variability using finite-time Lyapunov exponents (FTLEs)
computed from currents derived from satellite altimetry [4, 42]. These studies
quantify stirring intensity, and identify mesoscale eddies and other
Lagrangian Coherent Structures (LCSs). Furthermore, previous works [43]
pointed out relationships between Lagrangian and Eulerian quantifiers of
stirring/mixing activity (FTLEs and Eddy Kinetic Energy (EKE) or mean strain
rate).
Having in mind the implications for the distribution of biogeochemical
tracers, our goal is to extend the previous works to provide detailed seasonal
analysis and a comparative study between different ocean regions and different
scales: Earth’s hemispheres, ocean basins, and boundary currents. To this end
we use finite-size Lyapunov exponents (FSLEs). These quantities are related to
FTLEs since they also compute stretching and contraction time scales for
transport, but they depend on explicit spatial scales which are simple to
specify and to interpret in oceanographic contexts [10, 11, 20, 40]. In
particular we will focus on the impact on transport of mesoscale processes,
for which characteristic spatial scales as a function of latitude are well
known. We are also interested in checking the existence of relationships
between Lagrangian measures of horizontal stirring intensity, as given by
averages of finite-size Lyapunov exponents (FSLE), and other dynamic, Eulerian
quantities, such as EKE or vorticity. Such a functional relation does not need
to hold in general, but may be present when there is a connection between the
mechanisms giving rise to mesoscale turbulence (probably, baroclinic
instability) and horizontal stirring.
The paper is organized as follows. In Section 2 we describe the data and tools
used in this study. In section 3 we first present the geographical and
seasonal characterization of the horizontal stirring, and then we investigate
the relation of FSLE with EKE and vorticity. Finally, in the Conclusions we
present a summary and concluding remarks.
## 2 Data and Methods
Our dataset consists of an output from the Ocean general circulation model For
the Earth Simulator (OFES) [26, 25]. This is a near-global ocean model that
has been spun up for 50 years under climatological forcing taken from monthly
mean NCEP (United States National Centers for Environmental Prediction)
atmospheric data. After that period the OFES is forced by the daily mean NCEP
reanalysis for 48 years from 1950 to 1998. See [26] for additional details on
the forcing. The output of the model corresponds to daily data for the last 8
years. Horizontal angular resolution is the same in both the zonal, $\phi$,
and meridional, $\theta$, directions, with values of
$\Delta\theta=\Delta\phi=1/10^{\circ}$. The output has been interpolated to 54
vertical z-layers and has a temporal resolution of one day. The velocity
fields that we have used in this work correspond to the first two years, 1990
and 1991, of the output. Vertical displacements are unimportant during the
time scales we consider here, so that, although the horizontal layers are not true
isopycnals, most fluid elements remain in their initial horizontal layer
during the time of our Lagrangian computation. Thus we use in our analysis
horizontal velocities in single horizontal layers. We refer to recent works
[28, 6] for Lyapunov analyses considering vertical displacements. Unless
explicitly stated, our calculations are for the second output layer, at $7.56$
m depth, which is representative of the surface motion but limits the effect
of direct wind drag (we have also studied the layer at $97$ m depth; results
on this layer are briefly shown in Fig. 3). See [26] and [25] for a thorough
evaluation of the model performance.
Among Lagrangian techniques used to quantify ocean transport and mixing, local
Lyapunov methods are widely used. The idea behind them is to look at the
dispersion of a pair of particles as they are transported by the flow. To
calculate FTLEs, pairs of particles infinitesimally close are released and
their separation after a finite time is recorded; for FSLEs [2] two finite
distances are fixed, and the time taken by pairs of particles to separate from
the smallest to the largest is computed. Both methods thus measure how water
parcels are stretched by the flow, and they also quantify pair dispersion. The
methods can also be tailored to reveal two complementary pieces of
information. On the one hand they provide time-scales for dispersion and
stirring processes [1, 2, 9, 23, 10, 19, 30]. On the other, they are useful to
identify Lagrangian Coherent Structures (LCSs), persistent structures that
organize the fluid transport [15, 13, 8, 21, 22, 24, 14, 37, 4, 11, 40, 29].
This second capability arises because the largest Lyapunov values tend to
concentrate in space along characteristic lines which could often be
identified with the manifolds (stable and unstable) of hyperbolic trajectories
[15, 13, 14, 16, 37]. Since these manifolds are material lines that cannot be
crossed by fluid elements, they strongly constrain and determine fluid motion,
acting then as LCSs that organize ocean transport on the horizontal. Thus,
eddies, fronts, avenues and barriers to transport, etc. can be conveniently
located by computing spatial Lyapunov fields. We note however that more
accurate characterization of LCSs can be done beyond Lyapunov methods [16],
that high Lyapunov values can correspond also to non-hyperbolic structures
with high shear [12], and that an important class of LCSs is associated with
small, rather than large, values of the Lyapunov exponents [35, 5].
In the present work, however, we are more interested in obtaining the first
type of information, i.e. in extracting characteristic dispersion time-scales,
quantifying the intensity of stirring, for the different ocean regions and
seasons. In particular we want to focus on the transport process associated to
eddies and other mesoscale structures. Previous Lagrangian analyses of the
global ocean [4, 42] used FTLE to quantify such horizontal stirring. This
quantity depends on the integration time during which the pair of particles is
followed. FTLEs generally decrease as this integration time increases,
approaching the asymptotic value of the infinite-time Lyapunov exponent [42].
We find it difficult to specify finite values of this integration time for which
easy-to-interpret results would be obtained across the different ocean
regions. But for the mesoscale processes on which we want to focus,
characteristic spatial scales are related to the Rossby Deformation Radius
(RDR), with easily defined values and latitudinal dependence (see below).
Thus, we use in this paper FSLEs as a convenient way to identify
characteristics of stirring by mesoscale processes. FSLEs are also convenient
in finite ocean basins, where relevant spatial scales are also clearly imposed
[1, 7, 23]. As a quantifier of horizontal stirring, measuring the stretching
of water parcels, FSLEs give also information on the intensity of horizontal
mixing between water masses, although a complete correspondence between
stirring and mixing requires the consideration of diffusivity and of the
stretching directions [12].
In more detail, at a given point the FSLE (denoted by $\lambda$ in the
following) is obtained by computing the minimal time $\tau$ at which two fluid
particles, one centered on the point of study and the other initially
separated by a distance $\delta_{0}$, reach a final separation distance
$\delta_{f}$. At position x and time $t$, the FSLE is given by:
$\lambda(\textbf{x},t,\delta_{0},\delta_{f})=\tau^{-1}\ln(\delta_{f}/\delta_{0})$.
To estimate the minimal time $\tau$ we would need to integrate the
trajectories of all the points around the analyzed one and select the
trajectory that diverges first. We can obtain a very good approximation
of $\tau$ by just considering the four trajectories defined by the closest
neighbors of the point in the regular grid of initial conditions at which we
have computed the FSLE; the spacing of this grid is taken equal to
$\delta_{0}$. The equations of motion that describe the horizontal evolution
of particle trajectories are
$\displaystyle\frac{d\phi}{dt}=\frac{u(\phi,\theta,t)}{R\cos{\theta}},$ (1)
$\displaystyle\frac{d\theta}{dt}=\frac{v(\phi,\theta,t)}{R},$ (2)
where $u$ and $v$ stand for the zonal and meridional components of the surface
velocity field coming from the OFES simulations; $R$ is the radius of the
Earth ($6400$ $km$), $\phi$ is longitude and $\theta$ latitude. Numerically we
proceed by integrating Eqs. (1) and (2) using a standard fourth-order
Runge-Kutta scheme, with an integration time step $dt=6$ hours. Since
information is provided only on a discrete space-time grid, spatiotemporal
interpolation of the velocity data is required, which is performed bilinearly.
Initial conditions for which the prescribed final separation $\delta_{f}$ has
not been reached after integrating over all the available times in the data set
are assigned the value $\lambda=0$. A possible way to introduce small-scale
features that are not resolved by our simulated velocity fields is the
inclusion of noise terms in the equations of motion (1)-(2). We have recently
shown [20] that the main mesoscale features are maintained when this eddy
diffusivity is taken into account, though sub-mesoscale structures may change
considerably. At global scales we expect the effects of noise to be even
smaller.
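A minimal sketch of the FSLE evaluation for a single pair, assuming the pair separation has already been extracted from advected trajectories; the sampling step and separation thresholds mirror the values quoted in the text, but the exponentially growing toy separation is purely illustrative:

```python
import numpy as np

def fsle_from_separation(sep, dt, delta0, deltaf):
    """FSLE lambda = ln(deltaf/delta0)/tau, where tau is the first time the
    pair separation sep[k] (sampled every dt) reaches deltaf.
    Returns 0.0 if deltaf is never reached, the convention used above."""
    crossed = sep >= deltaf
    if not crossed.any():
        return 0.0
    tau = np.argmax(crossed) * dt        # index of first crossing times dt
    if tau == 0.0:
        return 0.0                       # already separated at release
    return np.log(deltaf / delta0) / tau

# Toy pair separating exponentially at a known rate (illustrative only).
delta0, deltaf = 0.1, 1.0        # degrees, cf. the grid spacing above
dt = 0.25                        # days, the 6-hour integration step
t = np.arange(0.0, 60.0, dt)
true_rate = 0.3                  # days^-1
sep = delta0 * np.exp(true_rate * t)
lam = fsle_from_separation(sep, dt, delta0, deltaf)
print(lam)                       # close to true_rate
```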
The field of FSLEs thus depends on the choice of two length scales: the
initial separation $\delta_{0}$ (which coincides with the lattice spacing of
the FSLE grid and is fixed in our computations to the model resolution,
$\delta_{0}$=$1/10^{\circ}$) and the final separation $\delta_{f}$. As in
previous works in middle latitudes [10, 11, 20] we will focus on transport
processes arising from the mesoscale structures. In these studies $\delta_{f}$
was taken to be about $110$ km, which is of the order of, but larger than, the
mesoscale size at middle latitudes. Note that $\delta_{f}$ should be a
decreasing function of latitude, since mesoscale structures decrease in size
with the Rossby Deformation Radius (RDR). We need not match the RDR exactly,
but only guarantee that our choice of $\delta_{f}$ is comparable to, though
larger than, mesoscale lengths, and that it is a smooth function of latitude,
to avoid inducing artifacts. We have therefore chosen
$\delta_{f}=1.3|\cos\theta|$ degrees; other reasonable choices lead to results
similar to those presented here.
We compute the FSLEs by backwards time integration. In this way we quantify
the fluid deformation by past stirring. When computing LCSs this leads to
structures that are easier to interpret, since they can be associated with the
actual shape of tracer filaments [21, 11]. However, given that forward and backward
exponents in incompressible flows are related by temporal shifts and spatial
distortions [17], and that we are interested in temporal and spatial averages
over relatively large scales, we do not expect significant differences when
using forward exponents to calculate the stirring quantifiers presented below.
This was explicitly checked in a similar framework in [10].
Lagrangian measurements have been shown to correlate well with several
Eulerian quantities at several scales [43, 42]. In particular it is pertinent
to correlate stirring with Eddy Kinetic Energy (EKE) since it is expected that
more energetic turbulent areas would also present stronger horizontal
stirring, mainly due to the spawning of eddies (see however [34, 33]). Given
an integration period $T$ long enough (for instance $T$= one year), the EKE
(per unit of mass) is given by: $EKE=\frac{1}{2}\left\langle u^{\prime
2}+v^{\prime 2}\right\rangle$, where $u^{\prime}$ and $v^{\prime}$ are the
instantaneous deviations of the zonal and meridional velocities from the average over
the period $T$, and the brackets denote average over that period. Another
Eulerian measurement used in this work is the surface relative vorticity,
given by $\omega=\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}$,
with positive (vs negative) $\omega$ associated to cyclonic (vs anticyclonic)
motion in the Northern Hemisphere (opposite signs in the Southern Hemisphere).
An additional Eulerian candidate to look for Lagrangian correspondences is the
local strain rate, but it has been shown [43, 42] to scale linearly with
$EKE^{1/2}$ and thus it will not be explicitly considered here.
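The two Eulerian diagnostics can be sketched as follows for gridded velocity output. This is a simplified flat-geometry version: the function names and the uniform-spacing assumption are ours, and the metric factors of the spherical grid are ignored:

```python
import numpy as np

def eddy_kinetic_energy(u, v):
    """EKE per unit mass, 0.5*<u'^2 + v'^2>, for velocity arrays of shape
    (time, lat, lon); u', v' are deviations from the time average."""
    up = u - u.mean(axis=0)
    vp = v - v.mean(axis=0)
    return 0.5 * (up ** 2 + vp ** 2).mean(axis=0)

def relative_vorticity(u, v, dx, dy):
    """omega = dv/dx - du/dy by centred differences on a uniform grid with
    spacings dx, dy (flat geometry; spherical metric terms ignored)."""
    return np.gradient(v, dx, axis=-1) - np.gradient(u, dy, axis=-2)

# Example: solid-body rotation u = -y, v = x gives omega = 2 everywhere,
# and a time-independent field gives EKE = 0.
yy, xx = np.meshgrid(np.arange(8.0), np.arange(8.0), indexing="ij")
u = np.stack([-yy] * 4)    # 4 identical "daily" snapshots
v = np.stack([xx] * 4)
print(relative_vorticity(u[0], v[0], 1.0, 1.0).mean())   # 2.0
print(eddy_kinetic_energy(u, v).max())                   # 0.0
```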
Conditioned averages of $\lambda$ as a function of another variable $y$ (where
$y$ is EKE$^{1/2}$ or $\omega$), introduced in Subsection 3.4, are obtained by
discretizing the allowed values of $y$ into bins; 100 bins were taken, each
one defining a range of values $(y_{n},y_{n+1})$ and represented by the
average value $\hat{y}_{n}=\frac{y_{n}+y_{n+1}}{2}$. So, for each discretized
value of $\hat{y}_{n}$ the average of all the values of $\lambda$ which occur
coupled with a value in $(y_{n},y_{n+1})$ is computed. The result is an
estimate of the conditioned average $\tilde{\lambda}(y)$ (which is a function
of $y$) at the points $\hat{y}_{n}$.
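The binning procedure above can be sketched as follows; the function name and the vectorized implementation are ours, and only the binning scheme itself follows the text:

```python
import numpy as np

def conditioned_average(lam, y, nbins=100):
    """Estimate of the conditioned average lambda~(y): bin y into nbins
    equal-width bins and average the lam values falling in each bin.
    Returns bin centres y_hat and the per-bin means (NaN for empty bins)."""
    edges = np.linspace(y.min(), y.max(), nbins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(y, edges) - 1, 0, nbins - 1)
    sums = np.bincount(idx, weights=lam, minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    with np.errstate(invalid="ignore"):
        means = sums / counts          # NaN where a bin is empty
    return centres, means

# Sanity check: if lam depends deterministically on y, the conditioned
# average should recover that dependence.
y = np.linspace(0.0, 1.0, 1000)
centres, means = conditioned_average(2.0 * y, y, nbins=10)
print(np.nanmax(np.abs(means - 2.0 * centres)))   # small (< 0.02)
```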
## 3 Results
### 3.1 Global horizontal stirring from FSLE
In Fig. 1 we present a map of FSLEs at a given time. Typical values are of the
order of $0.1-0.6$ $days^{-1}$, which correspond well to the horizontal
stirring times expected at the mesoscale, in the range of days to weeks. Spatial
structures, from filaments and mesoscale vortices to larger ones, are clearly
identified; see a representative zoom of the South Atlantic Ocean (Bottom of
Fig. 1), where the typical filamental structures generated by the horizontal
motions are evident.
Instantaneous maps of FSLEs have a significant signature of short-lived fast
processes and are adequate for extracting LCSs, but we are more interested in
slower processes at larger scales. We have hence taken time averages of FSLEs
over different periods, in order to select the low-frequency, large-scale
signal. In this way we can easily characterize regions in the global ocean
with different horizontal stirring activity; areas with larger values of
averaged FSLEs are identified as zones with more persistent horizontal
stirring [10], as shown in Fig. 2a. As expected, we can observe that high
stirring values correspond to Western Boundary Currents (WBCs) and to the
Antarctic Circumpolar Current, while the rest of the ocean and the Eastern
Boundary Currents (EBCs) display significantly lower values.
### 3.2 Geographical characterization of horizontal stirring
A convenient quantity for characterizing stirring in a prescribed geographical
area $A$, introduced by [10], is simply the spatial average of the FSLEs over
that area at a given time, denoted by $<\lambda({\bf x},t)>_{A}$.
Time series of this quantity for the whole ocean and the Northern
and Southern hemispheres are shown in Fig. 3a. It is worth noting that the
stirring intensity is typically larger in the Northern Hemisphere than in the
Southern one.
Further information can be obtained by analyzing the FSLE Probability
Distribution Functions (PDFs). In Fig. 3b we present the PDFs for both
hemispheres and the whole ocean; the required histograms are built using
$\lambda$ values computed once every week during one year (52 snapshots) at
each point of the spatial FSLE grid in the area of interest. Each one of these
PDFs is broad and asymmetric, with a small mode $\lambda_{m}$ (i.e., the value
of $\lambda$ at which the probability attains its maximum) and a heavy tail.
Similarly to what was discussed by [43] and [42] for the FTLE case, these PDFs
are well described by Weibull distributions with appropriate values for the
defining parameters. We note that an explicit relationship between FTLE and
FSLE distributions was derived by [41], but we have not checked if our flow is
in the regime considered in that reference. The mode $\lambda_{m}$ for the
Southern Hemisphere is smaller than that of the Northern Hemisphere. Thus, the
Northern Hemisphere is globally more active in terms of horizontal dispersion
than the Southern one. The same conclusions hold when looking at seasonally
averaged instead of annually averaged quantities (not shown).
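As an illustration of this kind of distribution, the mode $\lambda_{m}$ of a histogram of Weibull-distributed synthetic samples can be compared with the analytic Weibull mode; the shape and scale values below are illustrative choices, not fits to the OFES data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for 52 weekly FSLE maps: Weibull-distributed samples.
# Shape k and scale c are illustrative, not fitted to the paper's data.
k, c = 1.6, 0.15
lam = c * rng.weibull(k, size=52 * 200 * 200)

counts, edges = np.histogram(lam, bins=200, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
lam_mode = centres[np.argmax(counts)]            # empirical mode lambda_m
mode_theory = c * ((k - 1.0) / k) ** (1.0 / k)   # analytic Weibull mode (k > 1)
print(lam_mode, mode_theory)
```

The heavy right tail of such a distribution makes the mean noticeably larger than the mode, matching the broad, asymmetric shape described above.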
Taking into account the observed differences between the Northern and Southern
Hemispheres, we have repeated the same analyses over the main ocean basins in
an attempt to isolate the factors that could contribute to the observed
behaviors. In Fig. 3c we show the time evolution of $<\lambda>_{A}$
as computed over the six main ocean basins (North Atlantic, South Atlantic,
North Pacific, South Pacific, Indian Ocean and Southern Ocean), compared to
the one obtained over the global ocean. The Southern Ocean happens to be the
most active (in terms of horizontal stirring) because of the presence of the
Antarctic Circumpolar Current, followed by the Atlantic and Indian Oceans, and
finally the Pacific. We have also computed (Fig. 3d) PDFs of FSLE for the
different oceans. As before, we obtain broad, asymmetric PDFs with small modes
and heavy tails. The smallest mode $\lambda_{m}$ corresponds to the Southern
Pacific, meaning than there is less horizontal stirring activity in this
basin, in support of what is also visually evident in Fig. 3c. On the opposite
regime we observe that the largest FSLE values correspond to the Southern
Ocean. For the rest of oceans the PDFs are rather coincident with the whole
ocean PDF.
We have gone further to a smaller scale, by repeating the same analyses for
the main currents in the global ocean: Gulf Stream, Benguela, Kuroshio,
Mozambique, East Australian, California, Peru and Canary currents. As
evidenced by Fig. 3e there is a clear separation in two groups of currents in
terms of their horizontal stirring properties: the most active currents
(including Gulf Stream, Kuroshio, Mozambique and East Australian currents, all
of them WBCs) and the least active ones (including Benguela, California, Peru
and Canary Currents, which correspond to EBCs). The distinction persists in the
PDF analysis, where two groups are again clearly distinguishable: a) narrow PDFs
highly peaked around a very small value of $\lambda$ (EBCs); b) PDFs peaking
at a slightly larger value of $\lambda$, but significantly broader (WBCs).
Since the PDFs of the WBCs are broader, large values of FSLEs are found more
frequently, i.e., more intense stirring occurs. This appears to be a
reflection of the well-known mechanism of Western Intensification described by [39].
Also, the asymmetry and tails of the PDFs show that the FSLE field is
inhomogeneous and that there are regions with very different dispersion
properties. Following [3], the asymmetry and heavy tails make the PDFs quite
different from the Gaussians expected under more homogeneous mixing. These
characteristics are thus indications that chaotic motion plays a dominant role
over turbulent dynamics at smaller scales. That is, the large-scale velocity
features control the dynamics, something that is also reflected in the
filamentary patterns of the LCS shown in Fig. 1.
### 3.3 Seasonal characterization of horizontal stirring
Horizontal stirring in the global ocean has a strong seasonal variability, as
shown in Fig. 3a. Maximum values of $<\lambda>_{A}$ in the Northern Hemisphere
are reached early in that hemisphere's Summer, and minimum ones early in its
Winter. The same happens for the Southern Hemisphere relative to its own
Summer and Winter periods.
Seasonally averaged FSLEs in the whole ocean over the four seasons are shown
in Fig. 4. The spatial pattern is rather similar in all of them, and also
similar to the annually-averaged spatial distribution shown in Fig. 2a. Higher
FSLE levels are found at the Gulf Stream and Kuroshio in the Northern
Hemisphere during that hemisphere's Spring and Summer, and analogously at the
East Australian and Mozambique Currents in the Southern Hemisphere during its
own Spring and Summer.
Following [44], to analyze which areas are more sensitive to seasonal changes,
we computed the standard deviation of the annual time series of FSLE (see Fig.
5). Larger values appear to correspond to the more energetic regions, which
thus show a higher seasonal variability. More information about the seasonal
variability of different oceanic regions can be obtained again from Fig. 3.
Time evolution of stirring in the North Atlantic and North Pacific, shown in
Fig. 3c, attains high values in Spring and Summer, and minimum ones in Winter.
Concerning the main currents, we found that values of stirring in the
Kuroshio, Gulf Stream, East Australian, and Mozambique Currents increase in
Spring and Summer and decrease in Winter (see Fig. 3e). This seasonal
variability is also present in EBCs, but the amplitude of the changes is
smaller than in WBCs.
The generic increase in mesoscale stirring in Summer detected here with
Lyapunov methods has also been identified in previous works at several
locations [18, 32, 27, 31, 44] (in most cases from the EKE behavior extracted
from altimetric data). Although no consensus on a single mechanism seems to
exist (see the discussion in [44]), enhanced baroclinic instability has been
proposed in particular areas [32, 31], as well as reduced dissipation during
Summer [44].
We have also calculated longitudinal (zonal) averages of the time averages of
FSLE in Figs. 2a and 4. This is shown in Fig. 6 (top panel for the Northern
Hemisphere and bottom panel for the Southern one). First of all, we see that
horizontal stirring has a general tendency to increase with latitude in both
hemispheres. One may wonder if this is a simple consequence of the decreasing
value of $\delta_{f}$ we take when increasing latitude. We have checked that
the same increasing tendency remains when the calculation is redone with a
constant $\delta_{f}$ over the whole globe (not shown), so that this trivial
effect is properly compensated by the factor $\ln(\delta_{f}/\delta_{0})$ in
the FSLE definition, and what we see in Fig. 6 is really a stronger stirring
at higher latitudes. Note that this type of dependence is more similar to the
equivalent sea surface slope variability, $K_{sl}$, calculated from altimetry
in [38] than to the raw zonal dependency of the EKE obtained in the same
paper. Since $K_{sl}$ is intended to represent Sea Surface Height variability
with the large scale components filtered out, we see again that our FSLE
calculation is capturing properly the mesoscale components of ocean stirring
observed by other means.
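The zonal-averaging step described above can be sketched as follows, assuming a regular latitude-longitude grid with NaN over land (the function name and the toy field are ours, not the paper's):

```python
import numpy as np

def zonal_mean(fsle, lats):
    """Zonal (longitudinal) average of a time-averaged FSLE map.

    fsle: 2D array (lat, lon); lats: 1D latitudes in degrees.
    Returns one profile per hemisphere, each indexed from the Equator
    outward, so both can be plotted against absolute latitude as in Fig. 6.
    """
    zonal = np.nanmean(fsle, axis=1)      # average over longitude
    north = zonal[lats >= 0]              # Equator -> North Pole
    south = zonal[lats < 0][::-1]         # reorder: Equator first
    return north, south

# Toy field whose stirring grows with |latitude|, as found in the paper.
lats = np.linspace(-80, 80, 161)
field = 0.05 + 0.002 * np.abs(lats)[:, None] + np.zeros((161, 360))
north, south = zonal_mean(field, lats)
```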
It is also clearly seen that latitudinal positions of local maxima of stirring
correspond to the main currents (e.g. Gulf Stream and Kuroshio around 35∘N;
Mozambique, Brazil and East Australian around 25∘S). The picture in Fig. 6
confirms that horizontal stirring is somewhat higher in local Summer at mid-
latitudes, where the main currents are, for both hemispheres. At low and high
latitudes, however, horizontal stirring is higher in local Winter for both
hemispheres, which is particularly visible in the Northern Hemisphere at high
latitudes. A similar behavior was noted by [44] in the subpolar North Pacific
and part of the subpolar North Atlantic for EKE derived from altimetry.
Possible causes pointed out there are barotropic instabilities or direct wind
forcing.
### 3.4 Lagrangian-Eulerian relations
Lagrangian measures such as FSLEs provide information on the cumulative effect
of the flow at a given point, since they integrate the time evolution of water
parcels arriving at that point. They are not directly related to instantaneous
measurements such as those provided by Eulerian quantities like EKE or
vorticity, unless some kind of dynamic equilibrium or ergodicity-type property
is established so that the time-integrated effect can be related to the
instantaneous spatial pattern (for instance, if the spatial arrangement of
eddies at a given time gives an idea of the typical time evolution of a water
parcel) or its averages. EKE gives information on the turbulent component of
the flow, which is associated with high eddy activity, while relative
vorticity $\omega$ takes into account the shear and the rotation of the whole
flow. Eventual establishment of such a dynamic equilibrium would allow one to
substitute, in some instances, time averages along trajectories with spatial
averages, thus providing a useful tool for rapid diagnostics of the sea state.
Thus, we will relate the Lagrangian stirring (as measured by the FSLEs) to an
instantaneous, Eulerian, state variable. Of course, the Lagrangian-Eulerian
relations will be useful only if the same, or only a few, functional
relationships hold in different ocean regions. If the relation had to be
recalculated for every study zone, the predictive power would be completely
lost.
We have thus explored the functional dependence of FSLEs on EKE and relative
vorticity. In Fig. 2 the time averages of these three fields are shown.
Comparing FSLEs (Fig. 2a) and EKE (Fig. 2b), we see that high and low values
of these two quantities are generally localized in the same regions. There are
a few exceptions, such as the North Pacific Subtropical Countercurrent, which
despite being energetic [32] does not seem to produce enough pair dispersion
and stretching at the scales we are considering. It was already shown by [43]
and [42] that variations in horizontal stirring are closely related to
variations in mesoscale activity as measured by EKE. Note the similarity, also
in the range of values, of the EKE plot in Fig. 2b, obtained from a numerical
model, to that of [42] (their first figure), which is obtained from altimetry
data. In [43] a proportionality between the stretching rate (as measured by
FTLE) and $EKE^{1/4}$ was inferred for the Tasman Sea (a relation was found,
but no fit was attempted, in the global data set described in [42]).
In order to verify whether a similar functional dependence between FSLE and
EKE could hold for our global-scale dataset, we have computed different
conditioned averages (see Section 2), shown in Fig. 7: in the left panel we
present the conditioned average $\tilde{\lambda}(EKE)$, while in the right
panel $\tilde{\lambda}(\omega)$ is shown; both functions were derived from the
time-averaged variables shown in Fig. 2.
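The conditioned average amounts to binning one time-averaged field and averaging the co-located values of the other within each bin; a possible implementation (all names are ours, and the synthetic $EKE^{1/4}$ check is illustrative only):

```python
import numpy as np

def conditioned_average(x, y, nbins=30):
    """Average of y conditioned on binned values of x.

    Used here as lambda~(EKE): bin the time-averaged EKE field, then
    average the co-located time-averaged FSLE values in each bin.
    """
    x, y = x.ravel(), y.ravel()
    ok = np.isfinite(x) & np.isfinite(y)
    x, y = x[ok], y[ok]
    edges = np.linspace(x.min(), x.max(), nbins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, nbins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([y[idx == i].mean() if np.any(idx == i) else np.nan
                      for i in range(nbins)])
    return centers, means

# Synthetic check: if lambda = EKE**0.25 plus noise, the conditioned
# average should recover a smooth, monotonically increasing curve.
rng = np.random.default_rng(1)
eke = rng.uniform(1.0, 100.0, 10_000)
lam = eke**0.25 + rng.normal(0, 0.05, eke.size)
centers, means = conditioned_average(eke, lam)
```

The conditioned standard deviation discussed below can be computed the same way by replacing `mean()` with `std()` in each bin.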
The smooth curve depicted in Fig. 7, left, is an indication of a well-defined
functional relationship between $\overline{\lambda}$ and $\overline{EKE}$,
similar to the ones found by [43] and [42] from altimeter data. Notice,
however, that the plot just gives conditioned averages; the conditioned
standard deviation, which is a measure of randomness and fluctuations, is not
negligible. An idea of the scatter is given for selected areas in Fig. 8.
Considerably less compact relationships were obtained in the Mediterranean Sea
[11]. Fig. 8 shows that very different dynamical regimes identified by
different values of $\lambda$ may correspond to the same level of EKE. As a
Lagrangian diagnostic, we believe that FSLE is more suitable to link
turbulence properties to tracer dynamics than Eulerian quantifiers such as
EKE. FSLEs provide complementary information since very energetic areas, with
large typical velocities, do not necessarily correspond to high stretching
regions. A paradigmatic example is a jet, or a shear flow, where small
dispersions may be found because of the absence of chaotic trajectories. A
functional relation between $\overline{\lambda}$ and $\overline{\omega}$ is
also obtained (Fig. 7, right), although it is much noisier and probably worse-
behaved. When particularizing to the different regions, we see that for EKE
the WBCs are all roughly associated with one particular functional relation
for the conditioned average $\overline{\lambda}$, while the EBCs gather around
a different one. Neither of the two prototype Lagrangian-Eulerian relations
fits well the relation $\lambda\propto EKE^{1/4}$ proposed for the FTLE by
[43] from altimeter data in the Tasman Sea. The data are too scarce to make a
reliable fit of the conditioned average, in particular for the EBCs. In Fig. 8 we
see that relations of the form $\lambda\propto EKE^{\alpha}$ can be reasonably
fitted to scatter plots of the data, with $\alpha$ larger than the $0.25$
obtained in [43], especially for WBCs, where $\alpha$ is in the range
$(0.34,0.40)$. This quantitative difference between our results and those of
[43] may rest upon the fact that they considered just the Tasman Sea, whereas
we consider the different oceans. Other sources for the difference could be
that we use FSLEs of velocity data from a numerical model, instead of FTLEs
from altimetry, or that they use a grid of relatively low resolution,
$0.5^{\circ}\times 0.5^{\circ}$, while ours is $0.1^{\circ}\times
0.1^{\circ}$. Perhaps their coarser resolution is not enough to resolve the
filaments, which are the most relevant structures in our FSLE calculations.
Despite this, the qualitative shape of the Lagrangian-Eulerian relations is
similar to that of the previous works [43, 42].
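A fit of the form $\lambda\propto EKE^{\alpha}$ like the ones quoted above can be obtained by least squares in log-log space; a sketch with synthetic data of known exponent (this is a generic fitting recipe, not the paper's actual procedure, which is not spelled out):

```python
import numpy as np

def powerlaw_fit(eke, lam):
    """Least-squares fit of lambda = c * EKE**alpha in log-log space.

    The paper reports alpha in (0.34, 0.40) for WBCs, versus the 0.25 of
    [43] for the Tasman Sea; this shows only the fitting step itself.
    """
    ok = (eke > 0) & (lam > 0)
    alpha, logc = np.polyfit(np.log(eke[ok]), np.log(lam[ok]), 1)
    return np.exp(logc), alpha

# Synthetic scatter with a known exponent and multiplicative noise.
rng = np.random.default_rng(2)
eke = rng.uniform(1.0, 200.0, 5000)
lam = 0.03 * eke**0.37 * np.exp(rng.normal(0, 0.1, eke.size))
c, alpha = powerlaw_fit(eke, lam)
```

Note that if, as in Fig. 8, one fits $y=cX^{b}$ against $X=EKE^{1/2}$ instead of $EKE$, the exponent relates to $\alpha$ through $\alpha=b/2$.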
In order to analyze the ocean regions beyond boundary currents, we have also
computed the conditioned averages for the Equatorial Current and for a
$40^{\circ}$ longitude by $20^{\circ}$ latitude sub-region centered at
$245^{\circ}$ longitude and $-30^{\circ}$ latitude in the middle of the sub-
tropical gyre in the Pacific Ocean (and hence an area of scarce horizontal
stirring activity). We see (Fig. 7, left) that the EBC Lagrangian-Eulerian
relation is valid for these two areas. We have also verified that the
relations derived from annually-averaged quantities remain the same for
seasonal averages (not shown). The important point here is the occurrence of
just two different shapes for the EKE-FSLE relations across very different
ocean regions, which may make this type of parametrization of a Lagrangian
quantity in terms of an Eulerian one useful. For the relations of FSLE in
terms of relative vorticity, a distinction between WBCs and EBCs still exists,
but the results are less clear and the class separation is not as sharp as in
the case of EKE (see Fig. 7, right). For instance, the Gulf Stream and
Kuroshio, despite both being WBCs, do not seem to share the same
Lagrangian-Eulerian relation, which limits its usefulness.
## 4 Conclusions
In this paper we have studied the space and time variability of horizontal
stirring in the global ocean by means of FSLE analysis of the outputs of a
numerical model. Similarly to what has been done in previous works, FSLEs can
be taken as indicators of horizontal stirring. Being Lagrangian variables,
they integrate the evolution of water parcels and thus they are not completely
local quantities. We have taken averages to analyze two main time scales
(annual and seasonal) and three space scales (planetary scale, ocean scale and
horizontal boundary scale). Our velocity data were obtained by using
atmospheric forcing from NCEP. Structures and dynamics at small scales would
probably be more realistic if the model were forced with higher-resolution
observed winds, as in [36]. But since we have not studied the first model
layer, which is directly driven by the wind, and since we have focused on
averages over relatively large time and spatial scales, we do not expect many
differences when using a more detailed forcing.
Horizontal stirring intensity tends to increase with latitude, probably as a
result of having higher planetary vorticity and stronger wind action at high
latitudes, or rather, as argued in [44], because of barotropic instabilities.
Certainly, new studies are required to evaluate these hypotheses. At a
planetary scale we observe a significantly different behavior in the Northern
hemisphere with respect to the Southern Hemisphere, the first being on average
more active in terms of horizontal stirring than the second one. This
difference can probably be explained by the greater relative areas of
subtropical gyres in the Southern Hemisphere with small stirring activity
inside them, which compensates in the averages the great activity of the
Antarctic Circumpolar Current. At the ocean scale, we observe that the level
of stirring activity tends to decay as the size of the subtropical gyres
increases, which is an indication that the most intense horizontal stirring
takes place at the geographical boundaries of the ocean basins. For that
reason, we have finally analyzed the behavior of stirring at the boundary
scale, which is mainly related to WBCs and EBCs. EBCs behave in a similar way
to the ocean interior in terms of all the quantities we have computed,
including the Lagrangian-Eulerian relations. Thus, the main hot spots of
horizontal stirring in the ocean are the WBCs. The observed small mode in the
global FSLE PDFs also indicates that
horizontal stirring is not very intense for the vast majority of the ocean,
but the heavy tails indicate the existence of large excursions at some
specific, stretched locations (e.g., inside the WBCs and other smaller scale
currents active enough to generate stirring). This type of uneven distribution
is characteristic of multifractal systems arising from large scale chaotic
advection, something that was discussed for oceanic FSLEs in [20].
Regarding seasonal variability, we generally observe stronger stirring during
each hemisphere's Summer. Medium and high latitudes behave, however, in
opposite ways: stirring is more active during the hemisphere's Summer at
medium latitudes and during its Winter at high latitudes. Medium latitudes are
strongly affected by the behavior of the WBCs, which experience an
intensification of horizontal stirring during Summer [18, 32, 27, 31, 44]. As
commented before, the intense high-latitude Winter stirring could be the
result of a stronger action of the wind during that period or of barotropic
instabilities [44], and dedicated studies are required to evaluate these
hypotheses.
Finally, we have studied the connection between the time-extended Lagrangian
FSLEs and instantaneous Eulerian quantities such as EKE and relative
vorticity. For the case of EKE, the different ocean regions give rise to just
two different Lagrangian-Eulerian relations, associated with an intense and a
weak stirring regime. The existence of these two regimes implies that pair
dispersion and stretching strength are larger in one class of ocean areas
(represented by the WBCs) than in another (e.g., the EBCs) at mesoscales, even
when they have the same EKE.
## Acknowledgments
I.H.-C., C.L. and E.H.-G. acknowledge support from MICINN and FEDER through
project FISICOS (FIS2007-60327); A. Turiel has received support from the
Interreg TOSCA project (G-MED09-425) and the Spanish MICINN project MIDAS-6
(AYA2010-22062-C05-01). The OFES simulation was conducted on the Earth
Simulator with the support of JAMSTEC. We thank the Earth Simulator
Center-JAMSTEC team for providing these data.
## References
* [1] V Artale, G Boffetta, A Celani, M Cencini, and A Vulpiani. Dispersion of passive tracers in closed basins: Beyond the diffusion coefficient. Phys. Fluids, 9:3162–3171, 1997.
* [2] E Aurell, G Boffetta, A Crisanti, G Paladin, and A Vulpiani. Predictability in the large: an extension of the Lyapunov exponent. J. Phys. A, 30:1–26, 1997.
* [3] F.J. Beron-Vera. Mixing by low- and high-resolution surface geostrophic currents. J. Geophys. Res., 115:C10027, 2010.
* [4] F.J. Beron-Vera, M.J. Olascoaga, and G.J. Goni. Oceanic mesoscale eddies as revealed by Lagrangian coherent structures. Geophys. Res. Lett, 35:L12603, 2008.
* [5] Francisco J Beron-Vera, María J Olascoaga, Michael G Brown, Huseyin Koçak, and Irina I Rypina. Invariant-tori-like Lagrangian coherent structures in geophysical flows. Chaos, 20(1):017514, 2010.
* [6] J.H. Bettencourt, C. López, and E. Hernández-García. Oceanic three-dimensional Lagrangian coherent structures: A study of a mesoscale eddy in the Benguela upwelling region. Ocean Modell., to appear, 2012.
* [7] G. Boffetta, A. Celani, M. Cencini, G. Lacorata, and A. Vulpiani. Non-asymptotic properties of transport and mixing. Chaos, 10:50–60, 2000.
* [8] G. Boffetta, G. Lacorata, G. Redaelli, and A. Vulpiani. Detecting barriers to transport: A review of different techniques. Physica D, 159:58–70, 2001.
* [9] G. Buffoni, P. Falco, A. Griffa, and E. Zambianchi. Dispersion processes and residence times in a semi-enclosed basin with recirculating gyres: An application to the Tyrrhenian sea. J. Geophys. Res.-Oceans, 102:18699–18713, 1997.
* [10] F. d’Ovidio, V. Fernández, E. Hernández-García, and C. López. Mixing structures in the Mediterranean sea from Finite-Size Lyapunov Exponents. Geophys. Res. Lett., 31:L17203, 2004.
* [11] F. d’Ovidio, J. Isern-Fontanet, C. López, E. Hernández-García, and E. García-Ladona. Comparison between Eulerian diagnostics and Finite-Size Lyapunov Exponents computed from altimetry in the Algerian basin. Deep-Sea Res. I, 56:15–31, 2009.
* [12] Francesco d’Ovidio, Emily Shuckburgh, and Bernard Legras. Local mixing events in the upper-troposphere and lower-stratosphere. Part I: detection with the Lyapunov diffusivity. J. Atmos. Sci., 66(12):3678–3694, 2009.
* [13] G. Haller. Distinguished material surfaces and coherent structure in three-dimensional fluid flows. Physica D, 149:248–277, 2001.
* [14] G. Haller. Lagrangian coherent structures from approximate velocity data. Phys. Fluids, 14:1851–1861, 2002.
* [15] G. Haller and G. Yuan. Lagrangian coherent structures and mixing in two-dimensional turbulence. Physica D, 147:352–370, 2000.
* [16] George Haller. A variational theory of hyperbolic Lagrangian coherent structures. Physica D, 240(7):574 – 598, 2011. Erratum and addendum: Physica D 241, 439–441 (2012).
* [17] George Haller and Themistoklis Sapsis. Lagrangian coherent structures and the smallest finite - time Lyapunov exponent. Chaos, 21(2):023115, 2011.
* [18] G Halliwell, D B Olson, and G Peng. Stability of the Sargasso Sea subtropical frontal zone. J. Phys. Oceanogr., 24(6):1166–1183, 1994.
* [19] A. C. Haza, A. C. Poje, T. M. Özgökmen, and P. Martin. Relative dispersion from a high-resolution coastal model of the Adriatic Sea. Ocean Modell., 22:48–65, 2008.
* [20] Ismael Hernández-Carrasco, Cristóbal López, Emilio Hernández-García, and Antonio Turiel. How reliable are finite-size Lyapunov exponents for the assessment of ocean dynamics? Ocean Modelling, 36(3-4):208 – 218, 2011.
* [21] B. Joseph and B. Legras. Relation between kinematic boundaries, stirring, and barriers for the Antarctic polar vortex. J. Atm. Sci., 59:1198–1212, 2002.
* [22] T-Y. Koh and B. Legras. Hyperbolic lines and the stratospheric polar vortex. Chaos, 12(2):382–394, 2002.
* [23] G Lacorata, E Aurell, and A Vulpiani. Drifter dispersion in the Adriatic sea: Lagrangian data and chaotic model. Ann. Geophysicae, 19:121–129, 2001.
* [24] G. Lapeyre. Characterization of finite-time Lyapunov exponents and vectors in two-dimensional turbulence. Chaos, 12(3):688–698, 2002.
* [25] Yukio Masumoto. Sharing the results of a high-resolution ocean general circulation model under a multi-discipline framework - a review of OFES activities. Ocean Dynamics, 60:633–652, 2010.
* [26] Yukio Masumoto, Hideharu Sasaki, Takashi Kagimoto, Nobumasa Komori, Akio Ishida, Yoshikazu Sasai, Toru Miyama, Tatsuo Motoi, Humio Mitsudera, Keiko Takahashi, and et al. A fifty-year eddy-resolving simulation of the world ocean - Preliminary outcomes of OFES (OGCM for the Earth Simulator). J. of the Earth Simulator, 1(April):35–56, 2004.
* [27] Rosemary Morrow, Aurore Brut, and Alexis Chaigneau. Seasonal and interannual variations of the upper ocean energetics between tasmania and antarctica. Deep-Sea Res. I, 50(3):339–356, 2003.
* [28] T. M. Özgökmen, A. C. Poje, P. F. Fischer, and A. C. Haza. Large eddy simulations of mixed layer instabilities and sampling strategies. Ocean Modell., 39:311–331, 2011.
* [29] Thomas Peacock and John Dabiri. Introduction to Focus Issue: Lagrangian coherent structures. Chaos, 20(1):017501, 2010.
* [30] Andrew C Poje, Angelique C Haza, Tamay M Özgökmen, Marcello G Magaldi, and Zulema D Garraffo. Resolution dependent relative dispersion statistics in a hierarchy of ocean models. Ocean Modell., 31:36–50, 2010.
* [31] B Qiu and S Chen. Seasonal modulations in the eddy field of the South Pacific ocean. J. Phys. Oceanogr., 34(7):1515–1527, 2004.
* [32] Bo Qiu. Seasonal eddy field modulation of the North Pacific Subtropical Countercurrent: TOPEX/Poseidon observations and theory. J. Phys. Oceanogr., 29(10):2471–2486, 1999.
* [33] V. Rossi, C. López, E. Hernández-García, J. Sudre, V. Garçon, and Y. Morel. Surface mixing and biological activity in the four Eastern Boundary upwelling systems. Nonlin. Processes Geophys., 16:557–568, 2009.
* [34] V. Rossi, C. López, J. Sudre, E. Hernández-García, and V. Garçon. Comparative study of mixing and biological activity of the Benguela and Canary upwelling systems. Geophys. Res. Lett., 35:L11602, 2008.
* [35] I I Rypina, F J Beron-Vera, M G Brown, H Kocak, M J Olascoaga, and I A Udovydchenkov. On the Lagrangian dynamics of atmospheric zonal jets and the permeability of the stratospheric polar vortex. J. Atm. Sci., 33149:3595, 2006.
* [36] Hideharu Sasaki, Yoshikazu Sasai, Masami Nonaka, Yukio Masumoto, and Shintaro Kawahara. An eddy-resolving simulation of the quasi-global ocean driven by satellite-observed wind field – preliminary outcomes from physical and biological fields -. J. Earth Simulator, 6:35–49, 2006.
* [37] Shawn C. Shadden, Francois Lekien, and Jerrold E. Marsden. Definition and properties of Lagrangian coherent structures from finite-time Lyapunov exponents in two-dimensional aperiodic flows. Physica D, 212(34):271 – 304, 2005.
* [38] Detlef Stammer. Global characteristics of ocean variability estimated from regional TOPEX/Poseidon altimeter measurements. J. Phys. Oceanogr., 27(8):1743–1769, 1997.
* [39] H Stommel. The westward intensification of wind-driven ocean currents. Trans. Amer. Geophys. Union, 29(2):202–206, 1948.
* [40] E. Tew Kai, V. Rossi, J. Sudre, H. Weimerskirch, C. López, E. Hernández-García, F. Marsac, and V. Garçon. Top marine predators track Lagrangian coherent structures. Proc. Natl. Acad. Sci. U.S.A., 106(20):8245–8250, 2009.
* [41] Alexandra Tzella and Peter H. Haynes. Smooth and filamental structures in chaotically advected chemical fields. Phys. Rev. E, 81:016322, Jan 2010.
* [42] D. W. Waugh and E. R. Abraham. Stirring in the global surface ocean. Geophys. Res. Lett., 35:L20605, 2008.
* [43] D. W. Waugh, E. R. Abraham, and M. M. Bowen. Spatial variations of stirring in the surface ocean: A case of study of the Tasman sea. J. Phys. Oceanogr., 36:526–542, 2006.
* [44] Xiaoming Zhai, Richard J. Greatbatch, and Jan-Dirk Kohlmann. On the seasonal variability of eddy kinetic energy in the Gulf Stream region. Geophys. Res. Lett., 35:L24609, 2008.
Figure 1: Top: Snapshot of the spatial distribution of FSLEs computed backward
in time for November 11, 1990, from the OFES output. Resolution is
$\delta_{0}=1/10^{\circ}$. Bottom: Zoom into the area of the box in the top
panel (South Atlantic Ocean). Coherent structures and vortices can be clearly
seen. The colorbar has units of $day^{-1}$.
Figure 2: a) Time average of the FSLEs in the Global Ocean. Geographical
regions of different stirring activity appear. The colorbar has units of
$day^{-1}$. b) Spatial distribution of annual $EKE^{1/2}$ (cm/s). c) Time
average of Relative Vorticity ($\omega$) in the Global Ocean. The color bar
has units of $day^{-1}$. In all the plots the averages are over the 52 weekly
maps computed from November 1st, 1990 to October 31st, 1991.
Figure 3: Left column: Temporal evolution (from November 1st, 1990 to October
31st, 1991) of the horizontal stirring (spatial average of FSLEs). Right
column: PDFs of the FSLEs (histograms are built from the $\lambda$ values
contained at all locations of the 52 weekly maps computed for the second
simulation output year). Top: for both hemispheres and for the whole ocean.
Middle: for different oceanic regions. Bottom: for some main currents during
one simulation year. In addition to the results from the second surface layer
analyzed throughout the paper, panel a) also shows the stirring intensity in a
layer close to 100 m depth.
Figure 4: Time average of the FSLEs in the Global Ocean for each season.
Spring: from March 22 to June 22. Summer: from June 22 to September 22.
Autumn: from September 22 to December 22. Winter: from December 22 to March
22. The colorbar has units of $day^{-1}$.
Figure 5: Standard deviation of the weekly FSLE maps over one year. The
colorbar has units of $day^{-1}$.
Figure 6: Zonal average of the annual, local-Summer and local-Winter time
averages of the FSLE maps of Fig. 2a as a function of latitude (expressed in
absolute degrees from the Equator to make both hemispheres comparable). Top:
Northern Hemisphere; bottom: Southern Hemisphere.
Figure 7: Lagrangian-Eulerian relations. Left: the conditional average
$\tilde{\lambda}_{EKE}$ as a function of its corresponding annually averaged
(second year) $\overline{EKE}$ for different regions and currents. We clearly
observe two groups of FSLE-EKE relations. Right: the same plot for the
conditional average $\tilde{\lambda}_{\omega}$ as a function of its
corresponding annually averaged (second year) $\overline{\omega}$. Although we
also observe the same two groups of FSLE-$\omega$ relations, these functions
are much noisier and region-dependent.
Figure 8: Scatter plots showing temporally averaged FSLE values at different
spatial points in regions of Fig. 2a, and EKE values (as displayed in Fig. 2b)
at the same points. The regions displayed here are eight of the main currents.
Fits of the type $y=cX^{b}$ are also displayed, where $y$ is the temporal
mean of the FSLE and $X$ is $\textrm{EKE}^{1/2}$. Note that this implies
$<\textrm{FSLE}>=c~{}\textrm{EKE}^{\alpha}$ with $\alpha=b/2$.
arXiv:1103.5927 (https://arxiv.org/abs/1103.5927). Authors: I. Hernández-Carrasco, C. López, E. Hernández-García, A. Turiel. Submitter: Ismael Hernandez-Carrasco. License: Public Domain.
arXiv:1103.5957
Multi-path Routing Metrics for Reliable Wireless Mesh Routing Topologies

This work was supported in part by HSN (Heterogeneous Sensor Networks), which
receives support from the Army Research Office (ARO) Multidisciplinary
Research Initiative (MURI) program (award number W911NF-06-1-0076), and in
part by TRUST (Team for Research in Ubiquitous Secure Technology), which
receives support from the National Science Foundation (NSF award number
CCF-0424422) and the following organizations: AFOSR (#FA9550-06-1-0244), BT,
Cisco, DoCoMo USA Labs, EADS, ESCHER, HP, IBM, iCAST, Intel, Microsoft, ORNL,
Pirelli, Qualcomm, Sun, Symantec, TCS, Telecom Italia, and United
Technologies. The work was also supported by the EU project FeedNetBack, the
Swedish Research Council, the Swedish Strategic Research Foundation, and the
Swedish Governmental Agency for Innovation Systems.

PHOEBUS CHEN, KARL H. JOHANSSON, PAUL BALISTER, BÉLA BOLLOBÁS, AND SHANKAR
SASTRY

Stockholm 2011. ACCESS Linnaeus Centre, Automatic Control, School of
Electrical Engineering, KTH Royal Institute of Technology, SE-100 44
Stockholm, Sweden. TRITA-EE 2011:033
# Multi-path Routing Metrics for
Reliable Wireless Mesh Routing Topologies
Phoebus Chen and Karl H. Johansson ACCESS Linnaeus Centre
KTH Royal Institute of Technology
Stockholm, Sweden
Paul Balister and Béla Bollobás Department of Mathematical Sciences
University of Memphis, USA
Shankar Sastry Department of EECS
University of California, Berkeley, USA
###### Abstract
Several emerging classes of applications that run over wireless networks have
a need for mathematical models and tools to systematically characterize the
reliability of the network. We propose two metrics for measuring the
reliability of wireless mesh routing topologies, one for flooding and one for
unicast routing. The Flooding Path Probability (FPP) metric measures the end-
to-end packet delivery probability when each node broadcasts a packet after
hearing from all its upstream neighbors. The Unicast Retransmission Flow (URF)
metric measures the end-to-end packet delivery probability when a relay node
retransmits a unicast packet on its outgoing links until it receives an
acknowledgement or it tries all the links. Both metrics rely on specific
packet forwarding models, rather than heuristics, to derive explicit
expressions of the end-to-end packet delivery probability from individual link
probabilities and the underlying connectivity graph.
We also propose a distributed, greedy algorithm that uses the URF metric to
construct a reliable routing topology. This algorithm constructs a Directed
Acyclic Graph (DAG) from a weighted, undirected connectivity graph, where each
link is weighted by its success probability. The algorithm uses a vector of
decreasing reliability thresholds to coordinate when nodes can join the
routing topology. Simulations demonstrate that, on average, this algorithm
constructs a more reliable topology than the usual minimum hop DAG.
###### Index Terms:
wireless, mesh, sensor networks, routing, reliability
## I Introduction
Despite the lossy nature of wireless channels, applications that need reliable
communications are migrating toward operation over wireless networks. Perhaps
the best example of this is the recent push by the industrial automation
community to move part of the control and sensing infrastructure of networked
control systems (see [1] for a survey of the field) onto Wireless Sensor
Networks (WSNs) [2, 3]. This has resulted in several efforts to create WSN
communication standards tailored to industrial automation (e.g., WirelessHART
[4], ISA-SP100 [5]).
A key network performance metric for all these communication standards is
_reliability_ , the probability that a packet is successfully delivered to its
destination. The standards use several mechanisms to increase reliability via
diversity, including retransmissions (time diversity), transmitting on
different frequencies (frequency diversity), and multi-path routing (spatial /
path diversity). But just providing mechanisms for higher reliability is not
enough — methods to characterize the reliability of the network are also
needed for optimizing the network and for providing some form of performance
guarantee to the applications. More specifically, we need a network
reliability metric in order to: 1) quickly evaluate and compare different
routing topologies to help develop wireless node deployment / placement
strategies; 2) serve as an abstraction / interface of the wireless network to
the systems built on these networks (e.g., networked control systems); and 3)
aid in the construction of a reliable routing topology.
This paper proposes two multi-path routing topology metrics, the Flooding Path
Probability (FPP) metric and the Unicast Retransmission Flow (URF) metric, to
characterize the reliability of wireless mesh hop-by-hop routing topologies.
Both routing topology metrics are derived from the directed acyclic graph
(DAG) representing the routing topology, the link probabilities (the link
metric), and specific packet forwarding models. The URF and FPP metrics
combine link metrics in ways that differ from the usual method of summing or
multiplying the link costs along single paths.
The merit of these routing topology metrics is that they clearly relate the
modeling assumptions and the DAG to the reliability of the routing topology.
As such, they help answer questions such as: When are interleaved paths with
unicast hop-by-hop routing better than disjoint paths with unicast routing?
Under what modeling assumptions does routing on an interleaved multi-path
topology provide better reliability than routing along the best single path?
What network routing topologies should use constrained flooding for good
reliability? (These questions will be answered in Sections IV-D and V-D.)
Sections II and III provide background on routing topology metrics and a more
detailed problem description, to better understand the contributions of this
paper.
The contributions of this paper are two-fold: First, we define the FPP and URF
metrics and algorithms for computing them in Sections IV and V. Second, we
propose a distributed, greedy algorithm called URF-Delayed_Thresholds (URF-DT)
to generate a mesh routing topology that locally optimizes the URF metric in
Section VI. We demonstrate that the URF-DT algorithm can build routing
topologies with significantly better reliability than the usual minimum hop
DAG via simulations in Section VII.
## II Related Work
In single-path routing, the path metric is often defined as the sum of the
link metrics along the path. Examples of link metrics include the negative
logarithm of the link probability (for path probability) [6], ETX (Expected
Transmission Count), ETT (Expected Transmission Time), and RTT (Round Trip
Time) [7]. Most single-path routing protocols find minimum cost paths, where
the cost is the path metric, using a shortest path algorithm such as
Dijkstra’s algorithm or the distributed Bellman-Ford algorithm [8].
In multi-path routing, one wants metrics to compare collections of paths or
entire routing topologies with each other. Simply defining the multi-path
metric to be the maximum or minimum single-path metric of all the paths
between the source and the sink is not adequate, because such a multi-path
metric will lose information about the collection of paths.
Our FPP metric is a generalization of the reliability calculations done in [9]
for the M-MPR protocol and in [10] for the GRAdient Broadcast protocol. Unlike
[9, 10], our algorithm for computing the FPP metric does not assume all paths
have equal length.
Our URF metric is similar to the anypath route metric proposed by Dubois-Ferrière et al. [6]. Anypath routing, or opportunistic routing, allows a
packet to be relayed by any one of several nodes that successfully receive the
packet [11]. The anypath route metric generalizes the single-path metric by
defining a “links metric” between a node and a set of candidate relay nodes.
The specific “links metric” is defined by the candidate relay selection policy
and the underlying link metric (e.g., ETX, negative log link probability). As
explained later in Section V-D, although the packet forwarding models for the
URF and FPP metrics are not for anypath routing, a variation of the URF metric
is almost equivalent to the ERS-best E2E anypath route metric presented in
[6].
One of our earlier papers, [12], modeled the precursor to the WirelessHART
protocol, TSMP [13]. We developed a Markov chain model to obtain the
probability of packet delivery over time from a given mesh routing topology
and TDMA schedule. The inverse problem, trying to jointly construct a mesh
routing topology and TDMA schedule to satisfy stringent reliability and
latency constraints, is more difficult. The approach taken in this paper is to
separate the scheduling problem from the routing problem, and focus on the
latter. The works [14, 15] find the optimal schedule and packet forwarding
policies for delay-constrained reliability when given a routing topology.
Many algorithms for building multi-path routing topologies try to minimize
single-path metrics. For instance, [16] extends Dijkstra’s algorithm to find
multiple equal-cost minimum cost paths while [17] finds multiple edge-disjoint
and node-disjoint minimum cost paths. RPL [18], a routing protocol currently
being developed by the IETF ROLL working group, constructs a DAG routing
topology by building a minimum cost routing tree (links from child nodes to
“preferred parent” nodes) and then adding redundant links that do not
introduce routing loops.111The primary design scenario considered by RPL uses
single-path metrics. Other extensions to consider multi-path metrics may be
possible in the future. In contrast, our URF-DT algorithm constructs a
reliable routing topology by locally optimizing the URF metric, a multi-path
metric that can express the reliability provided by hop-by-hop routing over
interleaved paths.
Another difference between URF-DT and RPL is that URF-DT specifies a mechanism
to control the order in which nodes connect to the routing topology, while RPL
does not. The connection order affects the structure of the routing topology.
Finally, the LCAR algorithm proposed in [6] for building a routing topology
cannot be used to optimize the URF metric because the underlying link metric
(negative log link probability) for the URF metric does not satisfy the
physical cost criterion defined in [6].
## III Problem Description
We focus on measuring the reliability of wireless mesh routing topologies for
WSNs, where the wireless nodes have low computational capabilities, limited
memory, and low-bandwidth links to neighbors.
Empirical studies [13] have shown that multi-path hop-by-hop routing is more
reliable than single-path routing in wireless networks, where reliability is
measured by the source-to-sink packet delivery ratio. The main problem is to
define multi-path reliability metrics for flooding and for unicast routing
that capture this empirical observation. The second problem is to design an
algorithm to build a routing topology that directly tries to optimize the
unicast multi-path metric.
The FPP and URF metrics only differ in their packet forwarding models, which
are discussed in Sections IV-A and V-A. Neither model retransmits packets
on failed links. More accurately, a finite number of retransmissions on the
same link can be treated as one link transmission with a higher success
probability.222We can do this because our metrics only measure reliability and
are not measuring throughput or delay. Here, a failed link in the model
describes a link outage that is longer than the period of the retransmissions
(a bursty link).
In fact, without long link outages and finite retransmissions, it is hard to
argue that multi-path hop-by-hop routing has better reliability than single-
path routing. Under a network model where all the links are mutually
independent and independent of their past state, all single paths have
reliability 1 when we allow for an infinite number of retransmissions.
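This claim can be made concrete for a single link with success probability $p>0$: with unlimited, mutually independent retransmission attempts, the delivery probability over that link is
$\lim_{n\rightarrow\infty}\left(1-(1-p)^{n}\right)=1,$
so every link, and hence every single path, eventually delivers. The comparison between single-path and multi-path routing is therefore only meaningful with bursty outages and a bounded number of retransmissions.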
Both the FPP and URF metrics assume that the links in the network succeed and
fail independently of each other. While this is not entirely true in a real
network, it is more tractable than trying to model how links are dependent on
each other. Both metrics also assume that each node can estimate the
probability that an incoming or outgoing link fails through link estimation
techniques at the link and physical layers [19].
### III-A Notation and Terminology
We use the following notation and terminology to describe graphs. Let
$G=({\mathcal{V}},{\mathcal{E}},p)$ represent a weighted directed graph with
the set of vertices (nodes) ${\mathcal{V}}=\\{1,\ldots,N\\}$, the set of
directed edges (links)
${\mathcal{E}}\subseteq\\{(i,j)\>:\>i,j\in{\mathcal{V}}\\}$, and a function
assigning weights to edges $p:{\mathcal{E}}\mapsto[0,1]$. The edge weights are
link success probabilities, and for more compact notation we use $p_{l}$ or
$p_{ij}$ to denote the probability of link $l=(i,j)$. The number of edges in
$G$ is denoted $E$. In a similar fashion to $G$, let
$\bar{G}=(\bar{{\mathcal{V}}},\bar{{\mathcal{E}}},p)$ represent a weighted
undirected graph (but now $\bar{{\mathcal{E}}}$ consists of undirected edges).
The source node is denoted $a$ and the sink (destination) node is denoted $b$.
A vertex cut of $a$ and $b$ on a connected graph is a set of nodes
${\mathcal{C}}$ such that the subgraph induced by
${\mathcal{V}}\backslash{\mathcal{C}}$ does not have a single connected
component that contains both $a$ and $b$. Note that this definition differs
from the conventional definition of a vertex cut because $a$ and $b$ can be
elements in ${\mathcal{C}}$.
The graph $G$ is a _DODAG_ (Destination-Oriented DAG) if all the nodes in $G$
have at least one outgoing edge except for the destination node $b$, which has
no outgoing edges. We say that a node $i$ is _upstream_ of a node $j$ (and
node $j$ is _downstream_ of node $i$) if there exists a directed path from
node $i$ to $j$ in $G$. Similarly, node $i$ is an _upstream neighbor_ of node
$j$ (and node $j$ is a _downstream neighbor_ of node $i$) if $(i,j)$ is an
edge in ${\mathcal{E}}$. The indegree of a node $i$, denoted as
$\delta^{-}(i)$, is the number of incoming links, and similarly the outdegree
of a node $i$, denoted as $\delta^{+}(i)$, is the number of outgoing links.
The maximum indegree of a graph is
$\Delta^{-}=\max_{i\in{\mathcal{V}}}\delta^{-}(i)$ and the maximum outdegree
of a graph is $\Delta^{+}=\max_{i\in{\mathcal{V}}}\delta^{+}(i)$.
Finally, define $2^{\mathcal{X}}$ to be the set of all subsets of the set
${\mathcal{X}}$.
## IV FPP Metric
This section presents the FPP metric, which assumes that multiple copies of a
packet are flooded over the routing topology to try all possible paths to the
destination.
### IV-A FPP Packet Forwarding Model
In the FPP packet forwarding model, a node listens for a packet from all its
upstream neighbors and multicasts the packet once on all its outgoing links
once it receives a packet. There are no retransmissions on the outgoing links
even if the node receives multiple copies of the packet. The primary
difference between this forwarding model and general flooding is that the
multicast must respect the orientation of the edges in the routing topology
DAG.
### IV-B Defining and Computing the Metric
###### Definition (Flooding Path Probability Metric)
Let $G=({\mathcal{V}},{\mathcal{E}},p)$ be a weighted DODAG, where each link
$(i,j)$ in the graph has a probability $p_{ij}$ of successfully delivering a
packet and all links independently succeed or fail. The _FPP_ metric
$p_{a\rightarrow b}\in[0,1]$ for a source-destination pair $(a,b)$ is the
probability that a packet sent from node $a$ over the routing topology $G$
reaches node $b$ under the FPP packet forwarding model. ∎
Since the FPP packet forwarding model tries to send copies of the packet down
all directed paths in the network, $p_{a\rightarrow b}$ is the probability
that a directed path of successful links exists in $G$ between the source $a$
and the sink $b$. This leads to a straightforward formula to calculate the FPP
metric.
$p_{a\rightarrow b}=\sum_{{\mathcal{E}}^{\prime}\in 2_{a\rightarrow
b}^{\mathcal{E}}}\left(\prod_{l\in{\mathcal{E}}^{\prime}}p_{l}\prod_{\bar{l}\in{\mathcal{E}}\backslash{\mathcal{E}}^{\prime}}(1-p_{\bar{l}})\right)\qquad,$
(1)
where $2_{a\rightarrow b}^{\mathcal{E}}$ is the set of all subsets of
${\mathcal{E}}$ that contain a path from $a$ to $b$. Unfortunately, this
formula is computationally expensive because it takes $O(E2^{E})$ to compute.
Algorithm 1 computes the FPP metric $p_{a\rightarrow b}$ using dynamic
programming and is significantly faster. The state used by the dynamic
programming algorithm is the joint probability distribution of receiving a
packet on vertex cuts ${\mathcal{C}}$ of the graph separating $a$ and $b$ (See
Figure 1 for an example). Recall that our definition of ${\mathcal{C}}$ allows
$a$ and $b$ to be elements of ${\mathcal{C}}$, which is necessary for the
first and last steps of the algorithm.
Algorithm 1 Fast_FPP
Input: $G=({\mathcal{V}},{\mathcal{E}},p),a$ $\triangleright$ $G$ is a
connected DAG.
Output: $\\{p_{a\rightarrow v},\forall v\in{\mathcal{V}}\\}$
${\mathcal{C}}:=\\{a\\}$ $\triangleright$ ${\mathcal{C}}$ is the vertex cut.
${\mathcal{V}}^{\prime}:={\mathcal{V}}\backslash a$ $\triangleright$
${\mathcal{V}}^{\prime}$ is the set of remaining vertices.
5:${\mathcal{E}}^{\prime}:={\mathcal{E}}$ $\triangleright$
${\mathcal{E}}^{\prime}$ is the set of remaining edges.
$u:=a$ $\triangleright$ $u$ is the node targeted for removal from
${\mathcal{C}}$.
$p_{\mathcal{C}}(\\{a\\}):=1$; $p_{\mathcal{C}}(\emptyset):=0$
$\triangleright$ pmf for vertex cut ${\mathcal{C}}$.
while ${\mathcal{V}}^{\prime}\neq\emptyset$ do
[Find node $u$ to remove from vertex cut]
10: if $u\not\in{\mathcal{C}}$ then
Let
${\mathcal{J}}=\\{j\,:\,\forall(i,j)\in{\mathcal{E}}^{\prime},i\in{\mathcal{C}}\\}$
$u:=\operatorname*{arg\,min}_{i\in{\mathcal{C}}}\big{|}\\{(i,j)\in{\mathcal{E}}^{\prime}\,:\,j\in{\mathcal{J}}\\}\big{|}$
end if
[Add node $v$ to vertex cut]
15: Select any node
$v\in\\{j\in{\mathcal{V}}:(u,j)\in{\mathcal{E}}^{\prime}\\}$
$p_{\mathcal{C}}^{\prime}:=\mathtt{NIL}$ $\triangleright$ Probabilities for
next vertex cut.
for all subsets ${\mathcal{C}}^{\prime}$ of ${\mathcal{C}}$ do
Let
${\mathcal{L}}=\\{(i,v)\in{\mathcal{E}}^{\prime}\,:\,i\in{\mathcal{C}}^{\prime}\\}$
$p_{\mathcal{C}}^{\prime}({\mathcal{C}}^{\prime}\cup\\{v\\}):=p_{\mathcal{C}}({\mathcal{C}}^{\prime})\cdot\left(1-\prod_{l\in{\mathcal{L}}}(1-p_{l})\right)$
20:
$p_{\mathcal{C}}^{\prime}({\mathcal{C}}^{\prime}):=p_{\mathcal{C}}({\mathcal{C}}^{\prime})\cdot\prod_{l\in{\mathcal{L}}}(1-p_{l})$
end for
${\mathcal{E}}^{\prime}:={\mathcal{E}}^{\prime}\backslash\\{(i,v)\in{\mathcal{E}}^{\prime}\,:\,i\in{\mathcal{C}}\\}$
${\mathcal{V}}^{\prime}:={\mathcal{V}}^{\prime}\backslash v$
${\mathcal{C}}:={\mathcal{C}}\cup\\{v\\}$
25: [Compute path probability]
Let $2_{v}^{\mathcal{C}}=\\{{\mathcal{C}}^{\prime}\in
2^{\mathcal{C}}\,:\,v\in{\mathcal{C}}^{\prime}\\}$
$p_{a\rightarrow v}:=\sum_{{\mathcal{C}}^{\prime}_{v}\in
2_{v}^{\mathcal{C}}}p_{\mathcal{C}}^{\prime}({\mathcal{C}}^{\prime}_{v})$
[Remove nodes ${\mathcal{D}}$ from vertex cut]
Let ${\mathcal{D}}=\\{i\in{\mathcal{C}}\,:\,\forall
j,\,(i,j)\not\in{\mathcal{E}}^{\prime}\\}$
30: ${\mathcal{C}}:={\mathcal{C}}\backslash{\mathcal{D}}$
$p_{\mathcal{C}}:=\mathtt{NIL}$
for all subsets ${\mathcal{C}}^{\prime}$ of ${\mathcal{C}}$ do
$p_{\mathcal{C}}({\mathcal{C}}^{\prime}):=\sum_{{\mathcal{D}}^{\prime}\in
2^{\mathcal{D}}}p_{\mathcal{C}}^{\prime}({\mathcal{C}}^{\prime}\cup{\mathcal{D}}^{\prime})$
end for
35:end while
Return: $\\{p_{a\rightarrow v},\forall v\in{\mathcal{V}}\\}$
Figure 1: An example of a sequence of vertex cuts that can be used by
Algorithm 1. The vertex cut after adding and removing nodes from each
iteration of the outer loop is circled in red.

Figure 2: Running Algorithm 1 on the network graph shown on the left when
selecting vertex cuts in the order depicted in Figure 1 is equivalent to
creating the vertex cut DAG shown on the right and finding the probability
that state $a$ will transition to state $b$.
Conceptually, the algorithm is converting the DAG representing the network to
a _vertex cut DAG_, where each vertex cut at step $k$, ${\mathcal{C}}^{(k)}$,
is represented by the set of nodes
${\mathcal{S}}^{(k)}=2^{{\mathcal{C}}^{(k)}}$. Each node in
${\mathcal{S}}^{(k)}$ represents the event that a particular subset of the
vertex cut received a copy of the packet. The algorithm computes a probability
for each node in ${\mathcal{S}}^{(k)}$, and the collection of probabilities of
all the nodes in ${\mathcal{S}}^{(k)}$ represent the joint probability
distribution that nodes in the vertex cut ${\mathcal{C}}^{(k)}$ can receive a
copy of the packet. A link in the vertex cut DAG represents a valid (nonzero
probability) transition from a subset of nodes that have received a copy of
the packet in ${\mathcal{C}}^{(k-1)}$ to a subset of nodes that have received
a copy of the packet in ${\mathcal{C}}^{(k)}$. Figure 2 shows an example of
this graph conversion using the selection of vertex cuts depicted in Figure 1.
Algorithm 1 tries to keep the vertex cut small by using the greedy criterion in
lines 10–15 to add nodes to the vertex cut. A node can only be added to the
vertex cut if all its incoming links originate from the vertex cut. When a
node is added to the vertex cut, its incoming links are removed. A node is
removed from the vertex cut if all its outgoing links have been removed.
Computing the path probability $p_{a\rightarrow b}$ reduces to computing the
joint probability distribution that a packet is received by a subset of the
vertex cut in each step of the algorithm. The joint probability distribution
over the vertex cut ${\mathcal{C}}^{(k)}$ is represented by the function
$p_{\mathcal{C}}^{(k)}:{\mathcal{S}}^{(k)}\mapsto[0,1]$. Step $k$ of the
algorithm computes $p_{\mathcal{C}}^{(k)}$ from $p_{\mathcal{C}}^{(k-1)}$ on
lines 19, 20, and 33 in Algorithm 1. Notice that the nodes in each
${\mathcal{S}}^{(k)}$ represent disjoint events, which is why we can combine
probabilities in lines 27 and 33 using summation.
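A centralized sketch of this bookkeeping might look as follows (Python, hypothetical names). It is a simplification of Algorithm 1: nodes are consumed in a fixed topological order rather than by the greedy cut-selection of lines 10–15, but the state is the same pmf over subsets of a moving vertex cut.

```python
def fpp_cut_dp(nodes_topo, edges, p):
    """FPP metric from the source to every node via a pmf over subsets of a
    moving vertex cut. nodes_topo: topological order, source first;
    edges: directed links of a DAG; p: {link: success probability}."""
    a = nodes_topo[0]
    remaining = set(edges)
    dist = {frozenset([a]): 1.0}   # P[exactly this subset of the cut holds the packet]
    reach = {a: 1.0}
    for v in nodes_topo[1:]:
        incoming = [e for e in remaining if e[1] == v]
        new = {}
        for subset, pr in dist.items():
            # probability that v hears the packet from no cut member
            fail = 1.0
            for (i, _) in incoming:
                if i in subset:
                    fail *= 1.0 - p[(i, v)]
            for s, w in ((subset | {v}, pr * (1.0 - fail)), (subset, pr * fail)):
                if w > 0.0:
                    new[frozenset(s)] = new.get(frozenset(s), 0.0) + w
        remaining -= set(incoming)
        reach[v] = sum(w for s, w in new.items() if v in s)
        # marginalize out nodes whose outgoing links are all consumed
        done = {i for s in new for i in s if all(e[0] != i for e in remaining)}
        dist = {}
        for s, w in new.items():
            key = frozenset(s - done)
            dist[key] = dist.get(key, 0.0) + w
    return reach
```

On the diamond graph with all link probabilities $0.5$ it reproduces the brute-force answer $0.4375$ for the sink while only ever tracking cuts of size at most two.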
### IV-C Computational Complexity
The running time of Algorithm 1 is
$O\big{(}N(\hat{C}\Delta^{+}+2^{\hat{C}}\Delta^{-})\big{)}$, where $\hat{C}$
is the size of the largest vertex cut used in the algorithm. This is typically
much smaller than the time to compute the FPP metric from (1), especially if
we restrict flooding to a subgraph of the routing topology with a small vertex
cut. The analysis to get the running time of Algorithm 1 can be found in
Section 2.2.2 of the dissertation [20].
The main drawback with the FPP metric is that it cannot be computed in-network
with a single round of local communication (i.e., between 1-hop neighbors).
Algorithm 1 requires knowledge of the outgoing link probabilities of a vertex
cut of the network, but the nodes in a vertex cut may not be in communication
range of each other. Nonetheless, if a gateway node can gather all the link
probabilities from the network, it can give an estimate of the end-to-end
packet delivery probability (the FPP metric) to systems built on this network.
### IV-D Discussion
Figure 3 shows the probability of nodes in a mesh network receiving a packet
flooded from the source. This simple topology shows that a network with poor
links does not need large vertex cuts to achieve good reliability. In regions
of poor connectivity, flooding constrained to a directed
acyclic subgraph with a small vertex cut can significantly boost reliability.
Figure 3: FPP metric $p_{a\rightarrow v}$ for all nodes $v$, where all links
have probability $0.7$. The source node $a$ is circled in red.
Oftentimes, it is not possible to estimate the probability of the links
accurately in a network. Fortunately, since the FPP metric is monotonically
increasing with respect to all the link probabilities, the range of the FPP
metric can be computed given the range of each link probability. The upper
(lower) bound on $p_{a\rightarrow b}$ can be computed by replacing every link
probability $p_{l}$ with its respective upper (lower) bound $\overline{p}_{l}$
($\underline{p}_{l}$) and running Algorithm 1. For instance, the FPP metrics
in Figure 3 can be interpreted as a lower bound on the reliability between the
source and each node if all links have probability greater than $0.7$.
## V URF Metric
This section presents the URF metric, which assumes that a single copy of the
packet is routed hop-by-hop over the routing topology. Packets are forwarded
without prior knowledge of which downstream links have failed.
### V-A URF Packet Forwarding Model
Under the URF packet forwarding model, a node that receives a packet will
select a link uniformly at random from all its outgoing links for
transmission. If the transmission fails, the node will select another link for
transmission uniformly at random from all its outgoing links that _have not
been selected for transmission before_. This repeats until either a
transmission on a link succeeds or the node has attempted to transmit on all
its outgoing links and failed each time. In the latter case, the packet is
dropped from the network.
### V-B Defining and Computing the Metric
###### Definition (Unicast Retransmission Flow Metric)
Let $G=({\mathcal{V}},{\mathcal{E}},p)$ be a weighted DODAG, where each link
$(i,j)$ in the graph has a probability $p_{ij}$ of successfully delivering a
packet and all links independently succeed or fail. The _URF_ metric
$\varrho_{a\rightarrow b}\in[0,1]$ for a source-destination pair $(a,b)$ is
the probability that a packet sent from node $a$ over the routing topology $G$
reaches node $b$ under the URF packet forwarding model. ∎
The URF metric $\varrho_{a\rightarrow b}$ can be computed using
$\displaystyle\varrho_{a\rightarrow a}$ $\displaystyle=1$ (2)
$\displaystyle\varrho_{a\rightarrow v}$
$\displaystyle=\sum_{u\in{\mathcal{U}}_{v}}\varrho_{a\rightarrow
u}\varpi_{uv}\qquad,$
where ${\mathcal{U}}_{v}$ are all the upstream neighbors of node $v$ and
$\varpi_{uv}\in[0,1]$ is the Unicast Retransmission Flow weight (URF weight)
of link $(u,v)$. The URF weight for link $l=(u,v)$ is the probability that a
packet at $u$ will traverse the link to $v$, and is given by
$\displaystyle\varpi_{uv}=\sum_{{\mathcal{E}}^{\prime}\in
2^{{\mathcal{E}}_{u}\backslash
l}}\frac{p_{uv}}{|{\mathcal{E}}^{\prime}|+1}\left(\prod_{e\in{\mathcal{E}}^{\prime}}p_{e}\right)\left(\prod_{\bar{e}\in{\mathcal{E}}_{u}\backslash({\mathcal{E}}^{\prime}\cup
l)}(1-p_{\bar{e}})\right)$ (3)
where ${\mathcal{E}}_{u}=\\{(u,v)\in{\mathcal{E}}:v\in{\mathcal{V}}\\}$ is the
set of node $u$’s outgoing links.
Next, we sketch how (2) and (3) can be derived from the URF packet forwarding
model. Recall that only one copy of the packet is sent through the network and
the routing topology is a DAG, so the event that the packet traverses link
$(u_{1},v)$ is disjoint from the event that the packet traverses $(u_{2},v)$.
The probability that a packet sent from $a$ traverses link $(u,v)$ is simply
$\varrho_{a\rightarrow u}\varpi_{uv}$, where $\varrho_{a\rightarrow u}$ is the
probability that a packet sent from node $a$ visits node $u$ (therefore,
$\varrho_{a\rightarrow a}=1$). Thus, the probability that the packet visits
node $v$ is the sum of the probabilities of the events where the packet
traverses an incoming edge of node $v$, as stated in (2).
Now, it remains to show that $\varpi_{uv}$ as defined by (3) is the
probability that a packet at $u$ will traverse the link $(u,v)$. Recall that a
packet at $u$ will traverse $(u,v)$ if all the previous links selected by $u$
for transmission fail and link $(u,v)$ is successful. Alternately, this event
can be described as the union of several disjoint events arising from two
independent processes:
* •
each of $u$’s outgoing links is either up or down (with its respective
probability), and
* •
$u$ selects a link transmission order uniformly at random from all possible
permutations of its outgoing links.
Each disjoint event is the intersection of: a particular realization of the
success and failure of $u$’s outgoing links where $(u,v)$ is successful
(corresponding to $p_{uv}\prod p_{e}\prod(1-p_{\bar{e}})$ in (3)); and a
permutation of the outgoing links where $(u,v)$ is ordered before all the
other successful links (corresponding to $1/(|{\mathcal{E}}^{\prime}|+1)$ in
(3)). Summing the probabilities of these disjoint events yields (3). For a
rigorous derivation of the URF weights from the packet forwarding model,
please see Section 2.3.3 of the dissertation [20].
### V-C Computational Complexity
The slowest step in computing the URF metric between all nodes and the sink is
computing (3), which has complexity $O(\Delta^{+}\cdot 2^{\Delta^{+}})$. Using
some algebra (See the Appendix), (3) simplifies to
$\varpi_{uv}=p_{uv}\int_{0}^{1}\prod_{e\in{\mathcal{E}}_{u}\backslash(u,v)}\left(1-p_{e}x\right)dx\qquad,$
(4)
which can be evaluated efficiently in $O((\Delta^{+})^{2})$. This results from
the $O((\Delta^{+})^{2})$ operations to expand the polynomial and
$O(\Delta^{+})$ operations to evaluate the integral. Since there are
$O(\Delta^{+})$ link weights per node and $N$ nodes in the graph, the
complexity to compute the URF metric sequentially on all nodes in the graph is
$O(N(\Delta^{+})^{3})$. (There are also $O(E)$ operations in (2), but
$E<2N\Delta^{+}$.) If we allow the link weights to be computed in parallel on
the nodes, then the complexity becomes $O((\Delta^{+})^{3}+E)$.
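The two evaluation routes for a single URF weight can be sketched and cross-checked as follows (Python, hypothetical function names): one routine expands the polynomial of Eq. (4) and integrates it term by term; the other evaluates the subset sum of Eq. (3) directly.

```python
from itertools import combinations

def urf_weight_poly(p_link, p_others):
    """URF weight via Eq. (4): expand prod_e (1 - p_e x), integrate over [0, 1].
    O(d^2) in the out-degree d."""
    coeffs = [1.0]                      # polynomial in x, ascending powers
    for pe in p_others:
        nxt = coeffs + [0.0]
        for k, c in enumerate(coeffs):
            nxt[k + 1] -= pe * c        # multiply polynomial by (1 - pe*x)
        coeffs = nxt
    return p_link * sum(c / (k + 1) for k, c in enumerate(coeffs))

def urf_weight_subsets(p_link, p_others):
    """URF weight via the subset sum of Eq. (3); O(d * 2^d), for cross-checking."""
    total = 0.0
    n = len(p_others)
    for r in range(n + 1):
        for up in combinations(range(n), r):
            term = p_link / (r + 1)     # link must be ordered before the r up links
            for k in range(n):
                term *= p_others[k] if k in up else 1.0 - p_others[k]
            total += term
    return total
```

For example, a node with two outgoing links of probabilities $0.8$ and $0.5$ gives $\varpi=0.8\cdot(1-0.5/2)=0.6$ for the first link under both routines.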
Unlike the FPP metric, the URF metric can be computed in-network with local
message exchanges between nodes. First, each node would locally compute the
URF link weights $\varpi_{uv}$ from link probability estimates on its outgoing
links. Then, since the URF metric $\varrho_{a\rightarrow v}$ is a linear
function of the URF weights, we can rewrite (2) as
$\displaystyle\varrho_{b\rightarrow b}$ $\displaystyle=1$ (5)
$\displaystyle\varrho_{u\rightarrow b}$
$\displaystyle=\sum_{v\in{\mathcal{V}}_{u}}\varpi_{uv}\varrho_{v\rightarrow
b}\qquad,$
where ${\mathcal{V}}_{u}$ are all the downstream neighbors of node $u$. This
means that each node $u$ only needs the URF metric of its downstream neighbors
to compute its URF metric to the sink, so the calculations propagate outwards
from the sink with only one message exchange on each link in the DAG.
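The sink-rooted recursion (5) might be sketched as below (Python, hypothetical names). This is a centralized stand-in for the in-network propagation: each node is processed once all its downstream neighbors are done, mirroring the one message exchange per link, with the Eq. (4) weight computation inlined.

```python
def urf_weight(p_link, p_others):
    # Eq. (4): p_link * integral over [0,1] of prod_e (1 - p_e x) dx
    coeffs = [1.0]
    for pe in p_others:
        nxt = coeffs + [0.0]
        for k, c in enumerate(coeffs):
            nxt[k + 1] -= pe * c
        coeffs = nxt
    return p_link * sum(c / (k + 1) for k, c in enumerate(coeffs))

def urf_metric(edges, p, b):
    """URF metric rho_{u->b} for every node of a DODAG with sink b, via the
    recursion of Eq. (5). edges: directed links; p: {link: success prob}."""
    rho = {b: 1.0}
    nodes = {i for e in edges for i in e}
    pending = nodes - {b}
    while pending:
        for u in sorted(pending):
            out = [e for e in edges if e[0] == u]
            if all(v in rho for (_, v) in out):   # all downstream neighbors done
                rho[u] = sum(
                    urf_weight(p[(u, v)], [p[e] for e in out if e != (u, v)]) * rho[v]
                    for (_, v) in out)
                pending.remove(u)
                break
    return rho
```

On the diamond graph with all link probabilities $0.5$, this yields $\varrho_{a\rightarrow b}=0.375$, strictly below the FPP value $0.4375$ for the same topology, illustrating the reliability lost to unicast forwarding.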
### V-D Discussion
The URF forwarding model can be implemented in both CSMA and TDMA networks. In
the latter it describes a randomized schedule that is agnostic to the quality
of the links and routes in the network, such that the scheduling problem is
less coupled to the routing problem. Loosely speaking, such a randomized
packet forwarding policy is also good for load balancing and exploiting the
path diversity of mesh networks.
The definition of the URF link weights is tightly tied to the URF packet
forwarding model. One alternate packet forwarding model would be for a node to
always attempt transmission on outgoing links $(u,v)$ in decreasing order of
downstream neighbor URF metrics $\varrho_{v\rightarrow b}$. As before, the
node tries each link once and drops the packet when all links fail.333An
opportunistic packet forwarding model that would result in the same metric
would broadcast the packet once and select the most reliable relay to continue
forwarding the packet. This model leads to the following Remaining-
Reliability-ordered URF metric (RRURF), $\varrho_{a\rightarrow b}^{\prime}$,
also calculated like $\varrho_{a\rightarrow b}$ from (5) except $\varpi_{uv}$
is replaced by
$\displaystyle\varpi_{l_{i}}^{\prime}=\prod_{k=1}^{i-1}(1-p_{l_{k}})p_{l_{i}}\quad,$
(6)
where the outgoing links of node $u$ have been sorted into the list
$(l_{1},\ldots,l_{\delta^{+}(u)})$ from highest to lowest downstream neighbor
URF metrics.444The RRURF metric would be equivalent to the ERS-best E2E
anypath routing metric of [6] if every $D_{x}$ in the remaining path cost
$R_{iJ}^{\mathrm{best}}$ (Equation 5 in [6]) were replaced by
$\mathrm{exp}(-D_{x})$.
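Eq. (6) is cheap to compute once a node knows its downstream neighbors' URF metrics; a small sketch (Python, hypothetical names):

```python
def rrurf_weights(out_links):
    """Eq. (6): links are tried in decreasing order of downstream URF metric;
    the i-th link is used iff the i-1 more-reliable links all fail and it
    succeeds. out_links: list of (p_l, rho_v) pairs for one node's outgoing
    links. Returns (links in try order, per-link weights in that order)."""
    ordered = sorted(out_links, key=lambda t: t[1], reverse=True)
    weights, all_fail = [], 1.0
    for p_l, _ in ordered:
        weights.append(all_fail * p_l)   # previous links down, this one up
        all_fail *= 1.0 - p_l
    return ordered, weights
```

For instance, with links $(p=0.5,\varrho=0.9)$ and $(p=0.8,\varrho=0.4)$, the $\varrho=0.9$ link is tried first, giving weights $0.5$ and $0.5\cdot 0.8=0.4$.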
Notice that with unicast, a packet can reach a node where all its outgoing
links fail, i.e., the packet is “trapped at a node.” Thus, topologies where a
node is likely to receive a packet but has outgoing links with very low
success probabilities tend to perform poorly. Flooding is not affected by this
phenomenon of “trapped packets” because other copies of the packet can still
propagate down other paths. In fact, given the same routing topology $G$, the
URF metric $\varrho_{a\rightarrow v}$ is always less than the FPP metric
$p_{a\rightarrow v}$ for all nodes $v$ in the network. The URF and FPP metrics
allow us to compare how much reliability is lost when unicasting packets. A
comparison of Figure 4 with Figure 3 reveals that this drop in reliability can
be significant in deep networks with low probability links. Nonetheless,
unicast routing over a mesh still provides much better reliability than
routing down a single path or a small number of disjoint paths with the same
number of hops and the same link probabilities, if the links are independent
and bursty.
Figure 4: URF metric $\varrho_{a\rightarrow v}$ for all nodes $v$, where all
links have probability $0.7$. The source node $a$ is circled in red.
Below are several properties of the URF metric that will be exploited in
Section VI to build a good mesh routing topology.
###### Property 1 (Trapped Packets)
Adding an outgoing link to a node can lower its URF metric. Similarly,
increasing the probability of an outgoing link can also lower a node’s URF
metric. ∎
Property 1 can be seen in the example shown in Figure 5a. Here, link $(2,1)$
lowers the reliability of node $2$ to $b$. Generally, nodes want to route to
other nodes that have better reliability to the sink, but Figure 5b shows an
example where routing to a node with worse reliability can increase a node’s
own reliability.
Figure 5: Links are labeled with probabilities, and nodes are labeled with
URF metrics $\varrho_{v\rightarrow b}$ (boxed). (a) Illustration of Property
1: increasing $p_{12}$ lowers node 2’s reliability. (b) Illustration of
Property 2: node 1 has a lower probability link to the sink than node 2, but
link $(2,1)$ boosts the reliability of node 2.
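A minimal numeric check of Property 1, reusing the Eq. (4) polynomial expansion (illustrative numbers, not taken from the paper's figures): a node with a single $p=0.9$ link straight to the sink has URF metric $0.9$; adding a second $p=0.5$ link toward a relay whose own URF metric is only $0.1$ drops it.

```python
def urf_weight(p_link, p_others):
    # Eq. (4): p_link * integral over [0,1] of prod_e (1 - p_e x) dx
    coeffs = [1.0]
    for pe in p_others:
        nxt = coeffs + [0.0]
        for k, c in enumerate(coeffs):
            nxt[k + 1] -= pe * c
        coeffs = nxt
    return p_link * sum(c / (k + 1) for k, c in enumerate(coeffs))

# One outgoing link to the sink (rho = 1) with p = 0.9:
rho_before = urf_weight(0.9, []) * 1.0                  # 0.9

# Add a p = 0.5 link to a relay with URF metric 0.1:
rho_after = (urf_weight(0.9, [0.5]) * 1.0 +             # 0.9 * 0.75 = 0.675
             urf_weight(0.5, [0.9]) * 0.1)              # 0.5 * 0.55 * 0.1 = 0.0275
# rho_after = 0.7025 < 0.9: the extra link traps packets on the weak branch.
```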
###### Property 2
A node $u$ may add an outgoing link to node $v$, where $\varrho_{v\rightarrow
b}<\varrho_{u\rightarrow b}$, to increase $u$’s URF metric. ∎
Property 2 means that adding links between nodes with poor reliability to the
sink can boost their reliability, as shown in Figure 6.
Figure 6: Nodes can significantly increase their reliabilities using cross
links. Links are labeled with probabilities, and nodes are labeled with URF
metrics $\varrho_{v\rightarrow b}$ (boxed). Without the cross links (the links
with probability 1 in the diagram), the nodes would all have URF metric $0.5$.
###### Property 3
Increasing the URF metric of a downstream neighbor of node $u$ always
increases $u$’s URF metric. ∎
Property 3 holds because $\varrho_{u\rightarrow b}$, defined by (2), is
monotonically increasing in $\varrho_{v\rightarrow b}$ for all $v$ that are
downstream neighbors of $u$.
###### Property 4
A node may have a greater URF metric than some of its downstream neighbors
(from Property 2), but not a greater URF metric than all of its downstream
neighbors. ∎
Property 4 comes from
$\varrho_{u\rightarrow
b}=\sum_{v\in{\mathcal{V}}_{u}}\varpi_{uv}\varrho_{v\rightarrow
b}\leq\left(\max_{v\in{\mathcal{V}}_{u}}\varrho_{v\rightarrow
b}\right)\sum_{v\in{\mathcal{V}}_{u}}\varpi_{uv}\leq\max_{v\in{\mathcal{V}}_{u}}\varrho_{v\rightarrow
b},$
where the last step uses the fact that the URF weights of a node’s outgoing
links sum to at most 1.
Not surprisingly, Properties 3 and 4 highlight the importance of ensuring that
nodes near the sink have a very high URF metric $\varrho_{v\rightarrow b}$
when deploying networks and building routing topologies.
If there is uncertainty estimating the link probabilities, bounding the URF
metric is not as simple as bounding the FPP metric because the URF metric is
not monotonically increasing in the link probabilities, as noted in Property
1. However, the URF metric $\varrho_{u\rightarrow b}$ is monotonically
increasing with the link flow weights $\varpi_{uv}$ so bounds on the flow
weights can be used to compute bounds on the URF metric by simple
substitution. Similarly, each flow weight varies monotonically with each link
probability, so it can also be bounded by simple substitution. For instance,
to compute the upper bound of $\varpi_{uv}$, one substitutes the upper
bound $\overline{p_{uv}}$ for $p_{uv}$ and the lower bounds
$\underline{p_{e}}$ for all the other links in (4). Note that the upper bounds
for all the flow weights on the outgoing links from a node may sum to a value
greater than 1, which would lead to poor bounds on the URF metric.
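Using the closed form for the flow weight derived in the appendix, $\varpi_{uv}=p_{uv}\int_0^1\prod_{e}(1-p_{e}x)\,dx$ with the product taken over $u$'s other outgoing links, the bound-by-substitution procedure can be sketched in a few lines of Python. The function names and the monotonicity directions used here follow from that closed form and are not code from the paper:

```python
def flow_weight(p_link, p_siblings):
    # varpi_uv = p_uv * Int_0^1 prod_e (1 - p_e x) dx (closed form from the
    # appendix); p_siblings holds the probabilities of u's other outgoing
    # links. The integrand polynomial is expanded, then integrated term by term.
    coeffs = [1.0]  # coefficients of prod(1 - p_e x), constant term first
    for p in p_siblings:
        new = [0.0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] += c          # multiply term by 1
            new[i + 1] -= c * p  # multiply term by -p_e x
        coeffs = new
    return p_link * sum(c / (i + 1) for i, c in enumerate(coeffs))

def flow_weight_bounds(p_link_bounds, sibling_bounds):
    # The closed form is increasing in p_uv and decreasing in each sibling
    # probability, so substituting interval endpoints yields valid bounds.
    lower = flow_weight(p_link_bounds[0], [b for _, b in sibling_bounds])
    upper = flow_weight(p_link_bounds[1], [a for a, _ in sibling_bounds])
    return lower, upper
```

As the text warns, the resulting upper bounds on a node's outgoing flow weights may sum to more than 1, which loosens the resulting URF bound.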
## VI Constructing a Reliable Routing Topology
The URF-Delayed_Thresholds (URF-DT) algorithm presented below uses the URF
metric to help construct a reliable, loop-free routing topology from an ad-hoc
deployment of wireless nodes. The algorithm assumes that each node can
estimate the packet delivery probability of its links. Only symmetric links,
links where the probabilities to send and receive a packet are the same, are
used by the algorithm. The algorithm either removes or assigns an orientation
to each undirected link in the underlying network connectivity graph to
indicate the paths a packet can follow from its source to its destination. The
resulting directed graph is the routing topology.
To ensure that the routing topology is loop-free, the URF-DT algorithm assigns
an ordering to the nodes and only allows directed edges from larger nodes to
smaller nodes. The algorithm assigns a _mesh hop count_ to each node to place
them in an ordering, analogous to the use of rank in RPL [18].
The URF-DT algorithm is distributed on the nodes in the network and constructs
the routing topology (a DODAG) outward from the destination. Each node uses
the URF metric to decide how to join the network, that is, which nodes it
should select as its downstream neighbors so that packets from the node are
likely to reach the
sink. A node has an incentive to join the routing topology after its neighbors
have joined, so they can serve as its downstream neighbors and provide more
paths to the sink. To break the stalemate where each node is waiting for
another node to join, URF-DT forces a node to join the routing topology if its
reliability to the sink after joining would cross a threshold. This threshold
drops over time to allow all nodes to eventually join the network.
### VI-A URF Delayed Thresholds Algorithm
The URF-DT algorithm given in Algorithm 2 operates in rounds, where each round
lasts a fixed interval of time. The algorithm requires that all the nodes share
global time (e.g., by a broadcast time synchronization algorithm) so they can
keep track of the current round $k$.
Algorithm 2 URF-Delayed_Thresholds
Input: connectivity graph
$\bar{G}=(\bar{{\mathcal{V}}},\bar{{\mathcal{E}}},p),b,\bm{\tau},K$
Output: routing topology $G=({\mathcal{V}},{\mathcal{E}},p)$, mesh hop counts
$\bm{\hbar}$
${\mathcal{V}}:=\emptyset,{\mathcal{E}}:=\emptyset$
$\forall i,\hbar_{i}:=\mathtt{NIL}$ $\triangleright$ $\mathtt{NIL}$ means not
yet assigned.
$\hbar_{b}:=0$
for $k:=1$ to $K$ do
[Run this code simultaneously on all nodes $u\not\in{\mathcal{V}}$]
Let
$\hbar_{{\mathcal{V}}_{u}}^{\mathrm{min}}=\min_{v\in{\mathcal{V}}_{u}}\hbar_{v}\;,\;\hbar_{{\mathcal{V}}_{u}}^{\mathrm{max}}=\max_{v\in{\mathcal{V}}_{u}}\hbar_{v}$
for $h:=\hbar_{{\mathcal{V}}_{u}}^{\mathrm{min}}+1$ to
$\hbar_{{\mathcal{V}}_{u}}^{\mathrm{max}}+1$ do
${\mathcal{V}}_{u}^{<h}$ are $u$’s neighbors with hop count less than $h$.
Select ${\mathcal{V}}_{u}^{\star}\subseteq{\mathcal{V}}_{u}^{<h}$ to maximize
$\varrho_{u\rightarrow b}$ from (4), (5).
Let $\varrho_{u\rightarrow b}^{\star}$ be the maximum $\varrho_{u\rightarrow
b}$.
if $\varrho_{u\rightarrow b}^{\star}\geq\tau_{k-h+1}$ then
$\hbar_{u}:=h$
Add $u$ to ${\mathcal{V}}$. Add links
$\\{(u,v):v\in{\mathcal{V}}_{u}^{\star}\\}$ to ${\mathcal{E}}$.
Break from for loop over $h$.
end if
end for
end for
Return: $G,\bm{\hbar}$
At each round $k$, a node $u$ decides whether it should join the routing
topology with mesh hop count $\hbar_{u}$. If node $u$ joins with hop count
$\hbar_{u}$, then $u$’s downstream neighbors are the neighbors $v_{i}$ with a
mesh hop count less than $\hbar_{u}$ that maximize $\varrho_{u\rightarrow b}$
from (5). Node $u$ decides whether to join the topology, and with what mesh
hop count $\hbar_{u}$, by comparing the maximum reliability
$\varrho_{u\rightarrow b}^{\star}$ for each mesh hop count
$h\in\\{\min_{v\in{\mathcal{V}}_{u}}\hbar_{v}+1,\ldots,\max_{v\in{\mathcal{V}}_{u}}\hbar_{v}+1\\}$
with a threshold $\tau_{m}$ that depends on $h$. The threshold $\tau_{m}$ is
selected from a predefined vector of thresholds
$\bm{\tau}=[\tau_{1}\;\cdots\;\tau_{M}]\in[0,1]^{M}$ using the index
$m=k-h+1$, as shown in Figure 7. When there are multiple $h$ with
$\varrho_{u\rightarrow b}^{\star}\geq\tau_{m}$, node $u$ sets its mesh hop
count $\hbar_{u}$ to the smallest $h$. If none of the $h$ have
$\varrho_{u\rightarrow b}^{\star}\geq\tau_{m}$, then node $u$ does not join
the network in round $k$.
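The join decision a single node makes in round $k$ can be sketched as follows. Here `toy_rho` is a hypothetical stand-in for the subset maximization of (4) and (5), not the paper's metric:

```python
def toy_rho(cand, p=0.9):
    # Stand-in for maximizing rho_u->b over downstream-neighbor subsets via
    # (4) and (5): simply the best neighbor's rho over a link of probability
    # p. This surrogate is an assumption, not the paper's metric.
    return p * max(r for r, _ in cand) if cand else 0.0

def try_join(k, neighbors, tau, rho_fn=toy_rho):
    # neighbors: (rho_v, hbar_v) pairs broadcast by joined neighbors.
    # tau: decreasing threshold vector; tau[m-1] holds threshold tau_m.
    # Returns (hbar_u, rho_star) if the node joins in round k, else None.
    if not neighbors:
        return None
    hops = [h for _, h in neighbors]
    for h in range(min(hops) + 1, max(hops) + 2):
        cand = [(r, hv) for r, hv in neighbors if hv < h]
        rho_star = rho_fn(cand)
        m = k - h + 1
        if 1 <= m <= len(tau) and rho_star >= tau[m - 1]:
            return h, rho_star
    return None
```

In the full algorithm every unjoined node runs this test each round, so a node that just misses a threshold joins a few rounds later once the threshold indexed by $m=k-h+1$ has dropped.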
Figure 7: Illustration of how thresholds are used to help assign a node a mesh
hop count. The horizontal row of thresholds represents $\bm{\tau}$. The shaded
vertical column contains the thresholds tested by a node in round
$k$. A node $u$ picks the smallest mesh hop count $h$ such that
$\varrho_{u\rightarrow b}^{\star}\geq\tau_{m}$ (see text for details).
For the algorithm to work correctly, the thresholds $\bm{\tau}$ must decrease
with increasing $m$. The network designer gets to choose $\bm{\tau}$ and the
number of rounds $K$ to run the algorithm. URF-DT can construct a better
routing topology if $\bm{\tau}$ has many thresholds that slowly decrease with
$m$, but the algorithm will take more rounds to construct the topology.
Algorithm 2 is meant to be implemented in parallel on the nodes in the
network. All the nodes have the vector of thresholds $\bm{\tau}$. In each
round, each node $u$ listens for a broadcast of the pair
$(\varrho_{v_{i}\rightarrow b}$, $\hbar_{v_{i}})$ from each of its neighbors
$v_{i}$ that have joined the routing topology. After receiving the broadcasts,
node $u$ performs the computations and comparisons with the thresholds to
determine if it should join the routing topology with some mesh hop count
$\hbar_{u}$. Once $u$ joins the network, it broadcasts its value of
$(\varrho_{u\rightarrow b},\hbar_{u})$.
After a node $u$ joins the network, it may improve its reliability
$\varrho_{u\rightarrow b}$ by adding outgoing links to other nodes with the
same mesh hop count. To prevent routing loops, a node $u$ may only add a link
to another node $v$ with the same mesh hop count if $\varrho_{v\rightarrow
b}>\varrho_{u\rightarrow b}$, where both URF metrics are computed using only
downstream neighbors with lower mesh hop count.
### VI-B Discussion
The slowest step in the URF-DT algorithm is selecting the optimal set of
downstream neighbors ${\mathcal{V}}_{u}^{\star}$ from the neighbors with hop
count less than $h$ to maximize $\varrho_{u\rightarrow b}$. Properties 1 and 2
of the URF metric make it difficult to find a simple rule for selecting
downstream neighbors. Rather than computing $\varrho_{u\rightarrow b}$ for all
possible ${\mathcal{V}}_{u}^{\star}$ and comparing to find the maximum, one
can use the following _lexicographic approximation_ to find
${\mathcal{V}}_{u}^{\star}$. First, associate each outgoing link $(u,v)$ with
a pair $(\varrho_{v\rightarrow b},p_{uv})$ and sort the pairs in decreasing
lexicographic order. Then, make one pass down the sorted list of links, adding
node $v$ to ${\mathcal{V}}_{u}^{\star}$ if doing so improves the value of
$\varrho_{u\rightarrow b}$ computed from the links added thus far. This order of
processing links is motivated by Property 3 of the URF metric.
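The lexicographic approximation might look like this in Python. The scoring function `equal_split_rho` is only a toy surrogate for (4) and (5), included so the sketch is self-contained:

```python
def lexicographic_selection(links, rho_fn):
    # links: v -> (rho_v, p_uv) for u's candidate downstream neighbors.
    # Scan in decreasing lexicographic order of (rho_v, p_uv), keeping v
    # only if it improves rho_u->b as scored by rho_fn.
    order = sorted(links, key=lambda v: links[v], reverse=True)
    selected, best = [], 0.0
    for v in order:
        trial = {w: links[w] for w in selected + [v]}
        r = rho_fn(trial)
        if r > best:
            selected, best = selected + [v], r
    return selected, best

def equal_split_rho(sel):
    # Toy metric: the packet is forwarded on one selected link chosen
    # uniformly at random, so rho_u->b is the mean of p_uv * rho_v.
    return sum(p * r for r, p in sel.values()) / len(sel)
```

Because the URF metric is not monotone in the link set (Property 1), this single greedy pass is an approximation rather than an exact maximization.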
Note that the URF metric in the URF-DT algorithm can be replaced by any metric
that can be computed on a node using only information from the node’s
downstream neighbors. For instance, the URF metric can be replaced by the
RRURF metric described in Section V-D.
## VII Simulations
This section compares the performance of the URF-DT algorithm with two other
simple mesh topology generation schemes described below: Minimum_Hop (MinHop)
and URF-Global_Greedy (URF-GG). The performance measures are each node’s URF
metric $\varrho_{v\rightarrow b}$ and the maximum number of hops from each
node to the sink.
MinHop generates a loop-free minimum hop topology by building a minimum
spanning tree rooted at the sink on the undirected connectivity graph and then
orienting edges from nodes with a higher minimum hop count to nodes with a
lower minimum hop count. If nodes $u$ and $v$ have the same minimum hop count
but node $u$ has a smaller maximum link probability to nodes with a lower hop
count, then $u$ routes to $v$. This last rule ensures that most of the
links in the network are used to increase reliability (otherwise, MinHop performs very
poorly).
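The hop-count core of MinHop can be sketched as breadth-first hop counts from the sink followed by edge orientation; the same-hop-count tie-breaking rule and tree-construction details described above are omitted from this sketch:

```python
from collections import deque

def min_hop_orientation(adj, sink):
    # Breadth-first hop counts from the sink, then orient every link from
    # the endpoint with the higher hop count toward the lower one.
    hop = {sink: 0}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in hop:
                hop[v] = hop[u] + 1
                queue.append(v)
    edges = [(u, v) for u in adj for v in adj[u]
             if u in hop and v in hop and hop[u] > hop[v]]
    return hop, edges
```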
URF-GG is a centralized algorithm that adds nodes sequentially to the routing
topology, starting from the sink. At each step, every node $u$ selects the
optimal set of downstream neighbors ${\mathcal{V}}_{u}^{\star}$ from nodes
that have already joined the routing topology to compute its maximum
reliability $\varrho_{u\rightarrow b}^{\star}$. Then, the node with the best
$\varrho_{u\rightarrow b}^{\star}$ of all nodes that have not joined the
topology is added to ${\mathcal{V}}$, and the links
$\\{(u,v):v\in{\mathcal{V}}_{u}^{\star}\\}$ are added to ${\mathcal{E}}$. Note
that URF-GG does not generate an optimal topology that maximizes the average
URF metric across all the nodes (the authors have not found an optimal
algorithm).
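URF-GG's greedy loop can be sketched as below. The brute-force subset search and the `anypath_rho` scoring function are illustrative assumptions standing in for (4) and (5), not the paper's implementation:

```python
from itertools import combinations

def anypath_rho(links):
    # Toy reliability: the packet gets through if at least one selected link
    # delivers and its downstream neighbor then succeeds.
    prod = 1.0
    for rho_v, p_uv in links.values():
        prod *= 1.0 - p_uv * rho_v
    return 1.0 - prod

def urf_gg(adj, p, sink, rho_fn=anypath_rho):
    # Centralized greedy: repeatedly add the unjoined node whose best
    # achievable reliability is highest, together with its best downstream
    # set (found by brute force over subsets of already-joined neighbors).
    # p maps the unordered pair frozenset({u, v}) to the link probability.
    rho, topo = {sink: 1.0}, {}
    while len(rho) < len(adj):
        best = None
        for u in adj:
            if u in rho:
                continue
            joined = [v for v in adj[u] if v in rho]
            for r in range(1, len(joined) + 1):
                for sub in combinations(joined, r):
                    score = rho_fn({v: (rho[v], p[frozenset((u, v))])
                                    for v in sub})
                    if best is None or score > best[0]:
                        best = (score, u, sub)
        if best is None:
            break  # remaining nodes are unreachable from the sink
        score, u, sub = best
        rho[u], topo[u] = score, list(sub)
    return rho, topo
```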
Figure 8 compares the performance of routing topologies generated under the
MinHop, URF-DT, and URF-GG algorithms on randomly generated connectivity
graphs. Forty nodes were randomly placed in a $10\times 10$ area with a
minimum node spacing of 0.5 (this gives a better chance of having a connected
graph). Nodes less than 2 units apart always have a link, nodes more than 3
units apart never have a link, and nodes with distance between 2 and 3
sometimes have a link. The link probabilities are drawn uniformly at random
from $[0.7,1]$. The inputs to URF-DT are the number of rounds $K=100$ and a
vector of thresholds $\bm{\tau}$ that drops from 1 to 0 in steps of
$0.01$. We used the lexicographic approximation to find the optimal set of
neighbors ${\mathcal{V}}_{u}^{\star}$. There were 100 simulation runs of which
only 10 are shown, but a summary of all the runs appears in Table I.
Figure 8: Comparison of routing topologies generated by MinHop, URF-DT, and URF-GG, using (a) the URF metric $\varrho_{v\rightarrow b}$ and (b) the maximum hop count on each node. The distributions are represented by box and whisker plots, where the median is marked by a circled black dot, outliers are shown as points, and the whiskers extend to $1.5\times$ IQR in (a) and $0\times$ IQR in (b).
TABLE I: MinHop, URF-DT, URF-GG Routing Topology Statistics over 100 Random Graphs
Topology | URF mean | URF median | URF variance | Max hop mean | Max hop median
---|---|---|---|---|---
MinHop | 0.8156 | 0.8252 | 0.0075 | 10.50 | 10.59
URF-DT | 0.8503 | 0.8539 | 0.0041 | 11.41 | 11.68
URF-GG | 0.8529 | 0.8549 | 0.0039 | 12.38 | 12.76
While in some runs the URF-DT topology shows marginal improvements in
reliability over the MinHop topology, other runs (like run 17) show a
significant improvement. (Note that a small increase in probabilities close
to 1 is a significant improvement.) Figure 8b shows that this often comes at
the cost of increasing the maximum hop count on some of the nodes (though not
always, as shown by run 17).
## VIII Conclusions
Both the FPP and URF metrics show that multiple interleaved paths typically
provide better end-to-end reliability than disjoint paths. Furthermore, since
they were derived directly from link probabilities, the DAG representing the
routing topology, and simple packet forwarding models, they help us understand
when a network is reliable. Using these routing topology metrics, a network
designer can estimate whether a deployed network is reliable enough for the
application at hand. If not, additional relay nodes can be placed to add more
links and paths to the routing topology. The metrics can also be used to
quickly compare different routing topologies and to develop an intuition for
which ad-hoc placement strategies generate good connectivity graphs.
These metrics provide a starting point for designing routing protocols that
try to maintain and optimize a routing topology. The URF-DT algorithm
describes how to build a reliable static routing topology, but it would be
interesting to study algorithms that gradually adjust the routing topology
over time as the link estimates change.
## References
* [1] J. P. Hespanha, P. Naghshtabrizi, and Yonggang Xu, “A survey of recent results in networked control systems,” _Proceedings of the IEEE_ , vol. 95, pp. 138–162, 2007.
* [2] Wireless Industrial Networking Alliance (WINA), “WINA website,” http://www.wina.org.
* [3] K. Pister, P. Thubert, S. Dwars, and T. Phinney, “Industrial Routing Requirements in Low-Power and Lossy Networks,” RFC 5673 (Informational), Internet Engineering Task Force, Oct. 2009. [Online]. Available: http://www.ietf.org/rfc/rfc5673.txt
* [4] D. Chen, M. Nixon, and A. Mok, _WirelessHART: Real-time Mesh Network for Industrial Automation_. Springer, 2010.
* [5] International Society of Automation, “ISA-SP100 wireless systems for automation website,” http://www.isa.org/isa100.
* [6] H. Dubois-Ferriere, M. Grossglauser, and M. Vetterli, “Valuable detours: Least-cost anypath routing,” _IEEE/ACM Trans. on Networking_ , 2010, preprint, available online at IEEE Xplore.
* [7] M. E. M. Campista _et al._ , “Routing metrics and protocols for wireless mesh networks,” _IEEE Network_ , vol. 22, no. 1, pp. 6–12, Jan.–Feb. 2008.
* [8] T. Cormen, C. Leiserson, and R. Rivest, _Introduction to Algorithms_. Cambridge, Massachusetts: MIT Press, 1990.
* [9] S. De, C. Qiao, and H. Wu, “Meshed multipath routing with selective forwarding: an efficient strategy in wireless sensor networks,” _Computer Networks_ , vol. 43, no. 4, pp. 481–497, 2003.
* [10] F. Ye, G. Zhong, S. Lu, and L. Zhang, “GRAdient Broadcast: a robust data delivery protocol for large scale sensor networks,” _Wireless Networks_ , vol. 11, no. 3, pp. 285–298, 2005.
* [11] S. Biswas and R. Morris, “Opportunistic routing in multi-hop wireless networks,” _SIGCOMM Computer Communication Review_ , vol. 34, no. 1, pp. 69–74, 2004.
* [12] P. Chen and S. Sastry, “Latency and connectivity analysis tools for wireless mesh networks,” in _Proc. of the 1st International Conference on Robot Communication and Coordination (ROBOCOMM)_ , October 2007.
* [13] K. S. J. Pister and L. Doherty, “TSMP: Time synchronized mesh protocol,” in _Proc. of the IASTED International Symposium on Distributed Sensor Networks (DSN)_ , November 2008.
* [14] P. Soldati, H. Zhang, Z. Zou, and M. Johansson, “Optimal routing and scheduling of deadline-constrained traffic over lossy networks,” in _Proc. of the IEEE Global Telecommunications Conference (GLOBECOMM)_ , Miami FL, USA, Dec. 2010.
* [15] Z. Zou, P. Soldati, H. Zhang, and M. Johansson, “Delay-constrained maximum reliability routing over lossy links,” in _Proc. of the IEEE Conference on Decision and Control (CDC)_ , Dec. 2010.
* [16] J. L. Sobrinho, “Algebra and algorithms for QoS path computation and hop-by-hop routing in the Internet,” _IEEE/ACM Trans. on Networking_ , vol. 10, no. 4, pp. 541–550, Aug. 2002.
* [17] R. Bhandari, “Optimal physical diversity algorithms and survivable networks,” in _Proc. of the 2nd IEEE Symposium on Computers and Communications (ISCC)_ , Jul. 1997, pp. 433–441.
* [18] Internet Engineering Task Force, “Routing protocol for low power and lossy networks (RPL),” http://www.ietf.org/id/draft-ietf-roll-rpl-19.txt, 2011.
* [19] N. Baccour _et al._ , “A comparative simulation study of link quality estimators in wireless sensor networks,” in _IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS)_ , Sep. 2009, pp. 1–10.
* [20] P. Chen, “Wireless sensor network metrics for real-time systems,” Ph.D. dissertation, EECS Department, University of California, Berkeley, May 2009. [Online]. Available: http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-75.html
## Appendix: Derivation of the Flow Weight
For a link $l=(u,v)$,
$\varpi_{uv}=\sum_{{\mathcal{E}}^{\prime}\in 2^{{\mathcal{E}}_{u}\backslash l}}\frac{p_{l}}{|{\mathcal{E}}^{\prime}|+1}\left(\prod_{e\in{\mathcal{E}}^{\prime}}p_{e}\right)\left(\prod_{\bar{e}\in{\mathcal{E}}_{u}\backslash({\mathcal{E}}^{\prime}\cup l)}(1-p_{\bar{e}})\right)$
$=p_{l}\sum_{{\mathcal{E}}^{\prime}\in 2^{{\mathcal{E}}_{u}\backslash l}}\frac{1}{|{\mathcal{E}}^{\prime}|+1}\left(\prod_{e\in{\mathcal{E}}^{\prime}}p_{e}\right)\left(\prod_{\bar{e}\in{\mathcal{E}}_{u}\backslash({\mathcal{E}}^{\prime}\cup l)}(1-p_{\bar{e}})\right)$
$=p_{l}\int_{0}^{1}\sum_{{\mathcal{E}}^{\prime}\in 2^{{\mathcal{E}}_{u}\backslash l}}\left(\prod_{e\in{\mathcal{E}}^{\prime}}p_{e}\right)\left(\prod_{\bar{e}\in{\mathcal{E}}_{u}\backslash({\mathcal{E}}^{\prime}\cup l)}(1-p_{\bar{e}})\right)x^{|{\mathcal{E}}^{\prime}|}\,dx$
$=p_{l}\int_{0}^{1}\prod_{e\in{\mathcal{E}}_{u}\backslash l}\left((1-p_{e})+p_{e}x\right)dx$
$=p_{l}\int_{0}^{1}\prod_{e\in{\mathcal{E}}_{u}\backslash l}\left(1-p_{e}(1-x)\right)dx$
$=p_{l}\int_{0}^{1}\prod_{e\in{\mathcal{E}}_{u}\backslash l}\left(1-p_{e}x\right)dx,$
where the last equality follows from the change of variables $x\mapsto 1-x$.
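As a numerical sanity check of this derivation, the combinatorial sum and the closed-form integral can be evaluated and compared for arbitrary link probabilities (the specific values used are only test inputs):

```python
import random
from itertools import combinations

def varpi_direct(p_l, siblings):
    # Combinatorial form: sum over subsets E' of the sibling links, each
    # subset weighted by p_l / (|E'| + 1).
    total = 0.0
    n = len(siblings)
    for r in range(n + 1):
        for sub in combinations(range(n), r):
            term = p_l / (r + 1)
            for i in range(n):
                term *= siblings[i] if i in sub else 1.0 - siblings[i]
            total += term
    return total

def varpi_closed(p_l, siblings):
    # Closed form p_l * Int_0^1 prod_e (1 - p_e x) dx, computed by expanding
    # the integrand polynomial and integrating term by term.
    coeffs = [1.0]
    for q in siblings:
        new = [0.0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] += c
            new[i + 1] -= c * q
        coeffs = new
    return p_l * sum(c / (i + 1) for i, c in enumerate(coeffs))
```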
(arXiv:1103.5957, submitted 2011-03-30 by Phoebus Chen. Authors: Phoebus Chen, Karl H. Johansson, Paul Balister, Béla Bollobás, Shankar Sastry. License: Creative Commons Attribution 3.0, https://creativecommons.org/licenses/by/3.0/.)

arXiv:1103.6082
# Pebble bed pebble motion: Simulation and Applications
Joshua J. Cogliati
(A dissertation
submitted in partial fulfillment of
the requirements for the degree of
Doctor of Philosophy in Applied Science and Engineering
Idaho State University
October 2010 )
Copyright
This work is copyright 2010 by Joshua J. Cogliati. This work is licensed under
the Creative Commons Attribution 3.0 Unported License. To view a copy of this
license, visit http://creativecommons.org/licenses/by/3.0/ or send a letter to
Creative Commons, 171 Second Street, Suite 300, San Francisco, California,
94105, USA. As specified in the license, verbatim copying and use is freely
permitted, and properly attributed use is also allowed.
Committee Approval Page
To the graduate faculty:
The members of the Committee appointed to examine the thesis of Joshua J.
Cogliati find it satisfactory and recommend that it be accepted.
Dr. Abderrafi M. Ougouag,
Major Advisor, Chair
Dr. Mary Lou Dunzik-Gougar,
Co-Chair
Dr. Michael Lineberry,
Committee Member
Dr. Steve C. Chiu,
Committee Member
Dr. Steve Shropshire,
Graduate Faculty Representative
Acknowledgments
Thanks are due to many people who have provided information, comments and
insight. I apologize in advance to anyone that I have left out. The original
direction and technical ideas came from my adviser Abderrafi Ougouag, as well
as continued encouragement and discussion throughout its creation. Thanks go
to Javier Ortensi for the encouragement and discussion as he figured out how
to use the earthquake data and I figured out how to generate it. At INL the
following people assisted: Rob Bratton and Will Windes with the graphite
literature review, Brian Boer with discussion and German translation, Hongbin
Zhang with encouragement and Chinese translation and Suzette J. Payne for
providing me with the Earthquake motion data. The following PBMR (South
Africa) employees provided valuable help locating graphite literature and
data: Frederik Reitsma, Pieter Goede and Alastair Ramlakan. The Juelich
(Germany) people Peter Pohl, Johannes Fachinger and Werner von Lensa provided
valuable assistance with understanding AVR. Thanks to Professor Mary Dunzik-
Gougar for introducing me to many of these people, as well as encouragement
and feedback on this PhD and participating as co-chair on the dissertation
committee. Thanks to the other members of my committee, Dr. Michael Lineberry,
Dr. Steve C. Chiu and Dr. Steve Shropshire, for providing valuable feedback on
the dissertation. Thanks to Gannon Johnson for pointing out that length needed
to be tallied separately from the length times force tally for the wear
calculation (this allowed the vibration issue to be found). Thanks to
Professor Jan Leen Kloosterman of the Delft University of Technology for
providing me the PEBDAN program used for calculating Dancoff factors. The work
was partially supported by the U.S. Department of Energy, Assistant Secretary
for the office of Nuclear Energy, under DOE Idaho Operations Office Contract
DEAC07-05ID14517. The financial support is gratefully acknowledged.
This dissertation contains work that was first published in the following
conferences: Mathematics and Computation, Supercomputing, Reactor Physics and
Nuclear and Biological Applications, Palais des Papes, Avignon, France,
September 12-15, 2005; HTR2006: 3rd International Topical Meeting on High
Temperature Reactor Technology October 1-4, 2006, Johannesburg, South Africa;
Joint International Topical Meeting on Mathematics & Computation and
Supercomputing in Nuclear Applications (M&C + SNA 2007) Monterey, California,
April 15-19, 2007; Proceedings of the 4th International Topical Meeting on
High Temperature Reactor Technology, HTR2008, September 28-October 1, 2008,
Washington, DC USA and PHYSOR 2010 - Advances in Reactor Physics to Power the
Nuclear Renaissance Pittsburgh, Pennsylvania, USA, May 9-14, 2010.
My mother helped me edit papers I wrote in my pre-college years, and my father
taught me that math is useful in the real world. For this and all the other
help in launching me into the world, I thank my parents.
Last, but certainly not least, thanks go to my wife, Elizabeth Cogliati for
her encouragement and support. This support has provided me with the time
needed to work on the completion of this dissertation. This goes above and
beyond the call of duty since I started a PhD the same time we started a
family. Thanks to my son for letting me skip my first Nuclear Engineering exam
by being born, as well as asking all the really important questions. Thanks to
my daughter for working right beside me on her dissertation on the little red
toy laptop.
###### Contents
Abstract
1 Introduction
  1.1 Pebble Bed Reactors Introduction
  1.2 Dissertation Introduction
2 Motivation
3 Previous work
  3.1 Static Friction Overview
  3.2 Simulation of Mechanics of Granular Material
4 Mechanics Model
  4.1 Overview of Model
  4.2 Integration
  4.3 Geometry Modeling
  4.4 Packing Methods
  4.5 Typical Parameters
5 A New Static Friction Model
  5.0.1 Use of Parallel Velocity for Slip Updating
  5.0.2 Adjustment of Oversize Slips
  5.0.3 Rotation of Stuck-Slip
  5.0.4 Differential Equation for Surface Slip Rotating
  5.1 Testing of Static Friction Model With Pyramid Test
    5.1.1 Derivation of Minimum Static Frictions
    5.1.2 Use of Benchmark
  5.2 Janssen’s Formula Comparison
6 Unphysical Approximations
7 Code Speedup and Parallelization
  7.1 General Information about profiling
  7.2 Overview of Parallel Architectures and Coding
  7.3 Lock-less Parallel O(N) Collision Detection
  7.4 MPI Speedup
  7.5 OpenMP Speedup
  7.6 Checking the Parallelization
  7.7 Results
8 Applications
  8.1 Applications in Support of Reactor Physics
    8.1.1 Dancoff Factors
    8.1.2 Angle of Repose
    8.1.3 Pebble Ordering with Recirculation
  8.2 Application to Earthquake modeling
    8.2.1 Movement of Earthquakes
    8.2.2 Method Of Simulation
    8.2.3 Earthquake Results
    8.2.4 Earthquake Equations
9 Construction of a Dust Production Framework
10 Future Work
11 Summary and Conclusions
A Calculation of Packing Fractions
B Determination of dust production coefficients
  B.1 Calculation of Force in Reactor Bed
  B.2 Prior data on dust production
  B.3 Prior Prediction Work
###### List of Figures
3.1 Comparison Between PEBBLES Outputs and Benenati and Brosilow Data
4.1 Principle Vectors in the Interaction of Two Pebbles
4.2 PRIMe Method Illustration
4.3 Virtual Chute Method
5.1 Static Friction Vectors
5.2 Projections to ds
5.3 Static Friction Vectors for Wall
5.4 Sphere Location Diagram
5.5 Pyramid Diagram
5.6 Force Diagram
5.7 Relevant Forces on Wall from Pebble
5.8 Comparison With 0.05 and 0.15 $\mu$
5.9 Comparison With 0.25 and 0.9 $\mu$
7.1 Sample Cluster Architecture
7.2 Determining Nearby Pebbles from Grid
8.1 Flow Field Representation (arrow lengths are proportional to local average pebble velocity)
8.2 Dancoff Factors for AVR
8.3 2-D Projection of Pebble Cone on HTR-10 (crosses represent centers of pebbles)
8.4 Pebbles Before Recirculation
8.5 Pebbles After Recirculation
8.6 Total Earthquake Displacement
8.7 0.65 Static Friction Packing over Time
8.8 0.35 Static Friction Packing over Time
8.9 Different Radial Packing Fractions
8.10 Changes in Packing Fraction
8.11 Neutronics and Thermal Effects from J. Ortensi
9.1 Pebble to Pebble Distance Traveled
9.2 Pebble to Surface Distance Traveled
9.3 Average Normal Contact Force
9.4 Pebble to Pebble Wear
9.5 Pebble to Surface Wear
A.1 Area Inside Geometry
A.2 Area Outside Geometry
B.1 AVR Dimensions
###### List of Tables
4.1 Typical Constants used in Simulation
5.1 Sphere location table
7.1 OpenMP speedup results
7.2 MPI/OpenMP speedup results
B.1 AVR Data
B.2 THTR Data
## Abstract
Pebble bed reactors (PBR) have moving graphite fuel pebbles. This unique
feature provides advantages, but also means that simulation of the reactor
requires understanding the typical motion and location of the granular flow of
pebbles.
This dissertation presents a method for simulation of motion of the pebbles in
a PBR. A new mechanical motion simulator, PEBBLES, efficiently simulates the
key elements of motion of the pebbles in a PBR. This model simulates
gravitational force and contact forces including kinetic and true static
friction. It is used for a variety of tasks, including simulation of the effect
of earthquakes on a PBR, calculation of packing fractions, Dancoff factors,
pebble wear and the pebble force on the walls. The simulator includes a new
differential static friction model for the varied geometries of PBRs. A new
static friction benchmark was devised by analytically solving the mechanics
equations to determine the minimum pebble-to-pebble friction and pebble-to-
surface friction for a five pebble pyramid. This pyramid check as well as a
comparison to the Janssen formula was used to test the new static friction
equations.
Because larger pebble bed simulations involve hundreds of thousands of pebbles
and long periods of time, the PEBBLES code has been parallelized. PEBBLES runs
on shared memory architectures and distributed memory architectures. For the
shared memory architecture, the code uses a new O(n) lock-less parallel
collision detection algorithm to determine which pebbles are likely to be in
contact. The new collision detection algorithm improves on the traditional
non-parallel O(n log(n)) collision detection algorithm. These features combine
to form a fast parallel pebble motion simulation.
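The grid-based idea behind the O(n) broad phase can be sketched serially as follows; the lock-less parallel grid insertion used by PEBBLES is not shown, and the uniform-grid binning here is only an illustration of the general technique:

```python
def grid_candidates(centers, diameter):
    # Broad-phase contact detection on a uniform grid with cell size equal
    # to the sphere diameter: bin each sphere by its center, then test only
    # spheres in the 27 surrounding cells. Two spheres of equal diameter d
    # are in contact when their center distance is below d.
    cells = {}
    for i, (x, y, z) in enumerate(centers):
        key = (int(x // diameter), int(y // diameter), int(z // diameter))
        cells.setdefault(key, []).append(i)
    contacts = set()
    for (cx, cy, cz), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                        for i in members:
                            if i < j:
                                d2 = sum((a - b) ** 2 for a, b in
                                         zip(centers[i], centers[j]))
                                if d2 < diameter ** 2:
                                    contacts.add((i, j))
    return contacts
```

Because every sphere lands in exactly one cell and only a constant number of neighboring cells are examined, the work grows linearly in the number of spheres for bounded packing density.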
The PEBBLES code provides new capabilities for understanding and optimizing
PBRs. The PEBBLES code has provided the pebble motion data required to
calculate the motion of pebbles during a simulated earthquake. The PEBBLES
code provides the ability to determine the contact forces and the lengths of
motion in contact. This information combined with the proper wear coefficients
can be used to determine the dust production from mechanical wear. These new
capabilities enhance the understanding of PBRs, and the capabilities of the
code will allow future improvements in understanding.
## Chapter 1 Introduction
### 1.1 Pebble Bed Reactors Introduction
Pebble bed nuclear reactors are a unique reactor type that has been proposed
and used experimentally. Pebble bed reactors were initially developed in
Germany in the 1960s when the AVR demonstration reactor was created. The 10
megawatt HTR-10 reactor achieved first criticality in 2000 in China and future
reactors are planned. In South Africa, Pebble Bed Modular Reactor Pty. Ltd.
was designing a full scale pebble bed reactor to produce process heat or
electricity.
Pebble bed nuclear reactors use graphite spheres (usually about 6 cm in
diameter) for containing the fuel of the reactor. The graphite spheres encase
smaller spheres of TRistructural-ISOtropic (TRISO) particle fuel. Unlike most
reactors, the fuel is not placed in an orderly static arrangement. Instead,
the graphite spheres are dropped into the top of the reactor, travel randomly
down through the reactor core, and are removed from the bottom of the reactor.
The pebbles are then possibly recirculated depending on the amount of burnup
of the pebble and the reactor’s method of operation.
The first pebble bed reactor was conceived in the 1950s in West Germany, using
helium gas-cooling and spherical graphite fuel elements. Construction on the
Arbeitsgemeinschaft Versuchsreaktor (AVR) 15 MWe reactor was started in 1959
at the KFA Research Centre Jülich. It started operation in 1967 and continued
operation for 21 years until 1988. The reactor operated with an outlet
temperature of 950 °C. The AVR demonstrated the potential for the pebble bed
reactor concept. Over the course of its operation, loss-of-coolant experiments
were successfully performed.
The second pebble bed reactor was the Thorium High Temperature Reactor (THTR).
This reactor was built in West Germany for an electric utility. It was a 300
MWe plant that achieved full power in September 1986. In October 1988, when
the reactor was shutdown for maintenance, 35 bolt heads were found in the hot
gas ducts leading to the steam generators. The determination was made that the
plant could be restarted, but funding difficulties prevented this from
occurring and the reactor was decommissioned (Goodjohn, 1991).
The third pebble bed reactor to be constructed and the only one that is
currently operable is the 10 MWt High Temperature Reactor (HTR-10). This
reactor is at Tsinghua University in China. Construction started in 1994, and
the reactor reached first criticality in December 2000. This reactor is helium
cooled and has an outlet temperature of 700 °C (Wu et al., 2002; Xu and Zuo,
2002).
The use of high temperature helium cooled graphite moderated reactors with
TRISO fuel particles has a number of advantages. A TRISO particle consists of
a spherical fuel kernel (such as uranium oxide) surrounded by four concentric
layers: 1) a porous carbon buffer layer to accommodate fission product gases
which limits pressure on the outer layers, 2) an interior pyrolytic carbon
layer, 3) a layer of silicon carbide, and 4) an outer layer of pyrolytic
carbon. The pyrolytic layers shrink and creep with irradiation, partially
offsetting the pressure from the fission products in the interior as well as
helping contain the fission gases. The silicon carbide acts as a containment
mechanism for the metallic fission products (Miller et al., 2002). These layers
provide an in-core containment structure for the radioactive fuel and fission
products.
The high temperature gas reactors have some advantages over conventional light
water reactors. First, the higher outlet temperatures allow a higher Carnot
efficiency to be obtained (the outlet temperatures for pebble bed reactors
have ranged from 700 °C to 950 °C, compared to typical outlet temperatures on
the order of 300 °C for light water reactors, so the intrinsic Carnot
efficiency is higher). Second, the higher temperatures can be used for
process heat, which can reduce the use of methane. Third, the high temperature
under which TRISO particles can operate allows for the exploitation of the
negative temperature coefficient to safely shut down the reactor without use of
control rods (control rods are needed for a cold shutdown, however). Fourth,
the higher temperature is above the annealing temperature for graphite, which
safely removes Wigner energy (the accumulation of Wigner energy led to the
Windscale fire in that lower temperature graphite reactor). These are
advantages of both prismatic and pebble bed high temperature reactors (Gougar
et al., 2004; Wu et al., 2002).
Pebble bed reactors, unlike most other reactor types, have moving fuel. This
provides advantages but complicates modeling the reactors. A key advantage is
that pebble bed reactors can be refueled online, that is, reactor shutdown is
not needed for refueling. As a consequence, the reactors have low excess
reactivity, as new pebbles can be added or excess pebbles removed to maintain
the reactor at critical. The low excess reactivity removes the need for
burnable poisons. A final advantage is that the moving fuel allows the pebble
bed to be run with optimal moderation, where both increases and decreases in
the fuel-to-moderator ratio cause reduction in reactivity. Ougouag et al.
(2004) discuss the advantages of optimal moderation including improved fuel
utilization. However, because the fuel is moving, many traditional methods of
modeling nuclear reactors are inapplicable without a method for quantifying
the motion. Hence, there is a need for development of methods usable for
pebble bed reactor modeling.
### 1.2 Dissertation Introduction
This dissertation describes a computer code, PEBBLES, that is designed to
provide a method of simulating the motion of the pebbles in a pebble bed
reactor.
Chapter 4 provides the details of how the simulation works. Chapter 5 has a
new static friction model developed for this dissertation.
Several checks have been made of the code. Figure 3.1 compares the PEBBLES
simulation to experimentally determined radial packing fractions. Section 5.1
describes a new analytical benchmark that was used to test the static friction
model in PEBBLES. Section 5.2 uses the Janssen model to test the static
friction in a cylindrical vat.
Motivating all of the above are the new applications, including calculating
Dancoff factors (Section 8.1.1), calculating the angle of repose (Section
8.1.2), and modeling an earthquake (Section 8.2).
## Chapter 2 Motivation
Most nuclear reactors have fixed fuel, including typical light water reactors.
Some reactor designs, such as non-fixed fuel molten salt reactors, have fuel
that is in fluid flow. Most designs for pebble bed reactors, however, have
moving granular fuel. Since this fuel is neither fixed nor easily treatable as
a fluid, predicting the behavior of the reactor requires the ability to
understand the characteristics of the positions and motion of the pebbles. For
example, predicting the probability of a neutron leaving one TRISO’s fueled
region and entering another fueled region depends on the typical locations of
the pebbles. A second example is predicting the effect of an earthquake on the
reactivity of the pebble bed reactor. This requires knowing how the positions
of the pebbles in the reactor change from the forces of the earthquake.
Accurate prediction of the typical features of the flow and arrangement of the
pebbles in the pebble bed reactor would be highly useful for their design and
operation.
The challenge is to gain the ability to predict the pebble flow and pebble
positions for start-up, steady state and transient pebble bed reactor
operation.
The objective of the research presented in this dissertation is to provide
this predicting ability. The approach used is to create a distinct element
method computer simulation. The simulation determines the locations and
velocities of all the pebbles in a pebble bed reactor and can calculate needed
tallies from this data. Over the course of creating this simulation, various
applications of the simulation were performed. These models allow the
operation of the pebble bed reactor to be better understood.
## Chapter 3 Previous work
Because the purpose of this dissertation is to produce a high fidelity
simulation that can provide predictions of the pattern and flow of pebbles,
previous efforts to simulate granular motion and packing were examined. A
variety of simulations of the motion of discrete elements have been created
for different purposes. Lu et al. (2001) applied a discrete element method
(DEM) to determine the characteristics of packed beds used as fusion reactor
blankets. Jullien et al. (1992) used a DEM to determine packing fractions for
spheres using different non-motion methods. Soppe (1990) used a rain method to
determine pore structures in different sized spheres. The rain method randomly
chooses a horizontal position, and then lowers a sphere down until it reaches
other existing spheres. This is then repeated to fill up the container. Freund
et al. (2003) used a rain method for fluid flow in chemical processing.
The use of non-motion pebble packing methods provides an approximation of the
positions of the pebbles. Unfortunately, non-motion methods will tend to
either under pack or over pack (sometimes both in the same model). For large
pebble bed reactors, the approximately ten-meter height of the reactor core
will result in different forces at the bottom than at the top. This will
change the packing fractions between the top and the bottom, so without key
physics, including static friction and the transmittal of force, non-motion
physics models will not even be able to get correct positional information.
Non-physics-based modeling cannot be used for predicting the effect of changes
in static friction or pebble loading methods even if only the position data is
required.
The initial PEBBLES code for calculation of pebble positions minimized the sum
of the gravitational and Hooke's law potential energies by adjusting pebble
positions. However, that simulation was insufficient for determining flow and
motion parameters and simulation of earthquake packing.
Additional references addressing full particle motion simulation were
evaluated. Kohring (1995) created a 3-D discrete element method simulation to
study diffusional mixing and provided detailed information on calculating the
kinetic forces for the simulation. The author describes a simple method of
calculating static friction. Haile (1997) discusses both how to simulate hard
spheres and soft spheres using only potential energy. The soft sphere method
in Haile proved useful for determining plausible pebble positions, but is
insufficient for modeling the motion. Hard spheres are simulated by
calculating the collision results from conservation laws. Soft spheres are
simulated by allowing small overlaps, and then having a resulting force
dependent on the overlap. Soft spheres are similar to what physically happens,
in that the contact area distorts, allowing distant points to approach closer
than would be possible if the spheres were truly infinitely hard and only
touched at one infinitesimal point. Hard spheres are impractical for a pebble
bed due to the frequent and continuous contact between spheres so soft spheres
are used instead. The dissertation by Ristow (1998) describes multiple methods
for simulation of granular materials. On Ristow’s list of methods was a model
similar to that used as the kernel of the work supporting this dissertation.
Ristow’s dissertation mentioned static friction and provided useful references
that will be discussed in Section 3.2.
To determine particle flows, Wait (2001) developed a discrete element method
that included only dynamic friction. Concurrently with this dissertation
research, Rycroft et al. (2006b) used a discrete element method, created for
other purposes, to simulate the flow of pebbles through a pebble bed reactor.
Multiple other discrete element codes have been created and PEBBLES is similar
to several of the full motion models. For most of the applications discussed
in this dissertation, only a model that simulates the physics with high
fidelity is useful. The PEBBLES dynamic friction model is similar to the model
used by Wait or Rycroft, but the static friction model incorporates some new
improvements that will be discussed later.
In addition to simulation by computer, other methods of determining the
properties of granular fluids have been used. Bedenig et al. (1968) used a
scale model to experimentally determine residence spectra (the amount of time
that pebbles from a given group take to pass through a reactor) for different
exit cone angles. Kadak and Bazant (2004) used scale models and small spheres
to estimate the flow of pebbles through a full scale pebble bed reactor. These
researchers also examined the mixing that would occur between different radial
zones as the pebbles traveled downward. Bernal et al. (1960) carefully lowered
steel spheres into cylinders and shook the cylinders to determine both loose
and dense packing fractions. The packing fraction and boundary density
fluctuations were experimentally measured by Benenati and Brosilow (1962). The
Benenati and Brosilow data have been used to verify that the PEBBLES code was
producing correct boundary density fluctuations (See Figure 3.1). Many
experiments were performed in the designing and operating of the AVR reactor
to determine relevant properties such as residence times and optimal chute
parameters (Bäumer et al., 1990). These experiments provide data for testing
the implementation of any computational model of pebble flow.
Figure 3.1: Comparison Between PEBBLES Outputs and Benenati and Brosilow Data
The PEBBLES simulation uses elements from a number of sources and uses
standard classical mechanics to calculate the motion of the pebbles from the
computed forces. The features in PEBBLES have been chosen to provide the
fidelity required while allowing run times short enough to accommodate
hundreds of thousands of pebbles. The next sections discuss the handling of
static friction.
### 3.1 Static Friction Overview
Static friction is an important effect in the movement of pebbles and their
locations in pebble bed reactors. This section briefly reviews static friction
and its effects in pebble bed reactors. Static friction is a force between two
contacting bodies that counteracts relative motion between them when they are
moving sufficiently slowly (Marion and Thornton, 2004). Macroscopically, the
maximum magnitude of the force is proportional to the normal force with the
following equation:
$|\mathbf{F}_{s}|\leq\mu|\mathbf{F}_{\perp}|$ (3.1)
where $\mu$ is the coefficient of static friction, $\mathbf{F}_{s}$ is the
static friction force and $\mathbf{F}_{\perp}$ is the normal (load) force.
Static friction results in several effects on granular materials. Without
static friction, the angle of the slope of a pile of a material (the angle of
repose) would be zero (Duran, 1999). Static friction also allows ‘bridges’ or
arches to form near the outlet chute. If the outlet chute is too small, the
bridging will be stable enough to clog the chute. Static friction will also
transfer force from the pebbles to the walls, resulting in lower pressure on
the walls than would occur without static friction (Sperl, 2006; Walker,
1966).
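The angle-of-repose effect follows directly from Equation (3.1): a body
resting on an incline remains static only while the tangential component of
gravity, $mg\sin\theta$, stays within the Coulomb bound
$\mu mg\cos\theta$, so the steepest stable angle is $\arctan\mu$. A minimal
sketch of this bound (the function name is illustrative, not part of PEBBLES):

```python
import math

def max_stable_slope_deg(mu):
    """Steepest incline (degrees) on which a body held only by
    Coulomb static friction remains at rest: tan(theta) <= mu."""
    return math.degrees(math.atan(mu))
```

With the static friction coefficient of 0.65 used later in Table 4.1 this
gives roughly 33 degrees; for a granular pile this single-body bound is only a
rough proxy for the actual angle of repose.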
For an elastic sphere, static friction's counteracting force is the result of
elastic displacement at the contact point. Without static friction, the
contact point would _slide_ as a result of relative motion at the surface.
With static friction, the spheres experience local shear that distorts their
shape so that the contact point remains constant. This behavior will be called
_stuck-slip_ , and it continues until the counteracting force exceeds
$\mu\mathbf{F}_{\perp}$. When the counteracting force exceeds that value, the
contact point changes and sliding occurs. The mechanics of this force for
elastic spheres were investigated by Mindlin and Deresiewicz (1953), whose
work derived exact formulas for the force as a function of the past relative
motion and force.
### 3.2 Simulation of Mechanics of Granular Material
Many simulations of granular materials incorporating static friction have been
devised. Cundall and Strack (1979) developed an early distinct element
simulation of granular materials that incorporated a computationally efficient
static friction approximation. Their method involved integration of the
relative velocity at the contact point and using the sum as a proxy for the
current static friction force. Since their method was used for simulation of
2-D circles, adaptation was required for 3-D granular materials. One key
aspect of adaptation is determining how the stuck-slip direction changes as a
result of contacting objects’ changing orientation.
Vu-Quoc and Zhang (1999) and Vu-Quoc et al. (2000) developed a 3-D discrete-
element method for granular flows. This model was used for simulation of
particle flow in chutes. They used a simplification of the Mindlin and
Deresiewicz model for calculating the stuck-slip magnitude, and project the
stuck-slip onto the tangent plane each time-step to rotate the stuck-slip
force direction. This correctly rotates the stuck-slip, but requires that the
rotation of the stuck-slip be done as a separate step, since it is not written
in a differential form.
Silbert et al. (2001) and Landry et al. (2003) describe a 3-D differential
version of the Cundall and Strack method. The literature states that
particle-wall interactions are handled identically. The model requires less
computation than the Vu-Quoc, Zhang and Walton model, and it was used for
modeling pebble bed flow (Rycroft et al., 2006a, b). This model, however, does
not specify how to apply the differential version to modeling curved walls.
## Chapter 4 Mechanics Model
The PEBBLES simulation calculates the forces on each individual pebble. These
forces are then used to calculate the subsequent motion and position of the
pebbles.
### 4.1 Overview of Model
The PEBBLES simulation tracks each individual pebble’s velocity, position,
angular velocity and static friction loadings. The following classical
mechanics differential equations are used for calculating the time derivatives
of those variables:
$\frac{d{\mathbf{v}}_{i}}{dt}=\frac{m_{i}\mathbf{{g}}+\sum_{i\neq
j}\mathbf{{F}}_{ij}+\mathbf{{F}}_{ci}}{m_{i}}$ (4.1)
$\frac{d{\mathbf{p}}_{i}}{dt}={\mathbf{v}}_{i}$ (4.2)
$\frac{d\mathbf{\omega}_{i}}{dt}=\frac{\sum_{i\neq j}\mathbf{{F}}_{\parallel
ij}\times r_{i}\hat{\mathbf{{n}}}_{ij}+\mathbf{F}_{\parallel ci}\times
r_{i}\hat{\mathbf{n}}_{ci}}{I_{i}}$ (4.3)
$\frac{d\mathbf{s}_{ij}}{dt}=\mathbf{S}(\mathbf{F}_{\perp
ij},\mathbf{v}_{i},\mathbf{v}_{j},\mathbf{p}_{i},\mathbf{p}_{j},\mathbf{s}_{ij})$
(4.4)
where $\mathbf{F}_{ij}$ is the force from pebble $j$ on pebble $i$,
$\mathbf{F}_{ci}$ is the force of the container on pebble $i$, $\mathbf{g}$ is
the gravitational acceleration constant, $m_{i}$ is the mass of pebble $i$,
$\mathbf{v}_{i}$ is the velocity of pebble $i$, $\mathbf{p}_{i}$ is the
position vector for pebble $i$, $\omega_{i}$ is the angular velocity of pebble
i, $\mathbf{F}_{\parallel ij}$ is the tangential force between pebbles $i$ and
$j$, $\mathbf{F}_{\perp ij}$ is the perpendicular force between pebbles $i$
and $j$, $r_{i}$ is the radius of pebble $i$, $I_{i}$ is the moment of inertia
for pebble i, $\mathbf{F}_{\parallel ci}$ is the tangential force of the
container on pebble i, $\hat{\mathbf{n}}_{ci}$ is the unit vector normal to
the container wall on pebble i, $\mathbf{\hat{n}}_{ij}$ is the unit vector
pointing from the position of pebble $i$ to that of pebble $j$,
$\mathbf{s}_{ij}$ is the current static friction loading between pebbles $i$
and $j$, and $\mathbf{S}$ is the function to compute the change in the static
friction loading. The static friction model contributes to the
$\mathbf{F}_{\parallel ij}$ term which is also part of the $\mathbf{F}_{ij}$
term. Figure 4.1 illustrates the principal vectors with pebble $i$ going in
the $\mathbf{v}_{i}$ direction and rotating around the $\mathbf{\omega}_{i}$
axis, and pebble $j$ going in the $\mathbf{v}_{j}$ direction and rotating
around the $\mathbf{\omega}_{j}$ axis.
Figure 4.1: Principal Vectors in the Interaction of Two Pebbles
The mass and moment of inertia are calculated assuming spherical symmetry with
the equations:
$m=\frac{4}{3}\pi\left[\rho_{c}r_{c}^{3}+\rho_{o}(r_{o}^{3}-r_{c}^{3})\right]$
(4.5)
$I=\frac{8}{15}\pi\left[\rho_{c}r_{c}^{5}+\rho_{o}(r_{o}^{5}-r_{c}^{5})\right]$
(4.6)
where $r_{c}$ is the radius of inner (fueled) zone of the pebble, $r_{o}$ is
the radius of whole pebble, $\rho_{c}$ is the average density of center fueled
region and $\rho_{o}$ is the average density of outer non-fueled region.
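Equations (4.5) and (4.6) translate directly into code. The sketch below uses
hypothetical densities; a quick consistency check is that for uniform density
($\rho_{c}=\rho_{o}$) they reduce to the familiar solid-sphere results
$m=\frac{4}{3}\pi\rho r_{o}^{3}$ and $I=\frac{2}{5}mr_{o}^{2}$:

```python
import math

def pebble_mass(rho_c, rho_o, r_c, r_o):
    """Mass of a two-zone sphere, Eq. (4.5)."""
    return (4.0 / 3.0) * math.pi * (rho_c * r_c**3 + rho_o * (r_o**3 - r_c**3))

def pebble_inertia(rho_c, rho_o, r_c, r_o):
    """Moment of inertia of a two-zone sphere, Eq. (4.6)."""
    return (8.0 / 15.0) * math.pi * (rho_c * r_c**5 + rho_o * (r_o**5 - r_c**5))
```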
The dynamic (or kinetic) friction model is based on the model described by
Wait (2001). Wait's model and the PEBBLES model calculate the dynamic friction
using a combination of the relative velocities and the pressure between the
pebbles, as shown in Equations (4.7) and (4.8):
$\mathbf{F}_{\perp ij}=hl_{ij}\hat{\mathbf{n}}_{ij}-C_{\perp}\mathbf{v}_{\perp
ij},l_{ij}>0$ (4.7)
$\mathbf{F}_{d\parallel ij}=-min(\mu|\mathbf{F}_{\perp
ij}|,C_{\parallel}|\mathbf{v}_{\parallel ij}|)\hat{\mathbf{v}}_{\parallel
ij},l_{ij}>0$ (4.8)
where $C_{\parallel}$ is the tangential dash-pot constant, $C_{\perp}$ is the
normal dash-pot constant, $\mathbf{F}_{\perp ij}$ is the normal force between
pebbles i and j, $\mathbf{F}_{d\parallel ij}$ is the tangential dynamic
friction force between pebbles i and j, $h$ is the normal Hooke’s law
constant, $l_{ij}$ is the overlap between pebbles i and j,
$\mathbf{v}_{\parallel ij}$ is the component of the velocity between two
pebbles perpendicular to the line joining their centers, $\mathbf{v}_{\perp
ij}$ is the component of the velocity between two pebbles parallel to the line
joining their centers, $\mathbf{v}_{ij}$ is the relative velocity between
pebbles i and j and $\mu$ is the kinetic friction coefficient. Equations
(4.9-4.12) relate supplemental variables to the primary variables:
$\mathbf{F}_{ij}=\mathbf{F}_{\perp ij}+\mathbf{F}_{\parallel ij}$ (4.9)
$\mathbf{v}_{\perp
ij}=(\mathbf{v}_{ij}\cdot\hat{\mathbf{n}}_{ij})\mathbf{\hat{n}}_{ij}$ (4.10)
$\mathbf{v}_{\parallel ij}=\mathbf{v}_{ij}-\mathbf{v}_{\perp ij}$ (4.11)
$\mathbf{v}_{ij}=(\mathbf{v}_{i}+\mathbf{\omega}_{i}\times
r_{i}\mathbf{\hat{n}}_{ij})-(\mathbf{v}_{j}+\mathbf{\omega}_{j}\times
r_{j}\mathbf{\hat{n}}_{ji})$ (4.12)
The friction force is then bounded by the friction coefficient and the normal
force, to prevent it from being too great:
$\mathbf{F}_{f\parallel ij}=\mathbf{F}_{s\parallel ij}+\mathbf{F}_{d\parallel
ij}$ (4.13)
$\mathbf{F}_{\parallel ij}=min(\mu|\mathbf{F}_{\perp
ij}|,|\mathbf{F}_{f\parallel ij}|){\hat{\mathbf{F}}_{f\parallel ij}}$ (4.14)
where $\mathbf{F}_{s\parallel ij}$ is the static friction force between
pebbles $i$ and $j$, $\mathbf{F}_{d\parallel ij}$ is the kinetic friction
force between pebbles $i$ and $j$, $h_{s}$ is the coefficient for force from
slip, $\mathbf{s}_{ij}$ is the slip distance perpendicular to the normal force
between pebbles $i$ and $j$, ${v_{\mathrm{max}}}$ is the maximum velocity
under which static friction is allowed to operate, and $\mu$ is the static
friction coefficient when the velocity is less than $v_{\mathrm{max}}$ and the
kinetic friction coefficient when the velocity is greater. These equations
fully enforce the first requirement of a static friction method,
${|\mathbf{F}_{s}|\leq\mathit{{\mu}}|\mathbf{F}_{\perp}|}$.
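Condensing Equations (4.7), (4.8), (4.10), (4.11) and (4.14), the per-contact
force computation can be sketched in Python. This is an illustrative sketch
using the typical constants of Table 4.1 as defaults, not the PEBBLES
implementation, and the static-friction slip term of Equation (4.13) is
omitted; sign conventions follow the equations as written:

```python
import numpy as np

def contact_forces(n_hat, overlap, v_rel, mu, h=1.0e6, c_perp=200.0, c_par=200.0):
    """Normal force (Eq. 4.7) and Coulomb-bounded tangential dynamic
    friction (Eqs. 4.8 and 4.14) for one pebble-pebble contact."""
    v_perp = np.dot(v_rel, n_hat) * n_hat          # Eq. (4.10)
    v_par = v_rel - v_perp                         # Eq. (4.11)
    f_n = h * overlap * n_hat - c_perp * v_perp    # Eq. (4.7)
    limit = mu * np.linalg.norm(f_n)               # Coulomb bound
    speed = np.linalg.norm(v_par)
    if speed > 0.0:
        f_t = -min(limit, c_par * speed) * (v_par / speed)  # Eq. (4.8)
    else:
        f_t = np.zeros_like(v_rel)
    return f_n, f_t
```

For a 1 mm overlap with no normal relative velocity, the normal force is
1000 N along the contact normal, and a 0.1 m/s tangential relative velocity
produces a 20 N dynamic friction force, well inside the Coulomb limit of 400 N
at $\mu=0.4$.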
### 4.2 Integration
When all the position, linear velocity, angular velocity and slips are
combined into a vector $\mathbf{y}$, the whole computation can be written as a
differential formulation in the form:
$\displaystyle\mathbf{y}^{\prime}=\mathbf{f}(t,\mathbf{y})$ (4.15)
$\displaystyle\mathbf{y}(t_{0})=\mathbf{y}_{0}$ (4.16)
This can be solved by a variety of methods with the simplest being Euler’s
method:
$\mathbf{y}_{1}=\mathbf{y}_{0}+\Delta t\mathbf{f}(t,\mathbf{y}_{0})$ (4.17)
In addition, both the Runge-Kutta method and the Adams-Moulton method can be
used for solving this equation. These methods improve the accuracy of the
simulation. However, they do not improve the wall-clock time at the lowest
stable time-step, since the additional time required for computation negates
the advantage of being able to use somewhat longer time-steps. In addition,
when running on a cluster, more data needs to be transferred, since these
methods allow non-contacting pebbles to affect each other within a single
time-step calculation.
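The trade-off described above can be seen in a toy comparison: a higher-order
step buys accuracy at the cost of extra derivative evaluations, which for
PEBBLES means extra force calculations and, on a cluster, extra communication.
A generic sketch (not the PEBBLES integrator itself):

```python
def euler_step(f, t, y, dt):
    """One step of Euler's method, Eq. (4.17)."""
    return y + dt * f(t, y)

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step: more accurate
    per step, but four derivative evaluations instead of one."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

On the test equation $y'=-y$ with $\Delta t=0.1$, one Euler step errs by about
$5\times 10^{-3}$ while one Runge-Kutta step errs by about $10^{-7}$.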
### 4.3 Geometry Modeling
For any geometry interaction, two things need to be calculated: the overlap
distance $l$ (or, technically, the mutual approach of distant points) and the
normal to the surface $\hat{n}$. The inputs are the radius of the pebble $r$
and the position of the pebble, $\mathbf{p}$, with components
$\mathbf{p}_{x}$, $\mathbf{p}_{y}$, and $\mathbf{p}_{z}$.
For the floor contact this is:
$\displaystyle l$ $\displaystyle=(\mathbf{p}_{z}-r)-floor\\_location$ (4.18)
$\displaystyle\hat{n}$ $\displaystyle=\hat{z}$ (4.19)
For cylinder contact on the inside of a cylinder this is:
$\displaystyle pr$
$\displaystyle=\sqrt{\mathbf{p}_{x}^{2}+\mathbf{p}_{y}^{2}}$ (4.20)
$\displaystyle l$ $\displaystyle=(pr+r)-cylinder\\_radius$ (4.21)
$\displaystyle\hat{n}$
$\displaystyle=\frac{-\mathbf{p}_{x}}{pr}\hat{x}+\frac{-\mathbf{p}_{y}}{pr}\hat{y}$
(4.22)
For cylinder contact on the outside of a cylinder this is:
$\displaystyle pr$
$\displaystyle=\sqrt{\mathbf{p}_{x}^{2}+\mathbf{p}_{y}^{2}}$ (4.23)
$\displaystyle l$ $\displaystyle=cylinder\\_radius+r-pr$ (4.24)
$\displaystyle\hat{n}$
$\displaystyle=\frac{\mathbf{p}_{x}}{pr}\hat{x}+\frac{\mathbf{p}_{y}}{pr}\hat{y}$
(4.25)
For contact on the inside of a cone defined by $radius=mz+b$:
$\displaystyle pr$ $\displaystyle=\sqrt{\mathbf{p}_{x}^{2}+\mathbf{p}_{y}^{2}}$ (4.26)
$\displaystyle z_{c}$ $\displaystyle=\frac{m(pr-b)+\mathbf{p}_{z}}{m^{2}+1}$ (4.27)
$\displaystyle r_{c}$ $\displaystyle=mz_{c}+b$ (4.28)
$\displaystyle x_{c}$ $\displaystyle=(r_{c}/pr)\mathbf{p}_{x}$ (4.29)
$\displaystyle y_{c}$ $\displaystyle=(r_{c}/pr)\mathbf{p}_{y}$ (4.30)
$\displaystyle\mathbf{c}$ $\displaystyle=x_{c}\hat{x}+y_{c}\hat{y}+z_{c}\hat{z}$ (4.31)
$\displaystyle\mathbf{d}$ $\displaystyle=\mathbf{p}-\mathbf{c}$ (4.32)
$\displaystyle l$ $\displaystyle=|\mathbf{d}|+r,\quad r_{c}<pr$ (4.33)
$\displaystyle\hat{n}$ $\displaystyle=-\hat{\mathbf{d}},\quad r_{c}<pr$ (4.34)
$\displaystyle l$ $\displaystyle=r-|\mathbf{d}|,\quad r_{c}\geq pr$ (4.35)
$\displaystyle\hat{n}$ $\displaystyle=\hat{\mathbf{d}},\quad r_{c}\geq pr$ (4.36)
These equations are derived from minimizing the distance between the contact
point $\mathbf{c}$ and the pebble position $\mathbf{p}$.
For contact on a plane defined by $ax+by+cz+d=0$ where the equation has been
normalized so that $a^{2}+b^{2}+c^{2}=1$, the following is used:
$\displaystyle dp$
$\displaystyle=a\mathbf{p}_{x}+b\mathbf{p}_{y}+c\mathbf{p}_{z}+d$ (4.37)
$\displaystyle l$ $\displaystyle=r-dp$ (4.38) $\displaystyle\hat{n}$
$\displaystyle=a\hat{x}+b\hat{y}+c\hat{z}$ (4.39)
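The wall-contact formulas above all have the same shape: compute an overlap
$l$ and a unit normal $\hat{n}$. Two of them, the inside-cylinder case of
Equations (4.20)-(4.22) and the plane case of Equations (4.37)-(4.39), look
like this in a Python sketch (function names are ours):

```python
import math

def inside_cylinder_contact(p, r, cyl_radius):
    """Overlap and inward normal for a pebble inside a cylinder,
    Eqs. (4.20)-(4.22). l > 0 indicates contact with the wall."""
    pr = math.hypot(p[0], p[1])
    l = (pr + r) - cyl_radius
    n = (-p[0] / pr, -p[1] / pr, 0.0)
    return l, n

def plane_contact(p, r, a, b, c, d):
    """Overlap and normal for the plane a x + b y + c z + d = 0,
    normalized so a^2 + b^2 + c^2 = 1, Eqs. (4.37)-(4.39)."""
    dp = a * p[0] + b * p[1] + c * p[2] + d
    return r - dp, (a, b, c)
```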
Combinatorial geometry operations can also be done. Intersections and unions
of multiple geometry types are handled by calculating the overlaps and normals
for all the geometry objects in the intersection or union. For an
intersection, where there is overlap with all the geometry objects, the
smallest overlap and its associated normal are kept (which may be no overlap).
For a union, the largest overlap and its associated normal are kept.
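The intersection/union rule just described amounts to a min/max over the
per-object (overlap, normal) pairs; a sketch, assuming each object has already
reported its overlap and normal:

```python
def combine(results, mode):
    """Combine (overlap, normal) pairs from several geometry objects:
    intersections keep the smallest overlap (possibly no overlap),
    unions keep the largest."""
    if mode == "intersection":
        return min(results, key=lambda on: on[0])
    return max(results, key=lambda on: on[0])
```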
For testing that a geometry is correct, a simple check is to fill up the
geometry with pebbles using one of the methods described in Section 4.4, and
then make sure that linear and angular energy dissipate. Many geometry errors
will show up by artificially creating extra linear momentum. For example, if a
plane is only defined at the top, but it is possible for pebbles to leak deep
into the bottom of the plane, they will go from having no overlap to a very
high overlap, which will give the pebble a large force. This results in extra
energy being added each time a pebble encounters the poorly defined plane,
which will show up in energy tallies.
### 4.4 Packing Methods
The pebbles are packed using three main methods. The simplest creates a very
loose packing with an approximately 0.15 packing fraction by randomly choosing
locations and removing the overlapping ones. These pebbles are then allowed to
fall down to compact to a realistic packing fraction.
The second is the PRIMe method developed by Kloosterman and Ougouag (2005). In
this method large numbers of random positions (on the order of 100,000 more
than will fit) are generated. The random positions are sorted by height, and
starting at the bottom, the ones that fit are kept. Figure 4.2 illustrates
this process. This generates packing fractions of approximately 0.55. The
pebbles are then allowed to fall to compact the bed. This compaction takes
less time than starting with a 0.15 packing fraction.
Figure 4.2: PRIMe Method Illustration
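The PRIMe idea of sorting random candidate positions by height and keeping the
ones that fit can be sketched as follows. This toy version packs a box with an
$O(n^2)$ overlap test; the actual PRIMe method of Kloosterman and Ougouag
works in the reactor geometry, at far larger scale, and with faster searches:

```python
import random

def prime_like_pack(n_candidates, radius, box, seed=0):
    """Generate many random centers in a box, sort them by height,
    and keep each one that does not overlap an accepted sphere."""
    rng = random.Random(seed)
    cands = [(rng.uniform(radius, box[0] - radius),
              rng.uniform(radius, box[1] - radius),
              rng.uniform(radius, box[2] - radius))
             for _ in range(n_candidates)]
    cands.sort(key=lambda p: p[2])          # bottom-up, as in PRIMe
    kept = []
    d2 = (2 * radius) ** 2                  # squared contact distance
    for p in cands:
        if all((p[0] - q[0])**2 + (p[1] - q[1])**2 + (p[2] - q[2])**2 >= d2
               for q in kept):
            kept.append(p)
    return kept
```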
The last method automatically generates virtual chutes above the bed where the
actual inlet chutes are, and then loads the pebbles into the chutes. Figure
4.3 shows this in progress. This produces piles at the locations of the inlet
chutes, but can be done more quickly than a recirculation. The other two
methods generate flat surfaces at the top, which is unrealistic, since the
surface of a recirculated bed will have cones under each inlet chute.
Figure 4.3: Virtual Chute Method
### 4.5 Typical Parameters
The typical parameters used with the PEBBLES code are described in Table 4.1.
Alternative numbers will be described when used.
Table 4.1: Typical Constants used in Simulation

Constant | Value
---|---
Gravitational Acceleration $g$ | 9.8 m/s$^{2}$
Radius of pebbles $r$ | 0.03 m
Mass of Pebble $m$ | 0.2071 kg
Moment of Inertia $I$ | 7.367e-05 kg m$^{2}$
Hooke’s law constant $h$ | 1.0e6
Dash-pot constants $C_{\parallel}$ and $C_{\perp}$ | 200.0
Kinetic Friction Coefficient $\mu$ | 0.4 or sometimes 0.25
Static Friction Coefficient $\mu_{s}$ | 0.65 or sometimes 0.35
Maximum static friction velocity $v_{max}$ | 0.1 m/s
## Chapter 5 A New Static Friction Model
The static friction model in PEBBLES is used to calculate the direction and
magnitude of the static friction force. Other models have been created to
calculate static friction, but the PEBBLES model provides the combination of
being a differential model (as opposed to one where the force is rotated as a
separate step) and being able to handle the types of geometries that exist in
pebble bed reactors.
The static friction model has two key requirements. First, the force from
stuck-slip must be updated based on relative motion of the pebbles. Second,
the current direction of the force must be calculated since the pebbles can
rotate in space.
#### 5.0.1 Use of Parallel Velocity for Slip Updating
For elastic spheres, the exact method of updating the stuck-slip force is that
of Mindlin and Deresiewicz (1953). This method requires computationally and
memory intensive calculations to track the forces. Instead, a simpler method
is used to approximate the force. This method, described by Cundall and Strack
(1979), uses the integration of the parallel relative velocity as the
displacement. The essential idea is that the farther the pebbles have
stuck-slipped at the contact point, the greater the counteracting static
friction force needs to be. This is what happens under more accurate models
such as Mindlin and Deresiewicz. There are two
approximations imposed by this assumption. First, the amount the force changes
is independent of the normal force. Second, the true hysteretic effects that
are dependent on details of the loading history are ignored. For simulations
where the exact dynamics of static friction are important, these could
potentially be serious errors. However, since static friction only occurs when
the relative speed is low, the dynamics of the simulation usually are
unimportant. Thus, for most circumstances, the following approximation can be
used for the rate of change of the magnitude and non-rotational change of the
stuck-slip:
$\frac{d\mathbf{s}_{ij}}{dt}=\mathbf{v}_{\parallel ij}$ (5.1)
#### 5.0.2 Adjustment of Oversize Slips
The slips can build up to unrealistically large amounts, that is, greater than
$\mu|\mathbf{F}_{\perp}|$; equation (5.1) places no limit on the maximum size
of the slip. The excessive slip is handled at two different points. First,
when the frictions are added together to determine the total friction, they
are limited by $\mu|\mathbf{F}_{\perp}|$ in equation (4.14). This by itself is
insufficient, because the slip stores potential energy that appears any time
the normal force increases. This manifests itself by causing vibration of the
pebbles to continue for long periods of time. Two methods for fixing the
hidden slip problem are available in PEBBLES. The simplest drops any slip that
exceeds the static friction threshold (or an input parameter value somewhat
above the static friction threshold, so that small vibrations do not cause the
slip to disappear).
The second method used in PEBBLES is to decrease any slip that is over a
threshold value. If the slip is too great, a derivative opposite in direction
to the current slip is added as an additional term in the slip time
derivative:
$\frac{d\mathbf{s}_{ij}}{dt}=-\mathrm{R}(|\mathbf{s}_{ij}|-s_{sd}\mu|\mathbf{F}_{\perp
ij}|)\hat{s_{ij}}$ (5.2)
Here $\mathrm{R}(x)$ is the ramp function (which is $x$ if $x>0$ and 0
otherwise) and $s_{sd}$ is a constant that selects how much the slip is
allowed to exceed the static friction threshold (usually 1.1 in PEBBLES). This
additional derivative term is used in most PEBBLES runs, since it allows
vibrational energy to decrease yet does not cause the pyramid benchmark to
fail as complete removal of overly large slips does.
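A sketch of the ramp term of Equation (5.2): whenever the stored slip exceeds
$s_{sd}\,\mu|\mathbf{F}_{\perp}|$, a contribution pointing opposite to the
slip is added to its time derivative (variable names are ours):

```python
import numpy as np

def slip_decay_term(s, f_n_mag, mu_s, s_sd=1.1):
    """Extra slip derivative of Eq. (5.2): a ramp that shrinks the
    slip vector s whenever |s| exceeds s_sd * mu * |F_perp|."""
    mag = np.linalg.norm(s)
    excess = mag - s_sd * mu_s * f_n_mag
    if excess <= 0.0 or mag == 0.0:
        return np.zeros_like(s)
    return -excess * (s / mag)
```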
When using non-Euler integration methods, the change in slip is calculated
multiple times. Each time it is calculated, it might be set to be zeroed. In
the PEBBLES code, if any of the added up slips for a given contact were set to
be zeroed, the final slip is zeroed. This is not an ideal method, but it works
well enough.
#### 5.0.3 Rotation of Stuck-Slip
The static friction force must also be rotated so that it stays in the plane
of contact between the two pebbles. When the pebbles’ center velocities
differ, the relative center positions change, and so the direction of the
stuck-slip must change as well. That is:
$\mathbf{p}_{in+1}-\mathbf{p}_{jn+1}\approx\mathbf{p}_{in}-\mathbf{p}_{jn}+(\mathbf{v}_{in}-\mathbf{v}_{jn})\Delta
t$ (5.3)
First, let $\mathbf{n}_{ijn}=\mathbf{p}_{i}-\mathbf{p}_{j}$ and
$d{\mathbf{n}}_{ijn}=\mathbf{v}_{i}-\mathbf{v}_{j}$. The cross product
$-d{\mathbf{n}}_{ijn}\times\mathbf{n}_{ijn}$ is perpendicular to both
$\mathbf{n}$ and $d\mathbf{n}$ and signed to create the axis around which
$\mathbf{s}$ is rotated in a right-handed direction. Then, using the cross
product of the axis and $\mathbf{s}$,
$-(d\mathbf{n}_{ij}\times\mathbf{n}_{ijn})\times\mathbf{s}_{ijn}$ gives the
correct direction that $\mathbf{s}$ should be increased.
Next, determine the factors required to make the differential the proper
length. By cross product laws,
$|-(d\mathbf{n}_{ij}\times\mathbf{n}_{ijn})\times\mathbf{s}_{ijn}|=|d\mathbf{n}_{ij}||\mathbf{n}_{ijn}||\mathbf{s}_{ijn}|\sin\theta\sin\phi$
(5.4)
where ${\theta}$ is the angle between $\mathbf{n}_{\mathbf{ijn}}$ and
$d\mathbf{n}_{\mathbf{ij}}$ and $\phi$ is the angle between
$d\mathbf{n}_{ij}\times\mathbf{n}_{ijn}$ and $\mathbf{s}_{ijn}$.
The relevant vectors are shown in figure 5.1.
Figure 5.1: Static Friction Vectors
The goal is to rotate $\mathbf{s}$ by the angle $\alpha'$, which is the 'projection' into the proper plane of the angle $\alpha$ that $\mathbf{n}$ rotates by. Since the direction has already been determined, for simplicity the figure leaves off the indexes and concentrates on determining the lengths. In Figure 5.1, $\mathbf{s}$ is the old slip vector, $\mathbf{s}'$ is the new slip vector, and $\mathbf{n}$ is the vector pointing from one pebble to the other. The vector $d\mathbf{n}\Delta t$ is added to $\mathbf{n}$ to get the new $\mathbf{n}'=\mathbf{n}+d\mathbf{n}\Delta t$. The initial condition is that $\mathbf{s}$ and $\mathbf{n}$ are perpendicular. The final conditions are that $\mathbf{s}'$ and $\mathbf{n}'$ are perpendicular, that $\mathbf{s}$ and $\mathbf{s}'$ have the same length, and that $\mathbf{s}'$ is as close to $\mathbf{s}$ as it can be while satisfying the other conditions. There is no requirement that $\mathbf{s}$ or $\mathbf{s}'$ be coplanar with $d\mathbf{n}\Delta t$ (otherwise $\alpha'$ would equal $\alpha$). From the law of sines we have:
$\frac{|d\mathbf{n}\Delta t|}{\sin\alpha}=\frac{|\mathbf{n}|}{\sin\theta}$
(5.5)
so
$\sin\alpha=\frac{|d\mathbf{n}\Delta t|\sin\theta}{|\mathbf{n}|}$ (5.6)
In Figure 5.2 the projection onto the correct plane occurs. First, using $\phi$, the length of $\mathbf{s}$ is projected onto the plane. Note that $\phi$ is the angle both to $\mathbf{s}$ and to $\mathbf{s}'$. So the length of the line on the $d\mathbf{n}\times\mathbf{n}$ plane is $|\mathbf{s}|\sin{\phi}$, and the length of the straight line at the end of the triangle is $|\mathbf{s}|\sin{\phi}\sin{\alpha}$ (the arc length is $|\mathbf{s}|(\sin{\phi}){\alpha}$, but as ${\Delta}t$ approaches 0 the two agree). From these calculations, the length of $d\mathbf{s}{\Delta}t$ can be calculated with the following formula:
$|d\mathbf{s}\Delta t|=\frac{|\mathbf{s}|\sin\phi\,|d\mathbf{n}\Delta t|\sin\theta}{|\mathbf{n}|}$ (5.7)
Since
$|-(d\mathbf{n}_{ij}\times\mathbf{n}_{ijn})\times\mathbf{s}_{ijn}|=|d\mathbf{n}_{ij}||\mathbf{n}_{ijn}||\mathbf{s}_{ijn}|\sin\theta\sin\phi$
the formula for the rotation is:
$\mathbf{s}_{ijn+1}=-\frac{(d{\mathbf{n}}_{ijn}\times\mathbf{n}_{ijn})\times\mathbf{s}_{ijn}}{|\mathbf{n}_{ijn}|^{2}}\Delta t+\mathbf{s}_{ijn}$ (5.8)
Figure 5.2: Projections to ds
As a differential equation this is:
$\frac{d\mathbf{s}_{ij}}{dt}=-\frac{\left[((\mathbf{v}_{i}-\mathbf{v}_{j})\times(\mathbf{p}_{i}-\mathbf{p}_{j}))\times\mathbf{s}_{ij}\right]}{|\mathbf{p}_{i}-\mathbf{p}_{j}|^{2}}$
(5.9)
By the vector property $a\times(b\times c)=b(a\cdot c)-c(a\cdot b)$ and since
$(\mathbf{p}_{i}-\mathbf{p}_{j})\cdot\mathbf{s}_{ij}=0$, this can be rewritten
as the version in Silbert et al. (2001):
$\frac{d\mathbf{s}_{ij}}{dt}=-\frac{(\mathbf{p}_{i}-\mathbf{p}_{j})(\mathbf{s}_{ij}\cdot(\mathbf{v}_{i}-\mathbf{v}_{j}))}{|\mathbf{p}_{i}-\mathbf{p}_{j}|^{2}}$
(5.10)
#### 5.0.4 Differential Equation for Rotating the Surface Slip
It might seem that the wall interaction could be modeled the same way as the
pebble-to-pebble interaction. For sufficiently simple wall geometries this may
be possible, but actual pebble bed reactor geometries are more complicated,
and violate some of the assumptions that underpin the derivation. For a flat
surface, there is no rotation, so the formula can be entirely dropped. For a
spherical surface, it would be possible to measure the curvature at pebble to
surface contact point in the direction of relative velocity to the surface.
This curvature could then be used as an effective radius in the pebble-to-
pebble formulas.
The pebble reactor walls have additional features that violate assumptions
made for the derivation. For surfaces such as a cone, the curvature is not in
general constant, because the path can follow elliptical curves. As well, the
curvature has discontinuities where different parts of the reactor join
together (for example, the transition from the outlet cone to the outlet
chute). At these transitions, the assumption that the slip stays parallel to
the surface fails because the slip is parallel to the old surface, but the new
surface has a different normal.
Because of the complications with using the pebble-to-pebble interaction, PEBBLES uses an approximation of the “rotation delta.” This is similar to the Vu-Quoc and Zhang (1999) method of adjusting the slip so that it is parallel to the surface each time it is used. Each time the slip is used, a temporary version of the slip that is properly aligned to the surface is computed and used for calculating the force. In addition, a rotation that moves the stored slip more nearly parallel to the surface is computed.
The rotation is computed as follows. Let the normal direction of the wall at
the point of contact of the pebble be $\mathbf{n}$, and the old stuck-slip be
$\mathbf{s}$. Let $a$ be the angle between $\mathbf{n}$ and $\mathbf{s}$.
$\mathbf{n}\times\mathbf{s}$ is perpendicular to both $\mathbf{n}$ and
$\mathbf{s}$ and so this cross product is the axis that needs to be rotated
around. Then $(\mathbf{n}\times\mathbf{s})\times\mathbf{s}$ is perpendicular
to this vector, so it is either pointing directly towards $\mathbf{n}$ if $a$
is acute or directly away from $\mathbf{n}$ if $a$ is obtuse. To obtain the
correct direction, this vector is multiplied by the scalar
$\mathbf{s}\cdot\mathbf{n}$ which has the correct sign from $\cos a$. The
magnitude of $(\mathbf{s}\cdot\mathbf{n})[(\mathbf{n}\times\mathbf{s})\times\mathbf{s}]$ must then be determined so the term can be normalized properly. We define the angle $b$, which is
between $(\mathbf{n}\times\mathbf{s})$ and $\mathbf{s}$. By these definitions
the magnitude is $(|\mathbf{s}||\mathbf{n}|\cos
a)[(|\mathbf{n}||\mathbf{s}|\sin a)|\mathbf{s}|\sin b]$. $b$ is a right angle
since $\mathbf{n}\times\mathbf{s}$ is perpendicular to $\mathbf{s}$, so $\sin
b=1$. Collecting terms gives the magnitude as
$|\mathbf{s}|^{3}|\mathbf{n}|^{2}\cos a\sin a$ which is divided by
$|\mathbf{n}\times\mathbf{s}||\mathbf{n}||\mathbf{s}|$ to give the full term
the magnitude $|\mathbf{s}|\cos a$. This is the length of the vector that goes
from $\mathbf{s}$ to the plane perpendicular to $\mathbf{n}$. This produces
equation 5.11, which can be used to ensure that the wall stuck-slip vector
rotates towards the correct direction.
$\frac{d\mathbf{s}}{dt}=(\mathbf{s}\cdot\mathbf{n})\frac{[(\mathbf{n}\times\mathbf{s})\times\mathbf{s}]}{|\mathbf{n}\times\mathbf{s}||\mathbf{n}||\mathbf{s}|}$
(5.11)
Figure 5.3: Static Friction Vectors for Wall
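Equation 5.11 can also be checked numerically. The following illustrative Python sketch (not the PEBBLES source) confirms the two properties the derivation establishes: the rotation pushes a tilted slip back toward the wall's contact plane, and its magnitude is $|\mathbf{s}|\cos a$.

```python
# Sketch of equation 5.11, the wall stuck-slip rotation (illustrative).
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])
def dot(a, b): return sum(x*y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))

def wall_dslip_dt(n, s):
    """Equation 5.11: ds/dt = (s.n) (n x s) x s / (|n x s| |n| |s|)."""
    nxs = cross(n, s)
    v = cross(nxs, s)
    c = dot(s, n) / (norm(nxs) * norm(n) * norm(s))
    return tuple(c * x for x in v)

n = (0.0, 0.0, 1.0)   # wall normal at the contact point
s = (1.0, 0.0, 0.2)   # slip tilted slightly out of the wall plane
ds = wall_dslip_dt(n, s)
print(ds, norm(ds))   # |ds/dt| equals |s| cos a = s.n / |n| = 0.2
```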
### 5.1 Testing of Static Friction Model With Pyramid Test
Static friction is an important physical feature in the implementation of mechanical models of pebble motion in a pebble bed, and checking its correctness is important. A pyramid static friction test was devised as a simple tool for verifying the implementation of a static friction model within the code. The main advantages of the pyramid test are that it is realistic and that it can be modeled analytically, providing an exact basis for comparison. The test benchmark consists of a pyramid of five
spheres on a flat surface. This configuration is used because the forces
acting on each pebble can be calculated simply and the physical behavior of a
model with only kinetic friction is fully predictable on physical and
mathematical grounds: with only kinetic friction and no static friction, the
pyramid will quickly flatten. Even insufficient static friction will result in
the same outcome. The four bottom spheres are arranged as closely as possible
in a square, and the fifth sphere is placed on top of them as shown in Fig.
5.4.
Figure 5.4: Sphere Location Diagram
The lines connecting the centers of the spheres form a pyramid with edges of length $2R$, as shown in Fig. 5.5, where $R$ is the radius of the spheres. The length of $a$ in the figure is $\frac{2R}{\sqrt{2}}$, and because $b$ is part of a right triangle, $b^{2}=(2R)^{2}-(\frac{2R}{\sqrt{2}})^{2}=4R^{2}-\frac{4R^{2}}{2}=2R^{2}$, so $b$ has the same length as $a$, and thus the elevation angle of every edge from the base is $45^{\circ}$ from horizontal.
Figure 5.5: Pyramid Diagram
Taking as the origin of the coordinate system the projection of the pyramid summit onto the ground, the locations (coordinates) of the sphere centers are given in Table 5.1.
Table 5.1: Sphere location table. X | Y | Z
---|---|---
$-R$ | $-R$ | $R$
$R$ | $-R$ | $R$
$-R$ | $R$ | $R$
$R$ | $R$ | $R$
0 | 0 | $R(1+\sqrt{2})$
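The coordinates in Table 5.1 can be verified with a short geometric check. This is an illustrative Python snippet (unit radius $R=1$ is an assumption for the example): every base sphere touches its adjacent neighbors and the top sphere, i.e. the center-to-center distance is exactly $2R$.

```python
# Geometric check of the sphere centers in Table 5.1 (illustrative, R = 1).
import math

R = 1.0
base = [(-R, -R, R), (R, -R, R), (-R, R, R), (R, R, R)]
top = (0.0, 0.0, R * (1 + math.sqrt(2)))

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

for c in base:
    print(dist(c, top))            # each base-to-top distance is 2R
print(dist(base[0], base[1]))      # adjacent base spheres also touch: 2R
```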
#### 5.1.1 Derivation of Minimum Static Frictions
The initial forces on the base sphere are the force of gravity $m\mathbf{g}$,
and the normal forces $\mathbf{Tn}$ and $\mathbf{Fn}$ as shown in Fig. 5.6.
This causes initial stuck-slip which will cause $\mathbf{Fs}$ to develop to
counter the slip, and $\mathbf{Ts}$ to counter the rotation of the base sphere
relative to the top sphere. The top sphere will have no rotation because the
forces from the four spheres will be symmetric and counteract each other.
The forces on the base sphere are:
* • $\mathbf{Tn}$ – Normal force from the top sphere
* • $\mathbf{Ts}$ – Static friction force from the top sphere
* • $m\mathbf{g}$ – Force of gravity on the base sphere
* • $\mathbf{Fn}$ – Normal force from the floor
* • $\mathbf{Fs}$ – Static friction force from the floor
Figure 5.6: Force Diagram
Note that $\mathbf{Fn}$ is larger than $\mathbf{Tn}$: the top sphere transmits (and splits) its weight among all four base spheres, so $\mathbf{Tn}$ carries only a portion of the $m\mathbf{g}$ force.
There are three requirements for a base sphere to be non-accelerated. If a base sphere is not rotating, then there is no net torque, so:
$|\mathbf{Fs}|=|\mathbf{Ts}|$ (5.12)
The resultant of all forces must also be zero in the horizontal (x) and vertical (y) directions of the cross-section (vector notation is dropped since the components are scalars) as follows:
$-Fs-Tsx+Tnx=0$ (5.13)
$-mg-Tsy-Tny+Fn=0$ (5.14)
Since the angle of contact between a base sphere and the top sphere is
$45^{\circ}$, the following two equations hold (where $Ts$ is the magnitude of
$\mathbf{Ts}$ and $Tn$ is the magnitude of $\mathbf{Tn}$):
$Tsx=Tsy=\frac{Ts}{\sqrt{2}}$ (5.15)
$Tnx=Tny=\frac{Tn}{\sqrt{2}}$ (5.16)
This changes equations 5.13 and 5.14 into:
$-Fs-\frac{Ts}{\sqrt{2}}+\frac{Tn}{\sqrt{2}}=0$ (5.17)
$-mg-\frac{Ts}{\sqrt{2}}-\frac{Tn}{\sqrt{2}}+Fn=0$ (5.18)
Combining equation 5.12 and 5.17 provides:
$-Ts-\frac{Ts}{\sqrt{2}}+\frac{Tn}{\sqrt{2}}=0$ (5.19)
Which gives the relation:
$Tn=Ts(\sqrt{2}+1)$ (5.20)
By the static friction Equation 3.1,
$Ts\leq\mu_{sphere}Tn$ (5.21)
Combining equations 5.20 and 5.21 and simplifying gives the requirement that
$\sqrt{2}-1\leq\mu_{sphere}$ (5.22)
For testing, the simulation can be run twice, once with a sphere-to-sphere friction coefficient slightly above 0.41421… and once with it slightly below 0.41421…. In the first case the pyramid should be stable, and in the second case the top ball should fall to the floor.
Since ¼ of the weight of the top pebble rests on each base pebble, the following holds:
$Fn=\frac{5}{4}mg$ (5.23)
Combining 5.18 and 5.23 provides the following equation:
$\frac{mg}{4}-\frac{Ts}{\sqrt{2}}-\frac{Tn}{\sqrt{2}}=0$ (5.24)
Equations 5.17 and 5.24 can be added to produce
$-Fs-\sqrt{2}Ts+\frac{mg}{4}=0$ (5.25)
Using 5.12 and 5.25 and solving for $Fs$ gives the following value for $Fs$:
$Fs=\frac{mg}{4(1+\sqrt{2})}$ (5.26)
By the static friction Equation 3.1:
$Fs\leq\mu_{surface}Fn.$ (5.27)
Substituting the values for $Fs$ and $Fn$ gives:
$\frac{mg}{4(1+\sqrt{2})}\leq\mu_{surface}\frac{5}{4}mg$ (5.28)
Simplifying provides the following relation for the surface-to-sphere static
friction requirement:
$\frac{1}{5(1+\sqrt{2})}\leq\mu_{surface}.$ (5.29)
This can be used similarly to the other static friction requirement by setting
the value slightly above 0.08284… and slightly below 0.08284… and making sure
that it is stable with the higher value and not stable with the lower value.
This test was inspired by an observation of lead cannon balls stacked into a
pyramid. I tried to stack used glass marbles into a five ball pyramid and it
was not stable. Since lead has a static friction coefficient around 0.9 and
used glass has a much lower static friction, the physics of pyramid stability
was further investigated, resulting in this benchmark test of static friction
modeling.
#### 5.1.2 Use of Benchmark
The benchmark test of two critical static friction coefficients is defined by
the following equations. If both static friction coefficients are above the
critical values, the spheres will form a stable pyramid. If either or both
values are below the critical values the pyramid will collapse.
$\mu_{criticalsurface}=\frac{1}{5(1+\sqrt{2})}\approx 0.08284$ (5.30)
$\mu_{criticalsphere}=\sqrt{2}-1\approx 0.41421$ (5.31)
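The two critical coefficients of equations 5.30 and 5.31 can be computed directly, as in this short Python check:

```python
# The two critical static friction coefficients of the pyramid benchmark.
import math

mu_critical_surface = 1.0 / (5.0 * (1.0 + math.sqrt(2.0)))  # eq. 5.30
mu_critical_sphere = math.sqrt(2.0) - 1.0                   # eq. 5.31

print(round(mu_critical_surface, 5))  # 0.08284
print(round(mu_critical_sphere, 5))   # 0.41421
```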
To set up the test cases, the sphere locations from Table 5.1 should be used
as the initial locations of the sphere. Then, static friction coefficients for
the sphere-to-sphere contact and the sphere-to-surface contact are chosen. The
code is then run until either the center sphere falls to the surface, or the
pyramid obtains a stable state. There are three test cases that are run to
test the model.
1. $\mu_{surface}=\mu_{criticalsurface}+\epsilon$ and $\mu_{sphere}=\mu_{criticalsphere}+\epsilon$, which should result in a stable pyramid.
2. $\mu_{surface}=\mu_{criticalsurface}-\epsilon$ and $\mu_{sphere}=\mu_{criticalsphere}+\epsilon$, which should result in a fall.
3. $\mu_{surface}=\mu_{criticalsurface}+\epsilon$ and $\mu_{sphere}=\mu_{criticalsphere}-\epsilon$, which should result in a fall.
For soft sphere models, there are fundamental limits to how close the model’s
results can be to the critical coefficient. Essentially, as the critical
coefficients are approached, the assumptions become less valid. For example,
with soft (elastic) spheres, the force from the center sphere will distort the
contact angle, so the actual critical value will be different. For the PEBBLES
code, an $\epsilon$ value of 0.001 is used.
### 5.2 Testing of the Static Friction Model by Comparison with Janssen’s
Formula
The pyramid static friction test is a simple test of the static friction model. Another test compares the static friction model against the behavior of the Janssen formula (Sperl, 2006), which specifies the expected wall pressure as a function of depth. The formula only applies when the static friction is fully loaded, that is, when $|\mathbf{F}_{s}|=\mu|\mathbf{F}_{\perp}|$. This condition is generally not satisfied until some recirculation has occurred. Figure 5.7 shows the normal force and the static friction force from a pebble on the wall. In the PEBBLES code, the condition is only satisfied after recirculation, and only for lower values of the static friction coefficient $\mu$.
Figure 5.7: Relevant Forces on Wall from Pebble
The equation used to calculate the pressure on the region $R$ from the normal
force in PEBBLES is:
$p={1\over R_{h}2\pi r}\sum_{i\in R}|\mathbf{F}_{\perp ci}|$ (5.32)
where $p$ is the pressure, $R_{h}$ is the height of the region, and $r$ is the
radius of the cylinder.
The equations for calculating the Janssen formula pressure are:
$K=2\mu_{pp}^{2}-2\mu_{pp}\sqrt{\mu_{pp}^{2}+1}+1$ (5.33)
$b=f\rho g$ (5.34)
$p={2rb\over 4\mu_{wall}}\left(1-e^{-{4\mu_{wall}Kz\over 2r}}\right)$ (5.35)
where $\mu_{pp}$ is the pebble-to-pebble static friction coefficient, $\mu_{wall}$ is the pebble-to-wall static friction coefficient, $f$ is the packing fraction, $\rho$ is the density, $g$ is the gravitational acceleration, and $z$ is the depth at which the pressure is calculated. For Figures 5.8 and 5.9, a packing fraction of 0.61 and a density of 1760 kg/m$^{3}$ are used. There are 20,000 pebbles packed into a 0.5 meter radius cylinder, and 1,000 are recirculated before the pressure measurement is done.
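The Janssen calculation of equations 5.33 through 5.35 can be sketched as follows. The packing fraction, density, and cylinder radius match the values quoted above; the friction coefficients passed in are example inputs, not values from the text.

```python
# Sketch of the Janssen pressure of equations 5.33-5.35 (illustrative).
import math

def janssen_pressure(z, mu_pp, mu_wall, f=0.61, rho=1760.0, g=9.81, r=0.5):
    K = 2 * mu_pp**2 - 2 * mu_pp * math.sqrt(mu_pp**2 + 1) + 1  # eq. 5.33
    b = f * rho * g                                             # eq. 5.34
    # eq. 5.35: pressure rises with depth toward the asymptote 2rb/(4 mu_wall)
    return (2 * r * b / (4 * mu_wall)) * (1 - math.exp(-4 * mu_wall * K * z / (2 * r)))

shallow = janssen_pressure(0.5, mu_pp=0.15, mu_wall=0.15)
deep = janssen_pressure(50.0, mu_pp=0.15, mu_wall=0.15)
print(shallow < deep)   # pressure increases monotonically with depth
print(deep)             # essentially at the asymptotic value
```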
Figure 5.8 compares the Janssen model with the PEBBLES simulation for static friction values of 0.05 and 0.15. For this case, the Janssen formula and the simulated pressures match closely. Figure 5.9 makes the same comparison for static friction values of 0.25 and 0.9. In this case, the 0.25 $\mu$ values only approximately match, and the 0.9 static friction pressure values do not match at all. The static friction slip vectors were examined: they are not perfectly vertical, and they are not fully loaded. This results in the static friction force being less than the maximum possible, and thus the simulated pressure is higher, since less of the force is carried by the walls.
Figure 5.8: Comparison With 0.05 and 0.15 $\mu$ Figure 5.9: Comparison With
0.25 and 0.9 $\mu$
## Chapter 6 Unphysical Approximations
Modeling the full physical effects that occur in a pebble bed reactor
mechanics is not computationally possible with current computer resources. In
fact, even modeling all the intermolecular forces that occur between two
pebbles at sufficient levels to reproduce all macroscopic behavior is probably
computationally intractable at the present time. This is partially caused by the complexity of effects, such as inter-grain boundaries and small quantities of impurities, that affect the physics at the various levels between the atomic scale and the macroscopic world. Instead, all attempts at modeling pebble bed reactor mechanics have relied on approximations to make the task computationally practical. The PEBBLES simulation has as high or higher fidelity than past efforts, but it does use multiple unphysical approximations. This chapter discusses these approximations so that future simulation work can be improved and so that the limitations of applying PEBBLES to different problems are understood.
In different regions of the reactor, the radioactivity and the fission will
heat the pebbles differently, and the flow of the coolant helium will
distribute this heat around the reactor. This will change the temperature of
different parts of the reactor. Since the temperature will be different, the
parameters driving the mechanics of the pebbles will be different as well.
This includes parameters such as the static friction coefficients and the size of the pebbles, which will change through thermal expansion. Parameters such as static friction can also vary depending on the gas the pebbles are currently in and the gas they were previously in, since some of the gas tends to remain in and on the carbon surface. Graphite dust produced by wear may also affect static friction in downstream portions of the reactor.
The pebbles in a pebble bed reactor have helium gas flowing around and past them. PEBBLES, like all other pebble bed simulations, ignores the effects of this flow on pebble movement. However, the gas will cause additional drag when the pebbles are dropping through the reactor, and the motion of the gas will exert additional forces on the pebbles.
Pebble bed mechanics simulations use soft spheres. Physically, there will be
deflection of spheres under pressure (even the pressure of just one sphere on
the floor), but the true compression is much smaller than what is actually
modeled. In PEBBLES, the forces are chosen to keep the compression distance at
a millimeter or below. Another effect related to the physics simulation is
that force is transmitted via contact. This means the force from one end of
the reactor is transmitted at a speed related to the time-step used for the
simulation, instead of the speed of sound.
Since simulating billions of time-steps is time consuming, two approximations
are made. First, instead of simulating the physical time that pebble bed
reactors have between pebble additions (on the order of 2-5 minutes), new
pebbles are added at a rate between a quarter second and two seconds. This may
result in somewhat unphysical simulations since some vibration that would have
dampened out with a longer time between pebble additions still exists when the
next pebble impacts the bed. Second, since full recirculation of all the
pebbles is computationally costly, for some simulations, only a partial
recirculation or no recirculation is done.
The physics models do not take into account several physical phenomena. The model does not handle pure spin effects, such as when two pebbles are in contact and spinning about an axis through the contact point. This should result in forces on the pebbles, but the physics model does not capture the effect, since the contact velocity is calculated as zero. In addition, there should be rolling friction when a pebble rolls along a surface (the contact velocity is zero because the pebble's rotation axis is parallel to the surface and its spin matches its translation), but this effect is not modeled either. As well, the equations used assume that the pebbles are
spherically symmetric, but defects in manufacturing and slight asymmetries in
the TRISO particle distribution mean that there will be small deviations from
being truly spherically symmetric.
The physics model does not match classical Hertzian or Mindlin and Deresiewicz
elastic sphere behavior. The static friction model is a simplification and
does not capture all the hysteretic effects of true static friction.
Effectively, this means that $h_{s}$, the coefficient used to calculate the
force from slip, is not a constant. In order to fully discuss this, some
features of these models will be discussed in the following paragraphs.
Since closed-form expressions exist for elastic contact between spheres, they
will be used, instead of a more general case which lacks closed-form
expressions. Spheres are not a perfect representation of the effect of contact
between shapes such as a cone and a sphere, but should give an approximation
of the size of the effect of curvature.
The contact area and the displacement of distant points for two elastic spheres, or for one sphere and one spherical hole (that is, negative curvature), can be calculated via Hertzian theory (Johnson, 1985). For two spherical surfaces the following variables are defined:
$\frac{1}{R}=\frac{1}{R_{1}}+\frac{1}{R_{2}}$ (6.1)
and
$\frac{1}{E^{*}}=\frac{1-\nu_{1}^{2}}{E_{1}}+\frac{1-\nu_{2}^{2}}{E_{2}}$
(6.2)
with $R_{i}$ the $i$th sphere's radius, $E_{i}$ the Young's modulus, and $\nu_{i}$ the Poisson's ratio of the material. For a concave sphere, the radius will be negative. Then, via Hertzian theory, the contact circle radius will be:
$a=\left(\frac{3NR}{4E^{*}}\right)^{1/3}$ (6.3)
where $N$ is the normal force. The mutual approach of distant points is given
by:
$\delta=\frac{a^{2}}{R}=\left(\frac{9N^{2}}{16RE^{*2}}\right)^{1/3}$ (6.4)
Notice that the above differs from the Hooke's Law formulation that PEBBLES uses. The maximum pressure will be:
$p_{0}=\frac{3N}{2\pi a^{2}}$ (6.5)
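Equations 6.1 through 6.5 can be sketched together as below. This is an illustrative Python fragment; the load and material values are example inputs chosen for the sketch, not values from the document.

```python
# Sketch of the Hertzian contact quantities of equations 6.1-6.5.
import math

def hertz(N, R1, R2, E1, nu1, E2, nu2):
    R = 1.0 / (1.0 / R1 + 1.0 / R2)                         # eq. 6.1
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # eq. 6.2
    a = (3 * N * R / (4 * E_star)) ** (1.0 / 3.0)           # eq. 6.3
    delta = a**2 / R                                        # eq. 6.4
    p0 = 3 * N / (2 * math.pi * a**2)                       # eq. 6.5
    return a, delta, p0

# Two equal 3 cm spheres under a 10 N load (example material values):
a, delta, p0 = hertz(N=10.0, R1=0.03, R2=0.03,
                     E1=1.0e10, nu1=0.2, E2=1.0e10, nu2=0.2)
print(a, delta, p0)
# Note p0 is 1.5 times the mean pressure N / (pi a^2).
```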
So as a function of the radii $R_{1}$ and $R_{2}$, the circle radius of the
contact will be:
$a=\left(\frac{3N}{4E^{*}}\left[\frac{1}{R_{1}}+\frac{1}{R_{2}}\right]^{-1}\right)^{1/3}$
(6.6)
If the negative-curvature sphere's radius is $m$ times the radius of the other sphere, that is, $R_{2}=-mR_{1}$, then the equation becomes:
$a=\left(\frac{3N}{4E^{*}}\left[\frac{1}{R_{1}}-\frac{1}{mR_{1}}\right]^{-1}\right)^{1/3}$
(6.7)
which can be rearranged to:
$a=\left(\frac{3NR_{1}}{4E^{*}}\right)^{1/3}\left(1-\frac{1}{m}\right)^{-1/3}$
(6.8)
From this equation, as $m$ increases, it has less effect on contact area, so
if $R_{2}$ is much greater than $R_{1}$, the contact area will tend to be
dominated by $R_{1}$ rather than $R_{2}$. For example, typical radii in
PEBBLES might be 18 cm outlet chute and a 3 cm pebble, which would put $m$ at
6, so the effect on contact area radius would be about 33% difference compared
to pebble-to-pebble contact area radius, or 6% compared to a flat surface. (Sample values of $k=\left(1-\frac{1}{m}\right)^{-1/3}$: $m=-1$, $k=2^{-1/3}\approx 0.79$ for sphere to sphere; $m=6$, $k\approx 1.06$ for sphere to outlet chute; and $m=\infty$, $k=1$ for sphere to flat plane.)
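The curvature factor from equation 6.8 can be evaluated directly, as in this illustrative Python fragment; it reproduces the "about 33%" and "6%" differences quoted above by comparing the outlet-chute case ($m=6$) against the pebble-to-pebble case ($m=-1$) and the flat-plane limit ($m\to\infty$, $k\to 1$).

```python
# The curvature factor k = (1 - 1/m)^(-1/3) from equation 6.8 (illustrative).

def k(m):
    return (1.0 - 1.0 / m) ** (-1.0 / 3.0)

ratio_vs_sphere = k(6) / k(-1)   # outlet chute vs pebble-to-pebble contact
diff_vs_flat = abs(k(6) - 1.0)   # outlet chute vs flat plane
print(ratio_vs_sphere, diff_vs_flat)  # roughly a 33-34% and a 6% difference
```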
To some extent, the actual contact area is irrelevant for calculating the maximum static friction force, as long as some conditions are met. Both surfaces need to be of a uniform material. The basic macroscopic description $|F_{S}|\leq\mu|N|$ needs to hold, so changing the area changes the mean pressure $P=N/(\pi a^{2})$, but not the maximum static friction force. If a smaller area causes the pressure to increase enough to cause plastic rather than elastic contact, then through that mechanism the contact area would cause actual differences in experimental values. PEBBLES does not calculate plastic contact effects.
The contact area causes an effect through another mechanism. The tangential compliance in the case of constant normal and increasing tangential force, that is, the slope of the curve relating displacement to tangential force, is given in Mindlin and Deresiewicz as:
$\frac{2-\nu}{8\mu a}$ (6.9)
where $\mu$ here denotes the shear modulus rather than the friction coefficient. Since the contact area radius $a$ is a function of curvature, the tangential compliance will be as well, which is another effect that PEBBLES' constant $h_{s}$ does not capture.
In summary, using a constant coefficient $h_{s}$ for static friction introduces two different approximations. First, using the same constant for wall contacts with different curvatures gives somewhat inconsistent results; since the equations for spherical contact are dominated by the smaller-radius object, this effect is reduced but still exists. Second, using the same constant coefficient for different loading histories is an approximation. For higher fidelity, these effects need to be taken into account.
## Chapter 7 Code Speedup and Parallelization
Planned and existing pebble bed reactors can have on the order of 100,000
pebbles. For some simulations, these pebbles need to be followed for long time
periods, which can require computing billions of time-steps. Multiplying the
time-steps required by the number of pebbles being computed over leads to the
conclusion that large numbers of computations are required. These computations
should be as fast as possible, and should be as parallel as possible, so as to
allow relevant calculations to be done in a reasonable amount of time. This
chapter discusses the process of speeding up the code and parallelizing it.
The PEBBLES program has three major portions of calculation. The first is
determining which pebbles are in contact with other pebbles. The second
computational part is determining the time derivatives for all the vectors for
all the pebbles. The third computational part is using the derivatives to
update the values. Overall, for calculation of a single time-step, the algorithm's computation time is linearly proportional to the number of pebbles, that is, O(n) (an O(n) algorithm scales linearly with input size: running with 100 pebbles takes roughly 10 times as long as running with only 10 pebbles, and going from 10 pebbles to 20 pebbles doubles the run time).
### 7.1 General Information about profiling
There are four different generic parts of the complete calculation that need to be considered when determining the overall speed. The first consideration is
the time to compute arithmetic operations. Modern processors can complete
arithmetic operations in nanoseconds or fractions of nanoseconds. In the
PEBBLES code, the amount of time spent on arithmetic is practically
undetectable in wall clock changes. The second consideration is the time
required for reading memory and writing memory. For main memory accesses, this
takes hundreds of CPU clock cycles, so these times are on the order of
fractions of microseconds (Drepper, 2007). Because of the time required to access main memory, all modern CPUs have on-chip caches that contain a copy of recently used data. If the memory access hits the CPU's cache, the data can be retrieved and written in a small number of CPU cycles. Main memory
writes are somewhat more expensive than main memory reads, since any copies of the memory that exist in other processors' caches need to be updated or invalidated. So for a typical calculation like $a+b\to c$, the time spent doing
the arithmetic is trivial compared to the time spent reading in $a$ and $b$
and writing out $c$.
The third consideration is the amount of time required for parallel
programming constructs. Various parallel synchronization tools such as atomic
operations, locks and critical sections take time. These take an amount of
time on the same order of magnitude as memory writes. However, they typically
need a read and then a write without any other processor being able to access
that chunk of memory in between which requires additional overhead, and a
possible wait if the memory address is being used by another process. Atomic
operations on x86_64 architectures are faster than using locks, and locks are
generally faster than using critical sections. The fourth consideration is
network time. Sending and receiving a value can easily take over a millisecond
for the round trip time. These four time consuming operations need to be
considered when choosing algorithms and methods of calculation.
There are a variety of methods for profiling the computer code. The simplest
method is to use the FORTRAN 95 intrinsics CPU_TIME and DATE_AND_TIME. The
CPU_TIME subroutine returns a real number of seconds of CPU time. The
DATE_AND_TIME subroutine returns the current wall clock time in the VALUES
argument. With gfortran both these times are accurate to at least a
millisecond. The difference between two calls of these functions provides information on both the wall clock time and the CPU time between the calls. (For the DATE_AND_TIME subroutine, it is easiest if the days, hours,
minutes, seconds and milliseconds are converted to a real seconds past some
arbitrary time.) The time methods provide basic information and a good
starting point for determining which parts of the program are consuming time.
For more detailed profiling the oprofile (opr, 2009) program can be used on
Linux. This program can provide data at the assembly language level which
makes it possible to determine which part of a complex function is consuming
the time. Profilers that do not work at the assembly level are difficult to use accurately on optimized code, and profiling non-optimized code is unrepresentative.
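The wall-clock versus CPU-time idiom described above for the FORTRAN 95 intrinsics can be sketched with Python's analogous standard-library timers. This is only an analogy for illustration (the thesis code itself is FORTRAN): `perf_counter` plays the role of DATE_AND_TIME and `process_time` the role of CPU_TIME.

```python
# Analogous wall-clock vs CPU-time measurement in Python (illustrative;
# the PEBBLES code uses the FORTRAN 95 CPU_TIME and DATE_AND_TIME intrinsics).
import time

wall_start = time.perf_counter()   # analogous to DATE_AND_TIME
cpu_start = time.process_time()    # analogous to CPU_TIME

total = sum(i * i for i in range(100000))  # some work to be profiled

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start
print(wall_elapsed >= 0.0 and cpu_elapsed >= 0.0)
```

Comparing the two elapsed values distinguishes time spent computing from time spent waiting (on I/O, locks, or other processes), just as the difference between the two FORTRAN timers does.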
### 7.2 Overview of Parallel Architectures and Coding
Parallel computers can be arranged in a variety of ways. Because of the
expense of linking shared memory to all processors, a common architecture is a
cluster of nodes with each node having multiple processors. Each node is
linked to other nodes via a fast network connection. The processors on a
single node share memory. Figure 7.1 shows this arrangement. For this
arrangement, the code can use both the OpenMP (Open Multi-Processing) (ope,
2008) and the MPI (Message Passing Interface) (mpi, 2009) libraries. MPI is a
programming interface for transferring data across a network to other nodes.
OpenMP is a shared memory programming interface. By using both programming
interfaces high speed shared memory accesses can be used on memory shared on
the node and the code can be parallelized across multiple nodes.
Figure 7.1: Sample Cluster Architecture
### 7.3 Lock-less Parallel O(N) Collision Detection
For any granular material simulation, determining which particles are in contact must be done quickly and accurately for each time-step. This is called collision detection, though for pebble simulations it might more accurately be labeled
contact detection. The simplest algorithm for collision detection is to
iterate over all the other objects and compare each one to the current object
for collision. To determine all the collisions using that method, $O(N^{2})$
time is required.
An improved algorithm by Cohen et al. (1995) uses six sorted lists of the
lower and upper bounds for each object. (There is one upper bound list and one
lower bound list for each dimension.) With this algorithm, to determine the
collisions for a given object, the bounds of the current objects are compared
to bounds in the list—only objects that overlap the bounds in all three
dimensions will potentially collide. This algorithm typically takes
approximately $O(N\log(N))$ time because of the sorting of the bounding lists
(Cohen et al., 1995). (In order from slowest to fastest, for sufficiently big
$N$: $O(N^{2})$, $O(N\log(N))$, $O(N)$, $O(1)$.)
A third, faster method, grid collision detection, is available if the
following requirements hold: (1) there is a maximum diameter of object, and no
object exceeds this diameter, and (2) for a given volume, there is a
reasonably small, finite, maximum number of objects that could ever be in that
volume. These two constraints are easily satisfied by pebble bed simulations,
since the pebbles are effectively the same size (small changes in diameter
occur due to wear and thermal effects). A three-dimensional parallelepiped
grid is used over the entire range in which the pebbles are simulated. The
grid spacing $gs$ is set at the maximum diameter of any object (twice the
maximum radius for spheres).
Two key variables are initialized, $grid\\_count(x,y,z)$, the number of
pebbles in grid locations x,y,z; and $grid\\_ids(x,y,z,i)$, the pebble
identification numbers ($ids$) for each x,y,z location. The $id$ is a unique
number assigned to each pebble in the simulation. The spacing between
successive grid indexes is $gs$, so the x index of a given x location can be
determined by $\lfloor(x-x_{min})/gs\rfloor$, where $x_{min}$ is the lower
edge of the zeroth x index; similar formulas are used for y and z.
The grid is initialized by setting $grid\\_count(:,:,:)=0$, and then the x,y,z
indexes are determined for each pebble. The $grid\\_count$ at that location is
then atomically incremented by one and fetched. (In this chapter, atomic means
indivisible; the entire operation is done in one action without interference
from other processors.) Because OpenMP 3.0 does not have an atomic add-and-
fetch, the lock xadd x86_64 assembly language instruction is wrapped in a
function. The $grid\\_count$ provides the fourth index into the $grid\\_ids$
array, so the pebble $id$ can be stored into the $ids$ array. The amount of
time to zero the $grid\\_count$ array is a function of the volume of space,
which is proportional to the number of pebbles. The initialization iteration
over the pebbles can be done in parallel because of the use of an atomic add-
and-fetch function. Updating the grid iterates over the entire list of pebbles
so the full algorithm for updating the grid is $O(N)$ for the number of
pebbles.
Once the grid is updated, the nearby pebbles can be quickly determined. Figure
7.2 illustrates the general process. First, index values are computed from the
pebble and used to generate $xc$, $yc$, and $zc$. This finds the center grid
location, which is shown as the bold box in the figure. Then, all the possible
pebble collisions must have grid locations (that is, their centers are in the
grid locations) in the dashed box, which can be found by iterating over the
grid locations from $xc-1$ to $xc+1$ and repeating for the other two
dimensions. There are $3^{3}$ grid locations to check, and the number of
pebbles in each is bounded (maximum 8), so the time to do this is bounded.
Since this search does not change any grid values, it can be done in parallel
without any locks.
Figure 7.2: Determining Nearby Pebbles from Grid
Therefore, because of the unique features of pebble bed simulations, a
parallel lock-less $O(N)$ algorithm for determining the pebbles in contact can
be created.
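The grid update and neighbor search described above can be sketched as follows. This is an illustrative serial Python version with hypothetical names (`build_grid`, `nearby_ids`); a plain list append stands in for the atomic add-and-fetch that the parallel Fortran code uses for `grid_count`.

```python
import math

def build_grid(positions, gs, mins):
    """Bin pebble ids by grid cell, playing the role of grid_count and
    grid_ids. In the parallel Fortran code the per-cell count is
    incremented with an atomic add-and-fetch; a serial dict of lists
    stands in here."""
    grid = {}
    for pid, (x, y, z) in enumerate(positions):
        cell = (math.floor((x - mins[0]) / gs),
                math.floor((y - mins[1]) / gs),
                math.floor((z - mins[2]) / gs))
        grid.setdefault(cell, []).append(pid)
    return grid

def nearby_ids(grid, pos, gs, mins):
    """Collect candidate ids from the 3x3x3 block of cells around pos."""
    xc, yc, zc = (math.floor((pos[k] - mins[k]) / gs) for k in range(3))
    out = []
    for ix in range(xc - 1, xc + 2):
        for iy in range(yc - 1, yc + 2):
            for iz in range(zc - 1, zc + 2):
                out.extend(grid.get((ix, iy, iz), ()))
    return out
```

Each pebble lands in exactly one cell and each query touches a bounded 27-cell neighborhood, so both the grid update and the full set of queries run in $O(N)$ time.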
### 7.4 MPI Speedup
The PEBBLES code uses MPI to distribute the computational work across
different nodes. The MPI/OpenMP hybrid parallelization splits the calculation
of the derivatives and the new variables geometrically and passes the data at
the geometry boundaries between nodes using messages. Each pebble has a
primary node and may also have various boundary nodes. The pebble-primary-node
is responsible for updating the pebble position, velocity, angular velocity,
and slips. The pebble-primary-node also sends data about the pebble to any
nodes that are the pebble boundary nodes and will transfer the pebble to a
different node if the pebble crosses the geometric boundary of the node.
Boundary pebbles are those close enough to a boundary that their data needs to
be present in multiple nodes so that the node’s primary pebbles can be
properly updated. Node 0 is the master node and does processing that is
simplest to do on one node, such as writing restart data to disk and
initializing the pebble data. The following steps are used for initializing
the nodes and then transferring data between them:
1. Node 0 calculates or loads initial positions of pebbles.
2. Node 0 creates the initial domain to node mapping.
3. Node 0 sends the domain to node mapping to the other nodes.
4. Node 0 sends the other nodes their needed pebble data.
Order of calculation and data transfers in main loop:
1. Calculate derivatives for node primary and boundary pebbles.
2. Apply derivatives to node primary pebble data.
3. For every primary pebble, check with the domain module to determine the
current primary node and any boundary nodes.
   (a) If the pebble now has a different primary node, add the pebble id to
   the transfer list to send to the new primary node.
   (b) If the pebble has any boundary nodes, add the pebble id to the boundary
   send list to send it to the node for which it is a boundary.
4. If this is a time step where Node 0 needs all the pebble data (such as when
restart data is being written), add all the primary pebbles to the Node 0
boundary send list.
5. Send the number of transfers and the number of boundary sends that this
node has to all the other nodes using buffered sends.
6. Initialize three Boolean lists of the other nodes that this node has:
   (a) data-to-send-to, set to “true” if the number of transfers or boundary
   sends is nonzero, and “false” otherwise
   (b) received-data-from, set to “false”
   (c) received-the-number-of-transfers-and-boundary-sends, set to “false”
7. While this node has data to send to other nodes or other nodes have data
to send to this node:
   (a) Probe to see if any nodes that this node needs data from have data
   available.
      i. If yes, then receive the data and update the pebble data and the
      Boolean lists as appropriate.
   (b) If there are any nodes that this node has data to send to, and from
   which this node has received the number of transfers and boundary sends,
   then send the data to those nodes and update the Boolean data-send list for
   those nodes.
8. Flush the network buffers so any remaining data gets received.
9. Node 0 calculates needed tallies.
10. If this is a time to rebalance the execution load:
   (a) Send wall clock time spent computing since the last rebalancing to
   Node 0.
   (b) Node 0 uses this information to adjust the geometric boundaries, moving
   work toward nodes with low computation time and away from nodes with high
   computation time.
   (c) Node 0 sends the new boundary information to the other nodes, along
   with any needed data.
11. Continue to the next time step and repeat this process until all time
steps have been run.
All the information and subroutines needed to calculate the primary and
boundary nodes that a pebble belongs to are calculated and stored in a FORTRAN
95 module named network_domain_module. The module uses two derived types:
network_domain_type and network_domain_location_type. Neither type has public
components, so the internals of the domain calculation and the location
information can be changed without modifying anything outside the module or
the rest of the PEBBLES code. The location type stores the primary node and
the boundary nodes of a pebble. The module contains subroutines for
determining the location type of a pebble based on its position, for
obtaining the primary and boundary nodes of a location type, and for
initialization, load balancing, and transferring domain information over the
network.
The current method of dividing the nodes into geometric domains uses a list of
boundaries between the z (axial) locations. This list is searched via binary
search to find the nodes nearest to the pebble position, as well as those
within the boundary layer distance above and below the zone interface in order
to identify all the boundary nodes that participate in data transfers. The
location type resulting from this is cached on a fine grid, and the cached
value is returned when the location type data is needed. The module contains a
subroutine that takes a work parameter (typically, the computation time of
each of the nodes) and can redistribute the z boundaries up or down to shift
work towards nodes that are taking less time computing their share of
information. If needed in the future, the z-only method of dividing the
geometry could be replaced with a full 3-D version by modifying the network
domain module.
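The z-boundary search performed by network_domain_module can be sketched as follows; this is a hedged Python illustration in which the function names and the boundary-list layout are assumptions, and the fine-grid caching that the real module performs is omitted.

```python
import bisect

def node_for_z(z_bounds, z):
    """Primary node for a pebble: the index of the z interval containing z.
    z_bounds holds the interior boundaries between the node domains, sorted
    ascending, so len(z_bounds) + 1 nodes exist."""
    return bisect.bisect_right(z_bounds, z)

def boundary_nodes_for_z(z_bounds, z, layer):
    """Nodes for which this pebble is a boundary pebble: any neighboring
    domain whose interface lies within `layer` of the pebble's z."""
    primary = node_for_z(z_bounds, z)
    nodes = []
    if primary > 0 and z - z_bounds[primary - 1] < layer:
        nodes.append(primary - 1)
    if primary < len(z_bounds) and z_bounds[primary] - z < layer:
        nodes.append(primary + 1)
    return nodes
```

Load rebalancing then amounts to nudging the entries of `z_bounds` toward the nodes that reported more computation time, which shrinks their domains.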
### 7.5 OpenMP Speedup
The PEBBLES code uses OpenMP to distribute the calculation over multiple
processors on a single node. OpenMP allows directives to be given to the
compiler that direct how portions of code are to be parallelized. This allows
a single piece of code to be used for both the single processor version and
the OpenMP version. The PEBBLES parallelization typically uses OpenMP
directives to cause loops that iterate over all the pebbles to be run in
parallel.
Some details need to be taken into consideration for the parallelization of
the calculation of acceleration and torque. The physical accelerations imposed
by the wall are treated in parallel, and there is no problem with writing over
the data because each processor is assigned a portion of the total zone
inventory of pebbles. For calculating the pebble-to-pebble forces, each
processor is assigned a fraction of the pebbles, but there is a possibility of
the force addition computation overwriting another calculation because the
forces on a pair of pebbles are calculated and then the calculated force is
added to the force on each pebble. In this case, it is possible for one
processor to read the current force from memory and add the new force from the
pebble pair while another processor is reading the current force from memory
and adding its new force to that value; they could both then write back the
values they have computed. This would be incorrect because each calculation
has only added one of the new pebble pair forces. Instead, PEBBLES uses an
OpenMP ATOMIC directive to force the addition to be performed atomically,
thereby guaranteeing that the addition uses the latest value of the force sum
and saves it before a different processor has a chance to read it.
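The pair-force accumulation described above can be illustrated with a serial sketch; a scalar stands in for the force vector, and the comments mark the two additions that the Fortran code protects with OpenMP ATOMIC directives (the function and its interface are this sketch's, not PEBBLES').

```python
def accumulate_pair_forces(pairs, pair_force, n):
    """Sum contact forces onto each pebble. Each pair (i, j) contributes
    +f to pebble i and -f to pebble j (Newton's third law). In the OpenMP
    version each of these two additions is done under an ATOMIC directive,
    because threads handling different pairs may update the same entry."""
    force = [0.0] * n
    for (i, j) in pairs:
        f = pair_force(i, j)
        force[i] += f   # ATOMIC in the parallel code
        force[j] -= f   # ATOMIC in the parallel code
    return force
```

Without the atomic protection, two threads could read the same old value of `force[i]`, each add its own contribution, and write back, silently losing one of the contributions.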
For calculating the sum of the derivatives using Euler’s method, concurrent
updating poses no problem because the derivatives are calculated
independently for each pebble. The data structure for storing the
pebble-to-pebble slips (sums of
forces used to calculate static friction) is similar to the data structure
used for the collision detection grid. In a 2-D array, one index is the
from-pebble and the other stores the $ids$ of the pebbles that have slip with
the first pebble. A second array contains the
number of $ids$ stored, and that number is always added and fetched
atomically, which allows the slip data to be updated by multiple processors at
once. These combine to allow the program to run efficiently on shared memory
architectures.
### 7.6 Checking the Parallelization
The parallelization of the algorithm is checked by running the test case for
a small number of time steps (10 to 100). Various summary data are checked to
make sure that they match the values computed with the single processor
version and between different numbers of nodes and processors. For example,
with the NGNP-600 model used in the results section, the average overlap of
pebbles at the start of the run is 9.665281e-5 meters. The single processor
average overlap at the end of the 100 time-step run is 9.693057e-5 meters, the
2 nodes average overlap is 9.693043e-5 meters, and the 12 node average overlap
is 9.693029e-5 meters. The lower-order digits change from run to run. The
start-of-run values match each other exactly, and the end-of-run values match
the start-of-run values to two significant figures; however, the three
different end-of-run values match each other to five significant digits. In
short, the end values match each other more closely than they match the start
values. The overlap
is very sensitive to small changes in the calculation because it is a function
of the difference between two positions. During coding, multiple defects were
found and corrected by checking that the overlaps matched closely enough
between the single processor calculation and the multiple processor
calculations. The total energy, the linear energy, or other quantities
computed over all the pebbles can be used similarly, since their lower
significant digits also change frequently.
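The style of consistency check described in this section can be sketched as a relative-difference test. The helper below and its tolerance convention (relative difference below $10^{1-n}$ for $n$ significant figures) are this sketch's assumptions, not code from PEBBLES.

```python
def agree_to_sig_figs(a, b, n):
    """True if a and b agree to roughly n significant figures, i.e. their
    relative difference is below 10**(1 - n)."""
    if a == b:
        return True
    return abs(a - b) / max(abs(a), abs(b)) < 10.0 ** (1 - n)
```

Applied to the overlap values quoted above, the end-of-run values agree to five significant figures while the start-of-run value agrees with them only to two.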
### 7.7 Results
The data in Table 7.1 and Table 7.2 provide information on the time used with
the current version of PEBBLES for running 80 simulation time steps on two
models. The NGNP-600 model has 480,000 pebbles. The AVR model contains 100,000
pebbles. All times are reported in units of wall-clock seconds. The single
processor NGNP-600 model took 251 seconds and the AVR single processor model
took 48 seconds when running the current version. The timing runs were carried
out on a cluster with two quad-core Intel Xeon X5355 2.66 GHz processors per
node (8 processor cores per node) and a DDR 4X InfiniBand interconnect
network. The gfortran 4.3 compiler was used.
Table 7.1: OpenMP speedup results

Processes | AVR | Speedup | Efficiency | NGNP-600 | Speedup | Efficiency
---|---|---|---|---|---|---
1 | 47.884 | 1 | 100.00% | 251.054 | 1 | 100.00%
1 | 53.422 | 0.89633 | 89.63% | 276.035 | 0.90950 | 90.95%
2 | 29.527 | 1.6217 | 81.09% | 152.479 | 1.6465 | 82.32%
3 | 21.312 | 2.2468 | 74.89% | 104.119 | 2.4112 | 80.37%
4 | 16.660 | 2.8742 | 71.85% | 80.375 | 3.1235 | 78.09%
5 | 13.884 | 3.4489 | 68.98% | 68.609 | 3.6592 | 73.18%
6 | 12.012 | 3.98635 | 66.44% | 61.168 | 4.1043 | 68.41%
7 | 10.698 | 4.4760 | 63.94% | 54.011 | 4.6482 | 66.40%
8 | 9.530 | 5.0246 | 62.81% | 49.171 | 5.1057 | 63.82%
Table 7.2: MPI/OpenMP speedup results

Nodes | Procs | AVR | Speedup | Efficiency | NGNP-600 | Speedup | Efficiency
---|---|---|---|---|---|---|---
1 | 1 | 47.884 | 1 | 100.00% | 251.054 | 1 | 100.00%
1 | 8 | 10.696 | 4.4768 | 55.96% | 55.723 | 4.5054 | 56.32%
2 | 16 | 6.202 | 7.7207 | 48.25% | 30.642 | 8.1931 | 51.21%
3 | 24 | 4.874 | 9.8244 | 40.93% | 23.362 | 10.746 | 44.78%
4 | 32 | 3.935 | 12.169 | 38.03% | 17.841 | 14.072 | 43.97%
5 | 40 | 3.746 | 12.783 | 31.96% | 16.653 | 15.076 | 37.69%
6 | 48 | 3.534 | 13.550 | 28.23% | 15.928 | 15.762 | 32.84%
7 | 56 | 3.285 | 14.577 | 26.03% | 15.430 | 16.271 | 29.05%
8 | 64 | 2.743 | 17.457 | 27.28% | 11.688 | 21.480 | 33.56%
9 | 72 | 2.669 | 17.941 | 24.92% | 11.570 | 21.699 | 30.14%
10 | 80 | 2.657 | 18.022 | 22.53% | 11.322 | 22.174 | 27.72%
11 | 88 | 2.597 | 18.438 | 20.95% | 11.029 | 22.763 | 25.87%
12 | 96 | 2.660 | 18.002 | 18.75% | 11.537 | 21.761 | 22.67%
Significant speedups have resulted with both the OpenMP and MPI/OpenMP
versions. A basic time step for the NGNP-600 model went from 3.138 seconds to
146 milliseconds when running on 64 processors. Since a full recirculation
would take on the order of 1.6e9 time steps, the wall clock time for running a
full recirculation simulation has gone from about 160 years to a little over 7
years. For smaller simulation tasks, such as simulating the motion of the
pebbles in a pebble bed reactor during an earthquake, the times are more
reasonable, taking about 5e5 time steps. Thus, for the NGNP-600 model, a full
earthquake can be simulated in about 20 hours when using 64 processors. For
the smaller AVR model, the basic time step takes about 34 milliseconds when
using 64 processors. Since there are fewer pebbles to recirculate, a full
recirculation would take on the order of 2.5e8 time steps, or about 98 days of
wall clock time.
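The Speedup and Efficiency columns in Tables 7.1 and 7.2 follow the standard definitions, speedup $S = t_{1}/t_{n}$ and parallel efficiency $E = S/n$ for $n$ processors; a short sketch reproduces the 8-processor NGNP-600 row of Table 7.1.

```python
def speedup_and_efficiency(t_serial, t_parallel, n_procs):
    """Speedup S = t1/tn; parallel efficiency E = S / n_procs."""
    s = t_serial / t_parallel
    return s, s / n_procs
```

For the NGNP-600 model on 8 processors, 251.054 s versus 49.171 s gives a speedup of about 5.11 and an efficiency of about 63.8%, matching the table.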
## Chapter 8 Applications
The knowledge of the packing and flow patterns (and to a much lesser extent
the position) of pebbles in the pebble bed reactor is an essential
prerequisite for many in-core fuel cycle design activities as well as for
safety assessment studies. Three applications have been carried out with the
PEBBLES code. The major application is the computation of pebble positions
during a simulated earthquake; the other two are the calculation of
space-dependent Dancoff factors and the calculation of the angle of repose for
an HTR-10 simulation.
Figure 8.1: Flow Field Representation (arrow lengths are proportional to local
average pebble velocity)
### 8.1 Applications in Support of Reactor Physics
#### 8.1.1 Dancoff Factors
The calculation of Dancoff factors is an example application that needs
accurate pebble position data. The Dancoff factor is used for adjusting the
resonance escape probability for neutrons. There are two Dancoff factors that
use pebble position data. The first is the inter-pebble Dancoff factor that is
the probability that a neutron escaping from the fuel zone of a pebble crosses
a fuel particle in another pebble. The second is the pebble-pebble Dancoff
factor, which is the probability that a neutron escaping one fuel zone will
enter another fuel zone without interacting with a moderator nuclide.
Kloosterman and Ougouag (2005) use pebble location information to calculate
the probability by ray tracing from fuel lumps until another is hit or the ray
escapes the reactor. The PEBBLES code has been used to provide position
information to J. L. Kloosterman and A. M. Ougouag’s PEBDAN program. This
program calculates these factors; Figure 8.2 shows them computed for the AVR
reactor model.
Figure 8.2: Dancoff Factors for AVR
Figure 8.3: 2-D Projection of Pebble Cone on HTR-10 (crosses represent centers of pebbles)
#### 8.1.2 Angle of Repose
The PEBBLES code was used to calculate the angle of repose for an analysis of
the HTR-10 first criticality (Terry et al., 2006). The pebble bed code
recirculated pebbles to determine the angle at which the pebbles would stack
at the top of the reactor, as shown in Figure 8.3, since this information was
needed for the simulation of the reactor but was not provided (Bäumer et al.,
1990).
#### 8.1.3 Pebble Ordering with Recirculation
During experimental work before the construction of the AVR, it was discovered
that when the pebbles were recirculated, the ordering of the pebbles
increased. Figures 8.4 and 8.5 show that this effect occurs in the PEBBLES
simulation as well. The final AVR design incorporated indentations in the wall
to prevent this from occurring.
Figure 8.4: Pebbles Before Recirculation
Figure 8.5: Pebbles After Recirculation
### 8.2 Application to Earthquake modeling
The packing fraction of the pebbles in a pebble bed reactor can vary depending
on the method of packing and the subsequent history of the packing. This
packing fraction can affect the neutronics behavior of the reactor, since it
translates into an effective fuel density. During normal operation, the
packing fraction will vary only slowly, over the course of weeks, and then
stabilize. During an earthquake, the packing fraction can increase suddenly.
This change is a concern, since an increase in packing fraction can increase
the neutron multiplication and raise criticality concerns, as shown by
Ougouag and Terry (2001).
The PEBBLES code can simulate this increase and determine the rate of change
and the expected final packing fraction, thus allowing the effect of an
earthquake to be simulated.
#### 8.2.1 Movement of Earthquakes
The ground motion of earthquakes has been well studied. The intensity of the
motion is described by the Mercalli scale, which characterizes the maximum
acceleration that a given earthquake will impart to structures. For a
Mercalli X earthquake, the maximum acceleration is about 1 g. The more
familiar Richter scale measures the total energy release of an earthquake
(Lamarsh, 1983), which is not useful for determining the effect on a pebble
bed core. For a given location, the soil properties can be measured, and
using soil data and the motion that the bedrock will undergo, the motion at
the surface can be simulated. The INL site had this information generated in
order to determine the motion from the worst earthquake that could be
expected over a 10,000-year period (Payne, 2003). This earthquake has roughly
a Mercalli IX intensity. The data for such a 10,000-year earthquake are used
for the simulation in this dissertation.
#### 8.2.2 Method Of Simulation
The code simulates earthquakes by adding a displacement to the walls of the
reactor; the velocity of the walls must also be calculated. The displacement
in the simulation can be specified either as a sum of sine waves or as a
table of displacements that specifies the x, y, and z displacements for each
time. At each time step both the displacement and the velocity of the
displacement are calculated. When the displacement is specified as a sum of
sine functions, the current displacement is calculated by adding the vector
contribution of each wave, and the velocity is calculated from the sum of the
first derivatives of all the waves. When the displacement is calculated from
a table of data, the current displacement is a linear interpolation of the
two nearest data points in the table, and the velocity is the slope between
them. The walls are then assigned the computed displacement and velocity.
Figure 8.6 shows the total displacement for the INL earthquake simulation
specifications used in this dissertation.
Figure 8.6: Total Earthquake Displacement
#### 8.2.3 Earthquake Results
The results of the two simulations carried out here show substantially safer
behavior than the Ougouag and Terry (2001) bounding calculations. The
methodology was applied to a model of the PBMR-400, and two different static
friction coefficients were tested, 0.65 and 0.35. With the 0.65 static
friction model, the packing fraction increased from 0.593 to 0.594 over the
course of the earthquake, with the fastest increase, from 0.59307 to 0.59356,
taking place over 0.8 seconds. With the 0.35 static friction model, the
overall increase was from 0.599 to 0.601; the fastest increase was from
0.59964 to 0.60011 in 0.8 seconds. These changes are remarkably small when
compared to the bounding-calculation packing fraction increase rate of 0.129
sec$^{-1}$ in free fall. (The free fall rate is determined by calculating the
packing fraction increase if the pebbles were in gravitational free fall.)
Both computed increases and packing fraction change rates are substantially
below the free fall bounding rate and its packing fraction change of a
transition from 0.60 to 0.64 in 0.31 seconds. The computed rate and the total
packing fraction increase are in the range that can be offset by thermal
feedback effects for uranium fueled reactors.
Figure 8.7: 0.65 Static Friction Packing over Time
Figure 8.8: 0.35 Static Friction Packing over Time
During the course of the earthquake, the boundary density fluctuations (that
is, the oscillations in packing fraction near a boundary) are observed to
increase in amplitude. Figure 8.9 shows the radial packing fraction before
and after the earthquake, taken from 4 to 8 meters above the fuel outlet
chute in the PBMR-400 model. All the radial locations have increased packing
compared to the packing fraction before the earthquake, but the points at the
peaks of the boundary density fluctuations increase the most. This effect can
be seen in Figure 8.10, which shows the increase in packing fraction from
before the earthquake to after.
Figure 8.9: Different Radial Packing Fractions
Figure 8.10: Changes in Packing Fraction
A previous version of the positional data from the earthquake simulation was
provided to J. Ortensi, who used it to simulate the effects of an earthquake
on a pebble bed reactor (Ortensi, 2009). Essentially, two factors cause an
increase in reactivity. The first is the increased density of the pebbles;
the second is that the control rods are at the top of the reactor, so when
the top pebbles move down, the control rod worth (effect) decreases. However,
the reactivity increase causes the fuel temperature to rise, which causes
Doppler broadening; more neutrons are then absorbed by the 238U, which causes
the reactivity to fall. Figure 8.11 shows an example of this.
Figure 8.11: Neutronics and Thermal Effects from J. Ortensi
#### 8.2.4 Earthquake Equations
For each time-step, the simulation calculates both a displacement and a wall
velocity.
For the sum of waves method, the displacement is calculated by:
$\mathbf{d}=\sum_{i}\mathbf{D}\left[\sin\left((t-S){2\pi\over p}+c\right)+o\right]$ (8.1)
where $t$ is the current time, $S$ is the time the wave starts, $p$ is the
period of the wave, $c$ is the initial cycle of the wave, $o$ is the offset,
and $\mathbf{D}$ is the maximum displacement vector.
The velocity is calculated by:
$\mathbf{m}=\sum_{i}{2\pi\mathbf{D}\over p}\cos\left((t-S){2\pi\over p}+c\right)$ (8.2)
For the tabular data, the displacement and velocity are calculated by:
$\displaystyle\mathbf{d}$ $\displaystyle=(1-o)T_{k}+oT_{k+1}$ (8.3)
$\displaystyle\mathbf{m}$ $\displaystyle={1\over\delta}(T_{k+1}-T_{k})$ (8.4)
where $T_{k}$ is the tabulated displacement at the $k$th time-step, $o$ is a
number between 0 and 1 that specifies the position within the time-step (0 is
the start and 1 is the end), and $\delta$ is the time in seconds between
time-steps.
With these displacements, the code then uses:
$\displaystyle\mathbf{p^{\prime}}$ $\displaystyle=\mathbf{p}+\mathbf{d}$ (8.5)
$\displaystyle\mathbf{v^{\prime}}$ $\displaystyle=\mathbf{v}+\mathbf{m}$ (8.6)
as the adjusted position and velocity.
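Equations 8.1 through 8.4 can be sketched directly. The per-wave parameter tuples and function names below are this sketch's conventions, and a single scalar component stands in for each displacement vector $\mathbf{D}$ and table entry $T_{k}$.

```python
import math

def wave_displacement(t, waves):
    """Eq. 8.1: each wave is a tuple (D, S, p, c, o)."""
    return sum(D * (math.sin((t - S) * 2.0 * math.pi / p + c) + o)
               for (D, S, p, c, o) in waves)

def wave_velocity(t, waves):
    """Eq. 8.2: the time derivative of the displacement sum."""
    return sum((2.0 * math.pi * D / p) * math.cos((t - S) * 2.0 * math.pi / p + c)
               for (D, S, p, c, o) in waves)

def table_displacement_velocity(T, k, o, delta):
    """Eqs. 8.3 and 8.4: linear interpolation between tabulated points."""
    d = (1.0 - o) * T[k] + o * T[k + 1]
    m = (T[k + 1] - T[k]) / delta
    return d, m
```

A single unit-amplitude wave with a one-second period, for example, reaches its peak displacement a quarter period after it starts and has an initial velocity of $2\pi$.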
## Chapter 9 Construction of a Dust Production Framework
With the creation of the PEBBLES simulation, one issue examined was using the
simulation to attempt to predict the volume of dust that would be produced by
an operating pebble bed reactor. This is an important issue that could affect
the choice of a pebble bed reactor versus a prismatic reactor for process
heat applications. However, as this chapter and Appendix B will discuss,
while the PEBBLES code has the force and motion data required for this
simulation, the coefficients that would allow this information to be used
have not yet been determined experimentally with sufficient robustness.
Figure 9.1: Pebble to Pebble Distance Traveled
Figure 9.2: Pebble to Surface Distance Traveled
Figure 9.3: Average Normal Contact Force
Figure 9.4: Pebble to Pebble Wear
Figure 9.5: Pebble to Surface Wear
With the data provided by PEBBLES, equations linking dust production to
PEBBLES-calculated quantities were examined. As shown in equation B.1, the
volume of dust produced can be approximated if the normal contact force, the
sliding length, and the wear coefficients are known. The contact force and
the sliding length are calculated as part of the PEBBLES simulation, so this
method was used to calculate dust production for the AVR reactor. This
resulted in an estimate of four grams of graphite dust produced per year, as
compared to the measured value of three kilograms of dust produced per year.
Several possible causes of this discrepancy were identified in the paper
documenting this work (Cogliati and Ougouag, 2008). A key first issue, as
described in the previous-work section of this dissertation, is that there
are no good measurements of graphite wear rates under pebble bed reactor
relevant conditions (especially for a reactor run at AVR temperatures). A
second issue is that the previous model of the AVR was missing features,
including the reactor control rod nose cones and the wall indentations. A
third issue, identified after the paper’s publication, is that significant
portions of the length traveled were due not to motion down through the
reactor but to pebbles vibrating. In the model used in the paper, this
problem was about four times more severe than in the current model, due to
the new addition of slip correction via equation 5.2.
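Assuming equation B.1 has the standard Archard-style adhesive-wear form, in which wear volume is proportional to the normal contact force times the sliding length summed over contacts, the tally-based estimate can be sketched as follows (the function and its inputs are this sketch's assumptions; as noted above, reliable wear coefficients for reactor conditions are not yet available).

```python
def wear_volume(wear_coefficient, tallies):
    """Archard-style wear estimate: V = k * sum(N_i * L_i), where N_i is
    the normal contact force and L_i the sliding length for each tallied
    contact. Assumes equation B.1 takes this standard adhesive-wear form;
    the coefficient k for reactor-condition graphite is the poorly known
    quantity discussed in the text."""
    return wear_coefficient * sum(N * L for (N, L) in tallies)
```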
As an illustration of the general framework for dust production, a simple
cylindrical vat the size of AVR was simulated. In this model, the outlet chute
starts shrinking at height zero. The length (L) and the length times normal
force (NL) were tallied on intervals of 6 cm, and Figures 9.1 to 9.5 show the
results after recirculating 400 pebbles. The results are on a per pebble-pass
basis. Two sets of static friction and kinetic friction pairs are used, one
with a static friction coefficient of 0.35 and kinetic of 0.25, and the other
with static of 0.65 and kinetic of 0.4. Figure 9.1 shows the calculated
pebble-to-pebble lengths. Notice that for the 0.65 static friction simulation,
about 10 meters of length traveled is occurring at the peak in a 6 cm long
tally. Since the pebble travels about 0.06 m and has at most 12
pebble-to-pebble contacts, essentially all but a small portion of this length
traveled is due to pebbles vibrating relative to each other. This vibration
is caused by the impact of the pebbles coming from the inlet chutes and
hitting the top of the bed. This is likely a true physical effect that, to
this author’s knowledge, has not been discussed in the literature. However,
in order to obtain
the correct magnitude of this vibrational effect, two things must be correct.
First, the simulation must dissipate the vibration at the correct rate, and
second, the pebbles must impact the bed at the correct velocity. Quality
estimates of both will need to be made to finish the dust production work.
Note that with the lower static friction value, the effect is even more
pronounced.
Figure 9.2 is also expected to have a vibrational component, since only a
small portion of pebbles should contact the wall, and therefore for a 6 cm
tally, the length traveled by the average pebble should be much lower than 6
cm. As the chute is entered, the distance the average pebble travels increases
in the 0.65 static friction case. Figure 9.3 shows the average normal contact
force. The peak value for the pebble to surface values is due to the base of
the ‘arch’ formed by pebbles. The curves for both the 0.65 static friction
coefficient and the 0.35 static friction coefficient are approximately the
same because the static friction force is not reaching the full Coulomb limit,
so both have the same effective $\mu$.
Figure 9.4 shows the normal force times length (NL) sums. For the 0.65 case, the peak is
due to the vibrational impact. For the 0.35 case, the vibration travels deep
into the reactor bed, producing dust throughout the reactor. Figure 9.5 shows
the peak dust production occurring at the base of the reactor, where the
forces are the highest and the greatest lengths are traveled next to the wall.
## Chapter 10 Future Work
The dust production simulation requires both proper wear coefficients and a
correct method of dealing with the vibrational issues. It would be useful to
determine the number of pebbles that
need to be simulated to provide a correct representation of a full NGNP-600
sized reactor. Since the middle portions are geometrically similar,
determining the amount of recirculation that is required to reach a
geometrically asymptotic state might allow only a portion of the recirculation
to be done. Those two changes might allow quicker simulation of full sized
reactors. Finally, in order to allow sufficiently fast simulations on today’s
computer hardware many approximations to the true behavior are done. In the
future, some of these approximations maybe relaxed.
## Chapter 11 Summary and Conclusions
Research results presented in this dissertation demonstrate a distinct element method that provides high fidelity and yet has reasonable run-times for many pebble fuel element flow simulations. The new static friction test will be useful for evaluating any implementation of static friction for spheres. The PEBBLES code produced for this dissertation has provided data for multiple applications, including Dancoff factor calculation, neutronics simulation and earthquake simulation. The new static friction model provides the expected static friction behavior in the reactor, including partial matching of the Janssen model predictions and correct matching of the stability behavior of a pyramid. The groundwork has been laid for predicting the dust production from wear in a pebble bed reactor once further experimental data is available. Future work includes potentially relaxing some of the physical approximations made for speed once faster computing hardware exists, and investigating new methods for faster simulation. This dissertation has provided significant enhancements in the simulation of the mechanical movement of pebbles in a pebble bed reactor.
## References
* mpi (2009) Mpi: A message-passing interface standard, version 2.2. 2009\. URL http://www.mpi-forum.org/docs/mpi-2.2/mpi22-report.pdf.
* ope (2008) Openmp application program interface, version 3.0. 2008\. URL http://www.openmp.org/mp-documents/spec30.pdf.
* opr (2009) Oprofile - a system profiler for linux. 2009\. URL http://oprofile.sourceforge.net.
* tht (a) The commissioning of the thtr 300. a performance report, a. D HRB 129288E.
* tht (b) Hkg hochtemperatur-kernkraftwerk gesellschaft mit beschränkter haftung, b. http://www.thtr.de/ Accessed Oct 27, 2009. Technical data on: http://www.thtr.de/technik-tht.htm.
* Atomwirtschaft-Atomtechnik-atw (1966) Atomwirtschaft-Atomtechnik-atw. Avr-atomversuchskraftwerk mit kugelhaufen-hochtemperatur-reaktor in juelich. _Atomwirtschaft_ , May 1966. Sonderdruck aus Heft (Available as part of NEA-1739: IRPhE/AVR http://www.nea.fr/abs/html/nea-1739.html. Pages particularly useful are 230 and 240).
* Bäumer et al. (1990) R. Bäumer, H. Barnert, E. Baust, A. Bergerfurth, H. Bonnenberg, H. Bülling, St. Burger, C.-B. von der Decken, W. Delle, H. Gerwin, K.-G. Hackstein, H.-J. Hantke, H. Hohn, G. Ivens, N. Kirch, A. I. Kirjushin, W. Kröger, K. Krüger, N. G. Kuzavkov, G. Lange, C. Marnet, H. Nickel, P. Pohl, W. Scherer, J. Schöning, R. Schulten, J. Singh, W. Steinwarz, W. Theymann, E. Wahlen, U. Wawrzik, I. Weisbrodt, H. Werner, M. Wimmers, and E. Ziermann. _AVR: Experimental High Temperature Reactor; 21 years of successful operation for a future energy technology_. Association of German Engineers (VDI), The Society for Energy Technologies, 1990. ISBN 3-18-401015-5.
* Bedenig et al. (1968) D. Bedenig, W. Rausch, and G. Schmidt. Parameter studies concerning the flow behaviour of a pebble with reference to the fuel element movement in the core of the thtr 300 mwe prototype reactor. _Nuclear Engineering and Design_ , 7:367–378, 1968.
* Benenati and Brosilow (1962) R.F. Benenati and C. B. Brosilow. Void fraction distribution in beds of spheres. _A. I. Ch. E. Journal_ , 8:359–361, 1962.
* Bernal et al. (1960) J. D. Bernal, J. Mason, and G. David Scott. Packing of spheres. _Nature_ , 188:908–911, 1960.
* Bhushan (2000) Bharat Bhushan. _Modern Tribology Handbook_. CRC Press, Boca Raton, Florida, USA, 2000. Chap. 7.5.
* Bratberg et al. (2005) I. Bratberg, K.J. Maløy, and A. Hansen. Validity of the janssen law in narrow granular columns. _The European Physical Journal E_ , 18:245–252, 2005.
* Cogliati and Ougouag (2008) J. J. Cogliati and A. M. Ougouag. Pebble bed reactor dust production model. Proceedings of the 4th International Topical Meeting on High Temperature Reactor Technology, 2008. Washington, D.C., USA, September 28 – October 1.
* Cohen et al. (1995) Jonathan D. Cohen, Ming C. Lin, Dinesh Manocha, and Madhav K. Ponamgi. I–collide: an interactive and exact collision detection system for large scale environments. Proceedings of the 1995 Symposium on Interactive 3D Graphics, 1995. Monterey, CA, April 9-12, pp. 19-24.
* Cundall and Strack (1979) P. A. Cundall and O. D. L. Strack. A discrete numerical model for granular assemblies. _Géotechnique_ , 29:47–65, 1979.
* Drepper (2007) Ulrich Drepper. What every programmer should know about memory. 2007\. URL http://people.redhat.com/drepper/cpumemory.pdf.
* Duran (1999) Jacques Duran. _Sands, Powders, and Grains: An Introduction to the Physics of Granular Materials_. Springer, New York, New York, USA, 1999. ISBN 978-0387986562.
* Freund et al. (2003) Hannsjörg Freund, Thomas Zeiser, Florian Huber, Elias Klemm, Gunther Brenner, Franz Durst, and Gerhand Emig. Numerical simulations of single phase reacting flows in randomly packed fixed-bed reactors and experimental validation. _Chemical Engineering Science_ , 58:903–910, 2003.
* Goodjohn (1991) A. J. Goodjohn. Summary of gas-cooled reactor programs. _Energy_ , 16:79–106, 1991.
* Gotoh et al. (1997) Keishi Gotoh, Hiroaki Masuda, and Ko Higashitanti. _Powder Technology Handbook, 2nd ed._ Marcel Dekker, Inc, New York, New York, 1997.
* Gougar et al. (2004) Hans D. Gougar, Abderrafi M. Ougouag, and William K. Terry. Advanced core design and fuel management for pebble-bed reactors. 2004\. INEEL/EXT-04-02245.
* Haile (1997) J. M. Haile. _Molecular Dynamics Simulation_. John Wiley & Sons, Inc, New York, 1997.
* Johnson (1985) K.L. Johnson. _Contact Mechanics_. Cambridge University Press, 1985. ISBN 0-521-34796-3. Section 4.2.
* Jullien et al. (1992) Rémi Jullien, André Pavlovitch, and Paul Meakin. Random packings of spheres built with sequential models. _Journal Phys. A: Math. Gen._ , 25:4103–4113, 1992.
* Kadak and Bazant (2004) Andrew C. Kadak and Martin Z. Bazant. Pebble flow experiments for pebble bed reactors. Beijing, China, September 22-24 2004. 2nd International Topical Meeting on High Temperature Reactor Technology.
* Kloosterman and Ougouag (2005) J. L. Kloosterman and A. M. Ougouag. Computation of dancoff factors for fuel elements incorporating randomly packed triso particles. Technical report, 2005. INEEL/EXT-05-02593, Idaho National Laboratory.
* Kohring (1995) G. A. Kohring. Studies of diffusional mixing in rotating drums via computer simulations. _Journal de Physique I_ , 5:1551, 1995.
* Lamarsh (1983) J.R. Lamarsh. _Introduction to Nuclear Engineering 2nd Ed._ Addison-Wesley Publishing Company, Reading, Massachusetts, 1983. pp. 591-593.
* Lancaster and Pritchard (1980) J.K. Lancaster and J.R. Pritchard. On the ‘dusting’ wear regime of graphite sliding against carbon. _J. Phys. D: Appl. Phys._ , 13:1551–1564, 1980.
* Landry et al. (2003) James W. Landry, Gary S. Grest, Leonardo E. Silbert, and Steven J. Plimpton. Confined granular packings: Structure, stress, and forces. _Physical Review E_ , 67:041303, 2003.
* Lu et al. (2001) Zi Lu, Mohamed Abdou, and Alice Ying. 3d micromechanical modeling of packed beds. _Journal of Nuclear Materials_ , 299:101–110, 2001.
* Luo et al. (2004) Xiaowei Luo, Lihong Zhang, and Yu Suyuan. The wear properties of nuclear grade graphite ig-11 under different loads. _International Journal of Nuclear Energy Science and Technology_ , 1(1):33–43, 2004.
* Luo et al. (2005) Xiaowei Luo, Yu Suyuan, Sheng Xuanyu, and He Shuyan. Temperature effects on ig-11 graphite wear performance. _Nuclear Engineering and Design_ , 235:2261–2274, 2005\.
* Marion and Thornton (2004) Jerry B. Marion and Stephen T. Thornton. _Classical Dynamics of Particles and Systems, 5th Ed._ Saunders College Publishing, 2004. ISBN 0-534-40896-6.
* Miller et al. (2002) G. K. Miller, D. A. Petti, and J. T. Maki. Development of an integrated performance model for triso-coated gas reactor particle fuel. High Temperature Reactor 2002 Conference, April 2002. URL http://www.inl.gov/technicalpublications/Documents/3169759.pdf. Chicago, Illinois, April 25-29, 2004, on CD-ROM, American Nuclear Society, Lagrange Park, IL.
* Mindlin and Deresiewicz (1953) R. D. Mindlin and H. Deresiewicz. Elastic spheres in contact under varying oblique forces. _ASME J. Applied Mechanics_ , 20:327–344, 1953.
* (37) Rainer Moormann. A safety re-evaluation of the avr pebble bed reactor operation and its consequences for future htr concepts. Berichte des Forschungszentrums Jülich; 4275 ISSN 0944-2952 http://hdl.handle.net/2128/3136.
* Ortensi (2009) Javier Ortensi. _An Earthquake Transient Method for Pebble-Bed Reactors and a Fuel Temperature Model for TRISO Fueled Reactors_. PhD thesis, Idaho State University, 2009.
* Ougouag and Terry (2001) Abderrafi M. Ougouag and William K. Terry. A preliminary study of the effect of shifts in packing fraction on k-effective in pebble-bed reactors. Proceedings of Mathematics & Computation 2001, September 2001. Salt Lake City, Utah, USA.
* Ougouag et al. (2004) Abderrafi M. Ougouag, Hans D. Gougar, William K. Terry, Ramatsemela Mphahlele, and Kostadin N. Ivanov. Optimal moderation in the pebble-bed reactor for enhanced passive safety and improved fuel utilization. PHYSOR 2004 – The Physics of Fuel Cycle and Advanced Nuclear Systems: Global Developments, April 2004.
* Payne (2003) S. J. Payne. Development of design basis earthquake (dbe) parameters for moderate and high hazard facilities at tra. 2003\. INEEL/EXT-03-00942, http://www.inl.gov/technicalpublications/Documents/2906939.pdf.
* Ristow (1998) Gerald H. Ristow. Flow properties of granular materials in three-dimensional geometries. Master’s thesis, Philipps-Universität Marburg, Marburg/Lahn, January 1998.
* Rycroft et al. (2006a) Chris H. Rycroft, Martin Z. Bazant, Gary S. Grest, and James W. Landry. Dynamics of random packings in granular flow. _Physical Review E_ , 73:051306, 2006a.
* Rycroft et al. (2006b) Chris H. Rycroft, Gary S. Grest, James W. Landry, and Martin Z. Bazant. Analysis of granular flow in a pebble-bed nuclear reactor. _Physical Review E_ , 74:021306, 2006b.
* Sheng et al. (2003) X. Yu Sheng, X. S. Luo, and S. He. Wear behavior of graphite studies in an air-conditioned environment. _Nuclear Engineering and Design_ , 223:109–115, 2003.
* Silbert et al. (2001) Leonardo E. Silbert, Deniz Ertas, Gary S. Grest, Thomas C. Halsey, Dov Levine, and Steven J. Plimpton. Granular flow down an inclined plane: Bagnold scaling and rheology. _Physical Review E_ , 64:051302, 2001.
* Soppe (1990) W. Soppe. Computer simulation of random packings of hard spheres. _Powder Technology_ , 62:189–196, 1990.
* Sperl (2006) Matthias Sperl. Experiments on corn pressure in silo cells – translation and comment of janssen’s paper from 1895. _Granular Matter_ , 8:59–65, 2006.
* Stansfield (1969) O. M. Stansfield. Friction and wear of graphite in dry helium at 25, 400, and 800∘ c. _Nuclear Applications_ , 6:313–320, 1969.
* Terry et al. (2006) William K. Terry, Soon Sam Kim, Leland M. Montierth, Joshua J. Cogliati, and Abderrafi M. Ougouag. Evaluation of the htr-10 reactor as a benchmark for physics code qa. ANS Topical Meeting on Reactor Physics, 2006. URL http://www.inl.gov/technicalpublications/Search/Results.asp?ID=INL/CON-%06-11699.
* Vu-Quoc et al. (2000) L. Vu-Quoc, X. Zhang, and O. R. Walton. A 3-d discrete-element method for dry granular flows of ellipsoidal particles. _Computer Methods in Applied Mechanics and Engineering_ , 187:483–528, 2000.
* Vu-Quoc and Zhang (1999) Loc Vu-Quoc and Xiang Zhang. An accurate and efficient tangential force-displacement model for elastic frictional contact in particle-flow simulations. _Mechanics of Materials_ , 31:235–269, 1999.
* Wahsweiler (1989) Dr. Wahsweiler. Bisherige erkentnisse zum graphitstaub, 1989. HRB BF3535 26.07.1989.
* Wait (2001) R. Wait. Discrete element models of particle flows. _Mathematical Modeling and Analysis I_ , 6:156–164, 2001\.
* Walker (1966) D. M. Walker. An approximate theory for pressures and arching in hoppers. _Chemical Engineering Science_ , 21:975–997, 1966.
* Wu et al. (2002) Zongxin Wu, Dengcai Lin, and Daxin Zhong. The design features of the htr-10. _Nuclear Engineering and Design_ , 218:25–32, 2002.
* Xiaowei et al. (2005) Luo Xiaowei, Yu Suyaun, Zhang Zhensheng, and He Shuyan. Estimation of graphite dust quantity and size distribution of graphite particle in htr-10. _Nuclear Power Engineering_ , 26, 2005. ISSN 0258-0926(2005)02-0203-06.
* Xu and Zuo (2002) Yuanhui Xu and Kaifen Zuo. Overview of the 10 mw high temperature gas cooled reactor—test module project. _Nuclear Engineering and Design_ , 218:13–23, 2002.
## Appendix A Calculation of Packing Fractions
For determining the volume of a sphere that is inside a vertical slice, the following formula can be used:

$a=\max(-r,\,bot-z)$ (A.1)

$b=\min(r,\,top-z)$ (A.2)

$v=\pi\left[r^{2}(b-a)+\frac{1}{3}(a^{3}-b^{3})\right]$ (A.3)

where $r$ is the pebble radius, $bot$ is the bottom of the vertical slice, $top$ is the top of the vertical slice and $z$ is the vertical location of the pebble center.
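Equations A.1–A.3 can be sketched directly in code. The following is a minimal illustration (the function name is not from the original work):

```python
import math

def sphere_slice_volume(r, z, bot, top):
    """Volume of a sphere of radius r, centered at height z, that lies
    between the horizontal planes bot and top (Eqs. A.1-A.3)."""
    a = max(-r, bot - z)   # lower integration limit, relative to the center
    b = min(r, top - z)    # upper integration limit, relative to the center
    if b <= a:
        return 0.0         # sphere does not intersect the slice
    return math.pi * (r**2 * (b - a) + (a**3 - b**3) / 3.0)
```

As a sanity check, a slice containing the whole sphere recovers the full sphere volume $\frac{4}{3}\pi r^{3}$, and a slice cut at the center plane recovers half of it.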
To determine the area inside both a vertical and a radial slice, two auxiliary functions are defined: one giving the area inside a 2D radial slice, and another giving the area outside it.

Figure A.1: Area Inside Geometry Figure A.2: Area Outside Geometry

Figure A.1 shows the area that is inside both a circle of radius $c$ and a radial slice of radius $I$. The circle’s center is a distance $r$ from the center of the radial circle. Auxiliary terms are defined: $f$, the perpendicular distance from the intersection of the two circles to the center line; $j$, the distance along the center line to the foot of $f$; $\theta$, the angle of the segment; and $\phi$, the corresponding angle on the circle side of the intersection. The area_inside function has the following definition:
$a_{i}=0\qquad\mathrm{if}\;I<r-c$ (A.4)

$a_{i}=\pi c^{2}\qquad\mathrm{if}\;r+c<I$ (A.5)

$a_{i}=\pi I^{2}\qquad\mathrm{if}\;I<r+c\;\mathrm{and}\;I<c-r$ (A.6)

otherwise (A.7):

$j=\frac{r^{2}+c^{2}-I^{2}}{2r}$ (A.8)

$f=\sqrt{c^{2}-j^{2}}$ (A.9)

$\theta=2\arccos\frac{I^{2}+r^{2}-c^{2}}{2Ir}$ (A.10)

$\phi=2\arccos\frac{r^{2}+c^{2}-I^{2}}{2rc}$ (A.11)

$a_{i}=\frac{1}{2}c^{2}\phi+\frac{1}{2}I^{2}\theta-fr$ (A.12)
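The piecewise definition above (Eqs. A.4–A.12) is the standard circle–circle intersection (lens) area, and can be sketched as follows; the function signature is illustrative, not taken from the PEBBLES source:

```python
import math

def area_inside(c, r, I):
    """Area inside both a circle of radius c (whose center lies a distance r
    from the axis) and the radial slice of radius I (Eqs. A.4-A.12)."""
    if I <= r - c:
        return 0.0                 # circle entirely outside the slice
    if r + c <= I:
        return math.pi * c**2      # circle entirely inside the slice
    if I < c - r:
        return math.pi * I**2      # slice entirely inside the circle
    # partial overlap: sum of the two circular segments minus the kite area
    j = (r**2 + c**2 - I**2) / (2.0 * r)
    f = math.sqrt(c**2 - j**2)     # half-length of the common chord
    theta = 2.0 * math.acos((I**2 + r**2 - c**2) / (2.0 * I * r))
    phi = 2.0 * math.acos((r**2 + c**2 - I**2) / (2.0 * r * c))
    return 0.5 * c**2 * phi + 0.5 * I**2 * theta - f * r
```

For example, two unit circles whose centers are one radius apart overlap in the familiar lens of area $\frac{2\pi}{3}-\frac{\sqrt{3}}{2}$.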
Figure A.2 shows the area that is outside the radial slice, but inside the
circle. The radial slice has a radius of $O$. The new auxiliary term $k$ is
the distance from the circle’s center to the perpendicular intercept. The
area_outside function has the following definition:
$a_{o}=0\qquad\mathrm{if}\;O>c+r$ (A.13)

$a_{o}=\pi c^{2}\qquad\mathrm{if}\;c-r>O$ (A.14)

$a_{o}=\pi c^{2}-\pi O^{2}\qquad\mathrm{if}\;O<r+c\;\mathrm{and}\;O<c-r$ (A.15)

otherwise (A.16):

$k=\frac{O^{2}-r^{2}-c^{2}}{2r}$ (A.17)

$m=\sqrt{c^{2}-k^{2}}$ (A.18)

$\theta=2\arccos\frac{k+r}{O}$ (A.19)

$\phi=2\arccos\frac{k}{c}$ (A.20)

$a_{o}=\left(\frac{1}{2}c^{2}\phi-mk\right)-\left(\frac{1}{2}O^{2}\theta-m(k+r)\right)$ (A.21)
Then, the total volume in a radial slice can be determined from the computation:

$a=\max(-r,\,bot-z)$ (A.22)

$b=\min(r,\,top-z)$ (A.23)

$v_{t}=\pi\left[R^{2}(b-a)+\frac{1}{3}(a^{3}-b^{3})\right]$ (A.24)

$v_{i}=\int_{a}^{b}\mathrm{area\_inside}(c=\sqrt{R^{2}-z^{2}})\,dz$ (A.25)

$v_{o}=\int_{a}^{b}\mathrm{area\_outside}(c=\sqrt{R^{2}-z^{2}})\,dz$ (A.26)

$v=v_{t}-v_{i}-v_{o}$ (A.27)
## Appendix B Determination of dust production coefficients
One potential use of the PEBBLES code is to predict the dust production of a
pebble bed reactor. This section discusses the features that make this
possible and work that has been done to determine the necessary coefficients.
Unfortunately, the following literature review discovered a lack of robust
wear coefficients, which prevents prediction of dust production.
There are essentially four contact wear mechanisms. Adhesive wear occurs when the contacting surfaces bond adhesively and part of the material is pulled away. Abrasive wear occurs when one of the contacting materials is harder than the other and plows (or shears) away material. Fatigue wear occurs when repeated contact between the surfaces causes fracture of the material. The last mechanism is corrosive wear, in which chemical corrosion increases the wear of the surface (Bhushan, 2000). For pebble bed reactors, adhesive wear is expected to be the dominant wear mechanism.
As a first order approximation, the adhesive dust production volume is (Bhushan, 2000):

$V=K_{ad}\frac{N}{H}L$ (B.1)

In this equation $V$ is the wear volume, $K_{ad}$ is the wear coefficient for adhesive wear, $L$ is the slide length and $\frac{N}{H}$ is the real contact area (with $N$ the normal force and $H$ the hardness). Typically, the hardness and the adhesive wear coefficient are combined into a single coefficient with units of either mass or volume per force times distance. For two blocks, the slide length is the distance that one of the blocks travels over the other while in contact. Note that this formula is only an approximation, since the wear volume is only approximately linear in both the normal force and the distance traveled. Abrasive wear can also be approximated by this model, but fatigue and corrosive wear will not be modeled well by it. To the extent that these wear mechanisms are present in a pebble bed reactor, the model may be less valid.
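With the hardness folded into the coefficient as described above, Equation B.1 reduces to a one-line calculation. A minimal sketch (the function name and sample inputs are illustrative, not from the PEBBLES code):

```python
def adhesive_wear_mass(k_combined, normal_force, slide_length):
    """First-order adhesive wear (Eq. B.1) with hardness folded into the
    coefficient: k_combined in g/(N*m), normal_force in N, slide_length in m.
    Returns the worn mass in grams."""
    return k_combined * normal_force * slide_length
```

For instance, with a combined coefficient of 3.3e-8 g/(Nm), a 10 N contact sliding 100 m would shed roughly 3.3e-5 g of dust.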
The wear coefficient is typically measured by grinding or stroking two pieces
of graphite against each other, and then measuring the decrease in mass. The
details of the experiment such as the contact shape and the orientation of the
relative motion affect the wear coefficient.
The wear that occurs with graphite depends on multiple factors. A partial list
includes the normal force of contact (load), the temperature of the graphite
and the past wear history (since wear tends to polish the contact surfaces and
remove loose grains). The atmosphere that the graphite is in affects the wear
rates since some molecules chemically interact with the carbon or are adsorbed
on the surface. Neutron damage and other radiation effects can damage the
structure of the graphite and affect the wear. The type and processing of the
graphite can affect wear rates. As a related effect, if harder and softer
graphites interact, the harder one can ‘plow’ into the softer and increase
wear rates.
For graphite on graphite, depending on conditions, there can be over three orders of magnitude difference in the wear. For example, Sheng et al. (2003) experimentally determined that graphite on graphite in air at room temperature can exhibit wear rates of 3.3e-8 g/(Nm), but in the dusting regime (in air, above a certain temperature, graphite wear transitions to dusting wear, which has much greater wear rates; increased water vapor decreases or eliminates dusting wear) at 200∘C the wear coefficient was determined by Lancaster and Pritchard (1980) to be 2e-5 g/(Nm), about a thousand times greater. For this reason, conditions as close as possible to in-core conditions are needed to determine a better approximation of the wear coefficients.
For tests using nuclear graphite near in-core conditions, the best data available to the author comes from two independent sets of experiments. One data set emerged from the experiments of Stansfield (1969), and the other is from a series of experiments performed at Tsinghua University (Sheng et al., 2003; Luo et al., 2004, 2005).
O.M. Stansfield measured friction and wear with different types of graphite in
helium at different temperatures (Stansfield, 1969). In the experiments, two
pieces of graphite were slid against each other linearly with a 0.32 cm
stroke. Two different loads were used, one 2-kg mass, and another 8-kg mass.
The data for wear volumes is only provided graphically, that is, not
tabulated, therefore only order of magnitude results are available. The wear
values were about an order of magnitude higher at 25∘C than at 400∘C and
800∘C. There was a reduction of friction with increased sliding distance, but
no explanation was provided (possibly this was due to a lubrication effect or
the removal of rough or loose surfaces). Typical values for the wear rates are
1.0e-3 cm3/kg for the 25∘C case and 1.0e-4 cm3/kg for the 400∘C and 800∘C
cases over a 12,500 cm (125 m) sliding distance. With a density of 1.82 g/cm3,
these work out to about 1.5e-6 g/(Nm) and 1.5e-7 g/(Nm), only about an order
of magnitude above the room temperature wear rates.
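The conversion from Stansfield's volumetric wear values to a mass wear coefficient can be sketched as follows. This is a minimal illustration: the rounded wear values and slide distance are read from the text above, and standard gravity is assumed for converting the kilogram loads to newtons.

```python
G = 9.81            # m/s^2, assumed gravitational acceleration
DENSITY = 1.82      # g/cm^3, graphite density from the text

def mass_wear_coefficient(vol_per_kg_cm3, slide_m, density=DENSITY):
    """Convert cm^3 of wear per kg of load into a g/(N*m) wear coefficient."""
    mass_g = vol_per_kg_cm3 * density   # grams worn per kg of load
    return mass_g / (G * slide_m)       # grams per newton-meter of sliding

print(mass_wear_coefficient(1.0e-3, 125.0))   # ~1.5e-6 g/(Nm), 25 C case
print(mass_wear_coefficient(1.0e-4, 125.0))   # ~1.5e-7 g/(Nm), 400/800 C cases
```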
The second set of experiments was done at Tsinghua University. The first
paper measures the wear coefficient of graphite KG-11 via pressing a static
specimen against a revolving specimen. The wear is measured by weighing the
difference in mass before the experiment and after the experiment. At room
temperature in air they measured wear rates of 7.32e-9 g/(Nm) with 31 N load
with surface contact, 3.29e-8 g/(Nm) with 31 N load with line contact and
3.21e-8 g/(Nm) with 62 N load (Sheng et al., 2003). The second paper measures
the wear coefficient of graphite IG-11 on graphite and on steel at varying
loads (Luo et al., 2004). Unfortunately, there are inconsistencies in the units
used in the paper. For example, in Table 2 the mean wear rate for the lower
specimen is listed as 3.0e3 $\mu$g/m, but in the text it is listed as 0.3e-3
$\mu$g/m, seven orders of magnitude different. The 30 N of load upper specimen
wear coefficient for the first 30 minutes is listed as 1.4e-3 $\mu$g/m, which
works out to 4.7e-10 g/(Nm). If 1.4e3 $\mu$g/m is used, this works out to
4.7e-4 g/(Nm). Neither of these matches the first paper's results. It seems
that units of $\mu$g (micrograms, or 1.0e-6 g) are used where mg (milligrams,
or 1.0e-3 g) should be. Also, the sign of the exponent is inconsistent: the
negative sign is sometimes dropped. Correcting these two mistakes gives units
of 1.0e-3 mg/m and a measured coefficient of 1.4e-3 mg/m, or 4.7e-7 g/(Nm),
which matches the first paper's values on the order of 1.0e-8 g/(Nm)
reasonably well. For the rest of this report, it is
assumed that these corrections should be used for the Xiaowei et al. papers.
The third paper measures the temperature effects in helium(Luo et al., 2005).
The experimental setup is similar to the setup in the second paper, but the
atmosphere is a helium atmosphere and the temperatures used are 100∘C to 400∘C
with a load of 30 N. In Figure 2 of that paper, it can be qualitatively
determined that as the temperature increases, the amount of wear increases. In
addition, the wear rate tends to be higher initially and then decrease. Since
the wear experiment was performed using a 2 mm long stroke, it seems plausible
that wear rates in an actual pebble bed might be closer to the initially
higher rates since the pebble flow might be able to expose more fresh surfaces
of the pebbles to wear. From the graph, there does not seem to be a clear
trend in the wear as a function of temperature. This makes it difficult to
estimate wear rates since pebble bed reactor cores can have temperatures over
1000∘C in normal operation. The highest wear rate in Table 2 of the paper is
31.3e-3 mg/m at 30 N, so the highest wear rate measured is 1.04e-6 g/(Nm).
This is about 20 times lower than wear in the dusting regime. Since the total
amount of wear (from Fig. 2) between 200∘C and 400∘C roughly doubles in the
upper specimen and increases by approximately 35% in the lower specimen,
substantially higher wear rates in over 1000∘C environments are hard to rule
out. Note, however, that the opposite temperature trend was observed in the
Stansfield paper.
### B.1 Calculation of Force in Reactor Bed
In order to calculate the dust produced in the reactor, the force acting on
the pebbles is needed. Several different approximations can be used to
calculate this with varying accuracy. The simplest (but least accurate) method
of approximating the pressure in the reactor is using the hydrostatic
pressure, or
$P=\rho fgh$ (B.2)
where $P$ is the pressure at a point, $\rho$ is the density of the pebbles,
$f$ is the packing fraction of the pebbles (typical values are near 0.61 or
0.60), $g$ is the gravitational acceleration and $h$ is the height below the
top of the pebble bed. With knowledge of how many contacts there are per unit
area or per unit volume, this can be converted into pebble to surface or
pebble to pebble contact forces. This formula is not correct when static
friction occurs since the static friction allows forces to be transferred to
the walls. Therefore, Equation B.2 over-predicts the actual pressures in the
pebble bed.
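As a rough numerical illustration of Equation B.2, the following sketch evaluates the hydrostatic upper bound. The pebble density and bed height below are illustrative assumptions, not values from this report; the packing fraction is a typical value quoted above.

```python
RHO = 1730.0   # kg/m^3, assumed effective pebble density (illustrative)
F = 0.61       # packing fraction, typical value from the text
G = 9.81       # m/s^2, gravitational acceleration

def hydrostatic_pressure(h_m, rho=RHO, f=F):
    """Upper-bound pressure (Pa) at depth h below the top of the pebble bed."""
    return rho * f * G * h_m

# e.g. pressure near the bottom of an assumed 6 m tall bed
print(hydrostatic_pressure(6.0))  # ~62 kPa; an over-prediction per the text
```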
In the presence of static friction, more complicated calculations are
required. The fact that static friction transfers force to the wall was
observed by the German engineer H.A. Janssen in 1895 (Sperl, 2006). Formulas
for the pressure on the wall for cylindrical vessels with conical exit chutes
were derived by Walker (1966). Essentially, when the upward force on the wall
from static friction for a given segment matches the downward gravitational
force from the additional pebbles in that segment, the pressure stops
increasing.
For a cylinder, the horizontal pressure equation is (Gotoh et al., 1997):
$P_{h}=\frac{\gamma
D}{4\mu_{w}}\left[1-\exp\left(\frac{-4\mu_{w}K}{D}x\right)\right]$ (B.3)
where $\gamma$ is the bulk weight (or $f\rho g$), $D$ is the diameter of the
cylinder, $\mu_{w}$ is the static friction coefficient between the pebbles and
the wall, $K$ is the Janssen Coefficient, and $x$ is the distance below the
top of the pile.
The Janssen coefficient is dependent upon the pebble to pebble static friction
coefficient and can be calculated from:
$K=\frac{1-\sin\phi}{1+\sin\phi}$ (B.4)
where $\tan\phi=\mu_{p}$ and $\mu_{p}$ is the pebble to pebble static
friction. Since
$\tan^{-1}\mu=\sin^{-1}\left(\frac{\mu}{\sqrt{\mu^{2}+1}}\right)$ then $K$ can
also be written as:
$K=2\mu_{p}^{2}-2\mu_{p}\sqrt{\mu_{p}^{2}+1}+1$ (B.5)
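A hedged numerical sketch of Equations B.3 through B.5 follows; the friction coefficients, density, packing fraction and cylinder diameter are illustrative assumptions, not measured values. It shows the qualitative point of the derivation: the wall pressure saturates with depth instead of growing linearly as in the hydrostatic estimate.

```python
import math

def janssen_K(mu_p):
    """Janssen coefficient from the pebble-pebble static friction (Eq. B.5)."""
    return 2*mu_p**2 - 2*mu_p*math.sqrt(mu_p**2 + 1) + 1

def janssen_pressure(x, D, mu_w, mu_p, gamma):
    """Horizontal wall pressure (Eq. B.3) at depth x in a cylinder."""
    K = janssen_K(mu_p)
    return gamma*D/(4*mu_w) * (1 - math.exp(-4*mu_w*K*x/D))

gamma = 0.61 * 1730.0 * 9.81      # bulk weight f*rho*g, assumed values
D, mu_w, mu_p = 3.0, 0.3, 0.3     # m; assumed friction coefficients

# Pressure rises with depth but saturates near gamma*D/(4*mu_w).
print(janssen_pressure(1.0, D, mu_w, mu_p, gamma))
print(janssen_pressure(50.0, D, mu_w, mu_p, gamma))  # near the saturation value
```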
The Janssen formula derivations make assumptions that are not necessarily true
for granular materials. These include assuming the granular material is a
continuum and that the shear forces on the wall are at the Coulomb limit
(Bratberg et al., 2005). The static friction force ranges from zero at first
contact up to $\mu N$ (the Coulomb limit) when sufficient shear force has
occurred. If the force is not at the Coulomb limit, then an effective $\mu$
may be found and used instead. In general, this assumption will not
be true when the pebbles are freshly loaded as they will not have slid against
the wall enough to fully load the static friction. Even after the pebbles have
been recirculated, they may not reach the Coulomb limit and effective values
for the static friction constant may be needed instead for predicting the wall
pressure. Finally, real reactors have more complicated geometries than a
smooth cylinder above a cone exit chute.
### B.2 Prior data on dust production
Figure B.1: AVR Dimensions
The 46 MW thermal pebble bed reactor Arbeitsgemeinschaft VersuchsReaktor (AVR)
was created in the 1960s in Germany and operated for 21 years. The pebbles
were added into the reactor through four feeding tubes spaced around the
reactor and one central feeding tube at the top of the reactor. There was one
central outlet chute below. Four U-shaped graphite noses with smooth sides
protruded into the reactor cavity for inserting the control rods. The cylinder
walls contained dimples about half a pebble diameter deep whose locations
alternated periodically. All the structural graphite was a needle coke graphite.
Dimensions are shown in Figure B.1 and design and measured data is provided in
Table B.1. The measured dust production rate was 3 kg per year. No real
conclusions were inferred because of a water ingress, an oil ingress, the
uncertainty in the composition of the dust (i.e., metallic components) and the
uncertainty of the location of dust production (Bäumer et al., 1990;
Atomwirtschaft-Atomtechnik-atw, 1966). The interior of the AVR reactor reached
over 1280∘C as determined by melt wire experiments (Moormann).
Table B.1: AVR Data

Name | Value
---|---
Average Inlet Temperature | 250∘C
Average Outlet Temperature | 950∘C
Pebble Circulation Rate | 300-500 per day
Dust Produced | 3 kg per year
Pebbles in Reactor Core | 100,000
Reactor Radius | 1.5 m
Outlet Chute Radius | 0.25 m
Angle of Outlet Cone | 30∘
Control Rod Nose Thickness | 0.3 m
Radius of Control Rod Nose | 0.15 m
Feed tube to outlet chute | 2.83 m
The THTR-300 reactor was a thorium and uranium powered pebble bed reactor that
first went critical in 1983 and ran through 1988. THTR-300 produced 16 kg of
dust per Full Power Year (FPY), and an estimated 6 kg of that was produced in
the core of the reactor(Wahsweiler, 1989). The control rods in the THTR-300
actually pushed into the pebble bed. On a per pebble basis, the amount of dust
produced in the THTR-300 is lower than in the AVR. Further data on the
THTR-300 is summarized in Table B.2(tht, a, b).
Table B.2: THTR Data

Name | Value
---|---
Average Inlet Temperature | 250∘C
Average Outlet Temperature | 750∘C
Core Height | 6.0 m
Pebbles Circulated | 1,300,000 per FPY
Core Diameter | 5.6 m
Pebbles in Full Core | 657,000
Total Dust Produced | 16 kg per FPY
Estimated In-core Dust | 6 kg per FPY
### B.3 Prior Prediction Work
There are two papers published that attempt to predict the in-core pebble dust
production. The first paper is “Estimation of Graphite Dust Quantity and Size
Distribution of Graphite Particle in HTR-10” (Xiaowei et al., 2005) and was
created to estimate the dust production that the core of the HTR-10 reactor
would produce. The second is co-authored by this author and attempts to
estimate the dust that the AVR reactor produced.
The HTR-10 paper started by calculating from the hydrostatic pressure the
force between the pebbles at the bottom of the reactor. The force was
approximated to be 30 N. The remainder of the paper uses 30 N as the force for
conservatism. Note that the HTR-10 paper is in Chinese, so this review may
contain mistakes in understanding due to language differences.
The dust production is calculated in three regions, the core of the reactor,
the outlet chute of the reactor and the fuel loading pipe. As with the other
papers, the assumption is made that $\mu$g should actually be mg.
For the core of the reactor the temperature used is 550∘C with pebble to
pebble wear rates of 4.2e-3 mg/m extrapolated from 400∘C data. The pebble to
wall wear rates are extrapolated to 480∘C to 12.08e-3 mg/m from the 400∘C
data. The pebble to pebble wear is estimated to occur over 2.06 m (this slide
length is multiplied by the 4.2e-3 mg/m wear rate to get the per-pass dust
production) and 3.85% of pebbles are estimated to wear against the wall. From this data
the average pebble dust production per pass in the core is determined to be
8.65e-3 mg for pebble to pebble wear and 0.99e-3 mg from pebble to wall. The
total in-core graphite dust produced per pebble pass is 9.64e-3 mg.
The outlet chute wear is estimated to occur for 2.230 m in the graphite
portion and 1.530 m in the stainless steel portion, and that 44.16% of the
pebbles wear against the chute. Both these portions are estimated to be at
400∘C. Wear rates of 3.5e-3 mg/m are used for the pebble to pebble wear, and
10.4e-3 mg/m for the pebble to graphite chute and 9.7e-3 mg/m for pebble to
steel. Thus for the outlet chute the upper portion has 18.05e-3 mg of dust
produced per average pebble and the lower portion has 11.91e-3 mg produced for
a total outlet chute amount of 29.96e-3 mg.
The fuel loading pipe is approximately 25 m long and the temperature is 200∘C,
which gives a wear rate of 2.1e-3 mg/m and a total of 52.50e-3 mg. Thus, for an
estimated average pebble pass, 10.5% of the dust is produced in core, 32.5% is
produced in the outlet chute and 57.0% is produced in the loading pipes. The
paper estimates that 50% of the outlet chute graphite dust enters the core and
that 75% of the graphite dust produced in the fuel loading pipes enters the
reactor core, for a total amount of graphite dust entering the core of 64.0e-3
mg per pebble pass. Since there are 125 pebbles entering the reactor a day,
and 365 days in a year, this works out to 2.92 g of pebble dust per year
(reported in the paper as 2.74 kg/year due to a precision loss and unit
errors)(Xiaowei et al., 2005).
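The dust budget above can be reproduced with a few lines of arithmetic; all values are in mg of dust per average pebble pass, as read from the HTR-10 paper in this review.

```python
# Per-pass dust production by region (mg), from the HTR-10 paper
in_core      = 8.65e-3 + 0.99e-3     # pebble-pebble + pebble-wall
outlet_chute = 18.05e-3 + 11.91e-3   # graphite + steel portions
loading_pipe = 25.0 * 2.1e-3         # 25 m of pipe at 2.1e-3 mg/m

total = in_core + outlet_chute + loading_pipe

# Fraction of dust produced in each region
print(round(100*in_core/total, 1), round(100*outlet_chute/total, 1),
      round(100*loading_pipe/total, 1))      # ~10.5, 32.5, 57.0 percent

# Dust entering the core: all in-core, 50% of chute, 75% of pipe dust
entering = in_core + 0.5*outlet_chute + 0.75*loading_pipe   # ~64.0e-3 mg
grams_per_year = entering * 125 * 365 / 1000                # ~2.92 g/year
print(entering, grams_per_year)
```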
HTR-10 has 27 thousand pebbles compared to AVR's 100 thousand, and circulates
125 pebbles per day compared to about 400 pebbles per day. A crude scaling
estimate then suggests about 35 grams of dust per year would be produced in
AVR. Measured values of dust generation rates from HTR-10 would provide
valuable information on pebble bed reactor dust production but appear to be
unavailable.
arXiv:1103.6082 (https://arxiv.org/abs/1103.6082), submitted 2011-03-31
Authors: Joshua J. Cogliati
License: Creative Commons Attribution 3.0 (https://creativecommons.org/licenses/by/3.0/)

arXiv:1103.6215
Also at: Dokuz Eylül University, Graduate School of Natural and Applied
Sciences, Turkey
# Effective field theory analysis of $3D$ random field Ising model on
isometric lattices
Ümit Akıncı and Yusuf Yüksel, Department of Physics, Dokuz Eylül University,
TR-35160 Izmir, Turkey; Hamza Polat (hamza.polat@deu.edu.tr), Department of
Physics, Dokuz Eylül University, TR-35160 Izmir, Turkey
###### Abstract
Ising model with quenched random magnetic fields is examined for single
Gaussian, bimodal and double Gaussian random field distributions by
introducing an effective field approximation that takes into account the
correlations between different spins that emerge when expanding the
identities. Random field distribution shape dependencies of the phase diagrams
and magnetization curves are investigated for simple cubic, body centered and
face centered cubic lattices. The conditions for the occurrence of reentrant
behavior and tricritical points on the system are also discussed in detail.
PACS numbers: 75.10.Hk, 75.30.Kz, 75.50.Lk
###### Contents
1. I Introduction
2. II Formulation
3. III Results and Discussion
1. III.1 Phase diagrams of single Gaussian distribution
2. III.2 Phase diagrams of bimodal distribution
3. III.3 Phase diagrams of double Gaussian distribution
4. IV Conclusions
## I Introduction
The Ising model ising ; onsager , which was originally introduced as a model
describing the phase transition properties of ferromagnetic materials, has been
widely examined in statistical mechanics and condensed matter physics. In the
course of time, the basic conclusions of this simple model have been improved by
introducing new concepts such as disorder effects on the critical behavior of
the systems in question. The Ising model in a quenched random field (RFIM),
which has been studied for over three decades, is an example of this situation.
The model, in which the local fields acting on the lattice sites are taken to be
random according to a given probability distribution, was introduced for the
first time by Larkin larkin for superconductors and later generalized by Imry
and Ma imry_ma . The lower critical dimension $d_{c}$ of the RFIM remained an
unsolved mystery for many years, and various theoretical methods have been
introduced. For example, the domain wall argument of
Imry and Ma imry_ma suggests that a transition should exist in three and
higher dimensions for finite temperature and randomness, which means that
$d_{c}=2$ grinstein ; fernandez ; imbrie ; bricmont . On the contrary,
dimensional reduction arguments parisi conclude that the system should not
have a phase transition at finite temperature in three dimensions or fewer, so
$d_{c}=3$ binder ; mukamel2 ; mukamel ; niemi . On the other hand, Frontera
and Vives frontera showed that a two dimensional ferromagnetic RFIM with
Gaussian random field distribution exhibits order at zero temperature.
At the same time, a great many theoretical and experimental works have focused
on the RFIM, and quite noteworthy results have been obtained.
instance, it has been shown that diluted antiferromagnets such as
$\mathrm{Fe}_{x}\mathrm{Zn}_{1-x}\mathrm{F}_{2}$belanger ; king ,
$\mathrm{Rb}_{2}\mathrm{Co}_{x}\mathrm{Mg}_{1-x}\mathrm{F}_{4}$ferreira ;
yoshizawa and $\mathrm{Co}_{x}\mathrm{Zn}_{1-x}\mathrm{F}_{2}$yoshizawa in a
uniform magnetic field just correspond to a ferromagnet in a random uniaxial
magnetic field fishman ; cardy . Subsequent studies have been devoted to
investigating the phase diagrams of these systems in depth, and at the mean
field level it was found that different random field distributions lead to
different phase diagrams for infinite dimensional models. For example, using a
Gaussian probability distribution, Schneider and Pytte schneider_pytte have shown that
phase diagrams of the model exhibit only second order phase transition
properties. On the other hand, Aharony aharony and Mattis matthis introduced
bimodal and trimodal distributions, respectively, and reported the observation
of tricritical behavior. In a recent series of
papers, phase transition properties of infinite dimensional RFIM with
symmetric double crokidakis and triple salmon Gaussian random fields have
also been studied by means of a replica method and a rich variety of phase
diagrams have been presented. The situation has also been handled on three
dimensional lattices with nearest neighbor interactions by a variety of
theoretical works such as effective field theory (EFT) borges ; liang2 ;
sarmento ; sebastianes ; kaneyoshi , Monte Carlo (MC) simulations landau ;
machta ; fytas1 ; fytas2 , pair approximation (PA) albayrak and series
expansion (SE) method gofman . By using EFT, Borges and Silva borges studied
the system for square $(q=4)$ and simple cubic $(q=6)$ lattices and they
observed a tricritical point only for $(q\geq 6)$. Similarly, Sarmento and
Kaneyoshi sarmento investigated the phase diagrams of RFIM by means of EFT
with correlations for a bimodal field distribution and they concluded that
reentrant behavior of second order is possible for the system with $(q\geq
6)$. Recently, Fytas et al. fytas1 applied Monte Carlo simulations on a
simple cubic lattice. They found that the transition remains continuous for a
bimodal field distribution, while Hadjiagapiou hadjiagapiou observed
reentrant behavior and confirmed the existence of tricritical point for an
asymmetric bimodal probability distribution within the mean-field
approximation based on Landau expansion.
Conventional EFT approximations include spin-spin correlations resulting from
the usage of the Van der Waerden identities and provide results that are
superior to those obtained within the traditional MFT. However, these
conventional EFT approximations are not sufficient to improve the
results, due to the usage of a decoupling approximation (DA) that neglects the
correlations between different spins that emerge when expanding the
identities. Therefore, taking these correlations into consideration will
improve the results of conventional EFT approximations. In order to overcome
this point, recently we proposed an approximation that takes into account the
correlations between different spins in the cluster of considered lattice
polat ; canpolat ; yuksel_1 ; yuksel_2 ; yuksel_3 . Namely, an advantage of
the approximation method proposed by these studies is that no decoupling
procedure is used for the higher order correlation functions. On the other
hand, as far as we know, EFT studies in the literature dealing with RFIM are
based only on discrete probability distributions (bimodal or trimodal). Hence,
in this work we intend to study the phase diagrams of the RFIM with single
Gaussian, bimodal and double Gaussian random field distributions on isometric
lattices.
Organization of the paper is as follows: In section II we briefly present the
formulations. The results and discussions are presented in section III, and
finally section IV contains our conclusions.
## II Formulation
In this section, we give the formulation of present study for simple cubic
lattice with $q=6$. A detailed explanation of the method for body centered and
face centered cubic lattices can be found in the Appendix. For the simple cubic
lattice, we consider a three-dimensional lattice on which $N$ identical spins
are arranged. We define a cluster on the lattice which consists of a central
spin labeled $S_{0}$ and $q$ perimeter spins that are the nearest neighbors of
the central spin. The cluster thus consists of $(q+1)$ spins. The nearest
neighbor spins are in an effective field
produced by the outer spins, which can be determined by the condition that the
thermal average of the central spin is equal to that of its nearest neighbor
spins. The Hamiltonian describing our model is
$H=-J\sum_{<i,j>}S_{i}^{z}S_{j}^{z}-\sum_{i}h_{i}S_{i}^{z},$ (1)
where the first term is a summation over the nearest neighbor spins with
$S_{i}^{z}=\pm 1$ and the second term represents the Zeeman interactions on
the lattice. Random magnetic fields are distributed according to a given
probability distribution function. Present study deals with three kinds of
field distribution. Namely, a normal distribution which is defined as
$P(h_{i})=\left(\frac{1}{2\pi\sigma^{2}}\right)^{1/2}\exp\left[-\frac{h_{i}^{2}}{2\sigma^{2}}\right],$
(2)
with a width $\sigma$ and zero mean, a bimodal discrete distribution
$P(h_{i})=\frac{1}{2}\left[\delta(h_{i}-h_{0})+\delta(h_{i}+h_{0})\right],$
(3)
where half of the lattice sites are subject to a magnetic field $h_{0}$ and the
remaining lattice sites have a field $-h_{0}$, and a double peaked Gaussian
distribution
$P(h_{i})=\frac{1}{2}\left(\frac{1}{2\pi\sigma^{2}}\right)^{1/2}\left\\{\exp\left[-\frac{(h_{i}-h_{0})^{2}}{2\sigma^{2}}\right]+\exp\left[-\frac{(h_{i}+h_{0})^{2}}{2\sigma^{2}}\right]\right\\}.$
(4)
In a double peaked distribution defined in equation (4), random fields $\pm
h_{0}$ are distributed with equal probability and the form of the distribution
depends on the $h_{0}$ and $\sigma$ parameters, where $\sigma$ is the width of
the distribution.
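The three random field distributions can be sketched numerically; a minimal sampler follows, where the particular values of $h_{0}$ and $\sigma$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def single_gaussian(n, sigma):
    """Equation (2): zero-mean Gaussian random fields of width sigma."""
    return rng.normal(0.0, sigma, n)

def bimodal(n, h0):
    """Equation (3): fields +h0 or -h0 with equal probability."""
    return rng.choice([-h0, h0], n)

def double_gaussian(n, h0, sigma):
    """Equation (4): width-sigma Gaussians centered at +h0 or -h0."""
    centers = rng.choice([-h0, h0], n)
    return rng.normal(centers, sigma)

h = double_gaussian(200000, h0=1.0, sigma=0.5)
print(h.mean())   # ~0: the distribution is symmetric
print(h.var())    # ~h0^2 + sigma^2 = 1.25 for these parameters
```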
According to the Callen identity callen for the spin-1/2 Ising ferromagnetic
system, the thermal average of the spin variables at the site $i$ is given by
$\left\langle\\{f_{i}\\}S_{i}^{z}\right\rangle=\left\langle\\{f_{i}\\}\tanh\left[\beta\left(J\sum_{j}S_{j}+h_{i}\right)\right]\right\rangle,$
(5)
where $j$ expresses the nearest neighbor sites of the central spin and
$\\{f_{i}\\}$ can be any function of the Ising variables as long as it is not
a function of the site. From equation (5) with $f_{i}=1$, the thermal and
random-configurational averages of a central spin can be represented on a
simple cubic lattice by introducing the differential operator technique
honmura_kaneyoshi ; kaneyoshi_1
$m_{0}=\left\langle\left\langle
S_{0}^{z}\right\rangle\right\rangle_{r}=\left\langle\left\langle\prod_{j=1}^{q=6}\left[\cosh(J\nabla)+S_{j}^{z}\sinh(J\nabla)\right]\right\rangle\right\rangle_{r}F(x)|_{x=0},$
(6)
where $\nabla$ is a differential operator, $q$ is the coordination number of
the lattice and the inner $\langle...\rangle$ and the outer
$\langle...\rangle_{r}$ brackets represent the thermal and configurational
averages, respectively. The function $F(x)$ in equation (6) is defined by
$F(x)=\int dh_{i}P(h_{i})\tanh[\beta(x+h_{i})],$ (7)
and it has been calculated by numerical integration and by using the
distribution functions defined in equations (2), (3) and (4). By expanding the
right hand side of equation (6) we get the longitudinal spin correlation as
$\displaystyle\langle\langle S_{0}\rangle\rangle_{r}$ $\displaystyle=$
$\displaystyle k_{0}+6k_{1}\langle\langle
S_{1}\rangle\rangle_{r}+15k_{2}\langle\langle
S_{1}S_{2}\rangle\rangle_{r}+20k_{3}\langle\langle
S_{1}S_{2}S_{3}\rangle\rangle_{r}+15k_{4}\langle\langle
S_{1}S_{2}S_{3}S_{4}\rangle\rangle_{r}$ (8)
$\displaystyle+6k_{5}\langle\langle
S_{1}S_{2}S_{3}S_{4}S_{5}\rangle\rangle_{r}+k_{6}\langle\langle
S_{1}S_{2}S_{3}S_{4}S_{5}S_{6}\rangle\rangle_{r}.$
The coefficients in equation (8) are defined as follows
$\displaystyle k_{0}$ $\displaystyle=$
$\displaystyle\cosh^{6}(J\nabla)F(x)|_{x=0},$ $\displaystyle k_{1}$
$\displaystyle=$ $\displaystyle\cosh^{5}(J\nabla)\sinh(J\nabla)F(x)|_{x=0},$
$\displaystyle k_{2}$ $\displaystyle=$
$\displaystyle\cosh^{4}(J\nabla)\sinh^{2}(J\nabla)F(x)|_{x=0},$ $\displaystyle
k_{3}$ $\displaystyle=$
$\displaystyle\cosh^{3}(J\nabla)\sinh^{3}(J\nabla)F(x)|_{x=0},$ $\displaystyle
k_{4}$ $\displaystyle=$
$\displaystyle\cosh^{2}(J\nabla)\sinh^{4}(J\nabla)F(x)|_{x=0},$ $\displaystyle
k_{5}$ $\displaystyle=$
$\displaystyle\cosh(J\nabla)\sinh^{5}(J\nabla)F(x)|_{x=0},$ $\displaystyle
k_{6}$ $\displaystyle=$ $\displaystyle\sinh^{6}(J\nabla)F(x)|_{x=0}.$ (9)
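The coefficients in equation (9) can be evaluated numerically with the shift-operator identity $\exp(a\nabla)F(x)=F(x+a)$: writing $\cosh(J\nabla)$ and $\sinh(J\nabla)$ as $(E_{+}\pm E_{-})/2$ with $E_{\pm}F(x)=F(x\pm J)$, each $k_{p}$ becomes a weighted sum of $F$ at the shifted points $(q-2m)J$. The sketch below uses $F=\tanh$ (an illustrative stand-in for equation (7) at $\beta=1$ with no random field).

```python
import numpy as np

def shift_poly(n_cosh, n_sinh):
    """Coefficients c_m with cosh^a sinh^b (J grad) F|_0 = sum_m c_m F((a+b-2m)J)."""
    poly = np.array([1.0])  # polynomial in the shift operator E, start with E^0
    for _ in range(n_cosh):  # multiply by (E + E^-1)/2
        poly = 0.5*(np.append(poly, 0.0) + np.append(0.0, poly))
    for _ in range(n_sinh):  # multiply by (E - E^-1)/2
        poly = 0.5*(np.append(poly, 0.0) - np.append(0.0, poly))
    return poly

def k(p, F, J, q=6):
    """k_p of equation (9): cosh^(q-p) sinh^p (J grad) F(x) evaluated at x=0."""
    c = shift_poly(q - p, p)
    return sum(cm * F((q - 2*m)*J) for m, cm in enumerate(c))

F = np.tanh   # illustrative F; k_0 vanishes for any odd F
print([k(p, F, 0.5) for p in range(7)])   # k_0 ... k_6
```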
Next, the average value of the perimeter spin in the system is found as
$\displaystyle m_{1}=\langle\langle S_{1}\rangle\rangle_{r}$ $\displaystyle=$
$\displaystyle\langle\langle\cosh(J\nabla)+S_{0}\sinh(J\nabla)\rangle\rangle_{r}F(x+\gamma),$
(10) $\displaystyle=$ $\displaystyle a_{1}+a_{2}\langle\langle
S_{0}\rangle\rangle_{r}.$
For the sake of simplicity, the superscript $z$ is omitted from the left and
right hand sides of equations (8) and (10). The coefficients in equation (10)
are defined as
$\displaystyle a_{1}$ $\displaystyle=$
$\displaystyle\cosh(J\nabla)F(x+\gamma)|_{x=0},$ $\displaystyle a_{2}$
$\displaystyle=$ $\displaystyle\sinh(J\nabla)F(x+\gamma)|_{x=0}.$ (11)
In equations (10) and (11), $\gamma=(q-1)A$ is the effective field produced by the
$(q-1)$ spins outside of the system and $A$ is an unknown parameter to be
determined self-consistently. Equations (8) and (10) are the fundamental
correlation functions of the system. When the right-hand side of equation (6)
is expanded, multispin correlation functions appear. The simplest
approximation, and one of the most frequently adopted, is to decouple these
correlations according to
$\left\langle\left\langle
S_{i}S_{j}...S_{l}\right\rangle\right\rangle_{r}\cong\left\langle\left\langle
S_{i}\right\rangle\right\rangle_{r}\left\langle\left\langle
S_{j}\right\rangle\right\rangle_{r}...\left\langle\left\langle
S_{l}\right\rangle\right\rangle_{r},$ (12)
for $i\neq j\neq...\neq l$ tamura_kaneyoshi . The main difference between the
method used in this study and other approximations in the literature emerges
when expanding the right-hand side of equation (6). In other words, one
advantage of the approximation method used in this study is that no such
decoupling procedure is used for the higher order correlation functions. For spin-1/2
Ising system in a random field, taking equations (8) and (10) as a basis we
derive a set of linear equations of the spin correlation functions in the
system. At this point, we assume that $(i)$ the correlations depend only on
the distance between the spins and $(ii)$ the average values of a central spin
and its nearest-neighbor spin (it is labeled as the perimeter spin) are equal
to each other with the fact that, in the matrix representations of spin
operator $\hat{S}$, the spin-1/2 system has the property $(\hat{S})^{2}=1$ .
Thus, the number of linear equations obtained for a simple cubic lattice
$(q=6)$ reduces to twelve and the complete set is as follows
$\displaystyle\left\langle\left\langle S_{1}\right\rangle\right\rangle_{r}$
$\displaystyle=$ $\displaystyle a_{1}+a_{2}\langle\langle
S_{0}\rangle\rangle_{r},$ $\displaystyle\left\langle\left\langle
S_{1}S_{2}\right\rangle\right\rangle_{r}$ $\displaystyle=$ $\displaystyle
a_{1}\langle\langle S_{1}\rangle\rangle_{r}+a_{2}\langle\langle
S_{0}S_{1}\rangle\rangle_{r},$ $\displaystyle\left\langle\left\langle
S_{1}S_{2}S_{3}\right\rangle\right\rangle_{r}$ $\displaystyle=$ $\displaystyle
a_{1}\langle\langle S_{1}S_{2}\rangle\rangle_{r}+a_{2}\langle\langle
S_{0}S_{1}S_{2}\rangle\rangle_{r},$ $\displaystyle\left\langle\left\langle
S_{1}S_{2}S_{3}S_{4}\right\rangle\right\rangle_{r}$ $\displaystyle=$
$\displaystyle a_{1}\langle\langle
S_{1}S_{2}S_{3}\rangle\rangle_{r}+a_{2}\langle\langle
S_{0}S_{1}S_{2}S_{3}\rangle\rangle_{r},$
$\displaystyle\left\langle\left\langle
S_{1}S_{2}S_{3}S_{4}S_{5}\right\rangle\right\rangle_{r}$ $\displaystyle=$
$\displaystyle a_{1}\langle\langle
S_{1}S_{2}S_{3}S_{4}\rangle\rangle_{r}+a_{2}\langle\langle
S_{0}S_{1}S_{2}S_{3}S_{4}\rangle\rangle_{r},$
$\displaystyle\left\langle\left\langle
S_{1}S_{2}S_{3}S_{4}S_{5}S_{6}\right\rangle\right\rangle_{r}$ $\displaystyle=$
$\displaystyle a_{1}\langle\langle
S_{1}S_{2}S_{3}S_{4}S_{5}\rangle\rangle_{r}+a_{2}\langle\langle
S_{0}S_{1}S_{2}S_{3}S_{4}S_{5}\rangle\rangle_{r},$
$\displaystyle\langle\langle S_{0}\rangle\rangle_{r}$ $\displaystyle=$
$\displaystyle k_{0}+6k_{1}\langle\langle
S_{1}\rangle\rangle_{r}+15k_{2}\langle\langle
S_{1}S_{2}\rangle\rangle_{r}+20k_{3}\langle\langle
S_{1}S_{2}S_{3}\rangle\rangle_{r}+15k_{4}\langle\langle
S_{1}S_{2}S_{3}S_{4}\rangle\rangle_{r}$ $\displaystyle+6k_{5}\langle\langle
S_{1}S_{2}S_{3}S_{4}S_{5}\rangle\rangle_{r}+k_{6}\langle\langle
S_{1}S_{2}S_{3}S_{4}S_{5}S_{6}\rangle\rangle_{r},$
$\displaystyle\left\langle\left\langle
S_{0}S_{1}\right\rangle\right\rangle_{r}$ $\displaystyle=$ $\displaystyle
6k_{1}+(k_{0}+15k_{2})\langle\langle
S_{1}\rangle\rangle_{r}+20k_{3}\langle\langle
S_{1}S_{2}\rangle\rangle_{r}+15k_{4}\langle\langle
S_{1}S_{2}S_{3}\rangle\rangle_{r}$ $\displaystyle+6k_{5}\langle\langle
S_{1}S_{2}S_{3}S_{4}\rangle\rangle_{r}+k_{6}\langle\langle
S_{1}S_{2}S_{3}S_{4}S_{5}\rangle\rangle_{r},$
$\displaystyle\left\langle\left\langle
S_{0}S_{1}S_{2}\right\rangle\right\rangle_{r}$ $\displaystyle=$
$\displaystyle(6k_{1}+20k_{3})\langle\langle
S_{1}\rangle\rangle_{r}+(k_{0}+15k_{2}+15k_{4})\langle\langle
S_{1}S_{2}\rangle\rangle_{r}$ $\displaystyle+6k_{5}\langle\langle
S_{1}S_{2}S_{3}\rangle\rangle_{r}+k_{6}\langle\langle
S_{1}S_{2}S_{3}S_{4}\rangle\rangle_{r},$
$\displaystyle\left\langle\left\langle
S_{0}S_{1}S_{2}S_{3}\right\rangle\right\rangle_{r}$ $\displaystyle=$
$\displaystyle(6k_{1}+20k_{3}+6k_{5})\langle\langle
S_{1}S_{2}\rangle\rangle_{r}+(k_{0}+15k_{2}+15k_{4}+k_{6})\langle\langle
S_{1}S_{2}S_{3}\rangle\rangle_{r},$ $\displaystyle\left\langle\left\langle
S_{0}S_{1}S_{2}S_{3}S_{4}\right\rangle\right\rangle_{r}$ $\displaystyle=$
$\displaystyle(6k_{1}+20k_{3}+6k_{5})\langle\langle
S_{1}S_{2}S_{3}\rangle\rangle_{r}+(k_{0}+15k_{2}+15k_{4}+k_{6})\langle\langle
S_{1}S_{2}S_{3}S_{4}\rangle\rangle_{r},$
$\displaystyle\left\langle\left\langle
S_{0}S_{1}S_{2}S_{3}S_{4}S_{5}\right\rangle\right\rangle_{r}$ $\displaystyle=$
$\displaystyle(6k_{1}+20k_{3}+6k_{5})\langle\langle
S_{1}S_{2}S_{3}S_{4}\rangle\rangle_{r}+(k_{0}+15k_{2}+15k_{4}+k_{6})\langle\langle
S_{1}S_{2}S_{3}S_{4}S_{5}\rangle\rangle_{r}$ (13)
If equation (13) is written in the form of a $12\times 12$ matrix and solved
in terms of the variables $x_{i}[(i=1,2,...,12)(e.g.,x_{1}=\langle\langle
S_{1}\rangle\rangle_{r},x_{2}=\langle\langle
S_{1}S_{2}\rangle\rangle_{r},x_{3}=\langle\langle
S_{1}S_{2}S_{3}\rangle\rangle_{r},...)]$ of the linear equations, all of the
spin correlation functions can be easily determined as functions of the
temperature and Hamiltonian/random field parameters. Since the thermal and
configurational average of the central spin is equal to that of its nearest-
neighbor spins within the present method, the unknown parameter $A$ can be
numerically determined by the relation
$\langle\langle S_{0}\rangle\rangle_{r}=\langle\langle
S_{1}\rangle\rangle_{r}\qquad{\rm{or}}\qquad x_{7}=x_{1}.$ (14)
By solving equation (14) numerically at a fixed set of Hamiltonian/random field parameters we obtain the parameter $A$. The numerical value of $A$ is then used to evaluate the spin correlation functions from equation (II). Note that $A=0$ is always a root of equation (14), corresponding to the disordered state of the system, while a nonzero root of equation (14) corresponds to the long range ordered state. Once the spin correlation functions have been evaluated, we can obtain numerical results for the thermal and magnetic properties of the system. Since the effective field $\gamma$ is very small in the vicinity of $k_{B}T_{c}/J$, we can obtain the critical temperature for a fixed set of Hamiltonian/random field parameters by solving equation (14) in the limit $\gamma\rightarrow 0$, and in this way construct the whole phase diagram of the system. Depending on the Hamiltonian/random field parameters, equation (14) may have two solutions (i.e. two critical temperature values), corresponding to first and second order phase transition points, respectively. We determine the type of the transition by examining the temperature dependence of the magnetization for selected values of the system parameters.
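The self-consistency step for condition (14) can be sketched numerically. In the snippet below, `order_parameter_residual` is a toy stand-in with the same qualitative structure as the real residual $\langle\langle S_{0}\rangle\rangle_{r}-\langle\langle S_{1}\rangle\rangle_{r}$ (it vanishes at $A=0$ and has one nonzero root below the transition); the actual coefficients would come from the differential-operator expansion of equations (6)-(13).

```python
import numpy as np

def order_parameter_residual(A, beta=1.0, q=6, J=1.0):
    """Hypothetical residual r(A) for the self-consistency condition (14).
    Toy model only: r(0) = 0 always (disordered state), and a nonzero
    root appears at low temperature (ordered state)."""
    gamma = (q - 1) * A            # effective field, as defined in the text
    return np.tanh(beta * q * J * gamma) / (q - 1) - A

def nonzero_root(f, lo=1e-6, hi=10.0, tol=1e-12):
    """Bisection for a sign change of f on [lo, hi]; starting at lo > 0
    skips the trivial root A = 0 of equation (14)."""
    flo, fhi = f(lo), f(hi)
    if flo * fhi > 0:
        return None                # no ordered solution at this temperature
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

At low temperature the solver returns the ordered-state root; at high temperature it returns `None`, mirroring the disappearance of the nonzero solution of equation (14) above $T_{c}$.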
## III Results and Discussion
In this section, we discuss how the type of the random field distribution affects the phase diagrams of the system. In order to clarify the type of the transitions in the system, we also give the temperature dependence of the order parameter.
### III.1 Phase diagrams of single Gaussian distribution
The single Gaussian distribution defined in equation (2) is governed by only one parameter, the width $\sigma$ of the distribution. In Fig. 1, we show the phase diagram of the system for simple cubic, body centered cubic and face centered cubic lattices in the $(k_{B}T_{c}/J-\sigma)$ plane. As $\sigma$ increases, the distribution function gets wider and the randomness of the magnetic field distribution becomes significantly important for the system. Therefore, increasing the $\sigma$ value causes a decline in the critical temperature $k_{B}T_{c}/J$ of the system. We note that the critical temperature of the system reaches zero at $\sigma=3.8501$, $5.450$ and $8.601$ for simple cubic $(q=6)$, body centered cubic $(q=8)$ and face centered cubic $(q=12)$ lattices, respectively. Furthermore, we have not observed any reentrant or tricritical behavior for the single Gaussian distribution; in other words, the system undergoes only second order phase transitions. The $k_{B}T_{c}/J$ value in the absence of any randomness, i.e. when $\sigma=0$, is obtained as $k_{B}T_{c}/J=4.5274$, $6.5157$ and $10.4986$ for $q=6,8,12$, respectively. These values can be compared with other works in the literature. In Table 1, we see that the present work improves on the results of the other works; hence the present method is superior to the other EFT methods in the literature. The reason is that, in contrast to the previously published works mentioned above, no uncontrolled decoupling procedure is used for the higher order correlation functions within the present approximation.
Table 1: Critical temperature $k_{B}T_{c}/J$ at $h_{0}/J=0$ and $\sigma=0$
obtained by several methods and the present work for $q=6,8,12$.
---
Lattice | EBPA [du] | CEFT [kaneyoshi_2] | PA [balcerzak] | EFT [kaneyoshi_3] | BA [kikuchi] | EFRG [sousa] | MFRG [reinerhr] | MC [landau2] | SE [fisher] | Present Work
SC | 4.8108 | 4.9326 | 4.9328 | 5.0732 | 4.6097 | 4.85 | 4.93 | 4.51 | 4.5103 | 4.5274
BCC | - | 6.9521 | - | - | - | 6.88 | 6.95 | 6.36 | 6.3508 | 6.5157
FCC | - | 10.9696 | - | - | - | - | - | - | 9.7944 | 10.4986
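The configurational average $\langle\cdots\rangle_{r}$ over a Gaussian field distribution reduces to a one-dimensional integral that is convenient to evaluate by Gauss–Hermite quadrature. The sketch below assumes the standard normalization $P(h)=\exp(-h^{2}/2\sigma^{2})/\sqrt{2\pi\sigma^{2}}$ for equation (2), and takes equation (4) to be the equal-weight superposition of two such Gaussians centered at $\pm h_{0}$; a generic integrand $f(h)$ stands in for the actual spin functions.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gaussian_average(f, sigma, n=40):
    """<f>_r for P(h) = N(0, sigma^2), via Gauss-Hermite quadrature.
    The substitution h = sqrt(2)*sigma*t maps the integral onto the
    Hermite weight exp(-t^2)."""
    t, w = hermgauss(n)
    return np.sum(w * f(np.sqrt(2.0) * sigma * t)) / np.sqrt(np.pi)

def double_gaussian_average(f, sigma, h0, n=40):
    """<f>_r for the assumed double Gaussian form: two Gaussians of
    width sigma centered at +h0 and -h0, each with weight 1/2."""
    return 0.5 * (gaussian_average(lambda h: f(h + h0), sigma, n)
                  + gaussian_average(lambda h: f(h - h0), sigma, n))
```

As a sanity check, the second moment comes out as $\sigma^{2}$ for the single Gaussian and $\sigma^{2}+h_{0}^{2}$ for the double Gaussian, consistent with the roles of $\sigma$ and $h_{0}$ in the phase diagrams.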
Magnetization surfaces and their projections on the $(k_{B}T/J-\sigma)$ plane corresponding to the phase diagrams shown in Fig. 1 are depicted in Fig. 2 for $q=6,8$ and $12$. We see that as the temperature increases from zero, the magnetization of the system decreases continuously and falls rapidly to zero at the critical temperature for all $\sigma$ values. Moreover, as the $\sigma$ value increases, the critical temperature of the system decreases, while the saturation value of the magnetization curves remains constant for a while and then decreases. This is a reasonable result: as $\sigma$ increases, randomness effects play an increasingly important role in the system, and the random fields tend to destroy the long range ferromagnetic order, so the magnetization weakens. These observations are common to all three lattices.
Figure 1: Phase diagrams of simple cubic $(q=6)$, body centered cubic $(q=8)$
and face centered cubic $(q=12)$ lattices in $(k_{B}T_{c}/J-\sigma)$ plane
corresponding to single Gaussian distribution of random fields. The numbers on
each curve denote the coordination numbers.
Figure 2: Variation of magnetization with $k_{B}T/J$ and $\sigma$
corresponding to the phase diagrams in Fig. 1 for simple cubic, body centered
and face centered cubic lattices. (a)-(c) 3D contour plot surfaces, (d)-(f)
projections on $(k_{B}T/J-\sigma)$ plane.
### III.2 Phase diagrams of bimodal distribution
Next, in order to investigate the effect of the bimodal random fields defined
in equation (3) on the phase diagrams of the system, we show the phase
diagrams in $(k_{B}T_{c}/J-h_{0}/J)$ plane and corresponding magnetization
profiles with coordination numbers $q=6,8,12$ in Figs. 3 and 4. In these
figures solid and dashed lines correspond to second and first order phase
transition points, respectively and hollow circles in Fig. 3 denote
tricritical points. As seen in Fig. 3, increasing $h_{0}/J$ causes the critical temperature to decrease for a while; then reentrant behavior of first order occurs within a specific range of $h_{0}/J$. According to our calculations, reentrant phenomena and first order phase transitions can be observed within the ranges $2.0<h_{0}/J<3.0$ for $q=6$, $3.565<h_{0}/J<3.95$ for $q=8$ and $4.622<h_{0}/J<5.81$ for $q=12$. If the $h_{0}/J$ value is greater than the upper limits of these field ranges, the system exhibits no phase transition. The tricritical temperatures $k_{B}T_{t}/J$, shown as hollow circles in Fig. 3, are found to be $k_{B}T_{t}/J=1.5687,2.4751$ and $4.3769$ for $q=6,8,12$, respectively.
Figure 3: Phase diagrams of the system $(q=6,8,12)$ in
$(k_{B}T_{c}/J-h_{0}/J)$ plane, corresponding to bimodal random field
distribution. Solid and dashed lines correspond to second and first order
phase transitions, respectively. Hollow circles denote the tricritical points.
Figure 4: Temperature dependence of magnetization corresponding to Fig. 3 with
$\sigma=0$ and $h_{0}/J=1.0,2.5$ for simple cubic lattice (left panel),
$h_{0}/J=1.0,3.8$ for body centered cubic lattice (middle panel) and
$h_{0}/J=1.0,5.5$ for face centered cubic lattice (right panel). Solid and
dashed lines correspond to second and first order phase transitions,
respectively.
In Fig. 4, we show two typical magnetization profiles of the system. The system always undergoes a second order phase transition for $h_{0}/J=1.0$. On the other hand, two successive phase transitions (i.e. a first order transition followed by a second order phase transition) occur at $h_{0}/J=2.5,3.8$ and $5.5$ for $q=6,8$ and $12$, respectively, which indicates the existence of a first order reentrant phenomenon in the system. We observe that increasing $h_{0}/J$ does not affect the saturation values of the magnetization curves.
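As described above, the order of a transition is read off from the magnetization profile: a continuous decay to zero signals a second order transition, while a finite jump signals a first order one. A hedged sketch of such a classifier for a sampled curve $m(T)$ (the thresholds are arbitrary illustrative choices, not values from the paper):

```python
import numpy as np

def transition_type(T, m, jump_threshold=0.1):
    """Classify the transition from a sampled magnetization curve m(T).

    Returns 'first order' if m drops by more than `jump_threshold`
    between neighboring temperature samples, 'second order' if m
    vanishes smoothly, and 'none' if m never comes close to zero."""
    m = np.asarray(m, dtype=float)
    if m.min() > 0.05:            # arbitrary "never disordered" cutoff
        return 'none'
    drops = -np.diff(m)           # positive where m decreases
    return 'first order' if drops.max() > jump_threshold else 'second order'
```

On a dense enough temperature grid this distinguishes the continuous profiles of Fig. 4 ($h_{0}/J=1.0$) from the discontinuous reentrant ones; in practice the grid spacing near $T_{c}$ must be fine enough that a steep-but-continuous drop is not mistaken for a jump.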
### III.3 Phase diagrams of double Gaussian distribution
Figure 5: Phase diagrams of the system for a double Gaussian random field
distribution with $q=6,8,12$ in $(k_{B}T_{c}/J-h_{0}/J)$ and
$(k_{B}T_{c}/J-\sigma)$ planes.
The double Gaussian distribution in equation (4) with nearest neighbor interactions has not yet been examined in the literature. Therefore, it is interesting to investigate the phase diagrams of the system with random fields corresponding to equation (4). Now the shape of the random field distribution is governed by two parameters, $h_{0}/J$ and $\sigma$. As shown in the preceding results, for $h_{0}/J=0$ increasing $\sigma$ tends to reduce the saturation value of the order parameter and suppresses the second order phase transitions by decreasing the critical temperature of the system, without producing any reentrant phenomena. Conversely, for $\sigma=0$ increasing $h_{0}/J$ also decreases the second order phase transition temperature, and the system may exhibit reentrant behavior while the saturation value of the magnetization curves remains unchanged. Hence, the simultaneous presence of $h_{0}/J$ and $\sigma$ should produce a cooperative effect on the phase diagrams of the system. Fig. 5 shows the phase diagrams of the system in the $(k_{B}T_{c}/J-h_{0}/J)$ and $(k_{B}T_{c}/J-\sigma)$ planes for $q=6,8$ and $12$. As seen in the left panels of Fig. 5, the system exhibits tricritical points and reentrant phenomena for narrow widths of the random field distribution, and as the width $\sigma$ of the distribution gets larger, the reentrant phenomena and tricritical behavior disappear. In other words, both the reentrant behavior and the tricritical points disappear as the $\sigma$ parameter becomes dominant in the system. Our results predict that the tricritical points depress to zero at $\sigma=1.421,2.238$ and $3.985$ for $q=6,8$ and $12$, respectively. For distribution widths greater than these values, all transitions are of second order, and as $\sigma$ increases further the ferromagnetic region gets narrower. Similarly, in the right panels of Fig. 5, we investigate the phase diagrams of the system in the $(k_{B}T_{c}/J-\sigma)$ plane for selected values of $h_{0}/J$. We note that for $h_{0}/J\leq 2.0$ $(q=6)$, $h_{0}/J\leq 3.565$ $(q=8)$ and $h_{0}/J\leq 4.622$ $(q=12)$, the system always undergoes a second order phase transition between the paramagnetic and ferromagnetic phases at a critical temperature which decreases with increasing $h_{0}/J$, as in Fig. 1, where $h_{0}/J=0$. Moreover, for values of $h_{0}/J$ greater than these thresholds the system exhibits reentrant behavior of first order, and the transition lines exhibit a bulge which gets smaller with increasing $h_{0}/J$, which again means that the ferromagnetic phase region gets narrower.
Furthermore, for $h_{0}/J>2.9952$ $(q=6)$, $h_{0}/J>3.9441$ $(q=8)$ and $h_{0}/J>5.8085$ $(q=12)$, tricritical points appear in the system. In Fig. 6, we show magnetization curves corresponding to the phase diagrams shown in Fig. 5 for the simple cubic lattice. Fig. 6a shows the temperature dependence of the magnetization for $q=6$ with $h_{0}/J=0.5$ (left panel) and $h_{0}/J=2.5$ (right panel) for selected values of $\sigma$. As seen in Fig. 6a, as $\sigma$ increases, the critical temperature of the system decreases and the first order phase transitions disappear (see the right panel in Fig. 6a). Moreover, the rate of decrease of the saturation value of the magnetization curves increases as $h_{0}/J$ increases. On the other hand, in Fig. 6b, magnetization versus temperature curves are plotted for $\sigma=0.5$ (left panel) and $\sigma=2.5$ (right panel) with selected values of $h_{0}/J$. It is clear that the saturation values of the magnetization curves remain unchanged for $\sigma=0.5$, while for $\sigma=2.5$ they decrease rapidly to zero with increasing $h_{0}/J$. In addition, as $h_{0}/J$ increases at fixed $\sigma$, the critical temperature of the system decreases and the ferromagnetic phase region of the system gets narrower. These observations show that there is a collective effect originating from the simultaneous presence of the $h_{0}/J$ and $\sigma$ parameters on the phase diagrams and magnetization curves of the system.
Figure 6: Magnetization curves for simple cubic lattice corresponding to the
phase diagrams in Fig. 5 for a double Gaussian distribution with some selected
values of $h_{0}/J$ and $\sigma$.
Finally, Fig. 7 shows the variation of the tricritical point $(k_{B}T_{t}/J,h_{t}/J)$ with $\sigma$ for $q=6,8$ and $12$. As seen in Fig. 7, the $k_{B}T_{t}/J$ value decreases monotonically as $\sigma$ increases and reaches zero at a critical value $\sigma_{t}$. According to our calculations, the critical distribution width is $\sigma_{t}=1.421,2.238,3.985$ for $q=6,8$ and $12$, respectively. This implies that the $\sigma_{t}$ value depends on the coordination number of the lattice. Meanwhile, the $h_{t}/J$ curves increase only slightly with increasing $\sigma$.
Figure 7: Variations of tricritical temperature $k_{B}T_{t}/J$ and tricritical
field $h_{t}/J$ as function of distribution width $(\sigma)$ for (a) simple
cubic, (b) body centered cubic and (c) face centered cubic lattices.
## IV Conclusions
In this work, we have studied the phase diagrams of a spin-$1/2$ Ising model in a random magnetic field on simple cubic, body centered cubic and face centered cubic lattices. The RFIM with nearest neighbor interactions and a double Gaussian field distribution, as well as its single Gaussian counterpart, had not been examined within the EFT approximation before. Hence, we have introduced an effective field approximation that takes into account the correlations between different spins in the cluster of the considered lattice, and we have examined the phase diagrams as well as the magnetization curves of the system for different types of random field distributions, namely single Gaussian, bimodal and double Gaussian distributions. For the single Gaussian distribution, we have found that the system always undergoes a second order phase transition between the paramagnetic and ferromagnetic phases. In the absence of any randomness (i.e. $h_{0}/J=0,\sigma=0$), the critical temperature values corresponding to the coordination numbers $q=6,8$ and $12$ are found to be in excellent agreement with those of other theoretical methods such as MC and SE. For the bimodal and double Gaussian distributions, we have given the proper phase diagrams, especially the first order transition lines that include reentrant phase transition regions. Our numerical analysis clearly indicates that such field distributions lead to tricritical behavior. Moreover, we have discussed a collective effect which arises from the simultaneous presence of the $h_{0}/J$ and $\sigma$ parameters, and we have observed that the saturation values of the magnetization curves are strongly related to this collective effect.
In conclusion, the points discussed above indicate that our method is superior to the conventional EFT methods based on decoupling approximations, and we therefore believe that the calculated results are more accurate. We hope that the results obtained in this work will be beneficial from both theoretical and experimental points of view.
## Acknowledgements
One of the authors (YY) would like to thank the Scientific and Technological
Research Council of Turkey (TÜBİTAK) for partial financial support. This work
has been completed at the Dokuz Eylül University, Graduate School of Natural
and Applied Sciences and is the subject of the forthcoming Ph.D. thesis of Y.
Yüksel. The partial financial support from SRF (Scientific Research Fund) of
Dokuz Eylül University (2009.KB.FEN.077) is also acknowledged.
## Appendix
The number of distinct correlations for the spin-$1/2$ system with $q$ nearest neighbors is $2q-1$. Together with the magnetization of the central spin $\langle\langle S_{0}\rangle\rangle_{r}$ and that of its nearest neighbor spins $\langle\langle S_{1}\rangle\rangle_{r}$, there are $2q+1$ unknowns, which form a system of linear equations. The $q$ correlations which include the central spin are derived from the central spin magnetization $\langle\langle S_{0}\rangle\rangle_{r}$, and the remaining $q-1$ correlations, which include only perimeter spins, are derived from the perimeter spin magnetization expression $\langle\langle S_{1}\rangle\rangle_{r}$. Let us label these correlations and magnetizations as $x_{i}$, $i=1,2,\ldots,2q+1$, such that the first $q$ of them include only perimeter spins and the last $q+1$ of them include the central spin: $x_{1}=\langle\langle S_{1}\rangle\rangle_{r}$, $x_{2}=\langle\langle S_{1}S_{2}\rangle\rangle_{r}$, …, $x_{q}=\langle\langle S_{1}S_{2}\ldots S_{q}\rangle\rangle_{r}$, $x_{q+1}=\langle\langle S_{0}\rangle\rangle_{r}$, $x_{q+2}=\langle\langle S_{0}S_{1}\rangle\rangle_{r}$, …, $x_{2q+1}=\langle\langle S_{0}S_{1}\ldots S_{q}\rangle\rangle_{r}$. The complete set of correlation functions for $q=8,12$ can be obtained by the same procedure as given between equations (6)-(II) in section II.
The complete set of correlation functions for body centered cubic lattice
$(q=8)$:
$\displaystyle x_{1}$ $\displaystyle=$ $\displaystyle a_{1}+a_{2}x_{9},$
$\displaystyle x_{2}$ $\displaystyle=$ $\displaystyle a_{1}x_{1}+a_{2}x_{10},$
$\displaystyle x_{3}$ $\displaystyle=$ $\displaystyle a_{1}x_{2}+a_{2}x_{11},$
$\displaystyle x_{4}$ $\displaystyle=$ $\displaystyle a_{1}x_{3}+a_{2}x_{12},$
$\displaystyle x_{5}$ $\displaystyle=$ $\displaystyle a_{1}x_{4}+a_{2}x_{13},$
$\displaystyle x_{6}$ $\displaystyle=$ $\displaystyle a_{1}x_{5}+a_{2}x_{14},$
$\displaystyle x_{7}$ $\displaystyle=$ $\displaystyle a_{1}x_{6}+a_{2}x_{15},$
$\displaystyle x_{8}$ $\displaystyle=$ $\displaystyle a_{1}x_{7}+a_{2}x_{16},$
$\displaystyle x_{9}$ $\displaystyle=$ $\displaystyle
k_{0}+8k_{1}x_{1}+28k_{2}x_{2}+56k_{3}x_{3}+70k_{4}x_{4}+56k_{5}x_{5}+28k_{6}x_{6}+8k_{7}x_{7}+k_{8}x_{8},$
$\displaystyle x_{10}$ $\displaystyle=$ $\displaystyle
8k_{1}+(28k_{2}+k_{0})x_{1}+56k_{3}x_{2}+70k_{4}x_{3}+56k_{5}x_{4}+28k_{6}x_{5}+8k_{7}x_{6}+k_{8}x_{7},$
$\displaystyle x_{11}$ $\displaystyle=$
$\displaystyle(56k_{3}+8k_{1})x_{1}+(70k_{4}+28k_{2}+k_{0})x_{2}+56k_{5}x_{3}+28k_{6}x_{4}+8k_{7}x_{5}+k_{8}x_{6},$
$\displaystyle x_{12}$ $\displaystyle=$
$\displaystyle(56k_{3}+56k_{5}+8k_{1})x_{2}+(70k_{4}+28k_{2}+k_{0}+28k_{6})x_{3}+8k_{7}x_{4}+k_{8}x_{5},$
$\displaystyle x_{13}$ $\displaystyle=$
$\displaystyle(56k_{3}+56k_{5}+8k_{7}+8k_{1})x_{3}+(70k_{4}+k_{8}+28k_{2}+k_{0}+28k_{6})x_{4},$
$\displaystyle x_{14}$ $\displaystyle=$
$\displaystyle(56k_{3}+56k_{5}+8k_{7}+8k_{1})x_{4}+(70k_{4}+k_{8}+28k_{2}+k_{0}+28k_{6})x_{5},$
$\displaystyle x_{15}$ $\displaystyle=$
$\displaystyle(56k_{3}+56k_{5}+8k_{7}+8k_{1})x_{5}+(70k_{4}+k_{8}+28k_{2}+k_{0}+28k_{6})x_{6},$
$\displaystyle x_{16}$ $\displaystyle=$
$\displaystyle(56k_{3}+56k_{5}+8k_{7}+8k_{1})x_{6}+(70k_{4}+k_{8}+28k_{2}+k_{0}+28k_{6})x_{7},$
$\displaystyle x_{17}$ $\displaystyle=$
$\displaystyle(56k_{3}+56k_{5}+8k_{7}+8k_{1})x_{7}+(70k_{4}+k_{8}+28k_{2}+k_{0}+28k_{6})x_{8}.$
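As a sketch of how the $q=8$ system above can be assembled and solved, the snippet below transcribes the $2q=16$ linear equations into matrix form $Mx=b$ and solves them with numpy. The numerical values of $a_{1}$, $a_{2}$ and $k_{0},\ldots,k_{8}$ used here are placeholders: in the actual calculation they follow from the differential operator expansion at a given temperature and random field configuration.

```python
import numpy as np

def solve_bcc_correlations(a1, a2, k):
    """Assemble and solve the 16 linear equations for the q = 8 (BCC)
    correlations x_1..x_16 listed above. `k` is the sequence (k_0,...,k_8);
    x_17 is omitted, since the first 2q equations do not involve it."""
    M, b = np.eye(16), np.zeros(16)
    # Rows 0..7: x_i = a1*x_{i-1} + a2*x_{i+8}, with x_0 = 1.
    b[0] = a1
    for i in range(1, 8):
        M[i, i - 1] = -a1
    for i in range(8):
        M[i, i + 8] = -a2
    # Rows 8..15: coefficients of x_1, x_2, ... in the k-expansions
    # of x_9..x_12, transcribed term by term from the equations above.
    rows = [
        ([8*k[1], 28*k[2], 56*k[3], 70*k[4], 56*k[5], 28*k[6], 8*k[7], k[8]], k[0]),
        ([28*k[2]+k[0], 56*k[3], 70*k[4], 56*k[5], 28*k[6], 8*k[7], k[8]], 8*k[1]),
        ([56*k[3]+8*k[1], 70*k[4]+28*k[2]+k[0], 56*k[5], 28*k[6], 8*k[7], k[8]], 0.0),
        ([0.0, 56*k[3]+56*k[5]+8*k[1], 70*k[4]+28*k[2]+k[0]+28*k[6], 8*k[7], k[8]], 0.0),
    ]
    # x_13..x_16 share the coefficients P and Q acting on (x_{3+m}, x_{4+m}).
    P = 56*k[3] + 56*k[5] + 8*k[7] + 8*k[1]
    Q = 70*k[4] + k[8] + 28*k[2] + k[0] + 28*k[6]
    for m in range(4):
        rows.append([0.0]*(2 + m) + [P, Q])
        rows[-1] = (rows[-1], 0.0)
    for r, (coeffs, rhs) in enumerate(rows):
        for j, cval in enumerate(coeffs):
            M[8 + r, j] -= cval
        b[8 + r] = rhs
    return np.linalg.solve(M, b)
```

For sufficiently small placeholder coefficients the matrix is diagonally dominant and the solution reproduces each equation of the set, which serves as a check on the transcription.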
Similarly, correlation functions for face centered cubic lattice $(q=12)$:
$\displaystyle x_{1}$ $\displaystyle=$ $\displaystyle a_{1}+a_{2}x_{13},$
$\displaystyle x_{2}$ $\displaystyle=$ $\displaystyle a_{1}x_{1}+a_{2}x_{14},$
$\displaystyle x_{3}$ $\displaystyle=$ $\displaystyle a_{1}x_{2}+a_{2}x_{15},$
$\displaystyle x_{4}$ $\displaystyle=$ $\displaystyle a_{1}x_{3}+a_{2}x_{16},$
$\displaystyle x_{5}$ $\displaystyle=$ $\displaystyle a_{1}x_{4}+a_{2}x_{17},$
$\displaystyle x_{6}$ $\displaystyle=$ $\displaystyle a_{1}x_{5}+a_{2}x_{18},$
$\displaystyle x_{7}$ $\displaystyle=$ $\displaystyle a_{1}x_{6}+a_{2}x_{19},$
$\displaystyle x_{8}$ $\displaystyle=$ $\displaystyle a_{1}x_{7}+a_{2}x_{20},$
$\displaystyle x_{9}$ $\displaystyle=$ $\displaystyle a_{1}x_{8}+a_{2}x_{21},$
$\displaystyle x_{10}$ $\displaystyle=$ $\displaystyle
a_{1}x_{9}+a_{2}x_{22},$ $\displaystyle x_{11}$ $\displaystyle=$
$\displaystyle a_{1}x_{10}+a_{2}x_{23},$ $\displaystyle x_{12}$
$\displaystyle=$ $\displaystyle a_{1}x_{11}+a_{2}x_{24},$ $\displaystyle
x_{13}$ $\displaystyle=$ $\displaystyle
k_{0}+12k_{1}x_{1}+66k_{2}x_{2}+220k_{3}x_{3}+495k_{4}x_{4}+792k_{5}x_{5}+924k_{6}x_{6}$
$\displaystyle+792k_{7}x_{7}+495k_{8}x_{8}+220k_{9}x_{9}+66k_{10}x_{10}+12k_{11}x_{11}+k_{12}x_{12},$
$\displaystyle x_{14}$ $\displaystyle=$ $\displaystyle
12k_{1}+(k_{0}+66k_{2})x_{1}+220k_{3}x_{2}+495k_{4}x_{3}+792k_{5}x_{4}+924k_{6}x_{5},$
$\displaystyle+792k_{7}x_{6}+495k_{8}x_{7}+220k_{9}x_{8}+66k_{10}x_{9}+12k_{11}x_{10}+k_{12}x_{11},$
$\displaystyle x_{15}$ $\displaystyle=$
$\displaystyle(12k_{1}+220k_{3})x_{1}+(k_{0}+66k_{2}+495k_{4})x_{2}+792k_{5}x_{3}+924k_{6}x_{4},$
$\displaystyle+792k_{7}x_{5}+495k_{8}x_{6}+220k_{9}x_{7}+66k_{10}x_{8}+12k_{11}x_{9}+k_{12}x_{10},$
$\displaystyle x_{16}$ $\displaystyle=$
$\displaystyle(12k_{1}+220k_{3}+792k_{5})x_{2}+(k_{0}+66k_{2}+495k_{4}+924k_{6})x_{3},$
$\displaystyle+792k_{7}x_{4}+495k_{8}x_{5}+220k_{9}x_{6}+66k_{10}x_{7}+12k_{11}x_{8}+k_{12}x_{9},$
$\displaystyle x_{17}$ $\displaystyle=$
$\displaystyle(12k_{1}+220k_{3}+792k_{5}+792k_{7})x_{3}+(k_{0}+66k_{2}+495k_{4}+495k_{8}+924k_{6})x_{4},$
$\displaystyle+220k_{9}x_{5}+66k_{10}x_{6}+12k_{11}x_{7}+k_{12}x_{8},$
$\displaystyle x_{18}$ $\displaystyle=$
$\displaystyle(12k_{1}+220k_{3}+220k_{9}+792k_{5}+792k_{7})x_{4},$
$\displaystyle+(k_{0}+66k_{2}+495k_{4}+495k_{8}+66k_{10}+924k_{6})x_{5}+12k_{11}x_{6}+k_{12}x_{7},$
$\displaystyle x_{19}$ $\displaystyle=$
$\displaystyle(12k_{1}+220k_{3}+220k_{9}+792k_{5}+12k_{11}+792k_{7})x_{5},$
$\displaystyle+(k_{0}+66k_{2}+k_{12}+495k_{4}+495k_{8}+66k_{10}+924k_{6})x_{6},$
$\displaystyle x_{20}$ $\displaystyle=$
$\displaystyle(12k_{1}+220k_{3}+220k_{9}+792k_{5}+12k_{11}+792k_{7})x_{6},$
$\displaystyle+(k_{0}+66k_{2}+k_{12}+495k_{4}+495k_{8}+66k_{10}+924k_{6})x_{7},$
$\displaystyle x_{21}$ $\displaystyle=$
$\displaystyle(12k_{1}+220k_{3}+220k_{9}+792k_{5}+12k_{11}+792k_{7})x_{7},$
$\displaystyle+(k_{0}+66k_{2}+k_{12}+495k_{4}+495k_{8}+66k_{10}+924k_{6})x_{8},$
$\displaystyle x_{22}$ $\displaystyle=$
$\displaystyle(12k_{1}+220k_{3}+220k_{9}+792k_{5}+12k_{11}+792k_{7})x_{8},$
$\displaystyle+(k_{0}+66k_{2}+k_{12}+495k_{4}+495k_{8}+66k_{10}+924k_{6})x_{9},$
$\displaystyle x_{23}$ $\displaystyle=$
$\displaystyle(12k_{1}+220k_{3}+220k_{9}+792k_{5}+12k_{11}+792k_{7})x_{9},$
$\displaystyle+(k_{0}+66k_{2}+k_{12}+495k_{4}+495k_{8}+66k_{10}+924k_{6})x_{10},$
$\displaystyle x_{24}$ $\displaystyle=$
$\displaystyle(12k_{1}+220k_{3}+220k_{9}+792k_{5}+12k_{11}+792k_{7})x_{10},$
$\displaystyle+(k_{0}+66k_{2}+k_{12}+495k_{4}+495k_{8}+66k_{10}+924k_{6})x_{11},$
$\displaystyle x_{25}$ $\displaystyle=$
$\displaystyle(12k_{1}+220k_{3}+220k_{9}+792k_{5}+12k_{11}+792k_{7})x_{11},$
$\displaystyle+(k_{0}+66k_{2}+k_{12}+495k_{4}+495k_{8}+66k_{10}+924k_{6})x_{12}.$
where the coefficients are given by
$\displaystyle a_{1}=\cosh(J\nabla)F(x+\gamma)|_{x=0},\qquad a_{2}=\sinh(J\nabla)F(x+\gamma)|_{x=0},$
with $\gamma=(q-1)A$, and
$k_{i}=\cosh^{q-i}(J\nabla)\sinh^{i}(J\nabla)F(x)|_{x=0},\ \ i=0,\ldots,q$
(15)
where $q$ is the coordination number. Since the first $2q$ equations are independent of the correlation labeled $x_{2q+1}$, it is not necessary to include this correlation function in the set of linear equations; it is therefore adequate to take $2q$ linear equations in the calculations.
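The coefficients $k_{i}$ in equation (15) can be evaluated with the standard shift identity of the differential operator technique, $e^{aJ\nabla}F(x)|_{x=0}=F(aJ)$, by expanding the hyperbolic powers into exponentials. A minimal sketch, with the function $F$ left generic:

```python
from math import comb

def k_coeff(i, q, J, F):
    """k_i = cosh^{q-i}(J del) sinh^{i}(J del) F(x)|_{x=0}, using
    cosh(u) = (e^u + e^-u)/2, sinh(u) = (e^u - e^-u)/2 and the
    shift identity e^{aJ del} F(0) = F(aJ)."""
    a, b = q - i, i
    total = 0.0
    for j in range(a + 1):
        for l in range(b + 1):
            sign = (-1) ** (b - l)            # from the sinh expansion
            shift = (2 * (j + l) - q) * J     # net exponential shift
            total += comb(a, j) * comb(b, l) * sign * F(shift)
    return total / 2 ** q
```

Two quick consistency checks: for a constant $F$, $k_{0}=1$ and all higher $k_{i}$ vanish (each $\sinh$ factor annihilates a constant), while for an odd $F$ the even-index coefficients such as $k_{0}$ vanish by symmetry of the shifts.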
## References
* (2) E. Ising, Z. Phys. 31, 253 (1925).
* (3) L. Onsager, Phys. Rev. 65, 117 (1944) .
* (4) A. I. Larkin, Sov. Phys. JETP 31, 784 (1970).
* (5) Y. Imry and S. Ma, Phys. Rev. Lett. 35, 1399 (1975).
* (6) G. Grinstein and S. Ma, Phys. Rev. Lett. 49, 685 (1982).
* (7) J. F. Fernandez, G. Grinstein, Y. Imry, and S. Kirkpatrick, Phys. Rev. Lett. 51, 203 (1983).
* (8) J. Z. Imbrie, Phys. Rev. Lett. 53, 1747 (1984).
* (9) J. Bricmont and A. Kupiainen, Phys. Rev. Lett. 59, 1829 (1987).
* (10) G. Parisi and N. Sourlas, Phys. Rev. Lett. 43, 744 (1979).
* (11) K. Binder, Y. Imry, and E. Pytte, Phys. Rev. B 24, 6736 (1981).
* (12) E. Pytte, Y. Imry, and D. Mukamel, Phys. Rev. Lett. 46, 1173 (1981).
* (13) D. Mukamel and E. Pytte, Phys. Rev. B 25, 4779 (1982).
* (14) A. Niemi, Phys. Rev. Lett. 49, 1808 (1982).
* (15) C. Frontera and E. Vives, Phys. Rev. E 59, R1295 (1998).
* (16) D. P. Belanger, A. R. King, and V. Jaccarino, Phys. Rev. B 31, 4538 (1985).
* (17) A. R. King, V. Jaccarino, D. P. Belanger, and S. M. Rezende, Phys. Rev. B 32, 503 (1985).
* (18) J. B. Ferreira, A. R. King, V. Jaccarino, J. L. Cardy, and H. J. Guggenheim, Phys. Rev. B 28, 5192 (1983).
* (19) H. Yoshizawa, R. A. Cowley, G. Shirane, R. J. Birgenau, H. J. Guggenheim, and H. Ikeda, Phys. Rev. Lett. 48, 438 (1982).
* (20) S. Fishman and A. Aharony, J. Phys. C Solid State Phys. 12, L729 (1979).
* (21) J. L. Cardy, Phys. Rev. B 29, 505 (1984).
* (22) T. Schneider and E. Pytte, Phys. Rev. B 15, 1519 (1977).
* (23) A. Aharony, Phys. Rev. B 18, 3318 (1978).
* (24) D. C. Matthis, Phys. Rev. Lett. 55, 3009 (1985).
* (25) N. Crokidakis and F. D. Nobre, J. Phys.: Condens. Matter 20, 145211 (2008).
* (26) O. R. Salmon, N. Crokidakis, and F. D. Nobre, J. Phys.: Condens. Matter 21, 056005 (2009).
* (27) H. E. Borges and P. R. Silva, Physica A 144, 561 (1987).
* (28) Y. Q. Liang, G. Z. Wei, Q. Zhang, Z. H. Xin, and G. L. Song, J. Magn. Magn. Mater. 284, 47 (2004).
* (29) E. F. Sarmento and T. Kaneyoshi, Phys. Rev. B 39, 9555 (1989).
* (30) R. M. Sebastianes and W. Figueiredo, Phys. Rev. B 46, 969 (1992).
* (31) T. Kaneyoshi, Physica A 139, 455 (1985).
* (32) D. P. Landau, H. H. Lee, and W. Kao, J. Appl. Phys. 49, 1356 (1978).
* (33) J. Machta, M. E. J. Newman, and L. B. Chayes, Phys. Rev. E 62, 8782 (2000).
* (34) N. G. Fytas, A. Malakis, and K. Eftaxias, J. Stat. Mech. Theory Exp. 2008, 03015 (2008).
* (35) N. G. Fytas and A. Malakis, Eur. Phys. J. B 61, 111 (2008).
* (36) E. Albayrak and O. Canko, J. Magn. Magn. Mater. 270, 333 (2004).
* (37) M. Gofman, J. Adler, A. Aharony, A. B. Harris, and M. Schwartz, Phys. Rev. B 53, 6362 (1996).
* (38) H. Polat, Ü. Akıncı, and İ. Sökmen, Phys. Stat. Sol. B 240, 189 (2003).
* (39) Y. Canpolat, A. Torgürsül, and H. Polat, Phys. Scr. 76, 597 (2007).
* (40) Y. Yüksel, Ü. Akıncı, and H. Polat, Phys. Scr. 79, 045009 (2009).
* (41) Y. Yüksel and H. Polat, J. Magn. Magn. Mater. 322, 3907 (2010).
* (42) Ü. Akıncı, Y. Yüksel, and H. Polat, Physica A 390, 541 (2011).
* (43) I. A. Hadjiagapiou, Physica A 389, 3945 (2010).
* (44) A. Du, H. J. Liu, and Y. Q. Yü, Phys. Stat. Sol. B 241, 175 (2004).
* (45) T. Kaneyoshi, I. P. Fittipaldi, R. Honmura, and T. Manabe, Phys. Rev. B 24, 481 (1981).
* (46) T. Balcerzak, Physica A 317, 213 (2003).
* (47) T. Kaneyoshi, Rev. Solid State Sci. 2, 39 (1988).
* (48) R. Kikuchi, Phys. Rev. 81, 988 (1951).
* (49) J. R. de Sousa and I. G. Araújo, J. Magn. Magn. Mater. 202, 231 (1999); M. A. Neto and J. R. de Sousa, Phys. Rev. E 70, 224436 (2004).
* (50) E. E. Reinerhr and W. Figueiredo, Phys. Lett. A 244, 165 (1998).
* (51) D. P. Landau, Phys. Rev. B 16, 4164 (1977); 14, 255 (1976); A. M. Ferrenberg and D. P. Landau Phys. Rev. B 44, 5081 (1991).
* (52) M. E. Fisher, Rep. Prog. Phys. 30, 615 (1967).
* (53) H. B. Callen, Phys. Lett. 4, 161 (1963).
* (54) R. Honmura and T. Kaneyoshi, J. Phys. C Solid State Phys. 12, 3979 (1979).
* (55) T. Kaneyoshi, Acta Phys. Pol. A 83, 703 (1993).
* (56) I. Tamura and T. Kaneyoshi, Prog. Theor. Phys. 66, 1892 (1981).
|
arxiv-papers
| 2011-03-31T15:36:09 |
2024-09-04T02:49:18.049717
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "\\\"Umit Ak{\\i}nc{\\i}, Yusuf Y\\\"uksel, Hamza Polat",
"submitter": "Yusuf Yuksel",
"url": "https://arxiv.org/abs/1103.6215"
}
|
1103.6262
|
# Non-standard morphological relic patterns in the cosmic microwave background
Joe Zuntz (Astrophysics Group, Oxford University & Astrophysics Group, University College London; email: jaz@astro.ox.ac.uk), James P. Zibin (Department of Physics & Astronomy, University of British Columbia), Caroline Zunckel (Astrophysics and Cosmology Research Unit, University of KwaZulu-Natal), Jonathan Zwart (Department of Astronomy, Columbia University)
(1 April 2011)
###### Abstract
Statistically anomalous signals in the microwave background have been
extensively studied in general in multipole space, and in real space mainly
for circular and other simple patterns. In this paper we search for a range of
non-trivial patterns in the temperature data from WMAP 7-year observations. We
find a very significant detection of a number of such features and discuss
their consequences for the essential character of the cosmos.
## 1 Introduction
The most significant information contained in maps of the cosmic microwave
background (CMB) and large-scale structure has for the most part been
contained solely in their two-point statistics: the power spectra and angular
correlation functions of the respective fields. But it has long been
recognized that important information can be lost if we exclusively follow
this approach.
One area of fertile or at least extensive research has been searches for non-
gaussian statistics of the CMB. The bispectrum and trispectrum of Wilkinson
Microwave Anisotropy Probe (WMAP) and other data have been analyzed for
consistency with gaussianity, with varying results [1, 2, 3, 4, 5] and even
single point statistics beyond the variance can provide strongly convincing
evidence of non-gaussian behavior [6].
There have also been searches for real-space (morphological) patterns in the
CMB. Various models suggest that there should be ‘circles-in-the-sky’ present
on very large scales: large annular regions of correlated or enhanced
background temperature. These patterns might be generated by non-trivial
cosmic topologies [7, 8, 9, 10, 11], or by more exotic pre-big bang models of
cosmological history. There have been claims of a highly significant detection
of concentric circles in the latter context in WMAP and other data [12], but
they have been contradicted by other analyses [13, 14, 15], which have
suggested that such signals could be caused by systematic error.
There is little work in the literature concerning non-circular relic patterns
in the CMB, apart from the pioneering search for triangular patterns in [13].
Elliptical patterns have been considered, but it has been noted that ellipses
are really just stretchy circles [16].
In this article we search for other morphologies in primary CMB data by
correlating patterns with the WMAP 7-year W-band data [17]. Section 1 contains
the standard description of things you already know. We copied and pasted some
text from a previous paper into section 2. In section 3 we discuss the
theoretical underpinning of the relic patterns for which we search. In section
4 we describe results of searches for such patterns.
Throughout this article we use the convention that happiness and morality are
positive and sadness and evil negative. We assume the standard mind-the-gap
tube announcement.
## 2 Methodology
### 2.1 Posterior statistics & morphology
It has been stated that selection of one’s CMB statistic _a posteriori_ is on
some level bad science [18]. We must object to this on two counts. On a
practical level cosmic variance limits us; if we were to follow this principle
exactly we would be permitted no more hypotheses about the CMB after the first
WMAP data release. Second, there is clearly some level of significance of a
statistic so-selected that overcomes any objection. For example, if the
primary cosmic microwave background anisotropies had the words _We apologise
for the inconvenience_ written in 300$\mu$K hot letters at the Galactic north
pole (see Figure 1) then we would not be persuaded that this has no
significance, even if no one suggested it in advance (though in fact they did:
[19]).
Figure 1: An example of a CMB map for which the application of posterior
statistics would not be wholly unreasonable.
### 2.2 Methods
It has often been noted by theorists that the conceptual simplicity and beauty
of a theory is as important as such quotidian concerns as accurate fits to
data. We adopt this philosophy now and apply it to the data analysis method we
use in this paper. Our analysis will be the computational equivalent of a
theoretical toy model: simple, but we do not want to imply we are not clever
enough to do a better one, just too busy.
We generate template images of the patterns discussed in section 3, and
convert them to Healpix [20] maps at resolution $N_{\mathrm{side}}=512$,
initially centered at the north pole. We compare these template maps to co-
added WMAP 7-year maps of the W-band microwave sky as follows: For each pixel
on a lower resolution $N_{\mathrm{side}}=64$ map, we rotate the positions of
those pixels ‘hit’ in the template image so that their north points to that
pixel. We then take the covariance of these pixels to the WMAP pixels at the
newly rotated positions as the new low-resolution pixel value. We exclude
pixels in the WMAP temperature analysis mask. We normalize each pixel by the
mean temperature of pixels in a disc of fixed radius around it, so that simple
CMB hotspots are not detected.
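A toy version of this statistic is easy to write down; the sketch below works on a flat two-dimensional patch rather than a Healpix sphere, and all function and variable names are illustrative rather than taken from our pipeline:

```python
import numpy as np

def pattern_statistic(sky, template, mean_radius=8):
    """Template-weighted mean of the sky residual: a covariance-like score
    for how strongly the pattern appears at the patch centre.

    sky, template : 2-D arrays of equal shape; template is nonzero only on
    the 'hit' pixels of the pattern.
    """
    hits = template != 0
    # local background: mean temperature in a disc around the centre, so
    # that a simple uniform hotspot scores roughly zero
    cy, cx = sky.shape[0] // 2, sky.shape[1] // 2
    yy, xx = np.ogrid[:sky.shape[0], :sky.shape[1]]
    disc = (yy - cy) ** 2 + (xx - cx) ** 2 <= mean_radius ** 2
    residual = sky - sky[disc].mean()
    return float(np.mean(template[hits] * residual[hits]))
```

A sky containing the template plus noise then scores higher than noise alone, while a uniform offset scores near zero because of the local-mean subtraction.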
## 3 Search patterns & theoretical basis
There is some previous literature on whether the information content of the
CMB could contain messages of universal, observer-independent significance
[21, 22]. We can characterize this approach as an extended application of the
Copernican principle, which states that our perspective on the Universe should
be in some sense typical. As an alternative, we can adopt the approach taken
in the study of so-called _void_ cosmologies by otherwise mostly sensible
people [23, 24, 25]. In that approach we drop this assumption and posit that
the Milky Way and Earth are in some central cosmological location.
We can take a similar approach here; the claim that any information content in
the CMB must be universal might be termed the Cultural Copernican Principle.
As an alternative we can look for morphological patterns that do have
parochial cultural relevance. Analyses of the string-theory landscape
typically show that anthropically controlled parameters should take the most
extreme values they can while still resulting in the formation of observers
[26].
Applying a similar argument (or at least one that is approximately as
meaningful), we may suppose that any parochial information in the CMB is
nonetheless as universal as possible. Since the only cultural sample we have
is our own, we must therefore look for relatively popular local symbols that
frequently appear in ways that seem to suggest the communication of
information.
### 3.1 Patterns
Table 1 shows the morphological patterns for which we search in this paper.
There is usually an element of subjectivity in the choice of such patterns; we
have made strenuous efforts to avoid this by consciously being very very fair.
The most universal information encoding devised so far is the unicode
character set [27], which attempts to include all human communication symbols.
The majority of the symbols we use are thus drawn from this set. We have also
attempted to include symbols with meanings that are as far as possible
opposites, so that we may draw more general statistical conclusions about the
nature of the CMB. We must consider both increments and decrements in
temperature; but note that negative pattern A is not the same as positive
pattern B, for example.
For each pattern, we should as faithful Bayesians provide a model prior which
describes the degree of credence we assign to the presence of the pattern,
_before we look at the data_ , or if this is not possible, temporarily
forgetting the data. Fortunately this is rather simple in this case: each
pattern can either be present or not present, so the probability is 0.5.
See Table 1.
## 4 Results
### 4.1 Relative Significance
The levels at which we detect patterns as a function of their radii are shown
in Figures 2, 3 and 4 for patterns (A,B), (C,D) and E respectively. We can
immediately pick out several key features.
First, the pattern B curve in Figure 2 has more power than pattern A; this is
particularly prominent for the negative regime. This indicates that the CMB is
somewhat sad.
Similarly, pattern C is more prominent than pattern D in Figure 3, indicating
that the CMB signal is a holy one; this is further evidence against the axis-
of-evil phenomenon reported at large scales [28].
The strongest peak in all the patterns is in the negative regime in Figure 4,
for pattern E, at a radius of $10^{\circ}$, corresponding to an angular multipole
$\ell\approx 18$. This indicates that the CMB is strongly disapproving at
those scales. We might speculate about the cause of this disapproval; it seems
to be associated with the famed ‘cold spot’. The location of the strong
disapproval cold spot is shown in Figure 5.
Figure 2: Detection levels of patterns A (happy face; green) and B (sad face; blue) with radius. The maximum signal at each radius is the solid line and the minimum is the dashed line.
Figure 3: Detection levels of patterns C (face of Jesus; yellow) and D (pentagram; red) with radius.
Figure 4: Detection levels of pattern E (look-of-disapproval) with radius.
Figure 5: A detection of pattern E, the look of disapproval.
### 4.2 Absolute significance
We can, as above, easily compare two different signals in the CMB to generate
a relative significance of two detected patterns. But it is perhaps more
instructive to compute an absolute significance level so that we can state a
familiar ‘sigma-number’ that everyone thinks they understand.
We therefore run our pattern search code on simulated data for comparison.
There are of course various levels of sophistication we might use for these
simulations. The most complete simulation would include foregrounds, WMAP scan
strategies, and correlated noise. We might use the method recommended in [29],
where cosmological CMB signal in the maps is excluded and only noise is
considered. It is clear, however, that this runs the risk of over-estimating
the probability of finding a pattern by chance; the more conservative choice
is clearly to analyze a map with neither signal nor noise using the same
pipeline.
We find that the likelihood of finding any such patterns in these simulations
is zero; the absolute detection significance of our discoveries is therefore
approximately $\infty\sigma$.
## 5 Conclusions
We have shown that common methods of CMB circle-searching can be
straightforwardly extended to non-circular patterns, and have applied such a
method. In doing so, we have been able to characterize previously unmeasured
statistics about the microwave background: its mood, characterized by the
difference in detection significance between patterns A and B, and its moral
fiber, characterized by the difference between patterns C and D. We find that
the CMB is sad, good and disapproving, which is perhaps a bittersweet
conclusion.
There are of course several other similar statistical CMB measurements we
could make: its political persuasion, for example, or its gender or
sexuality (the authors have always conceived of the CMB as an elderly lesbian
Tea-Partier; this perhaps says more about them than it); we leave these to
future work. We expect our findings to inspire theoretical studies to
elucidate the underpinnings of these unanticipated aspects of the CMB. Indeed,
we suggest that such efforts begin immediately, since it is known that the CMB
anisotropies are time-dependent [30], and so follow-up observational studies
of the CMB may indicate that the effects we describe are only transient. If
subsequent reanalyses fail to find the same results as those found here then
that is perhaps the reason.
We have highlighted in this paper the deficiency of two-point statistics alone
in studying the CMB or other fields. It seems clear that this habit, which has
arisen because we have only two eyes and so can see only two data points at
once, needs to be augmented with new methods.
_Acknowledgments_ Many people contributed ideas and help for this manuscript;
for some reason they wished to remain anonymous. The exception was Olaf Davis,
who we thank for useful conversations.
## References
* [1] de Bernardis et al. (2010) PRD 82 083511
* [2] Mead, J, Lewis, A., & King, L. (2011) PRD 83 023507
* [3] Kamionkowski,M., Smith, T.L., Heavens, A. (2011) PRD 83 023007
* [4] Munshi, D., et al. (2011) MNRAS 410 1295
* [5] Elsner, F. & Wandelt, B. (2010) ApJ 724 1262
* [6] Jeong, E. & Smoot, G.F. (2007) arXiv:0710.2371
* [7] Bielewicz, P. & Riazuelo, A. (2009) MNRAS 396 609
* [8] Mota, B., Rebouças, M.J. & Tavakol, R. (2008) PRD 78 083521
* [9] Zunckel, C., Gott, J.R. & Lunnan, R. (2011) MNRAS 412 1401
* [10] Gott, J.R., et al (2007) MNRAS 377 1668
* [11] Niarchou, A. & Jaffe, A. (2007) PRL 99 081302
* [12] Gurzadyan, V.G. & Penrose, R. (2010) arXiv:1011.3706v1
* [13] Moss, A., Scott, D. & Zibin, J.P. (2010) arXiv:1012.1305v2
* [14] Wehus, I.K., & Eriksen H.K. (2010) arXiv:1012.1268v1
* [15] Hajian, A. (2010) arXiv:1012.1656v1
* [16] _Citation Needed_
* [17] Jarosik, N., et al, (2011), ApJS, 192, 14
* [18] Bennett, C., et al, (2011), ApJS, 192, 17
* [19] Pedbost, M.F. et al. arXiv:0903.5377v1
* [20] Górski, K.M., et al. (2005) ApJ 622 759
* [21] Scott, D. & Zibin, J. arXiv:astro-ph/0511135
* [22] Hsu, S. & Zee, A. (2006) MPLA 21 1495
* [23] Zibin, J. P., Moss, A. & Scott, D. (2008) PRL 101 251303
* [24] Clifton, T., Ferreira, P.G. & Zuntz, J. (2009) JCAP 7 29
* [25] Zuntz, J., Zlosnik, T.G., Zunckel, C., Zwart, J.T.L. (2010) arXiv:1003.6064v1
* [26] Ellis, G.R., Smolin, L. (2009) arXiv:0901.2414v1
* [27] The Unicode Consortium (2011) ISBN 978-1-936213-01-6
* [28] Land, K. & Magueijo, J. (2007) MNRAS 378 153
* [29] Gurzadyan, V.G. & Penrose, R. (2010) arXiv:1012.1486v1
* [30] Zibin, J. P., Moss, A. & Scott, D. (2007) PRD 76 123010
# A Comparative Study of Relaying Schemes with Decode-and-Forward over
Nakagami-$m$ Fading Channels
George C. Alexandropoulos, Agisilaos Papadogiannis and Paschalis C. Sofotasios
G. C. Alexandropoulos is with the Department of Telecommunications Science and
Technology, University of Peloponnese, End of Karaiskaki Street, GR-22100
Tripolis, Greece (e-mail: alexandg@ieee.org).A. Papadogiannis is with the
Communications Research Group, Department of Electronics, University of York,
York, YO10 5DD, United Kingdom (e-mail: ap851@ohm.york.ac.uk).P. C. Sofotasios
is with School of Electronic and Electrical Engineering, University of Leeds,
Leeds, LS2 9JT, United Kingdom (e-mail: p.sofotasios@leeds.ac.uk).
###### Abstract
Utilizing relaying techniques to improve performance of wireless systems is a
promising avenue. However, it is crucial to understand what type of relaying
schemes should be used for achieving different performance objectives under
realistic fading conditions. In this paper, we present a general framework for
modelling and evaluating the performance of relaying schemes based on the
decode-and-forward (DF) protocol over independent and not necessarily
identically distributed (INID) Nakagami-$m$ fading channels. In particular, we
present closed-form expressions for the statistics of the instantaneous output
signal-to-noise ratio of four significant relaying schemes with DF; two based
on repetitive transmission and the other two based on relay selection (RS).
These expressions are then used to obtain closed-form expressions for the
outage probability and the average symbol error probability for several
modulations of all considered relaying schemes over INID Nakagami-$m$ fading.
Importantly, it is shown that when the channel state information for RS is
perfect, RS-based transmission schemes always outperform repetitive ones.
Furthermore, when the direct link between the source and the destination nodes
is sufficiently strong, relaying may not result in any gains and in this case
it should be switched-off.
###### Index Terms:
Bit error probability, cooperative diversity, Nakagami-$m$ fading, outage
probability, relay selection, repetitive transmission.
## I Introduction
The significance of multiple-input multiple-output (MIMO) techniques for
modern wireless systems has been well appreciated. Multiple collocated
antennas can improve transmission reliability and the achievable capacity
through diversity, spatial multiplexing and/or interference suppression [1,
2]. However, the cost of mobile devices is proportional to their number of
antennas and this creates a serious practical limitation for the use of MIMO.
Cooperative diversity is a promising new avenue which allows cooperation
amongst a number of wireless nodes which effectively profit from MIMO
techniques without requiring multiple collocated antennas [3, 4].
Dual-hop cooperative diversity entails that the transmission of the source
node towards a destination node is assisted by one or more relay nodes which
can be seen to form a conceptual MIMO array [3]. Relay nodes can either be
fixed, being part of the system infrastructure, or mobile, i.e., mobile nodes
that relay signals intended for other mobile nodes [5]. Cooperative diversity
is shown to improve the transmission reliability and the achievable capacity
while it also extends coverage. Essentially, it can achieve diversity gains
and has the additional advantage over conventional MIMO that the remote
cooperating antennas experience very low or nonexistent correlation [3, 6, 7,
8]. The performance of a system exploiting cooperative diversity depends on
the employed relaying protocol and scheme, i.e., the way of utilizing relay
nodes [3, 6, 7, 8]. Consequently, it is crucial to gain insights on which
relay scheme is most suitable for achieving a particular objective; some
common objectives are the minimization of the outage probability (OP) or the
average symbol error probability (ASEP) [9, 10].
In the present work, we consider the decode-and-forward (DF) relaying protocol
under the assumption that the message transmitted by the source is decoded and
retransmitted to destination by one or more relays in a dual-hop fashion. We
also take into account four relaying schemes; two based on repetitive
transmission and the other two based on relay selection (RS). According to
repetitive transmission, all relays that decode the source’s message retransmit it
repetitively to the destination node which employs diversity techniques to
combine the different signal copies [11, 12]. One version of RS entails that
amongst relays decoding the source’s message only one is selected to
retransmit it to destination, the one with the strongest relay to destination
channel [6, 13]. Another version of RS utilizes the best relay only in the
case that it results in capacity gains over the direct source to destination
transmission [7, 10]. In the literature such schemes have been considered
partially and mainly assuming independent and identically distributed (IID)
Rayleigh fading channels [11, 6, 13]. Recently, the Nakagami-$m$ fading model
has received a lot of attention as it can describe more accurately the fading
process and helps in understanding the gains of cooperative diversity [14, 9,
15, 16, 10, 17, 18]. However, there has not been a complete study that sheds
light on the question of which relaying scheme is preferable and under which
channel conditions.
In this paper, we present a general analytical framework for modeling and
evaluating performance of relaying schemes with DF under independent and not
necessarily identically distributed (INID) Nakagami-$m$ fading channels.
Further to this, we obtain closed-form expressions for the OP and the ASEP
performance of the RS and repetitive schemes when maximal ratio diversity
(MRD) or selection diversity (SD) are employed at the destination node. We
conclude that the RS-based transmission always performs better in terms of OP
and ASEP than repetitive transmission when channel state information (CSI) for
RS is perfect. In addition, when the direct source to destination link is
sufficiently strong, relaying should be disabled when the objective is the
minimization of OP. Although RS requires only two time slots for transmission
(the repetitive scheme needs as many time slots as the number of decoding
relays) its performance heavily relies on the quality of CSI for RS.
The remainder of this paper is structured as follows: Section II outlines the
system and channel models. Section III presents closed-form expressions for
the statistics of the instantaneous output SNR of the considered DF relaying
schemes over INID Nakagami-$m$ fading channels. In Section IV, closed-form
expressions are derived for the OP and ASEP performance of all relaying
schemes. Section V contains numerical results and relevant discussion, whereas
Section VI concludes the paper.
Notations: Throughout this paper, $\left|\mathbb{A}\right|$ represents the
cardinality of the set $\mathbb{A}$ and ${\mathbb{E}}\langle\cdot\rangle$
denotes the expectation operator. $\Pr\left\\{\cdot\right\\}$ denotes
probability, $\mathbb{L}^{-1}\\{\cdot;\cdot\\}$ denotes the inverse Laplace
transform and $X\sim\mathcal{C}\mathcal{N}\left(\mu,\sigma^{2}\right)$
represents a random variable (RV) following the complex normal distribution
with mean $\mu$ and variance $\sigma^{2}$. $\Gamma\left(\cdot\right)$ is the
Gamma function [19, eq. (8.310/1)] and
$\mathsf{\Gamma}\left(\cdot,\cdot\right)$ is the lower incomplete Gamma
function [19, eq. (8.350/1)]. Moreover, $\delta\left(\cdot\right)$ is the
Dirac function, $u\left(\cdot\right)$ is the unit step function and
$\delta\left(\cdot,\cdot\right)$ is the Kronecker Delta function.
## II System and Channel Model
We consider a dual-hop cooperative wireless system, as illustrated in Fig. 1,
consisting of $L+2$ wireless nodes: one source node $S$, a set $\mathbb{A}$ of
$L$ relay nodes each denoted by $R_{k}$, $k=1,2,\ldots,L$, and one destination
node $D$. All $R_{k}$’s are assumed to operate in half-duplex mode, i.e., they
cannot transmit and receive simultaneously, and node $D$ is assumed to possess
perfectly $S\rightarrow D$ and all $R_{k}\rightarrow D$ CSI. We consider
orthogonal DF (ODF) relaying [3] for which each $R_{k}$ that successfully
decodes $S$’s signal retransmits it to $D$; during each $R_{k}$’s transmission
to $D$ node $S$ remains silent. Assuming repetitive transmission [3], $L+1$
time slots are used to forward $S$’s signal to $D$ in a predetermined order,
whereas only two time slots are needed with RS-based transmission [6]. In
particular, during the first time slot for both transmission strategies, $S$
broadcasts its signal to all $R_{k}$’s and also to $D$. Considering quasi-
static fading channels, the received signal at $R_{k}$’s and $D$,
respectively, during the first time slot can be mathematically expressed as
$y_{D}^{(1)}=h_{SD}s+n_{SD}$ (1a) $y_{R_{k}}=h_{SR_{k}}s+n_{SR_{k}}$ (1b)
where $h_{SD}$ and $h_{SR_{k}}$ denote the $S\rightarrow D$ and $S\rightarrow
R_{k}$, respectively, complex-valued channel coefficients and $s$ is the
transmitted complex message symbol with average symbol energy $E_{s}$.
Moreover, the notations $n_{SD}$ and $n_{SR_{k}}$ in (1) represent the
additive white Gaussian noise (AWGN) of the $S\rightarrow D$ and $S\rightarrow
R_{k}$ channel, respectively, with
$n_{SD},n_{SR_{k}}\sim\mathcal{C}\mathcal{N}\left(0,N_{0}\right)$. For both
$n_{SD}$ and $n_{SR_{k}}$, it is assumed that they are statistically
independent of $s$.
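The fading and noise model in (1) is straightforward to simulate. The sketch below (our own illustration, with hypothetical function names) draws Nakagami-$m$ envelopes by exploiting the fact that $\left|h\right|^{2}$ is gamma distributed with shape $m$ and mean $\Omega=\mathbb{E}\langle\left|h\right|^{2}\rangle$:

```python
import numpy as np

rng = np.random.default_rng(1)

def nakagami_channel(m, omega, size):
    """Complex channel coefficients whose envelope |h| is Nakagami-m:
    |h|^2 ~ Gamma(shape=m, scale=omega/m), with uniform random phase."""
    power = rng.gamma(shape=m, scale=omega / m, size=size)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=size)
    return np.sqrt(power) * np.exp(1j * phase)

def received_direct(s, m_sd, omega_sd, n0, n_trials):
    """y_D^(1) = h_SD * s + n_SD of eq. (1a), with n_SD ~ CN(0, N0)."""
    h = nakagami_channel(m_sd, omega_sd, n_trials)
    n = np.sqrt(n0 / 2.0) * (rng.standard_normal(n_trials)
                             + 1j * rng.standard_normal(n_trials))
    return h * s + n
```

Since the fading and noise are independent and zero-mean, the average received power is $\mathbb{E}\langle\left|h_{SD}\right|^{2}\rangle\left|s\right|^{2}+N_{0}$, which provides a quick check of the simulator.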
Let us assume that set $\mathbb{B}\subseteq\mathbb{A}$ contains the relay
nodes that have successfully decoded $S$’s signal during the first time slot
of transmission. When repetitive transmission is utilized, $L$ more time slots
are used for $R_{k}$’s belonging to $\mathbb{B}$ to forward $s$ to $D$; each
$R_{k}$ retransmits $s$ during the $k$-th time slot (note that the assignment
of each time slot to the $R_{k}$’s is performed in a predetermined order [3];
thus, due to different $R_{k}\rightarrow D$ $\forall\,k$ channel conditions
and ODF relaying, there might be some unused time slots). Hence, for quasi-
static fading, the received signal at $D$ at the $k$-th time slot can be
expressed as
$y_{D}^{(k)}=h_{R_{k}D}s+n_{R_{k}D}$ (2)
with $h_{R_{k}D}$ representing the $R_{k}\rightarrow D$ complex-valued channel
coefficient and $n_{R_{k}D}\sim\mathcal{C}\mathcal{N}\left(0,N_{0}\right)$ is
AWGN of this channel that is assumed statistically independent of $s$.
When RS-based transmission is used, one time slot is needed for the relay node
$R_{\rm best}$ with the most favourable $R_{\rm best}\rightarrow D$ channel
conditions to forward $S$’s signal to $D$. Thus, for this transmission
strategy and during the second time slot, the received signal at $D$ for
quasi-static fading can be expressed as
$y_{D}^{(2)}=h_{R_{\rm best}D}s+n_{R_{\rm best}D}$ (3)
where $h_{R_{\rm best}D}$ and $n_{R_{\rm
best}D}\sim\mathcal{C}\mathcal{N}\left(0,N_{0}\right)$ denote the
$R_{\rm best}\rightarrow D$ complex-valued channel coefficient and the AWGN for this
channel, respectively. As in (1) and (2), it is assumed that $n_{R_{\rm best}D}$
is statistically independent of $s$.
The quasi-static fading channels $h_{SD}$, $h_{SR_{k}}$ and $h_{R_{k}D}$
$\forall\,k$, and $h_{R_{\rm best}D}$ are assumed to be modeled as INID
Nakagami-$m$ RVs [20]. Let
$\gamma_{0}=\left|h_{SD}\right|^{2}E_{\text{s}}/N_{0}$ and
$\gamma_{k}=\left|h_{R_{k}D}\right|^{2}E_{\text{s}}/N_{0}$, $k=1,2,\ldots,L$,
be the instantaneous received SNRs of the $S\rightarrow D$ and
$R_{k}\rightarrow D$ link, respectively, with corresponding average values
given by $\overline{\gamma}_{0}=\mathbb{E}\langle\left|h_{SD}\right|^{2}\rangle E_{s}/N_{0}$ and
$\overline{\gamma}_{k}=\mathbb{E}\langle\left|h_{R_{k}D}\right|^{2}\rangle
E_{s}/N_{0}$, respectively. Clearly, each $\gamma_{\ell}$,
$\ell=0,1,\ldots,L$, is gamma distributed with probability density function
(PDF) given by [21, Table 2.2]
$f_{\gamma_{\ell}}\left(x\right)=\frac{C_{\ell}^{m_{\ell}}}{\Gamma\left(m_{\ell}\right)}x^{m_{\ell}-1}\exp\left(-C_{\ell}x\right)$
(4)
where $m_{\ell}\geq 1/2$ denotes the Nakagami-$m$ fading parameter and
$C_{\ell}=m_{\ell}/\overline{\gamma}_{\ell}$. Integrating (4), the cumulative
distribution function (CDF) of each $\gamma_{\ell}$ is easily obtained as
$F_{\gamma_{\ell}}\left(x\right)=\frac{\mathsf{\Gamma}\left(m_{\ell},C_{\ell}x\right)}{\Gamma\left(m_{\ell}\right)}.$
(5)
The PDFs and CDFs of the instantaneous received SNRs of the first hop,
$\gamma_{L+k}=\left|h_{SR_{k}}\right|^{2}E_{\text{s}}/N_{0}$ $\forall\,k$, are
given using (4) and (5) by $f_{\gamma_{L+k}}\left(x\right)$ and
$F_{\gamma_{L+k}}\left(x\right)$, respectively, with fading parameters and
average SNRs denoted by $m_{L+k}$ and
$\overline{\gamma}_{L+k}=\mathbb{E}\langle\left|h_{SR_{k}}\right|^{2}\rangle
E_{s}/N_{0}$, respectively.
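Equations (4) and (5) simply state that each $\gamma_{\ell}$ is gamma distributed with shape $m_{\ell}$ and scale $1/C_{\ell}$; a short sketch (one possible numerical check, using SciPy) confirms the correspondence:

```python
import numpy as np
from scipy import special, stats

def snr_pdf(x, m, gbar):
    """PDF of eq. (4): f(x) = C^m / Gamma(m) * x^(m-1) * exp(-C x), C = m/gbar."""
    c = m / gbar
    return c ** m / special.gamma(m) * x ** (m - 1) * np.exp(-c * x)

def snr_cdf(x, m, gbar):
    """CDF of eq. (5): regularized lower incomplete gamma P(m, C x)."""
    return special.gammainc(m, (m / gbar) * x)
```

Both functions coincide with SciPy's gamma distribution parameterized with shape $m$ and scale $\overline{\gamma}/m$, which is a convenient way to sample the per-path SNRs in simulations.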
## III Statistics of ODF Relaying Schemes
Relay station nodes that are able to decode the transmitted signal from $S$
constitute the decoding set $\mathbb{B}$. Based on [3] and [13], for both
transmission strategies the elements of $\mathbb{B}$ are obtained as
$\mathbb{B}=\left\\{R_{k}\in\mathbb{A}:\log_{2}\left(1+\gamma_{L+k}\right)\geq\alpha\mathcal{R}\right\\}$
(6)
where $\mathcal{R}$ is $S$’s transmit rate and $\alpha=L+1$ for repetitive
transmission, whereas $\alpha=2$ for RS-based transmission. Hence, the
probability that $R_{k}$ does not belong to $\mathbb{B}$ is easily obtained as
$\mathcal{P}_{k}=\text{Pr}\left[\gamma_{L+k}<2^{\alpha\mathcal{R}}-1\right]$.
Substituting $F_{\gamma_{L+k}}\left(x\right)$ after using (5), a closed-form
expression for $\mathcal{P}_{k}$ in INID Nakagami-$m$ fading is given by
$\mathcal{P}_{k}=\frac{\mathsf{\Gamma}\left[m_{L+k},C_{L+k}\left(2^{\alpha\mathcal{R}}-1\right)\right]}{\Gamma\left(m_{L+k}\right)}.$
(7)
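In code, (7) is a single call to the regularized lower incomplete gamma function; the sketch below (an illustration under the paper's notation, with hypothetical parameter names) makes the dependence on $\alpha$ explicit:

```python
import numpy as np
from scipy import special

def p_decode_failure(m, gbar, rate, alpha):
    """Eq. (7): Pr[gamma_{L+k} < 2^(alpha*R) - 1] for a Nakagami-m first hop.

    alpha = L+1 for repetitive transmission, alpha = 2 for relay selection;
    m and gbar are the first-hop fading parameter and average SNR."""
    threshold = 2.0 ** (alpha * rate) - 1.0
    return special.gammainc(m, (m / gbar) * threshold)
```

For the same rate $\mathcal{R}$, repetitive transmission ($\alpha=L+1$) imposes a higher per-relay SNR threshold than RS ($\alpha=2$) whenever $L>1$, so its decoding-failure probability is larger.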
To analyze the performance of ODF relaying schemes, the $S\longrightarrow D$
direct channel plus the $S\longrightarrow R_{k}\longrightarrow D$ $\forall\,k$
relay-assisted channels are effectively considered as $L+1$ paths between $S$ and
$D$ [11, 15]. In particular, let the zero-th path represent the
$S\longrightarrow D$ direct link and the $k$-th path the $S\longrightarrow
R_{k}\longrightarrow D$ cascaded link. We define the instantaneous received
SNRs at $D$ of these paths as $g_{0}$ and $g_{k}$, respectively. By
substituting (4) into [11, eq. (4)], the PDFs of $g_{\ell}$’s,
$\ell=0,1,\ldots,L$, are given by
$f_{g_{\ell}}\left(x\right)=\mathcal{P}_{\ell}\delta\left(x\right)+\frac{\left(1-\mathcal{P}_{\ell}\right)C_{\ell}^{m_{\ell}}}{\Gamma\left(m_{\ell}\right)}x^{m_{\ell}-1}\exp\left(-C_{\ell}x\right)$
(8)
where $\mathcal{P}_{0}$ denotes the corresponding probability for the direct path.
Clearly, the direct $S\longrightarrow D$ path is not linked via a relay, i.e.,
$\mathcal{P}_{0}=0$, yielding
$f_{g_{0}}\left(x\right)=f_{\gamma_{0}}\left(x\right)$. Integrating (8) and
using (5), yields the following expression for the CDF of the $\ell$-th
cascaded path:
$F_{g_{\ell}}\left(x\right)=\mathcal{P}_{\ell}u\left(x\right)+\left(1-\mathcal{P}_{\ell}\right)\frac{\mathsf{\Gamma}\left(m_{\ell},C_{\ell}x\right)}{\Gamma\left(m_{\ell}\right)}.$
(9)
Note again that the direct $S\longrightarrow D$ path yields
$F_{g_{0}}\left(x\right)=F_{\gamma_{0}}\left(x\right)$.
### III-A Repetitive Transmission
The incoming signals at $D$ from $S$ and $R_{k}\in\mathbb{B}$ $\forall\,k$ may
be combined using a time-diversity version of MRD [3] and SD [12]. In
particular, $D$ combines $S$’s signal received at time slot one with the $S$’s
replicas received from all $R_{k}\in\mathbb{B}$ $\forall\,k$ at the $L$
subsequent slots using either the rules of MRD or SD.
#### III-A1 Repetitive with MRD
With MRD the instantaneous SNR at $D$’s output is expressed as
$g_{\text{end}}=g_{0}+\sum_{i=1}^{\left|\mathbb{B}\right|}g_{i}.$ (10)
Since $g_{\ell}$’s, $\ell=0,1,\ldots,L$, are independent, the moment
generating function (MGF) of $g_{\text{end}}$ can be easily obtained as the
product of the MGFs of $g_{\ell}$’s. As shown in [15] for INID Nakagami-$m$
fading, using (8) and the definition of the MGF of $g_{k}$, $k=1,2,\ldots,L$,
$\mathcal{M}_{g_{k}}\left(s\right)=\int_{0}^{\infty}\exp\left(-sx\right)f_{g_{k}}\left(x\right)dx,$
(11)
yields
$\mathcal{M}_{g_{k}}\left(s\right)=\mathcal{P}_{k}+\left(1-\mathcal{P}_{k}\right)C_{k}^{m_{k}}\left(s+C_{k}\right)^{-m_{k}}$.
Similarly, using (4) the MGF of $g_{0}$ is easily obtained as
$\mathcal{M}_{g_{0}}\left(s\right)=C_{0}^{m_{0}}\left(s+C_{0}\right)^{-m_{0}}$.
Hence, the following closed-form expression for the MGF of $g_{\text{end}}$ in
INID Nakagami-$m$ fading is deduced:
$\mathcal{M}_{g_{\text{end}}}\left(s\right)=C_{0}^{m_{0}}\left(s+C_{0}\right)^{-m_{0}}\prod_{k=1}^{L}\left[\mathcal{P}_{k}+\left(1-\mathcal{P}_{k}\right)C_{k}^{m_{k}}\left(s+C_{k}\right)^{-m_{k}}\right].$
(12)
Using the MGF-based approach [21], the CDF of $g_{\text{end}}$ can be obtained
as
$F_{g_{\text{end}}}\left(x\right)=\mathbb{L}^{-1}\left\\{\frac{\mathcal{M}_{g_{\text{end}}}\left(s\right)}{s};x\right\\}.$
(13)
Substituting (12) in (13) and similar to the analysis presented in [15], a
closed-form expression for $F_{g_{\text{end}}}\left(x\right)$ of repetitive
transmission with MRD over INID Nakagami-$m$ with integer $m_{\ell}$’s and
distinct $C_{\ell}$’s is given by
$\begin{split}F_{g_{\text{end}}}\left(x\right)=&\left(\prod_{\ell=0}^{L}\mathcal{P}_{\ell}\right)\left\\{1+\sum_{\left\\{\lambda_{k}\right\\}_{k=0}^{L}}\left(\prod_{n=0}^{k}\frac{1-\mathcal{P}_{\lambda_{n}}}{\mathcal{P}_{\lambda_{n}}}C_{\lambda_{n}}^{m_{\lambda_{n}}}\right)\right.\\\
&\left.\times\sum_{p=0}^{k}\sum_{q=1}^{m_{\lambda_{p}}}\frac{\psi_{p}}{C_{\lambda_{p}}^{q}}\left[1-\exp\left(-C_{\lambda_{p}}x\right)\sum_{\ell=0}^{q-1}\frac{\left(C_{\lambda_{p}}x\right)^{\ell}}{\ell!}\right]\right\\}\end{split}$
(14)
The symbol $\sum_{\left\\{\alpha_{i}\right\\}_{i=\kappa}^{I}}$ is used for
short-hand representation of multiple summations $\sum_{i=\kappa}^{I}$
$\sum_{\alpha_{\kappa}=\kappa}^{I-i+\kappa}$
$\sum_{\alpha_{\kappa+1}=\alpha_{\kappa}+1}^{I-i+\kappa+1}$ $\cdots$
$\sum_{\alpha_{i}=\alpha_{i-1}+1}^{I}$ and
$\psi_{p}=\Psi_{p}\left(s\right)^{(m_{\lambda_{p}}-q)}|_{s=-C_{\lambda_{p}}}/(m_{\lambda_{p}}-q)!$
with
$\Psi_{p}\left(s\right)=(s+C_{\lambda_{p}})^{m_{\lambda_{p}}}\prod_{n=0}^{k}(s+C_{\lambda_{n}})^{-m_{\lambda_{n}}}$.
For $C_{\ell}=C$ $\forall\,\ell$ with arbitrary values for $m_{\ell}$’s and
following a similar analysis as for the derivation of (14), a closed-form
expression for $F_{g_{\text{end}}}\left(x\right)$ can be obtained as
$F_{g_{\text{end}}}\left(x\right)=\left(\prod_{\ell=0}^{L}\mathcal{P}_{\ell}\right)\left\\{1+\sum_{\left\\{\lambda_{k}\right\\}_{k=0}^{L}}\left(\prod_{n=0}^{k}\frac{1-\mathcal{P}_{\lambda_{n}}}{\mathcal{P}_{\lambda_{n}}}\right)\mathsf{\Gamma}\left(\sum_{n=0}^{k}m_{\lambda_{n}},Cx\right)/\Gamma\left(\sum_{n=0}^{k}m_{\lambda_{n}}\right)\right\\}.$
(15)
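The product form of (12) lends itself to a direct Monte Carlo sanity check; in the sketch below (our own illustration, with arbitrary example parameters) relays that fail to decode contribute zero SNR, exactly as in (8):

```python
import numpy as np

def mgf_end(s, m, gbar, p_fail):
    """Closed-form MGF of g_end, eq. (12); index 0 is the direct S->D link."""
    c = m / gbar
    val = (c[0] / (s + c[0])) ** m[0]
    for k in range(1, len(m)):
        val *= p_fail[k] + (1.0 - p_fail[k]) * (c[k] / (s + c[k])) ** m[k]
    return val

rng = np.random.default_rng(2)
m = np.array([2.0, 1.0, 3.0])       # fading parameters (direct, relay 1, relay 2)
gbar = np.array([4.0, 2.0, 6.0])    # average per-path SNRs
p_fail = np.array([0.0, 0.3, 0.1])  # decoding-failure probabilities P_k
n = 200_000
g = rng.gamma(m, gbar / m, size=(n, 3))      # per-path SNR draws, eq. (4)
g[:, 1:] *= rng.random((n, 2)) > p_fail[1:]  # failed relays contribute nothing
g_end = g.sum(axis=1)                        # eq. (10), MRD combining
```

The empirical $\mathbb{E}\langle e^{-s\,g_{\text{end}}}\rangle$ then matches the closed form to Monte Carlo accuracy.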
#### III-A2 Repetitive with SD
Alternatively to MRD, $D$ may use a time-diversity version of SD to combine
the signals from $S$ and $R_{k}\in\mathbb{B}$ $\forall\,k$ [12]. With this
diversity technique the instantaneous SNR at $D$’s output is given by
$g_{\text{end}}=\max_{R_{k}\in\mathbb{C}}g_{k}$ (16)
where $\mathbb{C}=\\{S\\}\cup\mathbb{B}$. The $g_{\ell}$’s,
$\ell=0,1,\ldots,L$, are assumed independent, therefore
$F_{g_{\text{end}}}\left(x\right)$ of (16) can be easily obtained as the
product of the CDFs of $g_{\ell}$’s. Substituting (5) and (9) for $g_{0}$ and
$g_{k}$ $\forall\,k=1,2,\ldots,L$, respectively, a closed-form expression for
$F_{g_{\text{end}}}\left(x\right)$ of repetitive transmission with SD over
INID Nakagami-$m$ fading can be derived as
$F_{g_{\text{end}}}\left(x\right)=F_{\gamma_{0}}\left(x\right)F_{g_{\text{best}}}\left(x\right)$
(17)
where $F_{g_{\text{best}}}\left(x\right)$ is the CDF of the instantaneous SNR
of the $R_{\rm best}\rightarrow D$ channel, i.e., of
$g_{\text{best}}=\max_{R_{k}\in\mathbb{B}}g_{k},$ (18)
which is easily obtained using (9) for INID Nakagami-$m$ fading as
$F_{g_{\text{best}}}\left(x\right)=\prod_{k=1}^{L}\left[\mathcal{P}_{k}u\left(x\right)+\left(1-\mathcal{P}_{k}\right)\frac{\mathsf{\Gamma}\left(m_{k},C_{k}x\right)}{\Gamma\left(m_{k}\right)}\right].$
(19)
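Since (19) is just a product of per-relay CDFs, it too is easy to verify numerically; the sketch below (an illustration with arbitrary parameters, not part of the analysis) treats an undecoded relay as contributing zero SNR:

```python
import numpy as np
from scipy import special

def cdf_best(x, m, gbar, p_fail):
    """CDF of g_best, eq. (19), evaluated for x > 0 (where u(x) = 1)."""
    out = 1.0
    for k in range(len(m)):
        out *= p_fail[k] + (1.0 - p_fail[k]) * special.gammainc(m[k], (m[k] / gbar[k]) * x)
    return out

rng = np.random.default_rng(3)
m = np.array([1.0, 2.0])          # relay fading parameters
gbar = np.array([3.0, 5.0])       # relay average SNRs
p_fail = np.array([0.2, 0.4])     # decoding-failure probabilities P_k
n = 200_000
g = rng.gamma(m, gbar / m, size=(n, 2))
g *= rng.random((n, 2)) > p_fail  # undecoded relays have zero SNR
g_best = g.max(axis=1)            # eq. (18)
```

The empirical distribution of the best-relay SNR agrees with the closed form, since $g_{\text{best}}\leq x$ exactly when every relay either failed to decode or has SNR below $x$.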
Differentiating (17), the PDF of $g_{\text{end}}$ is given by
$f_{g_{\text{end}}}\left(x\right)=f_{\gamma_{0}}\left(x\right)F_{g_{\text{best}}}\left(x\right)+F_{\gamma_{0}}\left(x\right)f_{g_{\text{best}}}\left(x\right)$
(20)
where $f_{g_{\text{best}}}\left(x\right)$ is the PDF of $g_{\text{best}}$. To
obtain an expression for $f_{g_{\text{best}}}\left(x\right)$, we first use
[19, eq. (8.352/1)] to rewrite (19) for integer $m_{\ell}$’s, yielding
$F_{g_{\text{best}}}\left(x\right)=\prod_{k=1}^{L}\left\\{\mathcal{P}_{k}u\left(x\right)+\left(1-\mathcal{P}_{k}\right)\left[1-\exp\left(-C_{k}x\right)\sum_{i=0}^{m_{k}-1}\frac{\left(C_{k}x\right)^{i}}{i!}\right]\right\\}.$
(21)
Then, differentiating (21) and using the formula
$\prod_{i=\kappa}^{I}\left(\chi_{i}+\psi_{i}\right)=\prod_{i=\kappa}^{I}\chi_{i}+\sum_{\left\\{\alpha_{i}\right\\}_{i=\kappa}^{I}}\prod_{s=\kappa}^{i}\psi_{\alpha_{s}}\prod_{\begin{subarray}{c}t=\kappa\\\
t\neq\left\\{\alpha_{u}\right\\}_{u=\kappa}^{i}\end{subarray}}^{I}\chi_{t},$
(22)
where symbol $\sum_{\left\\{\alpha_{i}\right\\}_{i=\kappa}^{I}}$ is used for
short-hand representation of multiple summations $\sum_{i=\kappa}^{I}$
$\sum_{\alpha_{\kappa}=\kappa}^{I-i+\kappa}\sum_{\alpha_{\kappa+1}=\alpha_{\kappa}+1}^{I-i+\kappa+1}\cdots\sum_{\alpha_{i}=\alpha_{i-1}+1}^{I}$,
we obtain after some algebraic manipulations the following closed-form
expression for $f_{g_{\text{best}}}\left(x\right)$ in INID Nakagami-$m$ fading
with integer $m_{\ell}$’s:
$\begin{split}&f_{g_{\text{best}}}\left(x\right)=\left(\prod_{k=1}^{L}\mathcal{P}_{k}\right)\left\\{\delta\left(x\right)+\sum_{\left\\{\lambda_{k}\right\\}_{k=1}^{L}}\sum_{\left\\{\mu_{j}\right\\}_{j=1}^{k}}\left(-1\right)^{j}\left(\prod_{n=1}^{k}\frac{1-\mathcal{P}_{\lambda_{n}}}{\mathcal{P}_{\lambda_{n}}}\right)\right.\\\
&\left.\times\left[-\left(\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}\right)\exp\left(-x\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}\right)+\sum_{\left\\{\nu_{i}\right\\}_{i=1}^{j}}\left(\sum_{s=1}^{p}i_{\nu_{s}}\right)\left(\prod_{s=1}^{p}\sum_{i_{\nu_{s}}=1}^{m_{\lambda_{\mu_{\nu_{s}}}}-1}\frac{C_{\lambda_{\mu_{\nu_{s}}}}^{i_{\nu_{s}}}}{i_{\nu_{s}}!}\right)\right.\right.\\\
&\left.\left.\times
x^{\sum_{s=1}^{p}i_{\nu_{s}}-1}\exp\left(-x\xi_{p,j}\right)\left(1-\frac{\xi_{p,j}x}{\sum_{s=1}^{p}i_{\nu_{s}}}\right)\right]\right\\}.\end{split}$
(23)
In (23), the parameters $\xi_{p,j}$, with $p$ and $j$ positive integers,
are given by
$\xi_{p,j}=\sum_{s=1}^{p}C_{\lambda_{\mu_{\nu_{s}}}}+\sum_{\begin{subarray}{c}t=1\\\
t\neq\left\\{\nu_{u}\right\\}_{u=1}^{p}\end{subarray}}^{j}C_{\lambda_{\mu_{t}}}.$
(24)
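The expansion identity (22) used in this differentiation step is the familiar expansion of $\prod\left(\chi_{i}+\psi_{i}\right)$ over index subsets: the empty subset contributes $\prod\chi_{i}$, and the shorthand sum ranges over all non-empty ordered subsets $\{\alpha_{s}\}$. A quick numerical check (a sketch with arbitrary values, not from the paper):

```python
import itertools
import random

random.seed(2)
kappa, I = 1, 5
chi = {i: random.uniform(0.5, 2.0) for i in range(kappa, I + 1)}
psi = {i: random.uniform(0.5, 2.0) for i in range(kappa, I + 1)}

lhs = 1.0
for i in range(kappa, I + 1):
    lhs *= chi[i] + psi[i]

# eq. (22): expand over every non-empty index subset {alpha_s}; the remaining
# indices keep their chi factors.  The empty subset gives the leading prod(chi).
idx = list(range(kappa, I + 1))
rhs = 1.0
for i in idx:
    rhs *= chi[i]
for r in range(1, len(idx) + 1):
    for subset in itertools.combinations(idx, r):
        term = 1.0
        for s in subset:
            term *= psi[s]
        for t in idx:
            if t not in subset:
                term *= chi[t]
        rhs += term

assert abs(lhs - rhs) < 1e-9
```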
Substituting (4) and (5) for integer $m_{\ell}$’s, i.e., after using [19, eq.
(8.352/1)] to express the $\mathsf{\Gamma}\left(\cdot,\cdot\right)$’s, as well
as (21) and (23), into (20), a closed-form expression for
$f_{g_{\text{end}}}\left(x\right)$ over INID Nakagami-$m$ fading with integer
values of $m_{\ell}$’s and distinct $C_{\ell}$’s is obtained after some
algebraic manipulations as
$\begin{split}&f_{g_{\text{end}}}\left(x\right)=\left(\prod_{k=1}^{L}\mathcal{P}_{k}\right)\left\\{\frac{C_{0}^{m_{0}}}{\Gamma\left(m_{0}\right)}x^{m_{0}-1}\exp\left(-C_{0}x\right)\left\\{u\left(x\right)+\sum_{\left\\{\lambda_{k}\right\\}_{k=1}^{L}}\sum_{\left\\{\mu_{j}\right\\}_{j=1}^{k}}\left(-1\right)^{j}\right.\right.\\\
&\left.\left.\times\left(\prod_{n=1}^{k}\frac{1-\mathcal{P}_{\lambda_{n}}}{\mathcal{P}_{\lambda_{n}}}\right)\left[\exp\left(-x\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}\right)-1+\sum_{\left\\{\nu_{i}\right\\}_{i=1}^{j}}\left(\prod_{s=1}^{p}\sum_{i_{\nu_{s}}=1}^{m_{\lambda_{\mu_{\nu_{s}}}}-1}\frac{C_{\lambda_{\mu_{\nu_{s}}}}^{i_{\nu_{s}}}}{i_{\nu_{s}}!}\right)\right.\right.\right.\\\
&\left.\left.\left.\times
x^{\sum_{s=1}^{p}i_{\nu_{s}}}\exp\left(-x\xi_{p,j}\right)\right]\right\\}+\left[1-\exp\left(-C_{0}x\right)\sum_{i=0}^{m_{0}-1}\frac{\left(C_{0}x\right)^{i}}{i!}\right]\right.\\\
&\left.\times\left\\{\delta\left(x\right)+\sum_{\left\\{\lambda_{k}\right\\}_{k=1}^{L}}\sum_{\left\\{\mu_{j}\right\\}_{j=1}^{k}}\left(-1\right)^{j}\left(\prod_{n=1}^{k}\frac{1-\mathcal{P}_{\lambda_{n}}}{\mathcal{P}_{\lambda_{n}}}\right)\right.\right.\\\
&\left.\left.\times\left[-\left(\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}\right)\exp\left(-x\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}\right)+\sum_{\left\\{\nu_{i}\right\\}_{i=1}^{j}}\left(\sum_{s=1}^{p}i_{\nu_{s}}\right)\left(\prod_{s=1}^{p}\sum_{i_{\nu_{s}}=1}^{m_{\lambda_{\mu_{\nu_{s}}}}-1}\frac{C_{\lambda_{\mu_{\nu_{s}}}}^{i_{\nu_{s}}}}{i_{\nu_{s}}!}\right)\right.\right.\right.\\\
&\left.\left.\left.\times
x^{\sum_{s=1}^{p}i_{\nu_{s}}-1}\exp\left(-x\xi_{p,j}\right)\left(1-\frac{\xi_{p,j}x}{\sum_{s=1}^{p}i_{\nu_{s}}}\right)\right]\right\\}\right\\}.\end{split}$
(25)
To obtain the MGF of $g_{\text{end}}$, we substitute the
$f_{g_{\text{end}}}\left(x\right)$ expression given by (25) into the definition
of the MGF given by (11), i.e., after replacing $g_{k}$ with $g_{\text{end}}$
in (11), and use [19, eq. (3.381/4)] to solve the resulting integrals. In
particular, by first deriving, using (23), the following closed-form expression
for the MGF of $g_{\text{best}}$ in INID Nakagami-$m$ fading with integer
values of $m_{\ell}$’s and distinct $C_{\ell}$’s:
$\begin{split}&M_{g_{\text{best}}}\left(s\right)=\left(\prod_{k=1}^{L}\mathcal{P}_{k}\right)\left\\{1+\sum_{\left\\{\lambda_{k}\right\\}_{k=1}^{L}}\sum_{\left\\{\mu_{j}\right\\}_{j=1}^{k}}\left(-1\right)^{j}\left(\prod_{n=1}^{k}\frac{1-\mathcal{P}_{\lambda_{n}}}{\mathcal{P}_{\lambda_{n}}}\right)\right.\\\
&\times\left.\left\\{-\left(\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}\right)\left(s+\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}\right)^{-1}+\sum_{\left\\{\nu_{i}\right\\}_{i=1}^{j}}\left(\sum_{s=1}^{p}i_{\nu_{s}}\right)!\left(\prod_{s=1}^{p}\sum_{i_{\nu_{s}}=1}^{m_{\lambda_{\mu_{\nu_{s}}}}-1}\frac{C_{\lambda_{\mu_{\nu_{s}}}}^{i_{\nu_{s}}}}{i_{\nu_{s}}!}\right)\right.\right.\\\
&\left.\left.\times\left(s+\xi_{p,j}\right)^{-\sum_{s=1}^{p}i_{\nu_{s}}}\left[1-\xi_{p,j}\left(s+\xi_{p,j}\right)^{-1}\right]\right\\}\right\\},\end{split}$
(26)
a closed-form expression for the MGF of $g_{\text{end}}$ of repetitive
transmission with SD over INID Nakagami-$m$ fading channels with integer
values of $m_{\ell}$’s and distinct $C_{\ell}$’s is given by
$\begin{split}&\mathcal{M}_{g_{\text{end}}}\left(s\right)=\left(\prod_{k=1}^{L}\mathcal{P}_{k}\right)\left\\{C_{0}^{m_{0}}\left\\{\left(s+C_{0}\right)^{-m_{0}}+\sum_{\left\\{\lambda_{k}\right\\}_{k=1}^{L}}\sum_{\left\\{\mu_{j}\right\\}_{j=1}^{k}}\left(-1\right)^{j}\left(\prod_{n=1}^{k}\frac{1-\mathcal{P}_{\lambda_{n}}}{\mathcal{P}_{\lambda_{n}}}\right)\right.\right.\\\
&\left.\left.\times\left[\left(s+C_{0}+\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}\right)^{-m_{0}}-\left(s+C_{0}\right)^{-m_{0}}+\sum_{\left\\{\nu_{i}\right\\}_{i=1}^{j}}\left(\prod_{s=1}^{p}\sum_{i_{\nu_{s}}=1}^{m_{\lambda_{\mu_{\nu_{s}}}}-1}\frac{C_{\lambda_{\mu_{\nu_{s}}}}^{i_{\nu_{s}}}}{i_{\nu_{s}}!}\right)\right.\right.\right.\\\
&\left.\left.\left.\times\frac{\left(m_{0}+\sum_{s=1}^{p}i_{\nu_{s}}-1\right)!}{\left(m_{0}-1\right)!}\left(s+C_{0}+\xi_{p,j}\right)^{-\left(m_{0}+\sum_{s=1}^{p}i_{\nu_{s}}\right)}\right]\right\\}+\left\\{\sum_{\left\\{\lambda_{k}\right\\}_{k=1}^{L}}\sum_{\left\\{\mu_{j}\right\\}_{j=1}^{k}}\left(-1\right)^{j}\right.\right.\\\
&\left.\left.\left(\prod_{n=1}^{k}\frac{1-\mathcal{P}_{\lambda_{n}}}{\mathcal{P}_{\lambda_{n}}}\right)\left\\{\left(\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}\right)\sum_{q=0}^{m_{0}-1}C_{0}^{q}\left(s+C_{0}+\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}\right)^{-(q+1)}\right.\right.\right.\\\
&\left.\left.\left.+\sum_{\left\\{\nu_{i}\right\\}_{i=1}^{j}}\left(\sum_{s=1}^{p}i_{\nu_{s}}\right)\left(\prod_{s=1}^{p}\sum_{i_{\nu_{s}}=1}^{m_{\lambda_{\mu_{\nu_{s}}}}-1}\frac{C_{\lambda_{\mu_{\nu_{s}}}}^{i_{\nu_{s}}}}{i_{\nu_{s}}!}\right)\sum_{q=0}^{m_{0}-1}\frac{C_{0}^{q}}{q!}\left(s+C_{0}+\xi_{p,j}\right)^{-\left(q+\sum_{s=1}^{p}i_{\nu_{s}}+1\right)}\right.\right.\right.\\\
&\left.\left.\left.\times\left(q+\sum_{s=1}^{p}i_{\nu_{s}}-1\right)!\left[q\xi_{p,j}-\left(s+C_{0}\right)\sum_{s=1}^{p}i_{\nu_{s}}\right]\right\\}\right\\}\right\\}+M_{g_{\text{best}}}\left(s\right).\end{split}$
(27)
For equal $C_{\ell}$’s, i.e., IID Nakagami-$m$ fading with $m_{\ell}=m$ and
$\overline{\gamma}_{\ell}=\overline{\gamma}$ $\forall\,\ell$, following a
procedure similar to the derivation of (27) and using the binomial and
multinomial theorems [19, eq. (1.111)], we first obtain the following
closed-form expression for $M_{g_{\text{best}}}\left(s\right)$ for integer $m$:
$\begin{split}&M_{g_{\text{best}}}\left(s\right)=L\mathcal{P}^{L}\left\\{1/L+\left(1-\mathcal{P}\right)C^{m}\left(s+C\right)^{-m}+\sum_{k=1}^{L-1}\binom{L-1}{k}\frac{\left(1-\mathcal{P}\right)^{k}}{\mathcal{P}^{k+1}}\right.\\\
&\left.\times\left\\{\mathcal{P}+\left(1-\mathcal{P}\right)C^{m}\left(s+C\right)^{-m}+\sum_{j=1}^{k}\binom{k}{j}\left(-1\right)^{j}\right.\right.\\\
&\left.\left.\times\left[\mathcal{P}+\sum_{n_{1}+n_{2}+\cdots+n_{m}=j}^{j}\frac{\left(1-\mathcal{P}\right)C^{m+\sigma}}{\prod_{j=2}^{m-1}\left(j!\right)^{n_{j+1}}}\frac{\prod_{i=1}^{m}n_{i}^{-1}}{\left(m-1\right)!}\left[s+(j+1)C\right]^{-\left(m+\sigma\right)}\right]\right\\}\right\\}\end{split}$
(28)
where $C=m/\overline{\gamma}$, $\mathcal{P}=\mathcal{P}_{k}$ $\forall\,k$,
$\sigma=\sum_{i=2}^{m}(i-1)n_{i}$, and the symbol
$\sum_{n_{1}+n_{2}+\cdots+n_{m}=j}^{j}$ is shorthand for the multiple
summations
$\sum_{n_{1}=0}^{j}\sum_{n_{2}=0}^{j}\cdots\sum_{n_{m}=0}^{j}\delta(\sum_{i=1}^{m}n_{i},j)$.
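The shorthand sum above enumerates all compositions $n_{1}+\cdots+n_{m}=j$ of $j$ into $m$ non-negative parts, which together with the multinomial coefficients reproduces the multinomial theorem [19, eq. (1.111)]. A minimal numerical illustration (hypothetical values):

```python
import itertools
import math

m_, j = 3, 4
x = [0.7, 1.3, 0.4]   # arbitrary illustrative values

lhs = sum(x) ** j

# shorthand sum: all tuples (n_1, ..., n_m) with n_1 + ... + n_m = j, as in (28)
rhs = 0.0
for n in itertools.product(range(j + 1), repeat=m_):
    if sum(n) != j:
        continue   # the Kronecker delta(sum n_i, j) in the shorthand notation
    coeff = math.factorial(j)
    for ni in n:
        coeff //= math.factorial(ni)   # multinomial coefficient j!/(n_1!...n_m!)
    term = float(coeff)
    for xi, ni in zip(x, n):
        term *= xi ** ni
    rhs += term

assert abs(lhs - rhs) < 1e-9
```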
Using (20) for IID Nakagami-$m$ fading together with (28), and after some
algebraic manipulations, a closed-form expression for
$M_{g_{\text{end}}}\left(s\right)$ of repetitive transmission with SD in IID
Nakagami-$m$ fading channels with integer $m$ is obtained as
$\begin{split}&M_{g_{\text{end}}}\left(s\right)=L\mathcal{P}^{L}\left\\{C^{m}\left\\{\left[1/L+\left(1-\mathcal{P}\right)\right]\left(s+C\right)^{-m}-(1-\mathcal{P})B_{0,1}\left(s\right)+\sum_{k=1}^{L-1}\binom{L-1}{k}\right.\right.\\\
&\left.\left.\times\frac{\left(1-\mathcal{P}\right)^{k}}{\mathcal{P}^{k+1}}\left\\{\left[\mathcal{P}+\left(1-\mathcal{P}\right)\right]\left(s+C\right)^{-m}-(1-\mathcal{P})B_{0,1}\left(s\right)+\sum_{j=1}^{k}\binom{k}{j}\left(-1\right)^{j}\left[\mathcal{P}\right.\right.\right.\right.\\\
&\left.\left.\left.\left.\times\left(s+C\right)^{-m}+\sum_{n_{1}+n_{2}+\cdots+n_{m}=j}^{j}\frac{\left(1-\mathcal{P}\right)}{\prod_{j=2}^{m-1}\left(j!\right)^{n_{j+1}}}\frac{\prod_{i=1}^{m}n_{i}^{-1}}{\left(m-1\right)!}\left\\{\left(s+C\right)^{-m}-B_{\sigma,j+1}\left(s\right)\right\\}\right]\right\\}\right\\}\right.\\\
&\left.-C^{m}\left\\{\left(1-\mathcal{P}\right)B_{0,1}\left(s\right)+\sum_{k=1}^{L-1}\binom{L-1}{k}\frac{\left(1-\mathcal{P}\right)^{k}}{\mathcal{P}^{k+1}}\left\\{\left(1-\mathcal{P}\right)B_{0,1}\left(s\right)+\sum_{j=1}^{k}\binom{k}{j}\left(-1\right)^{j}\right.\right.\right.\\\
&\left.\left.\left.\times\sum_{n_{1}+n_{2}+\cdots+n_{m}=j}^{j}\frac{\left(1-\mathcal{P}\right)C^{\sigma}}{\prod_{j=2}^{m-1}\left(j!\right)^{n_{j+1}}}\frac{\prod_{i=1}^{m}n_{i}^{-1}}{\left(m-1\right)!}\sum_{q=0}^{m-1}\frac{C^{q}}{q!(m+\sigma-1)!}(m+q+\sigma-1)!\right.\right.\right.\\\
&\left.\left.\left.\times[s+(\lambda+1)C]^{-(m+q)}\right\\}\right\\}\right\\}+M_{g_{\text{best}}}\left(s\right)\end{split}$
(29)
where the function $B_{\kappa,\lambda}\left(s\right)$, with $\kappa$ and
$\lambda$ positive integers, is given by
$B_{\kappa,\lambda}\left(s\right)=\sum_{q=0}^{m+\kappa-1}\frac{(\lambda
C)^{q}}{q!(m-1)!}(m+q-1)![s+(\lambda+1)C]^{-(m+q)}.$ (30)
### III-B RS-based Transmission
When RS-based transmission is utilized, RS [6] is first performed to obtain
$R_{\rm best}$, the relay experiencing the most favorable
$R_{k}\rightarrow D$ channel conditions among all $k$; its instantaneous SNR
is thus given by (18). Using expressions derived in Section III-A, we next
obtain closed-form expressions for the statistics of a pure RS scheme
[13, 10] that combines at $D$ the signals from $S$ and $R_{\rm best}$ using a
time-diversity version of MRD, as well as of a rate-selective one [10] that
utilizes pure RS only when it outperforms the direct transmission in terms of
achievable rate.
#### III-B1 Pure RS
With pure RS, $D$ utilizes MRD to combine the signals from $S$ and $R_{\rm
best}$ [13]; therefore, using (18), the instantaneous SNR at $D$’s output is
given by
$g_{\text{end}}=g_{0}+g_{\text{best}}.$ (31)
Similar to the derivation of (12), the MGF of $g_{\text{end}}$ of pure RS can
be obtained as the following product:
$\mathcal{M}_{g_{\text{end}}}\left(s\right)=C_{0}^{m_{0}}\left(s+C_{0}\right)^{-m_{0}}\mathcal{M}_{g_{\text{best}}}\left(s\right)$
(32)
where $\mathcal{M}_{g_{\text{best}}}\left(s\right)$ is given by (26) for INID
Nakagami-$m$ fading with integer $m_{\ell}$’s and distinct $C_{\ell}$’s, and by
(28) for IID Nakagami-$m$ fading with integer $m_{\ell}=m$ $\forall\,\ell$ and
$C_{\ell}=C$. Therefore, substituting (26) into (32), a closed-form expression
for $\mathcal{M}_{g_{\text{end}}}\left(s\right)$ of pure RS for INID
Nakagami-$m$ fading with integer $m_{\ell}$’s and distinct $C_{\ell}$’s is
given by [10, eq. (7)]
$\begin{split}&\mathcal{M}_{g_{\text{end}}}\left(s\right)=C_{0}^{m_{0}}\left(s+C_{0}\right)^{-m_{0}}\left(\prod_{k=1}^{L}\mathcal{P}_{k}\right)\left\\{1+\sum_{\left\\{\lambda_{k}\right\\}_{k=1}^{L}}\sum_{\left\\{\mu_{j}\right\\}_{j=1}^{k}}\left(-1\right)^{j}\left(\prod_{n=1}^{k}\frac{1-\mathcal{P}_{\lambda_{n}}}{\mathcal{P}_{\lambda_{n}}}\right)\right.\\\
&\times\left.\left\\{-\left(\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}\right)\left(s+\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}\right)^{-1}+\sum_{\left\\{\nu_{i}\right\\}_{i=1}^{j}}\left(\sum_{s=1}^{p}i_{\nu_{s}}\right)!\left(\prod_{s=1}^{p}\sum_{i_{\nu_{s}}=1}^{m_{\lambda_{\mu_{\nu_{s}}}}-1}\frac{C_{\lambda_{\mu_{\nu_{s}}}}^{i_{\nu_{s}}}}{i_{\nu_{s}}!}\right)\right.\right.\\\
&\left.\left.\times\left(s+\xi_{p,j}\right)^{-\sum_{s=1}^{p}i_{\nu_{s}}}\left[1-\xi_{p,j}\left(s+\xi_{p,j}\right)^{-1}\right]\right\\}\right\\}.\end{split}$
(33)
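The factorization in (32) is simply the statement that the MGF (here defined as the Laplace transform $\mathrm{E}[e^{-sg}]$, so that an Erlang variate has MGF $C^{m}\left(s+C\right)^{-m}$) of a sum of independent RVs is the product of the individual MGFs. A Monte Carlo sketch with stand-in Erlang variates (illustrative parameters; not the paper's actual $g_{\text{best}}$ statistics):

```python
import math
import random

random.seed(3)
m0, C0 = 2, 1.2      # direct-link parameters (illustrative values)
mB, CB = 3, 0.9      # stand-in Erlang model for g_best's continuous part

def erlang(m, c):
    """Draw a Gamma(m, rate c) variate as a sum of m exponentials."""
    return -math.log(math.prod(random.random() for _ in range(m))) / c

s, N = 0.7, 200_000
acc = 0.0
for _ in range(N):
    g_end = erlang(m0, C0) + erlang(mB, CB)   # eq. (31): independent sum
    acc += math.exp(-s * g_end)
mc = acc / N

# eq. (32): the MGF of an independent sum factorizes into a product of MGFs
analytic = (C0 / (s + C0)) ** m0 * (CB / (s + CB)) ** mB
assert abs(mc - analytic) < 5e-3
```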
For IID Nakagami-$m$ fading channels with integer $m$, substituting (28) into
(32) yields the following closed-form expression for
$\mathcal{M}_{g_{\text{end}}}\left(s\right)$ of pure RS [10, eq. (10)]:
$\begin{split}&\mathcal{M}_{g_{\text{end}}}\left(s\right)=L\mathcal{P}^{L}C^{m}\left(s+C\right)^{-m}\left\\{\frac{1}{L}+\left(1-\mathcal{P}\right)C^{m}\left(s+C\right)^{-m}+\sum_{k=1}^{L-1}\binom{L-1}{k}\frac{\left(1-\mathcal{P}\right)^{k}}{\mathcal{P}^{k+1}}\right.\\\
&\left.\times\left\\{\mathcal{P}+\left(1-\mathcal{P}\right)C^{m}\left(s+C\right)^{-m}+\sum_{j=1}^{k}\binom{k}{j}\left(-1\right)^{j}\left[\mathcal{P}+\sum_{n_{1}+n_{2}+\cdots+n_{m}=j}^{j}\frac{\left(1-\mathcal{P}\right)C^{m+\sigma}}{\prod_{j=2}^{m-1}\left(j!\right)^{n_{j+1}}}\right.\right.\right.\\\
&\left.\left.\left.\times\frac{\prod_{i=1}^{m}n_{i}^{-1}}{\left(m-1\right)!}\left[s+(j+1)C\right]^{-\left(m+\sigma\right)}\right]\right\\}\right\\}.\end{split}$
(34)
To obtain $F_{g_{\text{end}}}\left(x\right)$ of pure RS for INID Nakagami-$m$
fading with integer $m_{\ell}$’s and distinct $C_{\ell}$’s, we substitute (33)
into (13), which after some algebraic manipulations yields the following
closed-form expression [10, eq. (10)]
$\begin{split}&F_{g_{\text{end}}}\left(x\right)=\left(\prod_{k=1}^{L}\mathcal{P}_{k}\right)\left\\{\mathcal{Z}_{m_{0}}\left(C_{0},x\right)+\sum_{\left\\{\lambda_{k}\right\\}_{k=1}^{L}}\sum_{\left\\{\mu_{j}\right\\}_{j=1}^{k}}\left(-1\right)^{j}\left(\prod_{n=1}^{k}\frac{1-\mathcal{P}_{\lambda_{n}}}{\mathcal{P}_{\lambda_{n}}}\right)\right.\\\
&\left.\left.\left\\{-C_{0}^{m_{0}}\left\\{\mathcal{X}_{2,1}^{(1)}\mathcal{Z}_{0}\left[\chi_{2}^{(1)},x\right]+\chi_{2}^{(1)}\mathcal{Y}_{1}^{(1)}\left(x\right)\right\\}+\sum_{\left\\{\nu_{i}\right\\}_{i=1}^{j}}\left(\prod_{s=1}^{p}\sum_{i_{\nu_{s}}=1}^{m_{\lambda_{\mu_{\nu_{s}}}}-1}\frac{C_{\lambda_{\mu_{\nu_{s}}}}^{i_{\nu_{s}}}}{i_{\nu_{s}}!}\right)\right.\right.\right.\\\
&\left.\left.\times
C_{0}^{m_{0}}\left(\sum_{s=1}^{p}i_{\nu_{s}}\right)!\left[\mathcal{Y}_{2}^{(2)}\left(x\right)-\xi_{p,j}\mathcal{Y}_{2}^{(3)}\left(x\right)\right]\right\\}\right\\}\end{split}$
(35)
where
$\mathcal{Z}_{t}\left(c,x\right)=1-\exp\left(-cx\right)\sum_{i=0}^{t-1}\left(cx\right)^{i}/i!$,
with $c$ being a positive real, and
$\mathcal{Y}_{\kappa}^{(u)}\left(x\right)=\sum_{r=1}^{\kappa}\sum_{t=1}^{b_{r}^{(u)}}\mathcal{X}_{r,t}^{(u)}\left[\chi_{r}^{(u)}\right]^{-t}\mathcal{Z}_{t}\left[\chi_{r}^{(u)},x\right]$
(36)
for $u=1,2$ and $3$. In the above two equations, for every $u$,
$\mathcal{X}_{r,t}^{(u)}=X_{r}^{(u)}\left(s\right)^{(b_{r}^{(u)}-t)}|_{s=-\chi_{r}^{(u)}}/(b_{r}^{(u)}-t)!$,
with $(\cdot)^{(n)}$ denoting the $n$th derivative, and
$X_{r}^{(u)}\left(s\right)=(s+\chi_{r}^{(u)})^{b_{r}^{(u)}}\prod_{j=1}^{2}(s+\chi_{j}^{(u)})^{-b_{j}^{(u)}}$.
Moreover, $\chi_{1}^{(u)}=C_{0}$ $\forall\,u$,
$\chi_{2}^{(1)}=\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}$ and
$\chi_{2}^{(2)}=\chi_{2}^{(3)}=\xi_{p,j}$ as well as $b_{1}^{(u)}=m_{0}$,
$\forall\,u$, $b_{2}^{(1)}=1$, $b_{2}^{(2)}=\sum_{s=1}^{p}i_{\nu_{s}}$ and
$b_{2}^{(3)}=b_{2}^{(2)}+1$. For $C_{\ell}=C$ $\forall\,\ell$ and integer $m$,
and similarly to the derivation of (35), substituting (34) into (13) yields the
following closed-form expression for $F_{g_{\text{end}}}\left(x\right)$ of
pure RS [10, eq. (11)]:
$\begin{split}&F_{g_{\text{end}}}\left(x\right)=L\mathcal{P}^{L}\left\\{\frac{\mathcal{Z}_{m}\left(C,x\right)}{L}+\left(1-\mathcal{P}\right)\mathcal{Z}_{2m}\left(C,x\right)+\sum_{k=1}^{L-1}\binom{L-1}{k}\frac{\left(1-\mathcal{P}\right)^{k}}{\mathcal{P}^{k+1}}\right.\\\
&\left.\left\\{\mathcal{P}\mathcal{Z}_{m}\left(C,x\right)+\left(1-\mathcal{P}\right)\mathcal{Z}_{2m}\left(C,x\right)+\sum_{j=1}^{k}\binom{k}{j}\left(-1\right)^{j}\left[\mathcal{P}\mathcal{Z}_{m}\left(C,x\right)\right.\right.\right.\\\
&\left.\left.\left.+\sum_{n_{1}+n_{2}+\cdots+n_{m}=j}^{j}\left(1-\mathcal{P}\right)\frac{\prod_{i=1}^{m}n_{i}^{-1}C^{2m+\sigma}}{\left(m-1\right)!\prod_{j=2}^{m-1}\left(j!\right)^{n_{j+1}}}\mathcal{Y}_{2}^{(4)}\left(x\right)\right]\right\\}\right\\}\end{split}$
(37)
where $b_{1}^{(4)}=m$, $b_{2}^{(4)}=b_{1}^{(4)}+\sigma$, $\chi_{1}^{(4)}=C$
and $\chi_{2}^{(4)}=(j+1)\chi_{1}^{(4)}$.
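The coefficients $\mathcal{X}_{r,t}^{(u)}$ defined above are the standard partial-fraction expansion coefficients of a rational function with two poles of multiplicities $b_{1}^{(u)}$ and $b_{2}^{(u)}$. A minimal check for the small case $b_{1}=2$, $b_{2}=1$, with hypothetical pole locations and the coefficients worked out by hand from the derivative formula:

```python
import random

random.seed(6)
a, b = 1.3, 2.1   # pole locations chi_1, chi_2; multiplicities b_1 = 2, b_2 = 1

# Partial-fraction coefficients X_{r,t} from the derivative formula, computed
# by hand for this small case: X_1(s) = 1/(s+b), X_2(s) = 1/(s+a)**2.
X12 = 1 / (b - a)            # t = 2 term at chi_1:  X_1(-a)
X11 = -1 / (b - a) ** 2      # t = 1 term at chi_1:  X_1'(-a) / 1!
X21 = 1 / (a - b) ** 2       # t = 1 term at chi_2:  X_2(-b)

for _ in range(100):
    s = random.uniform(0.0, 5.0)
    lhs = 1 / ((s + a) ** 2 * (s + b))
    rhs = X12 / (s + a) ** 2 + X11 / (s + a) + X21 / (s + b)
    assert abs(lhs - rhs) < 1e-12
```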
#### III-B2 Rate-Selective RS
Dual-hop transmission incurs a pre-log penalty factor of $1/2$. To compensate
for this rate loss, pure RS is considered only if it provides a higher
achievable rate than that of the direct $S\rightarrow D$ transmission [7, 22],
i.e., higher than $\log_{2}\left(1+g_{0}\right)$. Using instantaneous CSI and
(31), rate-selective RS chooses between direct (non-relay-assisted) and
relay-assisted transmission based on the following criterion [10, eq. (14)]
$\mathcal{R}_{\text{sel}}=\max\left\\{\frac{1}{2}\log_{2}\left(1+g_{\text{end}}\right),\log_{2}\left(1+g_{0}\right)\right\\}.$
(38)
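The criterion (38) admits an equivalent SNR-domain form: the relay-assisted branch is selected exactly when $g_{\text{end}}>g_{0}^{2}+2g_{0}$, since $\frac{1}{2}\log_{2}(1+g_{\text{end}})>\log_{2}(1+g_{0})$ iff $1+g_{\text{end}}>(1+g_{0})^{2}$; this is the origin of the RV $\alpha$ used below. A quick simulation check (illustrative exponential SNR draws):

```python
import math
import random

random.seed(4)
for _ in range(10_000):
    g0 = random.expovariate(1.0)       # illustrative direct-link SNR draw
    g_end = random.expovariate(0.5)    # illustrative end-to-end SNR draw
    relay_rate = 0.5 * math.log2(1 + g_end)   # half pre-log of dual-hop, eq. (38)
    direct_rate = math.log2(1 + g0)
    picks_relay = relay_rate > direct_rate
    # equivalently: the relay branch wins iff g_end exceeds alpha = g0**2 + 2*g0
    assert picks_relay == (g_end > g0 ** 2 + 2 * g0)
```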
As shown in [10], the MGF of ${g_{\text{sel}}}$ can be obtained using the
$M_{g_{\text{end}}}(s)$ of pure RS as
$M_{g_{\text{sel}}}(s)=M_{g_{0}}(s)F_{g_{\text{end}}}\left(\alpha\right)+M_{g_{\text{end}}}(s)[1-F_{g_{\text{end}}}\left(\alpha\right)]$
(39)
where $\alpha=g_{0}^{2}+2g_{0}$ is an RV with CDF given by
$F_{\alpha}(x)=F_{g_{0}}\left(\sqrt{x+1}-1\right)$, which can be obtained using
inverse sampling [23]. Substituting (33) and (35) into (39), a closed-form
expression for $M_{g_{\text{sel}}}\left(s\right)$ of rate-selective RS over
INID Nakagami-$m$ fading with integer $m_{\ell}$’s and distinct $C_{\ell}$’s
is derived as
$\begin{split}&M_{g_{\text{sel}}}\left(s\right)=\frac{C_{0}^{m_{0}}}{\left(s+C_{0}\right)^{m_{0}}}\left(\prod_{k=1}^{L}\mathcal{P}_{k}\right)\left\\{\left\\{\mathcal{Z}_{m_{0}}\left(C_{0},\alpha\right)+\sum_{\left\\{\lambda_{k}\right\\}_{k=1}^{L}}\sum_{\left\\{\mu_{j}\right\\}_{j=1}^{k}}\left(-1\right)^{j}\left(\prod_{n=1}^{k}\frac{1-\mathcal{P}_{\lambda_{n}}}{\mathcal{P}_{\lambda_{n}}}\right)\right.\right.\\\
&\left.\left.\left.\left\\{-C_{0}^{m_{0}}\left\\{\mathcal{X}_{2,1}^{(1)}\mathcal{Z}_{0}\left[\chi_{2}^{(1)},\alpha\right]+\chi_{2}^{(1)}\mathcal{Y}_{1}^{(1)}\left(\alpha\right)\right\\}+\sum_{\left\\{\nu_{i}\right\\}_{i=1}^{j}}\left(\prod_{s=1}^{p}\sum_{i_{\nu_{s}}=1}^{m_{\lambda_{\mu_{\nu_{s}}}}-1}\frac{C_{\lambda_{\mu_{\nu_{s}}}}^{i_{\nu_{s}}}}{i_{\nu_{s}}!}\right)C_{0}^{m_{0}}\right.\right.\right.\right.\\\
&\left.\left.\left.\times\left(\sum_{s=1}^{p}i_{\nu_{s}}\right)!\left[\mathcal{Y}_{2}^{(2)}\left(\alpha\right)-\xi_{p,j}\mathcal{Y}_{2}^{(3)}\left(\alpha\right)\right]\right\\}\right\\}\right.+\left\\{1+\sum_{\left\\{\lambda_{k}\right\\}_{k=1}^{L}}\sum_{\left\\{\mu_{j}\right\\}_{j=1}^{k}}\left(-1\right)^{j}\left(\prod_{n=1}^{k}\frac{1-\mathcal{P}_{\lambda_{n}}}{\mathcal{P}_{\lambda_{n}}}\right)\right.\\\
&\left.\times\left.\left\\{-\left(\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}\right)\left(s+\sum_{p=1}^{j}C_{\lambda_{\mu_{p}}}\right)^{-1}+\sum_{\left\\{\nu_{i}\right\\}_{i=1}^{j}}\left(\sum_{s=1}^{p}i_{\nu_{s}}\right)!\left(\prod_{s=1}^{p}\sum_{i_{\nu_{s}}=1}^{m_{\lambda_{\mu_{\nu_{s}}}}-1}\frac{C_{\lambda_{\mu_{\nu_{s}}}}^{i_{\nu_{s}}}}{i_{\nu_{s}}!}\right)\right.\right.\right.\\\
&\left.\left.\left.\times\left(s+\xi_{p,j}\right)^{-\sum_{s=1}^{p}i_{\nu_{s}}}\left[1-\xi_{p,j}\left(s+\xi_{p,j}\right)^{-1}\right]\right\\}\right.\right.\\\
&\left.\left\\{1-\left(\prod_{k=1}^{L}\mathcal{P}_{k}\right)\left\\{\mathcal{Z}_{m_{0}}\left(C_{0},\alpha\right)+\sum_{\left\\{\lambda_{k}\right\\}_{k=1}^{L}}\sum_{\left\\{\mu_{j}\right\\}_{j=1}^{k}}\left(-1\right)^{j}\left(\prod_{n=1}^{k}\frac{1-\mathcal{P}_{\lambda_{n}}}{\mathcal{P}_{\lambda_{n}}}\right)\right.\right.\right.\\\
&\left.\left.\left.\left.\left\\{-C_{0}^{m_{0}}\left\\{\mathcal{X}_{2,1}^{(1)}\mathcal{Z}_{0}\left[\chi_{2}^{(1)},\alpha\right]+\chi_{2}^{(1)}\mathcal{Y}_{1}^{(1)}\left(\alpha\right)\right\\}+\sum_{\left\\{\nu_{i}\right\\}_{i=1}^{j}}\left(\prod_{s=1}^{p}\sum_{i_{\nu_{s}}=1}^{m_{\lambda_{\mu_{\nu_{s}}}}-1}\frac{C_{\lambda_{\mu_{\nu_{s}}}}^{i_{\nu_{s}}}}{i_{\nu_{s}}!}\right)\right.\right.\right.\right.\right.\\\
&\left.\left.\left.\left.\times
C_{0}^{m_{0}}\left(\sum_{s=1}^{p}i_{\nu_{s}}\right)!\left[\mathcal{Y}_{2}^{(2)}\left(\alpha\right)-\xi_{p,j}\mathcal{Y}_{2}^{(3)}\left(\alpha\right)\right]\right\\}\right\\}\right\\}\right\\}\end{split}$
(40)
while substituting (34) and (37) into (39) yields the following closed-form
expression for $M_{g_{\text{sel}}}\left(s\right)$ of rate-selective RS over
IID Nakagami-$m$ fading channels with integer $m$:
$\begin{split}&\mathcal{M}_{g_{\text{sel}}}\left(s\right)=\frac{C^{m}L\mathcal{P}^{L}}{\left(s+C\right)^{m}}\left\\{\left\\{\frac{\mathcal{Z}_{m}\left(C,\alpha\right)}{L}+\left(1-\mathcal{P}\right)\mathcal{Z}_{2m}\left(C,\alpha\right)+\sum_{k=1}^{L-1}\binom{L-1}{k}\frac{\left(1-\mathcal{P}\right)^{k}}{\mathcal{P}^{k+1}}\right.\right.\\\
&\left.\left.\times\left\\{\mathcal{P}\mathcal{Z}_{m}\left(C,\alpha\right)+\left(1-\mathcal{P}\right)\mathcal{Z}_{2m}\left(C,\alpha\right)+\sum_{j=1}^{k}\binom{k}{j}\left(-1\right)^{j}\left[\mathcal{P}\mathcal{Z}_{m}\left(C,\alpha\right)\right.\right.\right.\right.\\\
&\left.\left.\left.\left.+\sum_{n_{1}+n_{2}+\cdots+n_{m}=j}^{j}\left(1-\mathcal{P}\right)\frac{\prod_{i=1}^{m}n_{i}^{-1}C^{2m+\sigma}}{\left(m-1\right)!\prod_{j=2}^{m-1}\left(j!\right)^{n_{j+1}}}\mathcal{Y}_{2}^{(4)}\left(\alpha\right)\right]\right\\}\right\\}+\left\\{\frac{1}{L}+\left(1-\mathcal{P}\right)C^{m}\right.\right.\\\
&\left.\left.\times\left(s+C\right)^{-m}+\sum_{k=1}^{L-1}\binom{L-1}{k}\frac{\left(1-\mathcal{P}\right)^{k}}{\mathcal{P}^{k+1}}\left\\{\mathcal{P}+\left(1-\mathcal{P}\right)C^{m}\left(s+C\right)^{-m}+\sum_{j=1}^{k}\binom{k}{j}\right.\right.\right.\\\
&\left.\left.\left.\times\left(-1\right)^{j}\left[\mathcal{P}+\sum_{n_{1}+n_{2}+\cdots+n_{m}=j}^{j}\frac{\left(1-\mathcal{P}\right)C^{m+\sigma}}{\prod_{j=2}^{m-1}\left(j!\right)^{n_{j+1}}}\frac{\prod_{i=1}^{m}n_{i}^{-1}}{\left(m-1\right)!}\left[s+(j+1)C\right]^{-\left(m+\sigma\right)}\right]\right\\}\right\\}\right.\\\
&\times\left.\left\\{1-L\mathcal{P}^{L}\left\\{\frac{\mathcal{Z}_{m}\left(C,\alpha\right)}{L}+\left(1-\mathcal{P}\right)\mathcal{Z}_{2m}\left(C,\alpha\right)+\sum_{k=1}^{L-1}\binom{L-1}{k}\frac{\left(1-\mathcal{P}\right)^{k}}{\mathcal{P}^{k+1}}\right.\right.\right.\\\
&\left.\left.\left.\times\left\\{\mathcal{P}\mathcal{Z}_{m}\left(C,\alpha\right)+\left(1-\mathcal{P}\right)\mathcal{Z}_{2m}\left(C,\alpha\right)+\sum_{j=1}^{k}\binom{k}{j}\left(-1\right)^{j}\left[\mathcal{P}\mathcal{Z}_{m}\left(C,\alpha\right)\right.\right.\right.\right.\right.\\\
&\left.\left.\left.\left.\left.+\sum_{n_{1}+n_{2}+\cdots+n_{m}=j}^{j}\left(1-\mathcal{P}\right)\frac{\prod_{i=1}^{m}n_{i}^{-1}C^{2m+\sigma}}{\left(m-1\right)!\prod_{j=2}^{m-1}\left(j!\right)^{n_{j+1}}}\mathcal{Y}_{2}^{(4)}\left(\alpha\right)\right]\right\\}\right\\}\right\\}\right\\}.\end{split}$
(41)
## IV Performance Analysis of ODF Relaying Schemes
In this section, the performance of ODF relaying schemes with repetitive and
RS-based transmission over INID Nakagami-$m$ fading channels will be analyzed.
We will present closed-form and analytical expressions, respectively, for the
following performance metrics: _i_) OP and _ii_) ASEP of several modulation
formats.
### IV-A Repetitive Transmission
Using the closed-form expressions for $F_{g_{\text{end}}}(x)$ and
$M_{g_{\text{end}}}(s)$ presented in Section III-A, the OP and ASEP of
repetitive transmission with both MRD and SD are easily obtained as follows.
_OP:_ The end-to-end OP of repetitive transmission is easily obtained using
$F_{g_{\text{end}}}(x)$ as
$P_{\text{out}}=F_{g_{\text{end}}}\left[2^{(L+1)\mathcal{R}}-1\right].$ (42)
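Equation (42) already hints at why repetitive transmission suffers for large $L$: the $L+1$ time slots inflate the outage threshold $2^{(L+1)\mathcal{R}}-1$ exponentially in $L$. A minimal sketch using a stand-in Erlang model for $g_{\text{end}}$ (illustrative parameters held fixed across $L$; the paper's actual $F_{g_{\text{end}}}$ is given by (14) and (15) and also depends on $L$):

```python
import math

R = 1.0            # transmit rate in bps/Hz
m_, C_ = 3, 0.25   # stand-in Erlang model for g_end (illustration only)

def erlang_cdf(x, m, c):
    """CDF of a Gamma(m, rate c) variate for integer m."""
    return 1.0 - math.exp(-c * x) * sum((c * x) ** i / math.factorial(i)
                                        for i in range(m))

# eq. (42): repetitive transmission spends L+1 slots, so the SNR threshold
# grows exponentially with L even when the combined SNR statistics are fixed
p_out = [erlang_cdf(2 ** ((L + 1) * R) - 1, m_, C_) for L in range(1, 6)]
assert all(a < b for a, b in zip(p_out, p_out[1:]))   # monotonically worse in L
```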
Substituting (14) and (15) into (42), closed-form expressions for the
$P_{\text{out}}$ of repetitive transmission with MRD over INID Nakagami-$m$
fading channels with integer $m_{\ell}$’s and distinct $C_{\ell}$’s, as well as
with arbitrary $m_{\ell}$’s and $C_{\ell}=C$ $\forall\,\ell$, respectively,
are obtained. Similarly, substituting (17), together with (5) and (19), into
(42) yields a closed-form expression for the $P_{\text{out}}$ of repetitive
transmission with SD over INID Nakagami-$m$ fading channels with arbitrary
values of the $m_{\ell}$’s.
_ASEP:_ Following the MGF-based approach [21] and using the
$M_{g_{\text{end}}}(s)$ expressions given by (12) for repetitive transmission
with MRD and (27) and (29) for repetitive transmission with SD, the ASEP of
several modulation formats for both relaying schemes over INID Nakagami-$m$
fading channels can be easily evaluated. For example, the ASEP of non-coherent
binary frequency shift keying (NBFSK) and differential binary phase shift
keying (DBPSK) modulation schemes can be directly calculated from
$M_{g_{\text{end}}}(s)$; the average bit error probability (ABEP) of NBFSK is
given by $\overline{P}_{\rm b}=0.5\mathcal{M}_{g_{\text{end}}}(0.5)$ and that
of DBPSK by $\overline{P}_{\rm b}=0.5\mathcal{M}_{g_{\text{end}}}(1)$. For
other schemes, including binary phase shift keying (BPSK), $M$-ary phase shift
keying ($M$-PSK; for modulation order $M>2$, Gray encoding is assumed, so that
$\overline{P}_{\rm s}=\overline{P}_{\rm b}\log_{2}(M)$), quadrature amplitude
modulation ($M$-QAM), amplitude modulation ($M$-AM), and differential phase
shift keying ($M$-DPSK), single integrals with finite limits and integrands
composed of elementary (exponential and trigonometric) functions need to be
evaluated, which is readily done via numerical integration [21].
For example, the ASEP of $M$-PSK is easily obtained as
$\overline{P}_{\rm
s}=\frac{1}{\pi}\int_{0}^{\pi-\pi/M}\mathcal{M}_{g_{\text{end}}}\left(\frac{g_{\rm
PSK}}{\sin^{2}\varphi}\right){\rm d}\varphi$ (43)
where $g_{\rm PSK}=\sin^{2}\left(\pi/M\right)$, while for $M$-QAM, the ASEP
can be evaluated as
$\begin{split}\overline{P}_{\rm
s}=&\frac{4}{\pi}\left(1-\frac{1}{\sqrt{M}}\right)\left[\int_{0}^{\pi/2}\mathcal{M}_{g_{\text{end}}}\left(\frac{g_{\rm
QAM}}{\sin^{2}\varphi}\right){\rm d}\varphi\right.\\\
&\left.-\left(1-\frac{1}{\sqrt{M}}\right)\int_{0}^{\pi/4}\mathcal{M}_{g_{\text{end}}}\left(\frac{g_{\rm
QAM}}{\sin^{2}\varphi}\right){\rm d}\varphi\right]\end{split}$ (44)
with $g_{\rm QAM}=3/\left[2\left(M-1\right)\right]$.
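The MGF-based recipe above is straightforward to implement. The sketch below uses the single-branch Rayleigh MGF $\mathcal{M}(s)=1/(1+s\overline{\gamma})$ as an illustrative stand-in for $\mathcal{M}_{g_{\text{end}}}(s)$, and checks the DBPSK rule $\overline{P}_{\rm b}=0.5\mathcal{M}(1)$ and the integral (43) with $M=2$ (BPSK) against the textbook Rayleigh closed forms:

```python
import math

gbar = 4.0                                # average SNR (illustrative)
mgf = lambda s: 1.0 / (1.0 + s * gbar)    # Rayleigh (m = 1) MGF, E[exp(-s g)]

def psk_asep(mgf, M, n=20_000):
    """Numerically evaluate eq. (43) with the midpoint rule."""
    g_psk = math.sin(math.pi / M) ** 2
    upper = math.pi - math.pi / M
    h = upper / n
    return (h / math.pi) * sum(mgf(g_psk / math.sin((k + 0.5) * h) ** 2)
                               for k in range(n))

# sanity checks against textbook closed forms for Rayleigh fading:
p_dbpsk = 0.5 * mgf(1.0)                  # DBPSK: 0.5 M(1)
assert abs(p_dbpsk - 0.5 / (1 + gbar)) < 1e-12

p_bpsk = psk_asep(mgf, 2)                 # BPSK is M-PSK with M = 2
closed = 0.5 * (1 - math.sqrt(gbar / (1 + gbar)))
assert abs(p_bpsk - closed) < 1e-6
```

The same routine, with the integration limits and arguments of (44), evaluates the $M$-QAM ASEP.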
### IV-B RS-based Transmission
The closed-form expressions for the statistics of pure and rate-selective RS
presented in Section III-B can be easily used to obtain the OP and ASEP of
both RS-based schemes as follows.
_OP:_ Using the $F_{g_{\text{end}}}(x)$ expressions given by (35) and (37)
over INID and IID Nakagami-$m$ fading channels, respectively, with integer
fading parameters, the end-to-end OP of pure RS is easily obtained as
$P_{\text{out}}=F_{g_{\text{end}}}\left(2^{2\mathcal{R}}-1\right).$ (45)
For rate-selective RS, a closed-form expression for $P_{\text{out}}$ over INID
Nakagami-$m$ fading with arbitrary $m_{\ell}$’s can be easily obtained by
substituting (5) and (19) into [10, eq. (15)], yielding
$P_{\text{out}}=\frac{\mathsf{\Gamma}\left[m_{0},C_{0}\left(2^{\mathcal{R}}-1\right)\right]}{\Gamma\left(m_{0}\right)}\prod_{k=1}^{L}\left\\{\mathcal{P}_{k}u\left(2^{2\mathcal{R}}-1\right)+\left(1-\mathcal{P}_{k}\right)\frac{\mathsf{\Gamma}\left[m_{k},C_{k}\left(2^{2\mathcal{R}}-1\right)\right]}{\Gamma\left(m_{k}\right)}\right\\}.$
(46)
_ASEP:_ Following the MGF-based approach and using the $M_{g_{\text{end}}}(s)$
expressions for pure RS given by (33) and (34) over INID and IID Nakagami-$m$
fading channels, respectively, with integer fading parameters, the ASEP of
several modulation formats for pure RS can be easily calculated. Similarly,
the ASEP for rate-selective RS can be easily evaluated using the
$M_{g_{\text{sel}}}(s)$ expressions given by (40) and (41) over INID and IID
Nakagami-$m$ fading channels, respectively, with integer values for the fading
parameters.
## V Numerical Results and Discussions
The analytical expressions of the previous section have been used to evaluate
the performance of ODF cooperative systems utilizing repetitive and RS-based
transmission over INID Nakagami-$m$ fading channels. Without loss of
generality, it has been assumed that $S$’s transmit rate $\mathcal{R}=1$
bps/Hz and that the fading parameters of the first and the second hop of all
$L$ links are equal, i.e., $m_{k}=m_{L+k}$ $\forall\,k$, as well as
$\overline{\gamma}_{k}=\overline{\gamma}_{L+k}$ $\forall\,k$. Moreover, for
the performance evaluation results we have considered the exponentially
decaying power profile
$\overline{\gamma}_{k}=\overline{\gamma}_{0}\exp\left(-k\delta\right)$, with
$\delta$ being the power decaying factor. Clearly, whenever IID fading is
considered, $\delta=0$ and
$\overline{\gamma}_{k}=\overline{\gamma}_{0}=\overline{\gamma}$ $\forall\,k$.
In all figures that follow, analytical results for the $P_{\text{out}}$ and
$\overline{P}_{\text{b}}$ match perfectly with equivalent performance
evaluation results obtained by means of Monte Carlo simulations, thus
validating our analysis.
Fig. 2 illustrates $P_{\text{out}}$ of the considered relaying schemes with
both repetitive and RS-based transmission as a function of $L$ for average
transmit SNR $\overline{\gamma}_{0}=5$ dB over IID Nakagami-$m$ fading
channels with different values of $m$. As clearly shown, $P_{\text{out}}$
improves with increasing $L$ for both RS-based transmission schemes whereas it
degrades severely for both repetitive ones. In particular, utilizing
repetitive transmission for $\overline{\gamma}_{0}=5$ dB is unavailing for
$L\geq 3$ irrespective of the fading conditions. As for RS-based transmission,
the gains from RS diminish as $m$ increases; the smaller the $m$, the greater
the $P_{\text{out}}$ gain from relaying. Furthermore, pure RS is inefficient
when $L$ is small and rate-selective RS leads to significant $P_{\text{out}}$
gains over the pure one as $m$ decreases. The impact of increasing $L$ on the
$P_{\text{out}}$ performance of all four relaying schemes is also demonstrated
in Fig. 3 for different values of $\overline{\gamma}_{0}$ and IID Rayleigh
fading conditions. As shown from this figure and Fig. 2 for both repetitive
and RS-based transmission, $P_{\text{out}}$ degrades with decreasing
$\overline{\gamma}_{0}$ and/or $m$. For example, although for
$\overline{\gamma}_{0}=5$ dB repetitive transmission does not benefit from
relaying for $L\geq 3$, this happens for $\overline{\gamma}_{0}=0$ dB when
$L\geq 2$. Moreover, for the latter transmit SNR, the improvement in the
$P_{\text{out}}$ of pure RS with increasing $L$ is very small, while the
$P_{\text{out}}$ performance of rate-selective RS is essentially independent
of $L$; the smaller the $\overline{\gamma}_{0}$, the smaller the gains from
relaying. This
happens because as fading conditions become more severe and transmit SNR
reduces, the gains from RS eventually decrease.
Using (7), Fig. 4 plots the decoding probability $1-\mathcal{P}_{k}$
versus $L$ for both repetitive and RS-based transmission over IID Nakagami-$m$
fading conditions for the source-to-relay channels, with different values of
$m$ and $\overline{\gamma}_{L+k}=\overline{\gamma}$ $\forall\,k,L$. As
expected, decreasing $m$ and/or $\overline{\gamma}$ decreases the decoding
probability. Moreover, this probability is severely degraded with increasing
$L$ for repetitive transmission, whereas it remains unchanged with increasing
$L$ for the RS-based one. The $P_{\text{out}}$ performance of pure and rate-
selective RS is depicted in Fig. 5 versus $\overline{\gamma}_{0}$ for $L=2$
relays over INID Nakagami-$m$ fading channels with different values of
$m_{0}$, $m_{1}$ and $m_{2}$. As expected, for both RS-based schemes,
$P_{\text{out}}$ improves with increasing $\overline{\gamma}_{0}$ and/or any
of the fading parameters. More importantly, it is shown that, as the fading
conditions of the relay to destination channels become more favorable than
those of the direct source to destination channel, RS-based transmission
improves $P_{\text{out}}$. On the contrary, whenever the fading conditions of
the relay to destination channels are similar to those of the direct link,
non-relay-assisted transmission results in lower $P_{\text{out}}$ than
RS-based transmission.
In Fig. 6, the $\overline{P}_{\text{b}}$ performance of square $4$-QAM is
plotted as a function of $L$ for average SNR per bit
$\overline{\gamma}_{\text{b}}=5$ dB for all second hop channels and
$\overline{\gamma}_{0}=0$ dB over IID Nakagami-$m$ fading channels with
different values of $m$. It is clearly shown that the
$\overline{P}_{\text{b}}$ of RS-based transmission improves with increasing
$L$ and/or $m$ whereas for both repetitive schemes, although
$\overline{P}_{\text{b}}$ improves with increasing $m$, it does not benefit
from increasing $L$. Interestingly, for RS-based transmission, as $L$ and/or
$m$ increase, the $\overline{P}_{\text{b}}$ improvement of pure compared with
rate-selective RS increases; as shown in (38), rate-selective RS might choose
direct transmission even in cases where the received SNR from the best
relay is larger than that from the source node. Assuming IID Rayleigh fading
conditions, Fig. 7 illustrates $\overline{P}_{\text{b}}$ of DBPSK versus $L$
for all considered relaying schemes and for different values of
$\overline{\gamma}_{0}$. As shown in this figure and in Fig. 6, for both
repetitive and RS-based transmission, $\overline{P}_{\text{b}}$ of both
modulations improves with increasing $\overline{\gamma}_{0}$ and/or $m$. It
was shown in Fig. 4 that more favorable fading conditions and larger
$\overline{\gamma}_{0}$ increase the decoding probability, thus both
repetitive transmission schemes become unavailing for larger $L$, whereas the
gains from RS increase. The $\overline{P}_{\text{b}}$ performance of DBPSK of
pure and rate-selective RS is depicted in Fig. 8 versus
$\overline{\gamma}_{\text{b}}$ for all second hop channels with $L=1$, $2$ and
$3$ relays and for $\overline{\gamma}_{0}=0$ dB over INID Nakagami-$m$ fading
channels with different values for the fading parameters. Clearly, for both
RS-based transmission schemes, $\overline{P}_{\text{b}}$ improves with
increasing $\overline{\gamma}_{\text{b}}$ and/or $m$ and/or $L$. It is shown
that, as $\overline{\gamma}_{\text{b}}$ increases and as the fading conditions
of the relay to destination channels become more favorable than those of the
direct source to destination channel, RS-based transmission results in larger
$\overline{P}_{\text{b}}$ improvement. More importantly, even for strong
source to destination channel conditions, as $\overline{\gamma}_{\text{b}}$
increases the improvement in $\overline{P}_{\text{b}}$ with RS-based
transmission becomes larger than that with non-relay-assisted transmission.
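As a self-contained check of the single-link ingredient behind the DBPSK curves: conditioned on the instantaneous SNR $\gamma$, DBPSK has bit error probability $\tfrac{1}{2}e^{-\gamma}$, and averaging over Nakagami-$m$ fading (Gamma-distributed SNR) gives the standard closed form $\tfrac{1}{2}(1+\overline{\gamma}_{\text{b}}/m)^{-m}$. This is a textbook MGF identity, not an expression taken from this paper; a minimal numerical confirmation:

```python
import numpy as np

def dbpsk_abep_nakagami(m, mean_snr):
    # E[0.5 * exp(-gamma)] for gamma ~ Gamma(m, mean_snr/m):
    # 0.5 times the Gamma MGF evaluated at -1.
    return 0.5 * (1.0 + mean_snr / m) ** (-m)

# Monte Carlo cross-check of the closed form.
rng = np.random.default_rng(0)
m, mean_snr = 2.0, 10 ** (5 / 10)            # m = 2, 5 dB average SNR per bit
snr = rng.gamma(shape=m, scale=mean_snr / m, size=1_000_000)
mc = np.mean(0.5 * np.exp(-snr))
print(dbpsk_abep_nakagami(m, mean_snr), mc)
```

The closed form decreases with $m$, matching the observation that less severe fading improves $\overline{P}_{\text{b}}$.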
## VI Conclusion
Cooperative diversity is a very promising avenue for future wireless
communications. However, it is necessary to investigate how relays should be
utilized in order to achieve certain objectives. In this paper, we presented a
general analytical framework for modelling and evaluating the performance of
four DF relaying schemes over INID Nakagami-$m$ fading channels. Moreover, we
obtained closed-form expressions for the OP and the ASEP performance of the RS
and repetitive schemes when MRD or SD are employed at the destination node. We
concluded that RS performs better than repetitive transmission when CSI for RS
is perfect. Furthermore, it was shown that relaying should be switched off
when the source to destination direct link is sufficiently strong and the aim
is to minimize OP. However, in terms of ASEP, relaying was shown to be always
beneficial.
FIGURES’ CAPTIONS
Fig. 1: Illustration of a dual-hop cooperative wireless system with $L+2$
wireless nodes: a source node $S$, $L$ relay station nodes $R_{k}$,
$k=1,2,\ldots,L$, and a destination node $D$.
Fig. 2: End-to-end OP, $P_{\text{out}}$, versus the number of relay nodes,
$L$, for average transmit SNR $\overline{\gamma}_{0}=5$ dB over IID
Nakagami-$m$ fading channels with different values of $m$: (A) Pure RS, (B)
Rate-Selective RS, (C) Repetitive transmission with MRD and (D) Repetitive
transmission with SD.
Fig. 3: End-to-end OP, $P_{\text{out}}$, versus the number of relay nodes,
$L$, for different average transmit SNRs over IID Rayleigh fading channels:
(A) Pure RS, (B) Rate-Selective RS, (C) Repetitive transmission with MRD and
(D) Repetitive transmission with SD.
Fig. 4: Decoding probability, $1-\mathcal{P}_{k}$, versus the number of relay
nodes, $L$, over IID Nakagami-$m$ fading channels with different values of $m$
and $\overline{\gamma}_{L+k}=\overline{\gamma}$ $\forall\,k,L$: (A) Repetitive
and (B) RS-based transmission.
Fig. 5: End-to-end OP, $P_{\text{out}}$, of RS versus the average transmit SNR
per bit, $\overline{\gamma}_{0}$, for $L=2$ relay nodes over INID Nakagami-$m$
fading channels: (A) $m_{0}=m_{1}=m_{2}=1$ and $\delta=0.3$, (B) $m_{0}=1$,
$m_{1}=m_{2}=6$ and $\delta=0.3$ and (C) $m_{0}=m_{1}=m_{2}=3$ and $\delta=0$.
Fig. 6: ABEP, $\overline{P}_{\text{b}}$, of square $4$-QAM versus the number
of relay nodes, $L$, for average transmit SNR $\overline{\gamma}_{\text{b}}=5$
dB and $\overline{\gamma}_{0}=0$ dB over IID Nakagami-$m$ fading channels with
different values of $m$: (A) Pure RS, (B) Rate-Selective RS, (C) Repetitive
transmission with MRD and (D) Repetitive transmission with SD.
Fig. 7: ABEP, $\overline{P}_{\text{b}}$, of DBPSK versus the number of relay
nodes, $L$, for different average transmit SNRs per bit and
$\overline{\gamma}_{0}=0$ dB, over IID Rayleigh fading channels: (A) Pure RS,
(B) Rate-Selective RS, (C) Repetitive transmission with MRD and (D) Repetitive
transmission with SD.
Fig. 8: ABEP, $\overline{P}_{\text{b}}$, of DBPSK for RS versus the average
relay SNR per bit, $\overline{\gamma}_{\text{b}}$, and for
$\overline{\gamma}_{0}=0$ dB over INID Nakagami-$m$ fading channels: (A)
$L=1$, $m_{0}=0.5$, $m_{1}=1$ and $\delta=0.1$, (B) $L=2$, $m_{0}=1$,
$m_{1}=m_{2}=2$ and $\delta=0$ and (C) $L=3$, $m_{\ell}=3$ for $\ell=0,1,2$
and $3$ and $\delta=0$.
# On the Scaling Limits of Determinantal Point Processes with Kernels Induced
by Sturm–Liouville Operators
Folkmar Bornemann Zentrum Mathematik – M3, Technische Universität München,
80290 München, Germany bornemann@ma.tum.de
###### Abstract.
By applying an idea of ?, we study various scaling limits of determinantal
point processes with trace class projection kernels given by spectral
projections of selfadjoint Sturm–Liouville operators. Instead of studying the
convergence of the kernels as functions, the method directly addresses the
strong convergence of the induced integral operators. We show that, for this
notion of convergence, the Dyson, Airy, and Bessel kernels are universal in
the bulk, soft-edge, and hard-edge scaling limits. This result allows us to
give a short and unified derivation of the known formulae for the scaling
limits of the classical unitary random matrix ensembles (GUE, LUE/Wishart,
JUE/MANOVA).
###### 2010 Mathematics Subject Classification:
15B52, 34B24, 33C45
## 1\. Introduction
We consider determinantal point processes on an interval $\Lambda=(a,b)$ with
trace class projection kernel
$K_{n}(x,y)=\sum_{j=0}^{n-1}\phi_{j}(x)\phi_{j}(y),$ (1.1)
where $\phi_{0},\phi_{1},\ldots,\phi_{n-1}$ are orthonormal in
$L^{2}(\Lambda)$; each $\phi_{j}$ may have some dependence on $n$ that we
suppress from the notation. We recall (see, e.g., ?, §4.2) that for such
processes the joint probability density of the $n$ points is given by
$p_{n}(x_{1},\ldots,x_{n})=\frac{1}{n!}\det_{i,j=1}^{n}K_{n}(x_{i},x_{j}),$
the mean counting probability is given by the density
$\rho_{n}(x)=n^{-1}K_{n}(x,x),$
and the gap probabilities are given, by the inclusion-exclusion principle, in
terms of a Fredholm determinant, namely
$E_{n}(J)={\mathbb{P}}(\\{x_{1},\ldots,x_{n}\\}\cap
J=\emptyset)=\det(I-{\mathbb{1}}_{J}K_{n}{\mathbb{1}}_{J}).$
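Such Fredholm determinants can be evaluated numerically with a simple quadrature (Nyström-type) discretization. The sketch below, using the Dyson sine kernel on an interval $J=(0,s)$ with an illustrative grid size, is a toy computation under these assumptions, not part of the paper's argument:

```python
import numpy as np

def fredholm_det(kernel, a, b, n=40):
    # Nystrom-type approximation of det(I - K) on [a, b]:
    # Gauss-Legendre nodes x_i, weights w_i, and the symmetrized
    # matrix sqrt(w_i) K(x_i, x_j) sqrt(w_j).
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * x + 0.5 * (b + a)
    w = 0.5 * (b - a) * w
    sw = np.sqrt(w)
    A = sw[:, None] * kernel(x[:, None], x[None, :]) * sw[None, :]
    return np.linalg.det(np.eye(n) - A)

def dyson_kernel(x, y):
    # sin(pi(x-y)) / (pi(x-y)), with the value 1 on the diagonal.
    d = np.pi * (x - y)
    return np.where(np.abs(d) < 1e-12, 1.0, np.sin(d) / np.where(d == 0, 1.0, d))

# Gap probability E(J) = det(I - 1_J K 1_J) for J = (0, s): decreasing in s.
for s in (0.1, 0.5, 1.0, 2.0):
    print(s, fredholm_det(dyson_kernel, 0.0, s))
```

For small $s$ the determinant behaves like $1-s$, since $\operatorname{tr}K=s$ for this kernel of unit diagonal density.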
The various scaling limits are usually derived from a pointwise convergence of
the kernel function $K_{n}(x,y)$ obtained from considering the large $n$
asymptotic of the eigenfunctions $\phi_{j}$, which can be technically _very_
involved (based on the two-scale Plancherel–Rotach asymptotics of classical
orthogonal polynomials or, methodically more generally, on the asymptotics of
Riemann–Hilbert problems; see, e.g., ?, ?, ?, ?, ?, and ?). ? suggested, for
discrete point processes, a different, conceptually and technically much
simpler approach based on selfadjoint difference operators. We will show that
their method, generalized to selfadjoint Sturm–Liouville operators, allows us
to give a short and unified derivation of the various scaling limits for the
unitary random matrix ensembles (GUE, LUE/Wishart, JUE/MANOVA) that are based
on the classical orthogonal polynomials (Hermite, Laguerre, Jacobi).
### The Borodin–Olshanski Method
The method proceeds along three steps: First, we identify the induced integral
operator $K_{n}$ as the spectral projection
$K_{n}={\mathbb{1}}_{(-\infty,0)}(L_{n})$
of some selfadjoint ordinary differential operator $L_{n}$ on
$L^{2}(\Lambda)$. Any scaling of the point process by
$x=\sigma_{n}\xi+\mu_{n}$ ($\sigma_{n}\neq 0$) yields, in turn, the scaled
objects
$\tilde{E}_{n}(J)=\det(I-{\mathbb{1}}_{J}\tilde{K}_{n}{\mathbb{1}}_{J}),\quad\tilde{K}_{n}={\mathbb{1}}_{(-\infty,0)}(\tilde{L}_{n}),$
where $\tilde{L}_{n}$ is a selfadjoint differential operator on
$L^{2}(\tilde{\Lambda}_{n})$,
$\tilde{\Lambda}_{n}=(\tilde{a}_{n},\tilde{b}_{n})$.
Second, if $\tilde{\Lambda}_{n}\subset\tilde{\Lambda}=(\tilde{a},\tilde{b})$
with $\tilde{a}_{n}\to\tilde{a}$, $\tilde{b}_{n}\to\tilde{b}$, we aim for a
selfadjoint operator $\tilde{L}$ on $L^{2}(\tilde{\Lambda})$ with a core $C$
such that eventually $C\subset D(\tilde{L}_{n})$ and
$\tilde{L}_{n}u\,\to\,\tilde{L}u\qquad(u\in C).$ (1.2)
The point is that, if the test functions from $C$ are particularly nice, such
a convergence is just a simple consequence of the _locally uniform convergence
of the coefficients_ of the differential operators $\tilde{L}_{n}$—a
convergence that is, typically, an easy calculus exercise. Now, given (1.2),
the concept of _strong resolvent convergence_ (see Theorem 2) immediately
yields (by “$\overset{s}{\longrightarrow}$” we denote the strong convergence
of operators acting on $L^{2}$), if $0\not\in\sigma_{pp}(\tilde{L})$,
$K_{n}{\mathbb{1}}_{\tilde{\Lambda}_{n}}={\mathbb{1}}_{(-\infty,0)}(\tilde{L}_{n}){\mathbb{1}}_{\tilde{\Lambda}_{n}}\,\overset{s}{\longrightarrow}\,{\mathbb{1}}_{(-\infty,0)}(\tilde{L}).$
Third, we take an interval $J\subset\tilde{\Lambda}$, eventually satisfying
$J\subset\tilde{\Lambda}_{n}$, such that the operator
${\mathbb{1}}_{(-\infty,0)}(\tilde{L}){\mathbb{1}}_{J}$ is trace class with
kernel $\tilde{K}(x,y)$ (which can be obtained from the generalized
eigenfunction expansion of $\tilde{L}$, see §A.2). Then, we immediately get
the strong convergence
$\tilde{K}_{n}{\mathbb{1}}_{J}\,\overset{s}{\longrightarrow}\,\tilde{K}\,{\mathbb{1}}_{J}.$
###### Remark 1.1.
? sketches the Borodin–Olshanski method, applied to the bulk and edge scaling
of GUE, as a heuristic device. Because of the microlocal methods that he uses
to calculate the projection ${\mathbb{1}}_{(-\infty,0)}(\tilde{L})$, he puts
his sketch under the headline “The Dyson and Airy kernels of GUE via
semiclassical analysis”.
### Scaling Limits and Other Modes of Convergence
Given that one just has to establish the convergence of the coefficients of a
differential operator (instead of an asymptotic of its eigenfunctions), the
Borodin–Olshanski method is an extremely simple device to determine all the
scalings $x=\sigma_{n}\xi+\mu_{n}$ that would yield some meaningful limit
$\tilde{K}_{n}{\mathbb{1}}_{J}\,\to\,\tilde{K}\,{\mathbb{1}}_{J}$, namely in
the strong operator topology. Other modes of convergence have been studied in
the literature, ranging from some weak convergence of $k$-point correlation
functions over pointwise convergence of the kernel functions to the
convergence of gap probabilities, that is,
$\tilde{E}_{n}(J)=\det(I-{\mathbb{1}}_{J}\tilde{K}_{n}{\mathbb{1}}_{J})\to\det(I-{\mathbb{1}}_{J}\tilde{K}\,{\mathbb{1}}_{J})=\tilde{E}(J).$
From a probabilistic point of view, the latter convergence is of particular
interest and has been shown in at least three ways:
1. (1)
By Hadamard’s inequality, convergence of the determinants follows directly
from the locally uniform convergence of the kernels $K_{n}$ (?, Lemma 3.4.5)
and, for unbounded $J$, from further large deviation estimates (?, Lemma
3.3.2). This way, the limit gap probabilities in the bulk and soft edge
scaling limit of GUE can rigorously be established (see, e.g., ?, §§3.5 and
3.7).
2. (2)
Since $A\mapsto\det(I-A)$ is continuous with respect to the trace class norm
(?, Thm. 3.4),
$\tilde{K}_{n}{\mathbb{1}}_{J}\,\to\,\tilde{K}\,{\mathbb{1}}_{J}$ in trace
class norm would generally suffice. Such a convergence can be proved by
factorizing the trace class operators into Hilbert–Schmidt operators and
obtaining the $L^{2}$-convergence of the factorized kernels once more from
locally uniform convergence, see the work of ? on the scaling limits of the
LUE/Wishart ensembles (?) and that on the limits of the JUE/MANOVA ensembles
(?).
3. (3)
Since ${\mathbb{1}}_{J}\tilde{K}_{n}{\mathbb{1}}_{J}$ and
${\mathbb{1}}_{J}\tilde{K}\,{\mathbb{1}}_{J}$ are selfadjoint and positive
semi-definite, yet another way is by observing that the convergence
$\tilde{K}_{n}{\mathbb{1}}_{J}\,\to\,\tilde{K}\,{\mathbb{1}}_{J}$ in trace
class norm is, for continuous kernels, equivalent (?, Thm. 2.20) to the
combination of both, the convergence
$\tilde{K}_{n}{\mathbb{1}}_{J}\,\to\,\tilde{K}\,{\mathbb{1}}_{J}$ in the weak
operator topology and the convergence of the traces
$\displaystyle\int_{J}\tilde{K}_{n}(\xi,\xi)\,d\xi\to\int_{J}\tilde{K}(\xi,\xi)\,d\xi.$
(T)
Once again, these convergences follow from locally uniform convergence of the
kernels; see ? for an application of this method to the bulk scaling limit of
GUE.
Since convergence in the strong operator topology implies convergence in the
weak one, the Borodin–Olshanski method would thus establish the convergence of
gap probabilities if we were only able to show condition (T) by some
additional, similarly short and simple argument. Note that, by the ideal
property of the trace class, condition (T) implies the same condition for all
$J^{\prime}\subset J$. We fall, however, short of conceiving a proof strategy
for condition (T) that would be independent of all the laborious proofs of
locally uniform convergence of the kernels.
###### Remark 1.2.
Contrary to the discrete case originally considered by Borodin and Olshanski,
it is also not immediate to infer from the strong convergence of the induced
integral operators the pointwise convergence of the kernels,
$\tilde{K}_{n}(\xi,\eta)\to\tilde{K}(\xi,\eta).$
For instance, in §2, we will need just a single such instance, namely
$\displaystyle\tilde{K}_{n}(0,0)\to\tilde{K}(0,0),$ (K0)
to prove a limit law
$\tilde{\rho}_{n}(t)\,dt\overset{w}{\longrightarrow}\tilde{\rho}(t)\,dt$ for
the mean counting probability. Using mollified Dirac deltas, pointwise
convergence would generally follow, for continuously differentiable
$\tilde{K}_{n}(\xi,\eta)$, if we were able to bound, locally uniformly, the
gradient of $\tilde{K}_{n}(\xi,\eta)$. Then, by dominated convergence,
criterion (T) would already be satisfied if we established an integrable bound
of $\tilde{K}_{n}(\xi,\xi)$ on $J$. Since the scaling laws are, however,
maneuvering just at the edge between trivial cases (i.e., zero limits) and
divergent cases, it is conceivable that a proof of such bounds might not be
significantly simpler than a proof of convergence of the gap probabilities
itself.
### The Main Result
Using the Borodin–Olshanski method, we will prove the following general result
on selfadjoint Sturm–Liouville operators; a result that adds a further class
of problems to the _universality_ (?) of the Dyson, Airy, and Bessel kernels in
the bulk, soft-edge, and hard-edge scaling limits.
###### Theorem 1.
Consider a selfadjoint realization $L_{n}$ of the formally selfadjoint
Sturm–Liouville operator (since, in this paper, we always consider a
particular selfadjoint realization of a formal differential operator, we will
use the same letter to denote both)
$L_{n}=-\frac{d}{dx}\left(p(x)\frac{d}{dx}\right)+q_{n}(x)-\lambda_{n}$
on $\Lambda=(-\infty,\infty)$, $\Lambda=(0,\infty)$, or $\Lambda=(0,1)$ with
$p,q_{n}\in C^{\infty}(\Lambda)$ and $p(x)>0$ for all $x\in\Lambda$ such that,
for $t\in\Lambda$ and $n\to\infty$,
$n^{-2\kappa^{\prime}}\lambda_{n}\sim\omega,\qquad
n^{-2\kappa^{\prime}}q_{n}(n^{\kappa}t)\sim\tilde{q}(t),\qquad
n^{2\kappa^{\prime\prime}}p(n^{\kappa}t)\sim\tilde{p}(t)>0,$ (1.3)
asymptotically up to an error $O(n^{-1})$; the scaling exponents should
satisfy
$\kappa+\kappa^{\prime}+\kappa^{\prime\prime}=1.$ (1.4)
We assume that these expansions can be differentiated at least twice (we say
that an expansion $f_{n}(t)-f(t)=O(1/n)$ can be differentiated if
$f_{n}^{\prime}(t)-f^{\prime}(t)=O(1/n)$), that the roots of
$\tilde{q}(t)-\omega$ are simple and that the spectral projection
$K_{n}={\mathbb{1}}_{(-\infty,0)}(L_{n})$ satisfies $\operatorname{tr}K_{n}=n$.
Let $x=\sigma_{n}\xi+\mu_{n}$ induce the transformed projection
$\tilde{K}_{n}$ and let $x=n^{\kappa}t$ induce the transformed mean counting
probability density $\tilde{\rho}_{n}$. Then the following scaling limits
hold.
* •
Bulk scaling limit: given $t\in\Lambda$ with $\tilde{q}(t)<\omega$, the
scaling
$\mu_{n}=n^{\kappa}t,\qquad\sigma_{n}=\frac{n^{\kappa-1}}{\tilde{\rho}(t)},\qquad\tilde{\rho}(t)=\frac{1}{\pi}\sqrt{\frac{(\omega-\tilde{q}(t))_{+}}{\tilde{p}(t)}},$
yields, for any bounded interval $J$, the strong operator limit
$\tilde{K}_{n}{\mathbb{1}}_{J}\,\overset{s}{\longrightarrow}\,K_{\text{\rm
Dyson}}\,{\mathbb{1}}_{J}$. Under condition (K0) and if $\tilde{\rho}$ has
unit mass on $\Lambda$, there is the limit law
$\tilde{\rho}_{n}(t)\,dt\overset{w}{\longrightarrow}\tilde{\rho}(t)\,dt.$
* •
Soft-edge scaling limit: given $t_{*}\in\Lambda$ with
$\tilde{q}(t_{*})=\omega$, the scaling
$\mu_{n}=n^{\kappa}t_{*},\qquad\sigma_{n}=n^{\kappa-\frac{2}{3}}\left(\frac{\tilde{p}(t_{*})}{\tilde{q}^{\prime}(t_{*})}\right)^{1/3},$
yields, for $s>-\infty$ and any interval $J\subset(s,\infty)$, the limit
$\tilde{K}_{n}{\mathbb{1}}_{J}\,\overset{s}{\longrightarrow}\,K_{\text{\rm
Airy}}\,{\mathbb{1}}_{J}$.
* •
Hard-edge scaling limit: given that $\Lambda=(0,\infty)$ or $\Lambda=(0,1)$
with
$p(0)=0,\quad p^{\prime}(0)>0,\quad
q_{n}(x)=q(x)=\gamma^{2}x^{-1}+O(1)\;\,(x\to 0),$ (1.5)
the scaling
$\mu_{n}=0,\qquad\sigma_{n}=\frac{p^{\prime}(0)}{4\omega
n^{2\kappa^{\prime}}},\qquad\alpha=\frac{2\gamma}{\sqrt{p^{\prime}(0)}},$
yields, for any bounded interval $J\subset(0,\infty)$, the limit
$\tilde{K}_{n}{\mathbb{1}}_{J}\,\overset{s}{\longrightarrow}\,K_{\text{\rm
Bessel}}^{(\alpha)}\,{\mathbb{1}}_{J}$. Here, if $0\leqslant\alpha<1$, the
selfadjoint realization $L_{n}$ is defined by means of the boundary condition
$2xu^{\prime}(x)-\alpha u(x)=o(x^{-\alpha/2})\qquad(x\to 0).$ (1.6)
###### Remark 1.3.
Whether the intervals $J$ can be chosen unbounded or not depends on whether
the limit operator $K\,{\mathbb{1}}_{J}$ would then be trace class or not, see
the explicit formulae for the traces in the appendix: only in the former case
do we get the representation of the scaling limit in terms of a particular
integral kernel. Note that we can never use $J=\Lambda$ since
$\operatorname{tr}K_{n}=n\to\infty$.
### Outline of the paper
The proof of Theorem 1 is the subject of §2. In §3 we apply it to the
classical orthogonal polynomials, which yields a short and unified derivation
of the known formulae for the scaling limits of the classical unitary random
matrix ensembles (GUE, LUE/Wishart, JUE/MANOVA). In fact, by a result of
Tricomi, the only input needed is the weight function $w$ of the orthogonal
polynomials; from there one gets, in a purely formula-based fashion (by simple
manipulations that can easily be coded in any computer algebra system), first
to the coefficients $p$ and $q_{n}$ as well as to the eigenvalues
$\lambda_{n}$ of the Sturm–Liouville operator $L_{n}$, and then, by applying
Theorem 1, to the particular scaling limits.
To emphasize that our main result and its application is largely independent
of concretely identifying the kernel $\tilde{K}$ of the spectral projection
${\mathbb{1}}_{(-\infty,0)}(\tilde{L})$, we postpone this identification to
the appendix: there, using generalized eigenfunction expansions, we calculate
the Dyson, Airy, and Bessel kernels directly from the limit differential
operators $\tilde{L}$.
## 2\. Proof of the Main Result for Sturm–Liouville Operators
To begin with, we first state the expansions of the coefficients, which also
motivate the particular regularity assumptions made in Theorem 1.
### Expansions
Since $L_{n}$ is a selfadjoint realization of
$L_{n}=-\frac{d}{dx}\left(p(x)\frac{d}{dx}\right)+q_{n}(x)-\lambda_{n}$
with $p,q_{n}\in C^{\infty}(\Lambda)$ and $p(x)>0$ for $x\in\Lambda$, we have
$C^{\infty}_{0}(\Lambda)\subset D(L_{n})$. The scaling
$x=\sigma_{n}\xi+n^{\kappa}t\qquad(\sigma_{n}\neq 0)$ (2.1)
(with $t\in\Lambda$ a _parameter_) yields $\tilde{\Lambda}=(-\infty,\infty)$
in each of the following cases:
* •
$\Lambda=(-\infty,\infty)$;
* •
$\Lambda=(0,\infty)$, $\kappa\geqslant 0$, $\sigma_{n}\to 0$;
* •
$\Lambda=(0,1)$, $\kappa=0$, $\sigma_{n}\to 0$.
By virtue of $\tilde{K}_{n}(\xi,\eta)\,d\eta=K_{n}(x,y)\,dy$ (with
$y=\sigma_{n}\eta+n^{\kappa}t$), such a scaling induces
$\tilde{K}_{n}(\xi,\eta)=\sigma_{n}K_{n}(\sigma_{n}\xi+n^{\kappa}t,\sigma_{n}\eta+n^{\kappa}t).$
Because of $\sigma_{n}^{2}>0$ and $p(n^{\kappa}t)>0$, we see that
$\tilde{K}_{n}={\mathbb{1}}_{(-\infty,0)}(\tilde{L}_{n})$ with
$\tilde{L}_{n}=-\frac{1}{p(n^{\kappa}t)}\frac{d}{d\xi}\left(p(\sigma_{n}\xi+n^{\kappa}t)\frac{d}{d\xi}\right)+\frac{\sigma_{n}^{2}}{p(n^{\kappa}t)}\left(q_{n}(\sigma_{n}\xi+n^{\kappa}t)-\lambda_{n}\right).$
Now, if $\sigma_{n}=o(n^{\kappa-1/2})$, we get from (1.3) and (1.4) by Taylor
expansion that (the error term is
$O(\max(\sigma_{n}n^{-\kappa},\sigma_{n}^{2}n^{1-2\kappa}))$, which amounts
to $O(n^{-1})$ in the bulk scaling limit and to $O(n^{-1/3})$ in the
soft-edge scaling limit)
$\tilde{L}_{n}u(\xi)\,=\,-u^{\prime\prime}(\xi)+\frac{\sigma_{n}^{2}n^{2-2\kappa}}{\tilde{p}(t)}\Big{(}\tilde{q}(t)-\omega+\sigma_{n}n^{-\kappa}\tilde{q}^{\prime}(t)\cdot\xi\Big{)}u(\xi)+o(1).$
### Bulk Scaling Limit
If $\tilde{q}(t)\neq\omega$, by choosing
$\sigma_{n}=\sigma_{n}(t)=\pi\,n^{\kappa-1}\sqrt{\frac{\tilde{p}(t)}{|\omega-\tilde{q}(t)|}},$
we get $\tilde{L}_{n}u(\xi)\,=\,-u^{\prime\prime}(\xi)-s\pi^{2}u(\xi)+o(1)$;
that is, $\tilde{L}_{n}u\to\tilde{L}u$ with the limit
$\tilde{L}=-\frac{d^{2}}{d\xi^{2}}-s\pi^{2},\qquad s=\operatorname{sign}(\omega-\tilde{q}(t)).$
We recall that $\tilde{L}$ is essentially selfadjoint on
$C^{\infty}_{0}(\tilde{\Lambda})$ and that its unique selfadjoint extension
has absolutely continuous spectrum:
$\sigma(\tilde{L})=\sigma_{\text{ac}}(\tilde{L})=[-s\pi^{2},\infty)$. Thus,
for $s=-1$, the spectral projection ${\mathbb{1}}_{(-\infty,0)}(\tilde{L})$ is
zero. For $s=1$, the spectral projection can be calculated by a generalized
eigenfunction expansion, yielding the _Dyson kernel_ (A.3); see Lemma A.1.
### Limit Law
The result for the bulk scaling limit allows us, in passing, to calculate a
limit law of the mean counting probability $\rho_{n}(x)\,dx$. We observe
that, by virtue of $\rho_{n}(x)\,dx=\tilde{\rho}_{n}(t)\,dt$, the scaling
$x=n^{\kappa}t$ transforms $\rho_{n}(x)$ into (it is the cancellation of the
powers of $n$ in the last equality that called for assumption (1.4))
$\tilde{\rho}_{n}(t)=n^{\kappa}\rho_{n}(n^{\kappa}t)=n^{\kappa-1}K_{n}(n^{\kappa}t,n^{\kappa}t)=\frac{n^{\kappa-1}}{\sigma_{n}(t)}\tilde{K}_{n}(0,0)=\frac{1}{\pi}\sqrt{\frac{|\omega-\tilde{q}(t)|}{\tilde{p}(t)}}\tilde{K}_{n}(0,0).$
Thus, to get to a limit, we have to _assume_ condition (K0), that is
(following Knuth, the bracketed notation $[S]$ stands for $1$ if the
statement $S$ is true and $0$ otherwise),
$\tilde{K}_{n}(0,0)\to[\tilde{q}(t)<\omega]\,K_{\text{\rm
Dyson}}(0,0)=[\tilde{q}(t)<\omega].$
We then get, for $\tilde{q}(t)\neq\omega$ (that is, almost everywhere in
$\Lambda$),
$\tilde{\rho}_{n}(t)\to\tilde{\rho}(t)=\frac{1}{\pi}\sqrt{\frac{(\omega-\tilde{q}(t))_{+}}{\tilde{p}(t)}}.$
Hence, by Helly’s selection theorem, the probability measure
$\tilde{\rho}_{n}(t)\,dt$ converges vaguely to $\tilde{\rho}(t)\,dt$, which
is, in general, just a sub-probability measure. If, however, it is checked
that $\tilde{\rho}(t)\,dt$ has unit mass, the convergence is weak.
### Soft-Edge Scaling Limit
If $\tilde{q}(t_{*})=\omega$, by choosing999Note that, by the assumed
simplicity of the roots of $\tilde{q}(t)-\omega$, we have
$\tilde{q}^{\prime}(t_{*})\neq 0$.
$\sigma_{n}=\sigma_{n}(t_{*})=n^{\kappa-2/3}\left(\frac{\tilde{p}(t_{*})}{\tilde{q}^{\prime}(t_{*})}\right)^{1/3},$
we get $\tilde{L}_{n}u(\xi)\,=\,-u^{\prime\prime}(\xi)+\xi u(\xi)+o(1)$; that
is, $\tilde{L}_{n}u\to\tilde{L}u$ with the limit
$\tilde{L}=-\frac{d^{2}}{d\xi^{2}}+\xi.$
We recall that $\tilde{L}$ is essentially selfadjoint on
$C^{\infty}_{0}(\tilde{\Lambda})$ and that its unique selfadjoint extension
has absolutely continuous spectrum:
$\sigma(\tilde{L})=\sigma_{\text{ac}}(\tilde{L})=(-\infty,\infty)$. The
spectral projection ${\mathbb{1}}_{(-\infty,0)}(\tilde{L})$ can
straightforwardly be calculated by a generalized eigenfunction expansion,
yielding the _Airy kernel_ (A.4); see Lemma A.2.
### Hard-Edge Scaling Limit
In the case $a=0$, we take the scaling
$x=\sigma_{n}\xi\qquad(\sigma_{n}>0),$
with $\sigma_{n}=o(1)$ appropriately chosen, to explore the vicinity of this
“hard edge”; note that such a scaling yields $\tilde{\Lambda}=(0,\infty)$. To
simplify we make the assumptions stated in (1.5). We see that
$\tilde{K}_{n}={\mathbb{1}}_{(-\infty,0)}(\tilde{L}_{n})$ with
$\tilde{L}_{n}=-\frac{4}{p^{\prime}(0)\sigma_{n}}\frac{d}{d\xi}\left(p(\sigma_{n}\xi)\frac{d}{d\xi}\right)+\frac{4\sigma_{n}}{p^{\prime}(0)}\left(q(\sigma_{n}\xi)-\lambda_{n}\right).$
If we choose
$\sigma_{n}=n^{-2\kappa^{\prime}}\frac{p^{\prime}(0)}{4\omega},$
we get101010The error term is $O(n^{-\min(1,2\kappa^{\prime})})$, which
amounts to $O(n^{-1})$ if $\kappa^{\prime}\geqslant\frac{1}{2}$ (as
throughout §3). $\tilde{L}_{n}u(\xi)\,=\,-4(\xi
u^{\prime}(\xi))^{\prime}+\left(\alpha^{2}\xi^{-1}-1\right)u(\xi)+o(1)$; that
is, $\tilde{L}_{n}u\to\tilde{L}u$,
$\tilde{L}=-4\frac{d}{d\xi}\left(\xi\frac{d}{d\xi}\right)+\alpha^{2}\xi^{-1}-1,\qquad\alpha=\frac{2\gamma}{\sqrt{p^{\prime}(0)}}.$
We recall that, if $\alpha\geqslant 1$, the limit $\tilde{L}$ is essentially
selfadjoint on $C^{\infty}_{0}(\tilde{\Lambda})$ and that the spectrum of its
unique selfadjoint extension is absolutely continuous:
$\sigma(\tilde{L})=\sigma_{\text{ac}}(\tilde{L})=[-1,\infty)$. The spectral
projection can be calculated by a generalized eigenfunction expansion,
yielding the _Bessel kernel_ (A.5); see Lemma A.3.
###### Remark 2.1.
The theorem also holds in the case $0\leqslant\alpha<1$ if the particular
selfadjoint realization $L_{n}$ is defined by the boundary condition (1.6);
see Remark A.1.
## 3. Projection Kernels Associated to Classical Orthogonal Polynomials
In this section we apply Theorem 1 to the kernels associated with the
classical orthogonal polynomials, that is, the Hermite, Laguerre, and Jacobi
polynomials. In random matrix theory, the thus induced determinantal processes
are modeled by the spectra of the Gaussian Unitary Ensemble (GUE), the
Laguerre Unitary Ensemble (LUE) or Wishart ensemble, and the Jacobi Unitary
Ensemble (JUE) or MANOVA111111MANOVA = multivariate analysis of variance
ensemble.
To prepare the study of the individual cases, we first discuss their common
structure. Let $P_{n}(x)$ be the sequence of classical orthogonal polynomials
belonging to the weight function $w(x)$ on the interval $(a,b)$. We normalize
$P_{n}(x)$ such that $\langle\phi_{n},\phi_{n}\rangle=1$, where
$\phi_{n}(x)=w(x)^{1/2}P_{n}(x)$. The functions $\phi_{n}$ form a complete
orthogonal set in $L^{2}(a,b)$; conceptual proofs of the completeness can be
found, e.g., in ? (§5.7 for the Jacobi polynomials, §6.5 for the Hermite and
Laguerre polynomials). By a result of Tricomi (see ?, §10.7), the $P_{n}(x)$
satisfy the eigenvalue problem
$-\frac{1}{w(x)}\frac{d}{dx}\left(p(x)w(x)\frac{d}{dx}P_{n}(x)\right)=\lambda_{n}P_{n}(x),\quad\lambda_{n}=-n(r^{\prime}+\tfrac{1}{2}(n+1)p^{\prime\prime}),$
where $p(x)$ is a _quadratic_ polynomial121212With the sign chosen such that
$p(x)>0$ for $x\in(a,b)$. and $r(x)$ a _linear_ polynomial such that
$\frac{w^{\prime}(x)}{w(x)}=\frac{r(x)}{p(x)}.$
In terms of $\phi_{n}$, a simple calculation shows that
$-\frac{d}{dx}\left(p(x)\frac{d}{dx}\phi_{n}(x)\right)+q(x)\phi_{n}(x)=\lambda_{n}\phi_{n}(x),\quad
q(x)=\frac{r(x)^{2}}{4p(x)}+\frac{r^{\prime}(x)}{2}.$
Therefore, by the completeness of the $\phi_{n}$, the formally selfadjoint
Sturm–Liouville operator $L=-\frac{d}{dx}p(x)\frac{d}{dx}+q(x)$ has a
particular selfadjoint realization (which we continue to denote by the letter
$L$) with spectrum
$\sigma(L)=\\{\lambda_{0},\lambda_{1},\lambda_{2},\ldots\\}$
and corresponding eigenfunctions $\phi_{n}$. Hence, the projection kernel
(1.1) induces an integral operator $K_{n}$ with $\operatorname{tr}K_{n}=n$
that satisfies
$K_{n}={\mathbb{1}}_{(-\infty,0)}(L_{n}),\qquad L_{n}=L-\lambda_{n}.$
Note that this relation remains true if we let some parameters of the
weight $w$ (and, therefore, of the functions $\phi_{j}$) depend on $n$.
For the scaling limits of $K_{n}$, we are now in the realm of Theorem 1: given
the weight $w(x)$ as the only input, all the other quantities can now simply
be obtained by routine calculations.
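These routine calculations are easy to automate. The following sketch (not part of the paper; it assumes the SymPy library is available) derives $r$, $q$, and $\lambda_{n}$ from the Hermite weight $w(x)=e^{-x^{2}}$ alone and reproduces the data stated in the next subsection:

```python
# Symbolic derivation of the Sturm-Liouville data from a weight function.
# Sketch only (not from the paper); assumes SymPy. Hermite case: w(x) = e^{-x^2},
# p(x) = 1 on (-inf, inf).
import sympy as sp

x, n = sp.symbols('x n')
w = sp.exp(-x**2)
p = sp.Integer(1)

# r is the linear polynomial with w'/w = r/p
r = sp.simplify(p * sp.diff(w, x) / w)            # -2*x

# q = r^2/(4p) + r'/2
q = sp.expand(r**2 / (4*p) + sp.diff(r, x) / 2)   # x**2 - 1

# lambda_n = -n (r' + (n+1) p''/2)
lam = sp.expand(-n*(sp.diff(r, x) + sp.Rational(1, 2)*(n + 1)*sp.diff(p, x, 2)))

print(r, q, lam)
```

The same three lines of algebra, run with the Laguerre or Jacobi weight, reproduce the data of the corresponding subsections below.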
### Hermite Polynomials
The weight is $w(x)=e^{-x^{2}}$ on $\Lambda=(-\infty,\infty)$; hence
$p(x)=1,\qquad r(x)=-2x,\qquad q(x)=x^{2}-1,\qquad\lambda_{n}=2n,$
and, therefore,
$\kappa=\kappa^{\prime}=\tfrac{1}{2},\qquad\kappa^{\prime\prime}=0,\qquad\tilde{p}(t)=1,\qquad\tilde{q}(t)=t^{2},\qquad\omega=2.$
Theorem 1 is applicable and we directly read off the following well-known
scaling limits of the GUE (see, e.g., ?, Chap. 3):
* •
bulk scaling limit: if $-\sqrt{2}<t<\sqrt{2}$, the transformation
$x=\frac{\pi\,\xi}{n^{1/2}\sqrt{2-t^{2}}}+n^{1/2}t$
induces $\tilde{K}_{n}$ with a strong limit given by the _Dyson kernel_ ;
* •
limit law: the transformation $x=n^{1/2}t$ induces $\tilde{\rho}_{n}$ with a
weak limit given by the _Wigner semicircle law_
$\tilde{\rho}(t)=\frac{1}{\pi}\sqrt{(2-t^{2})_{+}};$
* •
soft-edge scaling limit: the transformation
$x=\pm(2^{-1/2}n^{-1/6}\xi+\sqrt{2n})$
induces $\tilde{K}_{n}$ with a strong limit given by the _Airy kernel_.
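The bulk statement can be probed numerically (a sketch, not part of the paper; assumes NumPy): build $K_{n}$ from the orthonormal Hermite functions via their three-term recurrence, apply the scaling at $t=0$, and compare with the Dyson kernel.

```python
# Numerical probe of the GUE bulk scaling limit at t = 0 (sketch; assumes NumPy).
# K_n(x, y) = sum_{k<n} phi_k(x) phi_k(y) with orthonormal Hermite functions phi_k.
import numpy as np

def hermite_functions(x, n):
    """Return [phi_0(x), ..., phi_{n-1}(x)] for the weight e^{-x^2}."""
    x = np.asarray(x, dtype=float)
    phi = np.empty((n,) + x.shape)
    phi[0] = np.pi**-0.25 * np.exp(-x**2 / 2)
    if n > 1:
        phi[1] = np.sqrt(2.0) * x * phi[0]
    for k in range(1, n - 1):
        phi[k + 1] = (np.sqrt(2.0/(k + 1)) * x * phi[k]
                      - np.sqrt(k/(k + 1.0)) * phi[k - 1])
    return phi

def scaled_kernel(xi, eta, n, t=0.0):
    """sigma_n * K_n at the bulk scaling x = sigma_n*xi + sqrt(n)*t."""
    sigma = np.pi / (np.sqrt(n) * np.sqrt(2.0 - t**2))
    px = hermite_functions(np.array([sigma * xi + np.sqrt(n) * t]), n)[:, 0]
    py = hermite_functions(np.array([sigma * eta + np.sqrt(n) * t]), n)[:, 0]
    return sigma * np.dot(px, py)

# compare with the Dyson kernel sin(pi*u)/(pi*u); agreement improves like O(1/n)
print(scaled_kernel(0.3, 0.0, 400), np.sinc(0.3))
```

At $n=400$ the two values already agree to a few digits, consistent with the $O(n^{-1})$ rate stated at the beginning of the section.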
### Laguerre Polynomials
The weight is $w(x)=x^{\alpha}e^{-x}$ on $\Lambda=(0,\infty)$; hence
$p(x)=x,\qquad r(x)=\alpha-x,\qquad
q(x)=\frac{(\alpha-x)^{2}}{4x}-\frac{1}{2},\qquad\lambda_{n}=n.$
In random matrix theory, the corresponding determinantal point process is
modeled by the spectra of complex $n\times n$ Wishart matrices with a
dimension parameter $m\geqslant n$; the Laguerre parameter $\alpha$ is then
given by $\alpha=m-n\geqslant 0$. Of particular interest in statistics (?) is
the simultaneous limit $m,n\to\infty$ with
$\frac{m}{n}\to\theta\geqslant 1,$
for which we get
$\kappa=1,\quad\kappa^{\prime}=\frac{1}{2},\quad\kappa^{\prime\prime}=-\frac{1}{2},\quad\tilde{p}(t)=t,\quad\tilde{q}(t)=\frac{(\theta-1-t)^{2}}{4t},\quad\omega=1.$
Note that
$\omega-\tilde{q}(t)=\frac{(t_{+}-t)(t-t_{-})}{4t},\qquad
t_{\pm}=(\sqrt{\theta}\pm 1)^{2}.$
Theorem 1 is applicable and we directly read off the following well-known
scaling limits of the Wishart ensemble (?):
* •
bulk scaling limit: if $t_{-}<t<t_{+}$,
$x=\frac{2\pi t\,\xi}{\sqrt{(t_{+}-t)(t-t_{-})}}+nt$
induces $\tilde{K}_{n}$ with a strong limit given by the _Dyson kernel_ ;
* •
limit law: the scaling $x=nt$ induces $\tilde{\rho}_{n}$ with a weak limit
given by the _Marčenko–Pastur law_
$\tilde{\rho}(t)=\frac{1}{2\pi t}\sqrt{((t_{+}-t)(t-t_{-}))_{+}};$
* •
soft-edge scaling limit: with signs chosen consistently as either $+$ or $-$,
$x=\pm n^{1/3}\theta^{-1/6}t_{\pm}^{2/3}\xi+nt_{\pm}$ (3.1)
induces $\tilde{K}_{n}$ with a strong limit given by the _Airy kernel_.
###### Remark 3.1.
The scaling (3.1) is better known in the asymptotically equivalent form
$x=\sigma\xi+\mu,\quad\mu=(\sqrt{m}\pm\sqrt{n})^{2},\quad\sigma=(\sqrt{m}\pm\sqrt{n})\left(\frac{1}{\sqrt{m}}\pm\frac{1}{\sqrt{n}}\right)^{1/3},$
which is obtained from (3.1) by replacing $\theta$ with $m/n$ (see ?, p. 305).
In the case $\theta=1$, which implies $t_{-}=0$, the lower soft-edge scaling
(3.1) breaks down and has to be replaced by a scaling at the _hard_ edge:
* •
hard-edge scaling limit: if $\alpha=m-n$ is a constant,131313By Remark 2.1,
there is no need to restrict ourselves to $\alpha\geqslant 1$: since
$\phi_{n}(x)=x^{\alpha}\chi_{n}(x)$ with $\chi_{n}(x)$ extending smoothly to
$x=0$, we have, for $\alpha\geqslant 0$,
$x^{\alpha/2}(2x\phi_{n}^{\prime}(x)-\alpha\phi_{n}(x))=2x^{1+\alpha}\chi_{n}^{\prime}(x)=O(x)\qquad\qquad(x\to
0).$ Hence, the selfadjoint realization $L_{n}$ is compatible with the
boundary condition (1.6). $x=\xi/(4n)$ induces $\tilde{K}_{n}$ with a strong
limit given by the _Bessel kernel_ $K_{\text{\rm Bessel}}^{(\alpha)}$.
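A quick Monte Carlo illustration of the Marčenko–Pastur limit law above (a sketch, not part of the paper; assumes NumPy): under the scaling $x=nt$, the eigenvalues of a complex Wishart matrix should fall in $[t_{-},t_{+}]$ with mean close to $\int t\,\tilde{\rho}(t)\,dt=\theta$.

```python
# Monte Carlo check of the Marchenko-Pastur law for the Wishart (Laguerre) ensemble.
# Sketch only (not from the paper); assumes NumPy.
import numpy as np

rng = np.random.default_rng(0)
n, m = 300, 600                 # theta = m/n = 2
theta = m / n
t_minus, t_plus = (np.sqrt(theta) - 1)**2, (np.sqrt(theta) + 1)**2

# complex Wishart matrix W = X X^* with i.i.d. CN(0,1) entries
X = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) / np.sqrt(2)
W = X @ X.conj().T
t = np.linalg.eigvalsh(W).real / n   # the scaling x = n t

# all eigenvalues should sit in the MP support, up to O(n^{-2/3}) edge fluctuations
inside = np.mean((t > t_minus - 0.3) & (t < t_plus + 0.3))
print(inside, t.mean())              # ~1.0 and ~theta
```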
### Jacobi Polynomials
The weight is $w(x)=x^{\alpha}(1-x)^{\beta}$ on $\Lambda=(0,1)$; hence
$p(x)=x(1-x),\quad r(x)=\alpha-(\alpha+\beta)x,\quad
q(x)=\frac{(\alpha-(\alpha+\beta)x)^{2}}{4x(1-x)}-\frac{\alpha+\beta}{2},$
and
$\lambda_{n}=n(n+\alpha+\beta+1).$
In random matrix theory, the corresponding determinantal point process is
modeled by the spectra of complex $n\times n$ MANOVA matrices with dimension
parameters $m_{1},m_{2}\geqslant n$; the Jacobi parameters $\alpha$, $\beta$
are then given by $\alpha=m_{1}-n\geqslant 0$ and $\beta=m_{2}-n\geqslant 0$.
Of particular interest in statistics (?) is the simultaneous limit
$m_{1},m_{2},n\to\infty$ with
$\frac{m_{1}}{m_{1}+m_{2}}\to\theta\in(0,1),\qquad\frac{n}{m_{1}+m_{2}}\to\tau\in(0,1/2],$
for which we get
$\kappa=\kappa^{\prime\prime}=0,\;\kappa^{\prime}=1,\;\tilde{p}(t)=t(1-t),\;\tilde{q}(t)=\frac{(\theta-\tau-(1-2\tau)t)^{2}}{4\tau^{2}t(1-t)},\;\omega=\frac{1-\tau}{\tau}.$
Note that
$\omega-\tilde{q}(t)=\frac{(t_{+}-t)(t-t_{-})}{4\tau^{2}t(1-t)},\qquad
t_{\pm}=\left(\sqrt{\theta(1-\tau)}\pm\sqrt{\tau(1-\theta)}\right)^{2}.$
Theorem 1 is applicable and we directly read off the following (less well-
known) scaling limits of the MANOVA ensemble (?, ?):
* •
bulk scaling limit: if $t_{-}<t<t_{+}$,
$x=\frac{2\pi\tau t(1-t)\,\xi}{n\sqrt{(t_{+}-t)(t-t_{-})}}+t$
induces $\tilde{K}_{n}$ with a strong limit given by the _Dyson kernel_ ;
* •
limit law: (because of $\kappa=0$ there is no transformation here) $\rho_{n}$
has a weak limit given by the law (?)
$\rho(t)=\frac{1}{2\pi\tau t(1-t)}\sqrt{((t_{+}-t)(t-t_{-}))_{+}};$
* •
soft-edge scaling limit: with signs chosen consistently as either $+$ or $-$,
$x=\pm n^{-2/3}\frac{(\tau
t_{\pm}(1-t_{\pm}))^{2/3}}{(\tau\theta(1-\tau)(1-\theta))^{1/6}}\,\xi+t_{\pm}$
(3.2)
induces $\tilde{K}_{n}$ with a strong limit given by the _Airy kernel_.
###### Remark 3.2.
? gives the soft-edge scaling in terms of a trigonometric parametrization of
$\theta$ and $\tau$. By putting
$\theta=\sin^{2}\frac{\phi}{2},\qquad\tau=\sin^{2}\frac{\psi}{2},$
we immediately get
$t_{\pm}=\sin^{2}\frac{\phi\pm\psi}{2}$
and (3.2) becomes
$x=\pm\sigma_{\pm}\xi+t_{\pm},\qquad\sigma_{\pm}=n^{-2/3}\left(\frac{\tau^{2}\sin^{4}(\phi\pm\psi)}{4\sin\phi\sin\psi}\right)^{1/3}.$
In the case $\theta=\tau=1/2$, which is equivalent to $m_{1}/n,m_{2}/n\to 1$,
we have $t_{-}=0$ and $t_{+}=1$. Hence, the lower and the upper soft-edge
scaling (3.2) break down and have to be replaced by a scaling at the _hard_
edges:
* •
hard-edge scaling limit: if $\alpha=m_{1}-n$, $\beta=m_{2}-n$ are
constants,141414For the cases $0\leqslant\alpha<1$ and $0\leqslant\beta<1$,
see the justification of the limit given in Footnote 13. $x=\xi/(4n^{2})$
induces $\tilde{K}_{n}$ with a strong limit given by the _Bessel kernel_
$K_{\text{\rm Bessel}}^{(\alpha)}$; by symmetry, the _Bessel kernel_
$K_{\text{\rm Bessel}}^{(\beta)}$ is obtained for $x=1-\xi/(4n^{2})$.
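The trigonometric form of $t_{\pm}$ in Remark 3.2 is elementary to verify numerically (a sketch; assumes NumPy): with $\theta=\sin^{2}(\phi/2)$ and $\tau=\sin^{2}(\psi/2)$, the angle-addition formula gives $\sqrt{\theta(1-\tau)}\pm\sqrt{\tau(1-\theta)}=\sin\frac{\phi\pm\psi}{2}$.

```python
# Numerical check of the trigonometric parametrization of t_pm (Remark 3.2).
# Sketch only; assumes NumPy.
import numpy as np

phi, psi = 1.1, 0.7                       # arbitrary angles in (0, pi)
theta = np.sin(phi / 2)**2
tau = np.sin(psi / 2)**2

t_plus = (np.sqrt(theta*(1 - tau)) + np.sqrt(tau*(1 - theta)))**2
t_minus = (np.sqrt(theta*(1 - tau)) - np.sqrt(tau*(1 - theta)))**2

print(t_plus, np.sin((phi + psi)/2)**2)   # equal
print(t_minus, np.sin((phi - psi)/2)**2)  # equal
```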
## A. Appendices
### A.1. Generalized Strong Convergence
The notion of strong resolvent convergence (?, §9.3) links the convergence of
differential operators, pointwise for an appropriate class of smooth test
functions, to the strong convergence of their spectral projections. We recall
a slight generalization of that concept, which allows the underlying Hilbert
space to vary.
Specifically we consider, on an interval $(a,b)$ (not necessarily bounded) and
on a sequence of subintervals $(a_{n},b_{n})\subset(a,b)$ with $a_{n}\to a$
and $b_{n}\to b$, selfadjoint operators
$L:D(L)\subset L^{2}(a,b)\to L^{2}(a,b),\qquad L_{n}:D(L_{n})\subset
L^{2}(a_{n},b_{n})\to L^{2}(a_{n},b_{n}).$
By means of the natural embedding (that is, extension by zero) we take
$L^{2}(a_{n},b_{n})\subset L^{2}(a,b)$; the multiplication operator induced by
the characteristic function ${\mathbb{1}}_{(a_{n},b_{n})}$, which we continue
to denote by ${\mathbb{1}}_{(a_{n},b_{n})}$, constitutes the orthogonal
projection of $L^{2}(a,b)$ onto $L^{2}(a_{n},b_{n})$. Following ?, we say that
$L_{n}$ converges to $L$ in the sense of _generalized strong convergence_
(gsc), if for some $z\in{\mathbb{C}}\setminus{\mathbb{R}}$, and hence, a
fortiori, for all such $z$,
$R_{z}(L_{n}){\mathbb{1}}_{(a_{n},b_{n})}\overset{s}{\longrightarrow}R_{z}(L)\qquad(n\to\infty)$
in the strong operator topology of $L^{2}(a,b)$.151515We denote by
$R_{z}(L)=(L-z)^{-1}$ the resolvent of an operator $L$.
###### Theorem 2 (?).
Let the selfadjoint operators $L_{n}$ and $L$ satisfy the assumptions stated
above and let $C$ be a core of $L$ such that, eventually, $C\subset D(L_{n})$.
* (i)
If $L_{n}u\to Lu$ for all $u\in C$, then
$L_{n}\overset{gsc}{\longrightarrow}L$.
* (ii)
If $L_{n}\overset{gsc}{\longrightarrow}L$ and if the endpoints of the interval
$\Delta\subset{\mathbb{R}}$ do not belong to the pure point spectrum
$\sigma_{\text{pp}}(L)$ of $L$, the spectral projections to $\Delta$ converge
as
${\mathbb{1}}_{\Delta}(L_{n}){\mathbb{1}}_{(a_{n},b_{n})}\overset{s}{\longrightarrow}{\mathbb{1}}_{\Delta}(L).$
### A.2. Generalized Eigenfunction Expansion of Sturm–Liouville Operators
Let $L$ be a formally selfadjoint Sturm–Liouville operator on the interval
$(a,b)$,
$Lu=-(pu^{\prime})^{\prime}+qu,$
with smooth coefficient functions $p>0$ and $q$. We have the _limit point
case_ (LP) at the boundary point $a$ if there is some $c\in(a,b)$ and some
$z\in{\mathbb{C}}$ such that there exists at least one solution of $(L-z)u=0$
in $(a,b)$ for which $u\not\in L^{2}(a,c)$; otherwise, we have the _limit
circle case_ (LC) at $a$. According to the Weyl alternative (?, Thm. 8.27) this
is then true for all such $c$ and $z$ and, moreover, in the LP case there
exists, for every $z\in{\mathbb{C}}\setminus{\mathbb{R}}$, a one-dimensional
space of solutions $u$ of the equation $(L-z)u=0$ for which $u\in L^{2}(a,c)$.
The same dichotomy applies at the boundary point $b$.
###### Theorem 3.
Let $L$ be a formally selfadjoint Sturm–Liouville operator on the interval
$(a,b)$ as defined above. If there is the LP case at $a$ and $b$, then $L$ is
essentially self-adjoint on the domain $C^{\infty}_{0}(a,b)$ and, for
$z\in{\mathbb{C}}\setminus{\mathbb{R}}$, the resolvent $R_{z}(L)=(L-z)^{-1}$
of its unique selfadjoint extension (which we continue to denote by the letter
$L$) is of the form
$R_{z}(L)\phi(x)=\frac{1}{W(u_{a},u_{b})}\left(u_{b}(x)\int_{a}^{x}u_{a}(y)\phi(y)\,dy+u_{a}(x)\int_{x}^{b}u_{b}(y)\phi(y)\,dy\right).$
(A.1)
Here $u_{a}$ and $u_{b}$ are the non-vanishing solutions of the equation
$(L-z)u=0$, uniquely determined up to a factor by the conditions $u_{a}\in
L^{2}(a,c)$ and $u_{b}\in L^{2}(c,b)$ for some $c\in(a,b)$, and $W$ denotes
the Wronskian
$W(u_{a},u_{b})=p(x)(u_{a}^{\prime}(x)u_{b}(x)-u_{a}(x)u_{b}^{\prime}(x)),$
which is a constant for $x\in(a,b)$.
A more general formulation (and a proof) that includes the LC case can be
found, e.g., in ?.161616See ? for a proof that $C^{\infty}_{0}(a,b)$ is a core
if the coefficients are smooth. In the following, we write (A.1) briefly in
the form
$R_{z}(L)\phi(x)=\int_{a}^{b}G_{z}(x,y)\phi(y)\,dy.$
If the imaginary part of the thus defined Green’s kernel $G_{z}(x,y)$ has
finite boundary values as $z$ approaches the real line from above, there is a
simple formula for the spectral projection associated with $L$ that often
applies if the spectrum of $L$ is absolutely continuous.
###### Theorem 4.
(i) Assume that there exists, as $\epsilon\downarrow 0$, the limit
$\pi^{-1}\operatorname{Im}G_{\lambda+i\epsilon}(x,y)\to K_{\lambda}(x,y),$
locally uniform in $x,y\in(a,b)$ for each $\lambda\in{\mathbb{R}}$ except for
some isolated points $\lambda$ for which the limit is replaced by
$\epsilon\,\operatorname{Im}G_{\lambda+i\epsilon}(x,y)\to 0.$
Then the spectrum is absolutely continuous, $\sigma(L)=\sigma_{\text{ac}}(L)$,
and, for a Borel set $\Delta$,
$\langle{\mathbb{1}}_{\Delta}(L)\phi,\psi\rangle=\int_{\Delta}\langle
K_{\lambda}\,\phi,\psi\rangle\,d\lambda\qquad(\phi,\psi\in
C^{\infty}_{0}(a,b)).$ (A.2)
(ii) Assume further, for some $(a^{\prime},b^{\prime})\subset(a,b)$, that
$\int_{a^{\prime}}^{b^{\prime}}\int_{a^{\prime}}^{b^{\prime}}\left(\int_{\Delta}|K_{\lambda}(x,y)|\,d\lambda\right)^{2}\,dx\,dy\,<\,\infty.$
Then ${\mathbb{1}}_{\Delta}(L){\mathbb{1}}_{(a^{\prime},b^{\prime})}$ is a
Hilbert-Schmidt operator on $L^{2}(a,b)$ with kernel
${\mathbb{1}}_{(a^{\prime},b^{\prime})}(y)\int_{\Delta}K_{\lambda}(x,y)\,d\lambda.$
If $\int_{\Delta}K_{\lambda}(x,y)\,d\lambda$ is a continuous function of
$x,y\in(a^{\prime},b^{\prime})$,
${\mathbb{1}}_{\Delta}(L){\mathbb{1}}_{(a^{\prime},b^{\prime})}$ is a trace
class operator with trace
$\operatorname{tr}\,{\mathbb{1}}_{\Delta}(L){\mathbb{1}}_{(a^{\prime},b^{\prime})}=\int_{a^{\prime}}^{b^{\prime}}\int_{\Delta}K_{\lambda}(x,x)\,d\lambda\,dx.$
###### Proof.
With $E$ denoting the spectral resolution of the selfadjoint operator $L$, we
observe that, for a given $\phi\in C^{\infty}_{0}(a,b)$, the Borel–Stieltjes
transform of the positive measure $\mu_{\phi}(\lambda)=\langle
E(\lambda)\phi,\phi\rangle$ can be simply expressed in terms of the resolvent
as follows (see ?, §32.1):
$\int_{-\infty}^{\infty}\frac{d\mu_{\phi}(\lambda)}{\lambda-z}=\langle
R_{z}(L)\phi,\phi\rangle.$
If we take $z=\lambda+i\epsilon$ and let $\epsilon\downarrow 0$, we obtain by
the locally uniform convergence of the integral kernel of $R_{z}$ that there
exists either the limit
$\pi^{-1}\operatorname{Im}\langle
R_{\lambda+i\epsilon}(L)\phi,\phi\rangle\to\langle
K_{\lambda}\phi,\phi\rangle$
or, at isolated points $\lambda$,
$\epsilon\,\operatorname{Im}\langle
R_{\lambda+i\epsilon}(L)\phi,\phi\rangle\to 0.$
By a theorem of de la Vallée-Poussin (see ?, Thm. 11.6(ii/iii)), the singular
part of $\mu_{\phi}$ vanishes, $\mu_{\phi,\text{sing}}=0$; by Plemelj’s
reconstruction the absolutely continuous part satisfies (see ?, Thm. 11.6(iv))
$d\mu_{\phi,\text{ac}}(\lambda)=\langle
K_{\lambda}\phi,\phi\rangle\,d\lambda.$
Since $C^{\infty}_{0}(a,b)$ is dense in $L^{2}(a,b)$, approximation shows that
$E_{\text{sing}}=0$, that is, $\sigma(L)=\sigma_{\text{ac}}(L)$. Since
$\langle{\mathbb{1}}_{\Delta}(L)\phi,\phi\rangle=\int_{\Delta}d\mu_{\phi}(\lambda)$,
we thus get, by the symmetry of the bilinear expressions, the representation
(A.2), which finishes the proof of (i). The Hilbert–Schmidt part of part (ii)
follows using the Cauchy–Schwarz inequality and Fubini’s theorem and yet
another density argument; the trace class part follows from ? since
${\mathbb{1}}_{(a^{\prime},b^{\prime})}{\mathbb{1}}_{\Delta}(L){\mathbb{1}}_{(a^{\prime},b^{\prime})}$
is a selfadjoint, positive-semidefinite operator.∎
We apply this theorem to the spectral projections used in the proof of Theorem
1. The first two examples could have been dealt with by Fourier techniques (?,
§3.3); applying, however, the same method in all the examples renders the
approach more systematic.
###### Example 1 (Dyson kernel).
Consider $Lu=-u^{\prime\prime}$ on $(-\infty,\infty)$. Since $u\equiv 1$ is a
solution of $Lu=0$, both endpoints are LP; for a given $\operatorname{Im}z>0$
the solutions $u_{a}$ ($u_{b}$) of $(L-z)u=0$ being $L^{2}$ at
$-\infty$ ($\infty$) are spanned by
$u_{a}(x)=e^{-ix\sqrt{z}},\qquad u_{b}(x)=e^{ix\sqrt{z}}.$
Thus, Theorem 3 applies: $L$ is essentially selfadjoint on
$C^{\infty}_{0}(-\infty,\infty)$, the resolvent of its unique selfadjoint
extension is represented, for $\operatorname{Im}z>0$, by the Green’s
kernel
$G_{z}(x,y)=\frac{i}{2\sqrt{z}}\begin{cases}e^{i(x-y)\sqrt{z}}&x>y\\\\[2.84526pt]
e^{-i(x-y)\sqrt{z}}&\text{otherwise}.\end{cases}$
For $\lambda>0$ there is the limit
$\pi^{-1}\operatorname{Im}G_{\lambda+i0}(x,y)=K_{\lambda}(x,y)=\frac{\cos((x-y)\sqrt{\lambda})}{2\pi\sqrt{\lambda}},$
for $\lambda<0$ the limit is zero; both limits are locally uniform in
$x,y\in{\mathbb{R}}$. For $\lambda=0$ there would be divergence, but we
obviously have
$\epsilon\,\operatorname{Im}G_{i\epsilon}(x,y)\to
0\qquad(\epsilon\downarrow 0),$
locally uniform in $x,y\in{\mathbb{R}}$. Hence, Theorem 4 applies:
$\sigma(L)=\sigma_{\text{ac}}(L)=[0,\infty)$ and (A.2) holds for each Borel
set $\Delta\subset{\mathbb{R}}$. Given a bounded interval $(a,b)$, we may
estimate for the specific choice $\Delta=(-\infty,\pi^{2})$ that
$\int_{a}^{b}\int_{a}^{b}\left(\int_{-\infty}^{\pi^{2}}|K_{\lambda}(x,y)|\,d\lambda\right)^{2}\,dx\,dy\\\\[5.69054pt]
=\int_{a}^{b}\int_{a}^{b}\left(\int_{0}^{\pi^{2}}\left|\frac{\cos((x-y)\sqrt{\lambda})}{2\pi\sqrt{\lambda}}\right|\,d\lambda\right)^{2}\,dx\,dy\leqslant\left(\int_{a}^{b}\int_{0}^{\pi^{2}}\frac{d\lambda}{2\pi\sqrt{\lambda}}\right)^{2}=(b-a)^{2}.$
Therefore, Theorem 4 yields that
${\mathbb{1}}_{(-\infty,\pi^{2})}(L){\mathbb{1}}_{(a,b)}$ is Hilbert–Schmidt
with the _Dyson kernel_
$\int_{-\infty}^{\pi^{2}}K_{\lambda}(x,y)\,d\lambda=\int_{0}^{\pi^{2}}\frac{\cos((x-y)\sqrt{\lambda})}{2\pi\sqrt{\lambda}}\,d\lambda=\frac{\sin(\pi(x-y))}{\pi(x-y)},$
restricted to $x,y\in(a,b)$. Here, the last equality is simply obtained from
$(x-y)\int_{0}^{\pi^{2}}\frac{\cos((x-y)\sqrt{\lambda})}{2\sqrt{\lambda}}\,d\lambda=\int_{0}^{\pi^{2}}\frac{d}{d\lambda}\sin((x-y)\sqrt{\lambda})\,d\lambda=\sin(\pi(x-y)).$
Since the resulting kernel is continuous for $x,y\in(a,b)$, Theorem 4 gives
that ${\mathbb{1}}_{(-\infty,\pi^{2})}(L){\mathbb{1}}_{(a,b)}$ is a trace
class operator with trace
$\operatorname{tr}\,{\mathbb{1}}_{(-\infty,\pi^{2})}(L){\mathbb{1}}_{(a,b)}=b-a.$
To summarize, we have thus obtained the following lemma.
###### Lemma A.1.
The operator $Lu=-u^{\prime\prime}$ is essentially selfadjoint on
$C^{\infty}_{0}(-\infty,\infty)$. The spectrum of its unique selfadjoint
extension is
$\sigma(L)=\sigma_{\rm ac}(L)=[0,\infty).$
Given $(a,b)$ bounded,
${\mathbb{1}}_{(-\infty,\pi^{2})}(L){\mathbb{1}}_{(a,b)}$ is trace class with
trace $b-a$ and kernel
$K_{\text{\rm
Dyson}}(x,y)=\int_{0}^{\pi^{2}}\frac{\cos((x-y)\sqrt{\lambda})}{2\pi\sqrt{\lambda}}\,d\lambda=\frac{\sin(\pi(x-y))}{\pi(x-y)}.$
(A.3)
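The closing integral of Lemma A.1 can also be checked numerically (a sketch, not part of the paper; assumes NumPy): the substitution $\lambda=u^{2}$ removes the singularity and reduces it to $\frac{1}{\pi}\int_{0}^{\pi}\cos((x-y)u)\,du$.

```python
# Numerical check of the Dyson kernel integral in Lemma A.1 (sketch; assumes NumPy).
# Substituting lambda = u^2 gives
#   int_0^{pi^2} cos((x-y) sqrt(lambda)) / (2 pi sqrt(lambda)) dlambda
#     = (1/pi) int_0^pi cos((x-y) u) du = sin(pi (x-y)) / (pi (x-y)).
import numpy as np

def dyson_integral(x, y, num=200000):
    u = (np.arange(num) + 0.5) * (np.pi / num)   # midpoint rule on [0, pi]
    return np.cos((x - y) * u).sum() * (np.pi / num) / np.pi

x, y = 0.8, 0.1
print(dyson_integral(x, y), np.sinc(x - y))      # both ~ sin(0.7*pi)/(0.7*pi)
```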
###### Example 2 (Airy kernel).
Consider the differential operator $Lu=-u^{\prime\prime}+xu$ on
$(-\infty,\infty)$. Since the specific solution $u(x)=\operatorname{Bi}(x)$
of $Lu=0$ fails to be $L^{2}$ at either endpoint, both endpoints are
LP. For a given $\operatorname{Im}z>0$ the solutions $u_{a}$ ($u_{b}$) of
$(L-z)u=0$ being $L^{2}$ at $-\infty$ ($\infty$) are spanned by (see ?, Eq.
(10.4.59–64))
$u_{a}(x)=\operatorname{Ai}(x-z)-i\,\operatorname{Bi}(x-z),\qquad
u_{b}(x)=\operatorname{Ai}(x-z).$
Thus, Theorem 3 applies: $L$ is essentially selfadjoint on
$C^{\infty}_{0}(-\infty,\infty)$, the resolvent of its unique selfadjoint
extension is represented, for $\operatorname{Im}z>0$, by the Green’s
kernel
$G_{z}(x,y)=i\pi\begin{cases}\operatorname{Ai}(x-z)\left(\operatorname{Ai}(y-z)-i\,\operatorname{Bi}(y-z)\right)&x>y\\\\[2.84526pt] \operatorname{Ai}(y-z)\left(\operatorname{Ai}(x-z)-i\,\operatorname{Bi}(x-z)\right)&\text{otherwise}.\end{cases}$
For $\lambda\in{\mathbb{R}}$ there is thus the limit
$\pi^{-1}\operatorname{Im}G_{\lambda+i0}(x,y)=K_{\lambda}(x,y)=\operatorname{Ai}(x-\lambda)\operatorname{Ai}(y-\lambda),$
locally uniform in $x,y\in{\mathbb{R}}$. Hence, Theorem 4 applies:
$\sigma(L)=\sigma_{\text{ac}}(L)={\mathbb{R}}$ and (A.2) holds for each Borel
set $\Delta\subset{\mathbb{R}}$. Given $s>-\infty$, we may estimate for the
specific choice $\Delta=(-\infty,0)$ that
$\left(\int_{s}^{\infty}\int_{s}^{\infty}\left(\int_{\Delta}|K_{\lambda}(x,y)|\,d\lambda\right)^{2}\,dx\,dy\right)^{1/2}\leqslant\int_{s}^{\infty}\int_{0}^{\infty}\operatorname{Ai}(x+\lambda)^{2}\,d\lambda\,dx=\tau(s)$
with
$\tau(s)=\frac{1}{3}\left(2s^{2}\operatorname{Ai}(s)^{2}-2s\operatorname{Ai}^{\prime}(s)^{2}-\operatorname{Ai}(s)\operatorname{Ai}^{\prime}(s)\right).$
Therefore, Theorem 4 yields that
${\mathbb{1}}_{(-\infty,0)}(L){\mathbb{1}}_{(s,\infty)}$ is Hilbert–Schmidt
with the _Airy kernel_
$\int_{-\infty}^{0}K_{\lambda}(x,y)\,d\lambda=\int_{0}^{\infty}\operatorname{Ai}(x+\lambda)\operatorname{Ai}(y+\lambda)\,d\lambda=\frac{\operatorname{Ai}(x)\operatorname{Ai}^{\prime}(y)-\operatorname{Ai}^{\prime}(x)\operatorname{Ai}(y)}{x-y},$
restricted to $x,y\in(s,\infty)$. Here, the last equality is obtained from a
Christoffel–Darboux type of argument: First, we use the underlying
differential equation,
$x\,\operatorname{Ai}(x+\lambda)=\operatorname{Ai}^{\prime\prime}(x+\lambda)-\lambda\,\operatorname{Ai}(x+\lambda),$
and partial integration to obtain
$x\int_{0}^{\infty}\operatorname{Ai}(x+\lambda)\operatorname{Ai}(y+\lambda)\,d\lambda\\\\[5.69054pt] =\int_{0}^{\infty}\operatorname{Ai}^{\prime\prime}(x+\lambda)\operatorname{Ai}(y+\lambda)\,d\lambda-\int_{0}^{\infty}\lambda\,\operatorname{Ai}(x+\lambda)\operatorname{Ai}(y+\lambda)\,d\lambda\\\\[5.69054pt] =-\operatorname{Ai}^{\prime}(x)\operatorname{Ai}(y)-\int_{0}^{\infty}\operatorname{Ai}^{\prime}(x+\lambda)\operatorname{Ai}^{\prime}(y+\lambda)\,d\lambda-\int_{0}^{\infty}\lambda\,\operatorname{Ai}(x+\lambda)\operatorname{Ai}(y+\lambda)\,d\lambda.$
Next, we exchange the roles of $x$ and $y$ and subtract to get the assertion.
Since the resulting kernel is continuous, Theorem 4 gives that
${\mathbb{1}}_{(-\infty,0)}(L){\mathbb{1}}_{(s,\infty)}$ is a trace class
operator with trace
$\operatorname{tr}\,{\mathbb{1}}_{(-\infty,0)}(L){\mathbb{1}}_{(s,\infty)}=\tau(s)\to\infty\qquad(s\to-\infty).$
To summarize, we have thus obtained the following lemma.
###### Lemma A.2.
The differential operator $Lu=-u^{\prime\prime}+xu$ is essentially selfadjoint
on $C^{\infty}_{0}(-\infty,\infty)$. The spectrum of its unique selfadjoint
extension is
$\sigma(L)=\sigma_{\rm ac}(L)=(-\infty,\infty).$
Given $s>-\infty$, the operator
${\mathbb{1}}_{(-\infty,0)}(L){\mathbb{1}}_{(s,\infty)}$ is trace class with
kernel
$K_{\text{\rm Airy}}(x,y)=\int_{0}^{\infty}\operatorname{Ai}(x+\lambda)\operatorname{Ai}(y+\lambda)\,d\lambda=\frac{\operatorname{Ai}(x)\operatorname{Ai}^{\prime}(y)-\operatorname{Ai}^{\prime}(x)\operatorname{Ai}(y)}{x-y}.$ (A.4)
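Both the Christoffel–Darboux identity (A.4) and the closed form of the trace $\tau(s)$ from Example 2 lend themselves to a numerical check (a sketch, not part of the paper; assumes SciPy for the Airy functions):

```python
# Numerical check of the Airy kernel identity (A.4) and the trace tau(s).
# Sketch only; assumes SciPy. airy(t) returns (Ai, Ai', Bi, Bi').
import numpy as np
from scipy.special import airy

Ai = lambda t: airy(t)[0]
Aip = lambda t: airy(t)[1]

# kernel identity: integral over [0, 40] suffices, Ai decays super-exponentially
x, y = 0.5, 1.25
lam = np.linspace(0.0, 40.0, 400001)
h = lam[1] - lam[0]
vals = Ai(x + lam) * Ai(y + lam)
integral = h * (vals.sum() - 0.5*(vals[0] + vals[-1]))   # trapezoidal rule
closed_form = (Ai(x)*Aip(y) - Aip(x)*Ai(y)) / (x - y)
print(integral, closed_form)

# trace: tau(s) = int_s^inf K_Airy(t, t) dt with K_Airy(t, t) = Ai'(t)^2 - t Ai(t)^2
s = -1.0
t = np.linspace(s, 40.0, 400001)
ht = t[1] - t[0]
dens = Aip(t)**2 - t*Ai(t)**2
trace_num = ht * (dens.sum() - 0.5*(dens[0] + dens[-1]))
tau = (2*s**2*Ai(s)**2 - 2*s*Aip(s)**2 - Ai(s)*Aip(s)) / 3
print(trace_num, tau)
```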
###### Example 3 (Bessel kernel).
Given $\alpha>0$, take $Lu=-4(xu^{\prime})^{\prime}+\alpha^{2}x^{-1}u$ on
$(0,\infty)$. Since a fundamental system of solutions of $Lu=0$ is given by
$u(x)=x^{\pm\alpha/2}$, the endpoint $x=0$ is LP for $\alpha\geqslant 1$ and
LC otherwise; the endpoint $x=\infty$ is LP in both cases. Fixing the LP case
at $x=0$, we restrict ourselves to the case $\alpha\geqslant 1$.
For a given $\operatorname{Im}z>0$ the solutions $u_{a}$ ($u_{b}$) of
$(L-z)u=0$ being $L^{2}$ at $0$ ($\infty$) are spanned by (see ?, Eq.
(9.1.7–9) and (9.2.5–6))
$u_{a}(x)=J_{\alpha}(\sqrt{xz}),\qquad
u_{b}(x)=J_{\alpha}(\sqrt{xz})+i\,Y_{\alpha}(\sqrt{xz}).$
Thus, Theorem 3 applies: $L$ is essentially selfadjoint on
$C^{\infty}_{0}(0,\infty)$, the resolvent of its unique selfadjoint extension
is represented, for $\operatorname{Im}z>0$, by the Green’s kernel
$G_{z}(x,y)=\frac{i\pi}{4}\begin{cases}J_{\alpha}(\sqrt{xz})\left(J_{\alpha}(\sqrt{\smash[b]{yz}})+i\,Y_{\alpha}(\sqrt{\smash[b]{yz}})\right)&x>y\\\\[2.84526pt]
J_{\alpha}(\sqrt{\smash[b]{yz}})\left(J_{\alpha}(\sqrt{xz})+i\,Y_{\alpha}(\sqrt{xz})\right)&\text{otherwise}.\end{cases}$
For $\lambda>0$ there is the limit
$\pi^{-1}\operatorname{Im}G_{\lambda+i0}(x,y)=K_{\lambda}(x,y)=\frac{1}{4}J_{\alpha}(\sqrt{x\lambda})J_{\alpha}(\sqrt{\smash[b]{y\lambda}}),$
for $\lambda\leqslant 0$ the limit is zero; both limits are locally uniform in
$x,y\in{\mathbb{R}}$. Hence, Theorem 4 applies:
$\sigma(L)=\sigma_{\text{ac}}(L)=[0,\infty)$ and (A.2) holds for each Borel
set $\Delta\subset{\mathbb{R}}$. Given $0\leqslant s<\infty$, we may estimate
for the specific choice $\Delta=(-\infty,1)$ that
$\left(\int_{0}^{s}\int_{0}^{s}\left(\int_{\Delta}|K_{\lambda}(x,y)|\,d\lambda\right)^{2}\,dx\,dy\right)^{1/2}\leqslant\frac{1}{4}\int_{0}^{s}\int_{0}^{1}J_{\alpha}(\sqrt{x\lambda})^{2}\,d\lambda\,dx=\tau_{\alpha}(s).$
Therefore, Theorem 4 yields that
${\mathbb{1}}_{(-\infty,1)}(L){\mathbb{1}}_{(0,s)}$ is Hilbert–Schmidt with
the _Bessel kernel_
$\int_{-\infty}^{1}K_{\lambda}(x,y)\,d\lambda=\frac{1}{4}\int_{0}^{1}J_{\alpha}(\sqrt{x\lambda})J_{\alpha}(\sqrt{y\lambda})\,d\lambda\\\\[5.69054pt]
=\frac{J_{\alpha}(\sqrt{x})\sqrt{\smash[b]{y}}\,J_{\alpha}^{\prime}(\sqrt{\smash[b]{y}})-\sqrt{x}\,J_{\alpha}^{\prime}(\sqrt{x})J_{\alpha}(\sqrt{\smash[b]{y}})}{2(x-y)},$
restricted to $x,y\in(0,s)$. Here, the last equality is obtained from a
Christoffel–Darboux type of argument: First, we use the underlying
differential equation,
$x\,J_{\alpha}(\sqrt{x\lambda})=-4\,\frac{d}{d\lambda}\left(\lambda\frac{d}{d\lambda}\,J_{\alpha}(\sqrt{x\lambda})\right)+\alpha^{2}\lambda^{-1}J_{\alpha}(\sqrt{x\lambda}),$
and partial integration to obtain
$\frac{x}{4}\int_{0}^{1}J_{\alpha}(\sqrt{x\lambda})J_{\alpha}(\sqrt{y\lambda})\,d\lambda\\\\[5.69054pt]
=-\int_{0}^{1}\frac{d}{d\lambda}\left(\lambda\frac{d}{d\lambda}\,J_{\alpha}(\sqrt{x\lambda})\right)J_{\alpha}(\sqrt{y\lambda})\,d\lambda+\frac{\alpha^{2}}{4}\int_{0}^{1}\lambda^{-1}J_{\alpha}(\sqrt{x\lambda})J_{\alpha}(\sqrt{y\lambda})\,d\lambda\\\\[5.69054pt]
=-\frac{1}{2}\sqrt{x}J^{\prime}_{\alpha}(\sqrt{x})J_{\alpha}(\sqrt{\smash[b]{y}})\\\\[5.69054pt]
+\int_{0}^{1}\lambda\left(\frac{d}{d\lambda}\,J_{\alpha}(\sqrt{x\lambda})\right)\left(\frac{d}{d\lambda}J_{\alpha}(\sqrt{y\lambda})\right)\,d\lambda+\frac{\alpha^{2}}{4}\int_{0}^{1}\lambda^{-1}J_{\alpha}(\sqrt{x\lambda})J_{\alpha}(\sqrt{y\lambda})\,d\lambda.$
Next, we exchange the roles of $x$ and $y$ and subtract to get the assertion.
Since the resulting kernel is continuous, Theorem 4 gives that
${\mathbb{1}}_{(-\infty,1)}(L){\mathbb{1}}_{(0,s)}$ is a trace class operator
with trace
$\operatorname{tr}\,{\mathbb{1}}_{(-\infty,1)}(L){\mathbb{1}}_{(0,s)}=\tau_{\alpha}(s)\to\infty\qquad(s\to\infty).$
To summarize, we have thus obtained the following lemma.
###### Lemma A.3.
Given $\alpha\geqslant 1$, the differential operator
$Lu=-4(xu^{\prime})^{\prime}+\alpha^{2}x^{-1}u$ is essentially selfadjoint on
$C^{\infty}_{0}(0,\infty)$. The spectrum of its unique selfadjoint extension
is
$\sigma(L)=\sigma_{\rm ac}(L)=[0,\infty).$
Given $0\leqslant s<\infty$, the operator
${\mathbb{1}}_{(-\infty,1)}(L){\mathbb{1}}_{(0,s)}$ is trace class with kernel
$K_{\text{\rm
Bessel}}^{(\alpha)}(x,y)=\frac{1}{4}\int_{0}^{1}J_{\alpha}(\sqrt{x\lambda})J_{\alpha}(\sqrt{y\lambda})\,d\lambda\\\\[5.69054pt]
=\frac{J_{\alpha}(\sqrt{x})\sqrt{\smash[b]{y}}\,J_{\alpha}^{\prime}(\sqrt{\smash[b]{y}})-\sqrt{x}\,J_{\alpha}^{\prime}(\sqrt{x})J_{\alpha}(\sqrt{\smash[b]{y}})}{2(x-y)}.$
(A.5)
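The Christoffel–Darboux identity behind (A.5) can likewise be checked numerically (a sketch, not part of the paper; assumes SciPy for $J_{\alpha}$ and $J_{\alpha}^{\prime}$), here for $\alpha=2$:

```python
# Numerical check of the Bessel kernel identity (A.5) for alpha = 2.
# Sketch only; assumes SciPy (jv = J_alpha, jvp = derivative of J_alpha).
import numpy as np
from scipy.special import jv, jvp

alpha = 2.0
x, y = 1.0, 2.5

lam = np.linspace(0.0, 1.0, 200001)
h = lam[1] - lam[0]
vals = jv(alpha, np.sqrt(x*lam)) * jv(alpha, np.sqrt(y*lam))
integral = 0.25 * h * (vals.sum() - 0.5*(vals[0] + vals[-1]))  # trapezoidal rule

sx, sy = np.sqrt(x), np.sqrt(y)
closed_form = (jv(alpha, sx)*sy*jvp(alpha, sy)
               - sx*jvp(alpha, sx)*jv(alpha, sy)) / (2*(x - y))
print(integral, closed_form)
```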
###### Remark A.1.
Lemma A.3 extends to $0\leqslant\alpha<1$ if we choose the particular
selfadjoint realization of $L$ that is defined by the boundary condition
(1.6), cf. ?.
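Since (A.5) rests on the partial-integration argument above, it is easy to check numerically. The sketch below is a plain-Python sanity check (helper names are ours, not from the paper): it implements $J_\alpha$ by its power series for integer order only (the lemma also covers non-integer $\alpha$, which this sketch does not), uses $J_\alpha'=(J_{\alpha-1}-J_{\alpha+1})/2$, and compares both sides of (A.5) with a composite Simpson rule.

```python
import math

def besselj(n, z, terms=25):
    """Bessel function J_n(z) of integer order n via its power series (small z)."""
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + n)) * (z / 2.0)**(2*k + n)
               for k in range(terms))

def besselj_prime(n, z):
    """J_n'(z) = (J_{n-1}(z) - J_{n+1}(z)) / 2; for n = 0 use J_0' = -J_1."""
    if n == 0:
        return -besselj(1, z)
    return 0.5 * (besselj(n - 1, z) - besselj(n + 1, z))

def kernel_lhs(alpha, x, y, m=2000):
    """Left side of (A.5): (1/4) * int_0^1 J_a(sqrt(x*lam)) J_a(sqrt(y*lam)) dlam, by Simpson's rule."""
    h = 1.0 / m
    total = 0.0
    for i in range(m + 1):
        lam = i * h
        w = 1 if i in (0, m) else (4 if i % 2 == 1 else 2)
        total += w * besselj(alpha, math.sqrt(x * lam)) * besselj(alpha, math.sqrt(y * lam))
    return 0.25 * total * h / 3.0

def kernel_rhs(alpha, x, y):
    """Right side of (A.5), the closed form for x != y."""
    sx, sy = math.sqrt(x), math.sqrt(y)
    return (besselj(alpha, sx) * sy * besselj_prime(alpha, sy)
            - sx * besselj_prime(alpha, sx) * besselj(alpha, sy)) / (2.0 * (x - y))

# The difference is at the level of the quadrature error (far below 1e-6):
print(abs(kernel_lhs(1, 2.0, 3.0) - kernel_rhs(1, 2.0, 3.0)) < 1e-6)
```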
## References
* Abramowitz, M. and Stegun, I. A.: 1964, Handbook of mathematical functions with formulas, graphs, and mathematical tables, National Bureau of Standards, Washington.
* Anderson, G. W., Guionnet, A. and Zeitouni, O.: 2010, An Introduction to Random Matrices, Cambridge University Press.
* Andrews, G. E., Askey, R. and Roy, R.: 1999, Special Functions, Cambridge University Press, Cambridge.
* Borodin, A. and Olshanski, G.: 2007, Asymptotics of Plancherel-type random partitions, J. Algebra 313, 40–60.
* Collins, B.: 2005, Product of random projections, Jacobi ensembles and universality problems arising from free probability, Probab. Theory Related Fields 133, 315–344.
* Deift, P.: 1999, Orthogonal polynomials and random matrices: a Riemann-Hilbert approach, AMS, Providence.
* Erdélyi, A., Magnus, W., Oberhettinger, F. and Tricomi, F. G.: 1953, Higher transcendental functions. Vol. II, McGraw-Hill, New York.
* Gohberg, I., Goldberg, S. and Krupnik, N.: 2000, Traces and determinants of linear operators, Birkhäuser Verlag, Basel.
* Hutson, V., Pym, J. S. and Cloud, M. J.: 2005, Applications of functional analysis and operator theory, second edn, Elsevier B. V., Amsterdam.
* Johnstone, I. M.: 2001, On the distribution of the largest eigenvalue in principal components analysis, Ann. Statist. 29, 295–327.
* Johnstone, I. M.: 2008, Multivariate analysis and Jacobi ensembles: largest eigenvalue, Tracy-Widom limits and rates of convergence, Ann. Statist. 36, 2638–2716.
* Kuijlaars, A. B. J.: 2011, Universality, in G. Akemann, J. Baik and P. Di Francesco (eds), Oxford Handbook of Random Matrix Theory, Oxford University Press. arXiv:1103.5922.
* Lax, P. D.: 2002, Functional analysis, Wiley-Interscience, New York.
* Simon, B.: 2005, Trace ideals and their applications, second edn, American Mathematical Society, Providence, RI.
* Stolz, G. and Weidmann, J.: 1993, Approximation of isolated eigenvalues of ordinary differential operators, J. Reine Angew. Math. 445, 31–44.
* Tao, T.: 2012, Topics in random matrix theory, American Mathematical Society.
* Wachter, K. W.: 1980, The limiting empirical measure of multiple discriminant ratios, Ann. Statist. 8, 937–957.
* Weidmann, J.: 1980, Linear operators in Hilbert spaces, Springer-Verlag, New York.
* Weidmann, J.: 1987, Spectral theory of ordinary differential operators, Springer-Verlag, Berlin.
Source: https://arxiv.org/abs/1104.0153 (Folkmar Bornemann; license CC BY-SA 4.0).

arXiv:1104.0435
###### Abstract
Experimental data from the National Air Surveillance Network of Japan (1974-1996) were analyzed, together with independent measurements performed simultaneously in the regions of Ljubljana (Slovenia), Odessa (Ukraine) and the Ukrainian “Academician Vernadsky” Antarctic station (64∘15W; 65∘15S), where the air elemental composition was determined by the standard method of atmospheric particulate matter (PM) collection on nucleopore filters and subsequent neutron activation analysis. Comparative analysis of different
pairs of atmospheric PM element concentration data sets, measured in different
regions of the Earth, revealed a stable linear (on a logarithmic scale)
correlation, showing a power law increase of every atmospheric PM element mass
and simultaneously the cause of this increase – the fractal nature of atmospheric
PM genesis. Within the framework of multifractal geometry we show that the
mass (volume) distribution of atmospheric PM elemental components is a log
normal distribution, which on a logarithmic scale with respect to the random
variable (elemental component mass) is identical to normal distribution. This
means that the parameters of the two-dimensional normal distribution with respect to the corresponding atmospheric PM-multifractal elemental components measured in different regions are a priori connected by the equations of direct and inverse
linear regression, and the experimental manifestation of this fact is the
linear correlation between the concentrations of the same elemental components
in different sets of experimental atmospheric PM data.
###### keywords:
Atmospheric aerosols; Multifractal; Neutron activation analysis; South Pole;
Ukrainian Antarctic station
On the Fractal Mechanism of Interrelation Between the Genesis, Size and Composition of Atmospheric Particulate Matters in Different Regions of the Earth

Vitaliy D. Rusov 1,⋆, Radomir Ilic 2, Radojko R. Jacimovic 2, Vladimir N. Pavlovich 3, Yuriy A. Bondarchuk 1, Vladimir N. Vaschenko 4, Tatiana N. Zelentsova 1, Margarita E. Beglaryan 1, Elena P. Linnik 1, Vladimir P. Smolyar 1, Sergey I. Kosenko 1 and Alla A. Gudyma 4

⋆ E-Mail: siiis@te.net.ua; Tel.: +38-048-2641672; Fax: +38-048-2641672.
## 1 Introduction
Analysis of the concentrations of elements characteristic of the terrestrial crust, of anthropogenic emissions and of marine elements used in monitoring the levels of atmospheric aerosol contamination indicates that these levels at two extremes, (i) the Antarctic (South Pole ref-1 ) and (ii) the global constituent of atmospheric contamination measured at continental background stations ref-2 , display similar patterns. An evident distinction between them, however, is that the mean element concentrations in the atmosphere over continental background stations ($C_{CB}$) located in different regions of the Earth exceed the corresponding concentrations at the South Pole ($C_{SP}$) by some $20-10^{3}$ times.
Comparing the concentrations of a given element i in atmospheric aerosol from samples $\left\\{C_{SP,i}\right\\}$ and $\left\\{C_{CB,i}\right\\}$, it became evident that, on a logarithmic scale, the dependence between the mean concentrations of any particular element in the samples $\left\\{C_{SP,i}\right\\}$ and $\left\\{C_{CB,i}\right\\}$ is described with good precision by a linear law for any pair of sampling stations:
$\ln{C_{CB}^{i}}=a_{CB-SP}+b_{CB-SP}\ln{C_{SP}^{i}},$ (1)
$b_{CB-SP}\approx 1,$ (2)
where $a_{CB-SP}$ and $b_{CB-SP}$ are the intercept and slope (regression
coefficient) of the regression line, respectively.
This unique and rather unexpected result was first established by Pushkin and Mikhailov ref-2 . It is noteworthy, according to ref-2 , that the large enrichment of atmospheric aerosol with those elements which are exceptions to the linear dependence is related mostly either to the anthropogenic contribution produced by extensive technological activity, e.g. the toxic elements (Sb, Pb, Zn, Cd, As, Hg), or to a nearby sea or ocean as a powerful source of marine aerosol components (Na, I, Br, Se, S, Hg).
However, our numerous experimental data persistently indicate that the Pushkin-Mikhailov dependence (1) is in fact a particular case (at $b_{12}=1$) of the more general linear regression equation
$\ln\left(C_{1i}/\rho_{i}\right)=a_{12}+b_{12}\ln{\left(C_{2i}/\rho_{i}\right)}$
(3)
where $C_{1i}$, $C_{2i}$ are the concentrations and $\rho_{i}$ the specific density of the i-th isotope component in atmospheric PM measured in different regions (indexes 1 and 2) of the Earth.
It is obvious that, if we are able to prove that the linear relation (3) between element concentrations in the above-mentioned samples reflects a more general fundamental dependence, it can be used for the theoretical and experimental comparison of atmospheric PM independently of the region of the Earth. Moreover, the linear relation (3) can become a good indicator of the elements defining the level of anthropogenic atmospheric pollution, and thereby the basis of a method for determining a pure air standard or, to put it otherwise, the standard of the natural level of atmospheric pollution of different suburban zones. This is also indicated by the power law character of Eq. (3), reflecting the fact that the total genesis of non-anthropogenic (i.e., natural) atmospheric aerosols does not depend upon the geography of their origin and is of a fractal nature.
In our opinion, this does not contradict the existing concepts of the microphysics of aerosol creation and evolution ref-3 , if we consider the fractal structure of secondary ($D_{p}>1\mu m$) aerosols as structures formed on the prime inoculating centres ($D_{p}<1\mu m$). Such a division of aerosols into two classes – primary and secondary ref-3 – is very important, since it plays the key role in understanding the fractal mechanism of secondary aerosol formation, which shows a scaling structure with well-defined typical scales during aggregation on inoculating centres (primary aerosols) ref-4 .
The objective of this work was twofold: (i) to prove reliably the linear
validity of Eq. (3) through independent measurements with good statistics
performed at different latitudes and (ii) to substantiate theoretically and to
expose the fractal mechanism of interrelation between the genesis, size and
composition of atmospheric PM measured in different regions of the Earth, in
particular in the vicinity of Odessa (Ukraine), Ljubljana (Slovenia) and the
Ukrainian Antarctic station “Academician Vernadsky” (64∘15W; 65∘15S).
## 2 The linear regression equation and experimental data of National Air
Surveillance Network of Japan
In this study, experimental data ref-5 from the National Air Surveillance
Network (NASN) of Japan for selected crustal elements (Al, Ca, Fe, Mn, Sc and
Ti), anthropogenic elements (As, Cu, Cr, Ni, Pb, V and Zn) and a marine
element (Na) in atmospheric particulate matter obtained in Japan for 23 years
from 1974 to 1996 were evaluated. NASN operated 16 sampling stations (Nopporo,
Sapporo, Nonotake, Sendai, Niigata, Tokyo, Kawasaki, Nagoya, Kyoto-Hachiman,
Osaka, Amagasaki, Kurashiki, Matsue, Ube, Chikugo-Ogori and Ohmuta) in Japan,
at which atmospheric PM were regularly collected every month by a low volume
air sampler and analyzed by neutron activation analysis (NAA) and X-ray
fluorescence (XRF). During the evaluation, the annual average concentration of
each element based on 12 monthly averaged data between April (beginning of
financial year in Japan) and March was taken from NASN data reports. The long-
term (23 years) average concentrations were determined from the annual average
concentration of each element.
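The averaging scheme described above (twelve monthly values per Japanese financial year, April through March, then a long-term mean over the annual averages) can be sketched as follows; the function names and numbers are invented for illustration:

```python
def annual_averages(monthly):
    """monthly: dict mapping (year, month) -> concentration.
    Financial year y runs April of y through March of y+1; returns
    {financial_year: annual mean} for years with all 12 months present."""
    years = {}
    for (y, m), c in monthly.items():
        fy = y if m >= 4 else y - 1   # Jan-Mar belong to the previous financial year
        years.setdefault(fy, []).append(c)
    return {fy: sum(v) / len(v) for fy, v in years.items() if len(v) == 12}

def long_term_average(monthly):
    """Long-term mean of the annual average concentrations."""
    ann = annual_averages(monthly)
    return sum(ann.values()) / len(ann)

# Two invented financial years of constant monthly values, 10 and 20 ng/m^3:
data = {}
for i in range(12):
    y, m = (1974, 4 + i) if i < 9 else (1975, i - 8)
    data[(y, m)] = 10.0
for i in range(12):
    y, m = (1975, 4 + i) if i < 9 else (1976, i - 8)
    data[(y, m)] = 20.0
print(annual_averages(data)[1974], long_term_average(data))  # → 10.0 15.0
```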
Analysis of the NASN data ref-5 shows that the highest average concentrations
were observed in Kawasaki (Fe, Ti, Mn, Cu, Ni and V), Osaka (Na, Cr, Pb and
Zn), Ohmuta (Ca) and Niigata (As), respectively. These cities are either
industrial or large cities of Japan. Conversely, the lowest average
concentrations were noticed in Nonotake (Al, Ca, Fe, Ti, Cu, Cr, Ni, V and Zn)
and Nopporo (Mn, As and Pb), as expected ref-5 . On the basis of these
results, Nonotake and Nopporo were selected as the baseline-remote area in
Japan.
A simple model of linear regression, in which the evaluations were made by the
least squares method ref-6 ; ref-7 , was used to build the linear dependence
described by Eq. (3). The results of the NASN data, presented on a logarithmic scale relative to the data of Nonotake $\left\\{C_{Nonotake}^{i}\right\\}$ (see Figure 1), show with high confidence the adequacy of the experimental data to a theoretical dependence of the Eq. (3) type. As can be seen from Figure 1, the Nonotake station was chosen as the baseline, since the lowest concentrations of crustal and anthropogenic elements and small variations in time (23 years) were observed there ref-5 . The NASN data are presented in the form of “city – city” concentration dependences.
Figure 1: Relation between annual concentrations of elements in atmospheric
particulate matter over Japan and the same data obtained in the region of
Nonotake (data from NASN of Japan ref-5 ). In some cases (see text) the
concentration of anthropogenic element _Cr_ was excluded from the data. The
underlined cities are large industrial centres in Japan ref-5 .
Analysis of concentration data for Japanese city atmospheric PM unambiguously shows that the i-th element mass in atmospheric PM grows by a power law, supporting the assumption ref-4 ; ref-8 ; ref-9 ; ref-10 of the fractal nature of atmospheric PM genesis.
## 3 The linear regression equation and the composition of atmospheric
aerosols in different regions of the Earth
It is evident that, in order to generalize the results of the NASN data processing more widely, the validity of Eq. (3) should be checked on the basis of atmospheric aerosol studies performed in independent experiments at different latitudes. For this reason such studies were performed in the regions of Odessa (Ukraine), Ljubljana (Slovenia) and the Ukrainian Academician Vernadsky Antarctic station (64∘15W; 65∘15S). The determination of the element composition of the atmospheric air in these experiments was performed by the traditional method based on collection of atmospheric aerosol particles on nucleopore filters with subsequent $k_{0}$-instrumental neutron activation analysis. Regression analysis was used for processing of the experimental data.
### 3.1 Experimental
#### 3.1.1. Collection of atmospheric aerosol particles on nucleopore filters
For collection of atmospheric aerosol particles on filters, a device of the PM10 type was used with the Gent Stacked Filter Unit (SFU) interface ref-11 ; ref-12 . The main part of this device is a flow-chamber containing an impactor, whose throughput capacity is equivalent to the action of a filter with an aerodynamic pore diameter of 10 $\mu m$ and a 50% aerosol particle collection efficiency based on mass, and an SFU interface designed by the Norwegian Institute for Air Research (NILU) containing two Nucleopore filters, each 47 mm in diameter: the first with a pore diameter of 8 $\mu m$ and the second with a 0.4 $\mu m$ pore diameter. It was experimentally found ref-12 that such geometry (Figure 2) results in an aerosol collection efficiency of approximately 50% for the first filter, whereas for the second filter this value was close to 100% ref-12 . More detailed results of thorough testing of a similar device can be found in ref-12 .
Figure 2: Schematic representation of the flow-chamber of the PM10 type device
for air sampling.
#### 3.1.2. $k_{0}$-instrumental neutron activation analysis
Airborne particulate matter (APM) loaded filters were pressed with a manual press into pellets of 5 mm diameter, each packed in a polyethylene ampoule together with an Al-0.1%Au IRMM-530 disk, 6 mm in diameter and 0.2 mm thick. For short irradiations (2-5 min) they were irradiated in the pneumatic tube (PT) of the 250 kW TRIGA Mark II reactor of the J. Stefan Institute at a thermal neutron flux of $3.5\cdot 10^{12}cm^{-2}s^{-1}$, and for longer irradiations (about 18-20 h per sample) in the carousel facility (CF) at a thermal neutron flux of $1.1\cdot 10^{12}cm^{-2}s^{-1}$. After
irradiation, the sample and standard were transferred to a clean 5 mL polyethylene mini scintillation vial for gamma-ray measurement. To determine the ratio of the thermal to epithermal neutron flux ($f$) and the parameter $\alpha$, which characterizes the degree of deviation of the epithermal neutron flux from the $1/E$-law, the cadmium-ratio method for multiple monitors was used ref-13 . It was found that $f=32.9$ and $\alpha=-0.026$ in the case of the PT channel, and $f=28.7$ and $\alpha=-0.015$ for the CF channel. These values were used in the calculation of the concentrations of short- and long-lived nuclides.
The $\gamma$-activity of irradiated samples was measured on two HPGe detectors (ORTEC, USA) of 20 and 40% measurement efficiency ref-13 . Experimental data obtained on these detectors were processed on EG&G ORTEC Spectrum Master and Canberra S100 high-speed multichannel analyzers, respectively.
To calculate net peak areas, HYPERMET-PC V5.0 software was used ref-14 ,
whereas for evaluation of elemental concentrations in atmospheric aerosol
particles, KAYZERO/SOLCOI® software was used ref-15 . More details of the
$k_{0}$-instrumental neutron-activation analysis applied could be found in
ref-13 .
### 3.2 Comparative analysis of atmospheric PM composition in different
regions of the Earth
The results of presenting the atmospheric PM concentration values of Ljubljana
$\left\\{C_{Ljubljana}^{i}\right\\}$ and Odessa
$\left\\{C_{Odessa}^{i}\right\\}$ on a logarithmic scale relative to the
similar data from Academician Vernadsky station
$\left\\{C_{Ant.station}^{i}\right\\}$ demonstrate with high reliability that
the correlation coefficient r is approximately equal to unity both for the
direct and reverse regression lines “Odessa - Antarctic station”, “Ljubljana -
Antarctic station” (Figure 3):
$r=\left[b_{12}b_{21}\right]^{1/2}\approx 1,$ (4)
where $b_{12}$ and $b_{21}$ are the slopes of direct and reverse regression
lines (Eq. (1)) for corresponding pairs.
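Equation (4) is the standard least-squares identity $b_{12}b_{21}=r^{2}$: the direct slope is $r\sigma_{1}/\sigma_{2}$ and the inverse slope $r\sigma_{2}/\sigma_{1}$, so their product recovers the squared correlation coefficient. A quick check on invented log-concentration pairs (helper names are ours):

```python
import math

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Invented log-concentrations of the same elements at two stations:
ln1 = [0.1, 1.3, 2.2, 3.1, 4.4, 5.2]
ln2 = [0.5, 1.1, 2.6, 2.9, 4.1, 5.6]
b12, b21 = slope(ln2, ln1), slope(ln1, ln2)   # direct and inverse slopes
r = math.sqrt(b12 * b21)                      # Eq. (4)
print(abs(r - abs(corr(ln1, ln2))) < 1e-12)   # → True
```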
Figure 3: Lines of direct regression “Odessa - Antarctic station” (a) and “Ljubljana - Antarctic station” (c), and lines of inverse regression “Antarctic station - Odessa” (b) and “Antarctic station - Ljubljana” (d).
Figures 4 and 5 show the regression lines for daily normalized average concentrations of crustal, anthropogenic and marine elements (Table LABEL:tab1), measured in March 2002 in the regions of Odessa (Ukraine), the Ukrainian Antarctic station (64∘15W; 65∘15S) and Ljubljana (Slovenia), relative to the same data obtained in Nonotake ref-5 (Figure 4) and at the South Pole ref-1 (Figure 5).
Figure 4: Relationship between atmospheric PM elemental concentrations measured in the regions of Odessa (Ukraine), Ljubljana (Slovenia), Vernadsky station (64∘15W; 65∘15S) and the South Pole ref-1 , and the same data measured in the region of Nonotake ref-5 .

Figure 5: Relationship between atmospheric PM elemental concentrations measured in the regions of Odessa (Ukraine), Ljubljana (Slovenia), Vernadsky station (64∘15W; 65∘15S) and Nonotake ref-5 , and the same data measured in the region of the South Pole ref-1 .

Table 1: Chemical element composition of atmospheric PM, in $ng/m^{3}$. Symbols $C_{SP}$, $C_{CB}$, $C_{Nonotake}$ denote concentrations at the South Pole, continental background stations and Nonotake, respectively. Measured concentrations in the vicinity of the Ukrainian Antarctic Station, Odessa and Ljubljana are denoted by $C_{Antarctica}$, $C_{Odessa}$ and $C_{Ljubljana}$.

Element | $C_{SP}$ | $C_{CB}$ | $C_{Nonotake}$ | $C_{Antarctica}$ | $C_{Odessa}$ | $C_{Ljubljana}$
---|---|---|---|---|---|---
S | 49 | - | - | - | - | -
Si | - | - | - | - | - | -
Cl | 2.4 | 90 | - | - | - | 528.2
Al | 0.82 | $1.2\times 10^{3}$ | 237.4 | - | - | 1152
Ca | 0.49 | - | 185.8 | 93 | 2230 | 1630
Fe | 0.62 | $1\times 10^{2}$ | 157.6 | 53.4 | 1201 | 1532
Mg | 0.72 | - | - | - | - | 1264
K | 0.68 | - | - | 24.4 | 502.8 | 918
Na | 3.3 | $1.4\times 10^{2}$ | 538.8 | 361.95 | 393.8 | 1046
Pb | - | 10 | 23.2 | - | - | -
Zn | $3.3\times 10^{-2}$ | 10 | 38.3 | 4.48 | 62.6 | 127.2
Ti | 0.1 | - | 15.4 | - | - | -
F | - | - | - | - | - | -
Br | 2.6 | 4 | - | 1.34 | 14.81 | 9.62
Cu | $2\times 10^{-2}$ | 3 | 8.20 | 11050 | 4240 | -
Mn | $1.2\times 10^{-2}$ | 3 | 6.61 | - | - | 34.7
Ni | - | 1 | 1.48 | - | - | -
Ba | $1.6\times 10^{-2}$ | - | - | - | - | 6.91
V | $1.3\times 10^{-3}$ | 1 | 2.44 | - | - | 17.08
I | 0.74 | - | - | - | - | 36.24
Cr | $4\times 10^{-2}$ | 0.8 | 1.14 | 3.11 | 45.2 | -
Sr | $5.2\times 10^{-2}$ | - | - | - | - | -
As | $3.1\times 10^{-2}$ | 1 | 2.74 | - | 1.83 | 4.18
Rb | $2\times 10^{-3}$ | - | - | - | - | 0.92
Sb | $8\times 10^{-4}$ | 0.5 | - | 0.07 | 1.97 | 4.62
Cd | < $1.5\times 10^{-2}$ | 0.4 | - | - | - | -
Mo | 49 | - | - | 1.46 | 4.08 | -
Se | < 0.8 | 0.3 | - | 0.02 | 0.47 | 1.21
Ce | $4\times 10^{-3}$ | - | - | - | 4.22 | 2.72
Hg | 0.17 | 0.3 | - | 0.36 | 2.75 | -
W | $1.5\times 10^{-3}$ | - | - | 0.24 | - | -
La | $4.5\times 10^{-4}$ | - | - | - | 1.79 | 1.36
Ga | < $1\times 10^{-3}$ | - | - | - | - | -
Co | $5\times 10^{-4}$ | 0.1 | - | 0.02 | 0.89 | 0.29
Ag | < $4\times 10^{-4}$ | - | - | 2.74 | 1.42 | -
Cs | $1\times 10^{-4}$ | - | - | - | 0.09 | -
Sc | $1.6\times 10^{-4}$ | $5\times 10^{-2}$ | 0.04 | $3.36\times 10^{-3}$ | 0.21 | 0.298
Th | $1.4\times 10^{-4}$ | - | - | - | 0.29 | 0.060
U | - | - | - | - | 0.07 | -
Sm | $9\times 10^{-5}$ | - | - | $2.68\times 10^{-3}$ | 0.24 | 0.198
In | $5\times 10^{-5}$ | - | - | - | - | -
Ta | $7\times 10^{-5}$ | - | - | - | - | -
Hf | $6\times 10^{-5}$ | - | - | - | - | -
Yb | < 0.05 | - | - | - | - | -
Eu | $2\times 10^{-5}$ | - | - | - | - | -
Au | $4\times 10^{-5}$ | - | - | $1.07\times 10^{-5}$ | 0.006 | -
Lu | $6.7\times 10^{-6}$ | - | - | - | - | -
The choice of the atmospheric aerosol concentration values of the South Pole
as a baseline, relative to which a dependence of the type given by Eq. (1) was
analyzed, was made for three reasons: (i) these data were obtained by the same
technique and method as in section 3.1, and simultaneously expand the
geographical comparison, (ii) the element spectrum that characterizes the
atmosphere of South Pole covers a wide range of elements (see Table 1); and
(iii) the South Pole has the purest atmosphere on Earth, making it a
convenient basis for comparative analysis.
We present also the monthly normalized average concentrations of atmospheric
PM measured over the period 2006-2007 in the region of the Ukrainian Antarctic
station “Academician Vernadsky” (Figures 6, 7).
Figure 6: Regression lines for the normalized monthly average concentrations of atmospheric PM in the region of the Ukrainian Antarctic station “Academician Vernadsky”, measured in August-December 2006 with respect to October 2006.

Figure 7: Regression lines for the normalized monthly average concentrations of atmospheric PM in the region of the Ukrainian Antarctic station “Academician Vernadsky”, measured in January-March 2007 with respect to October 2006.
Comparative analysis of experimental sets of normalized concentrations of
atmospheric aerosol elements measured in our experiments and independent
experiments of the Japanese National Air Surveillance Network (NASN) shows a
stable linear (on a logarithmic scale) dependence on different time scales
(from average daily to annual). That points to a power law increase of every
atmospheric PM element mass (volume) and simultaneously to the cause of this
increase – the fractal nature of atmospheric PM genesis.
In other words, stable fulfillment of the equality (4), not only for the experimental data shown in Figures 3-7 but also for any pair of the NASN data ref-5 (see Figure 1), unambiguously indicates that, on the one hand, the model of linear regression is satisfactory and, on the other hand, any of the analyzed samples $\left\\{m_{i}\right\\}$, which describe the sequence of i-th element partial concentrations in an aerosol, must obey a Gaussian distribution with respect to the random quantity $\ln{p_{i}}$. Proof of these assertions for multifractal objects is presented below.
## 4 The spectrum of multifractal dimensions and log normal mass distribution
of secondary aerosol elements
A detailed analysis of Figs. 1, 3-5, where the linear regressions for
different pairs of experimental samples of element concentrations in
atmospheric PM measured at various latitudes are shown, allows us to draw a
definite conclusion about the multifractal nature of PM. The basis of such a
conclusion is the reliably observed linear dependence of Eq. (3) type between
the normalized concentrations $C_{i}$ of the same element i in atmospheric PM
in different regions of the Earth.
Thus, it is necessary to consider atmospheric PM, which is a multicomponent (with respect to elements) system, as a nonhomogeneous fractal object, i.e., as a multifractal. At the same time, the complete description of a nonhomogeneous fractal object requires the spectrum of fractal dimensions $f(\alpha)$ and not a single dimension $\alpha_{0}$ (which is equal to $D_{0}$ for a homogeneous fractal). We will show below that the spectrum of fractal dimensions $f(\alpha)$ of a multifractal predetermines the log normal type of statistics or, in other words, the log normal type of mass distribution of the multifractal $i$-th component. This is very important, because the representation of the mass distribution of atmospheric PM as a log normal distribution is predicted within the framework of the self-preserving theory ref-20 ; ref-21 and is at the same time confirmed by numerous experiments ref-3 .
To explain the main idea of derivation we give the basic notions and
definitions of the theory of multifractals. Let us consider a fractal object
which occupies some bounded region £ of size $L$ in Euclidian space of
dimension $d$. At some stage of its construction let it represents the set of
$N>>1$ points distributed somehow in this region. We divide this region £ into
cubic cells of side $\varepsilon<<L$ and volume $\varepsilon^{d}$. We will
take into consideration only the occupied cells, where at least one point is.
Let $i$ be the number of occupied cell $i=1,2,….N(\varepsilon)$, where
$N(\varepsilon)$ is the total number of occupied cells, which depends on the
cell size $\varepsilon$ . Then in the case of a regular (homogeneous) fractal,
according to the definition of fractal dimensions $D$, the total number of
occupied cells $N(\varepsilon)$ at quite small $\varepsilon$ looks like
$N(\varepsilon)\approx\varepsilon_{L}^{-D}=L_{\varepsilon}^{D}.$ (5)
where $\varepsilon$ is the cell size in $L$ units, $L_{\varepsilon}$ is the
size of fractal object in $\varepsilon$ units.
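Relation (5) is the usual box-counting law. As a concrete sanity check, one can count occupied cells for the middle-thirds Cantor set (a standard textbook example, not one treated in the paper), where $N(\varepsilon)=2^{k}$ at $\varepsilon=3^{-k}$ and hence $D=\ln 2/\ln 3\approx 0.631$. The sketch below uses exact integer arithmetic for the box indices so that no endpoint is misclassified by floating-point rounding:

```python
import math

def cantor_left_endpoints(depth):
    """Integer numerators (denominator 3**depth) of the left endpoints of the
    2**depth intervals in the middle-thirds Cantor construction."""
    nums = [0]
    for level in range(depth):
        step = 3 ** (depth - level - 1)   # one third of a level-`level` interval width
        nums = [n for a in nums for n in (a, a + 2 * step)]
    return nums

def box_count(nums, depth, k):
    """Number of occupied boxes of side 3**(-k), computed exactly."""
    return len({n // 3 ** (depth - k) for n in nums})

depth = 10
nums = cantor_left_endpoints(depth)
# D estimated from N(eps) ~ eps^{-D} at each scale eps = 3^{-k}:
d_est = [math.log(box_count(nums, depth, k)) / (k * math.log(3)) for k in range(1, depth + 1)]
print(all(abs(d - math.log(2) / math.log(3)) < 1e-12 for d in d_est))  # → True
```

Here the estimate is exact at every scale because the construction is exactly self-similar; for empirical data one would instead fit the slope of $\ln N(\varepsilon)$ against $\ln(1/\varepsilon)$.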
When a fractal is nonhomogeneous the situation becomes more complex because, as noted above, a multifractal is characterized by the spectrum of fractal dimensions $f(\alpha)$, i.e. by the set of probabilities $p_{i}$ which show the fractional population of the cells $\varepsilon$ by which the initial set is covered: the smaller the cell size, the smaller its population. For self-similar sets the dependence of $p_{i}$ on the cell size $\varepsilon$ is a power function
$p_{i}(\varepsilon)=\frac{1}{N_{i}(\varepsilon)}\approx\varepsilon_{L}^{\alpha_{i}}=L_{\varepsilon}^{-\alpha_{i}}$
(6)
where $\alpha_{i}$ is a certain exponent which, generally speaking, is
different for the different cells $i$. It is obvious, that for a regular
(homogeneous) fractal all the indexes $\alpha_{i}$ in Eq. (6) are identical
and equal to the fractal dimension $D$.
We now pass on to the probability distribution of the different values $\alpha_{i}$. Let $n(\alpha)d\alpha$ be the probability that $\alpha_{i}$ lies in the interval $\left[\alpha,\alpha+d\alpha\right]$; in other words, $n(\alpha)d\alpha$ is the relative number of cells $i$ whose measure $p_{i}$ has $\alpha_{i}$ in the interval $\left[\alpha,\alpha+d\alpha\right]$. According to (5), for a monofractal this number is proportional to the total number of cells $N(\varepsilon)\approx\varepsilon_{L}^{-D}$, since all of the $\alpha_{i}$ are identical and equal to the fractal dimension $D$.
However, this is not true for a multifractal. The different values of $\alpha_{i}$ occur with a probability characterized by different (depending on $\alpha$) values of the exponent $f(\alpha)$, and this probability inherently corresponds to a spectrum of fractal dimensions of the homogeneous subsets $\mathcal{L}_{\alpha}$ of the initial set $\mathcal{L}$:
$n(\alpha)\approx\varepsilon_{L}^{-f(\alpha)}.$
(7)
Thus the term “multifractal” becomes clear: it can be understood as a hierarchical union of different but homogeneous fractal subsets $\mathcal{L}_{\alpha}$ of the initial set $\mathcal{L}$, each of which has its own fractal dimension $f(\alpha)$.
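As a concrete illustration of a spectrum $f(\alpha)$, consider the binomial multiplicative measure on dyadic cells with weights $m$ and $1-m$ (a standard textbook example, not one treated in the paper). For it $\tau(q)=-\log_{2}\left(m^{q}+(1-m)^{q}\right)$, $\alpha(q)=\tau^{\prime}(q)$ and $f=q\alpha-\tau$ are known in closed form; in particular $f(\alpha_{0})=D_{0}=1$ at $q=0$ and $f(\alpha_{1})=\alpha_{1}$ at $q=1$. A sketch using a numerical derivative:

```python
import math

def tau(q, m=0.3):
    """tau(q) = -log2(m^q + (1-m)^q) for the binomial measure with weight m."""
    return -math.log(m ** q + (1.0 - m) ** q) / math.log(2.0)

def alpha_f(q, m=0.3, h=1e-6):
    """Return (alpha(q), f(alpha(q))) via the Legendre transform f = q*alpha - tau,
    with alpha = dtau/dq approximated by a central difference."""
    a = (tau(q + h, m) - tau(q - h, m)) / (2.0 * h)
    return a, q * a - tau(q, m)

a0, f0 = alpha_f(0.0)   # top of the spectrum: f(alpha_0) = D_0 = 1
a1, f1 = alpha_f(1.0)   # information dimension: f(alpha_1) = alpha_1
print(abs(f0 - 1.0) < 1e-6, abs(f1 - a1) < 1e-6)  # → True True
```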
Now we show how the function $f(\alpha)$ predetermines the log normal form of the mass (volume) distribution of the multifractal $i$-th component. To ease further description we represent expression (7) in the following equivalent form:
$n(\alpha)\approx\exp{\left[f(\alpha)\ln{L_{\varepsilon}}\right]}.$
(8)
It is not hard to show ref-19 that the single-mode function $f(\alpha)$ can be approximated by a parabola near its maximum at the value $\alpha_{0}$:
$f(\alpha)\approx D_{0}-\eta\cdot(\alpha-\alpha_{0})^{2},$ (9)
where the curvature of the parabola
$\eta=-\frac{f^{\prime\prime}(\alpha_{0})}{2}=\frac{1}{2\left[2(\alpha_{0}-D_{0})+D^{\prime\prime}_{q=0}\right]}$
(10)
is determined by the second derivative of the function $f(\alpha)$ at the point $\alpha_{0}$. Due to the concavity of the function $f(\alpha)$ it is obvious that the magnitude in square brackets must always be positive. The fact that the last summand $D^{\prime\prime}_{q=0}$ in these brackets is numerically small and can be neglected will be grounded below.
At large $L_{\varepsilon}$ the distribution $n(\alpha)$ (8), with allowance for (9), takes the form
$n(\alpha)\sim\exp{\left[D_{0}\ln{L_{\varepsilon}}-\frac{\left(\alpha-\alpha_{0}\right)^{2}\ln{L_{\varepsilon}}}{4\left(\alpha_{0}-D_{0}\right)}\right]}.$
(11)
Then, taking into account (5), we obtain from (11) the distribution function
of random variable $p_{i}$
$n(p_{i})\sim\ln{N_{D_{0}}}\cdot\exp{\left[-\frac{1}{4\ln{\left(1/p_{0}N_{D_{0}}\right)}}\left(\ln{p_{i}}-\ln{p_{0}}\right)^{2}\right]},$
(12)
which with consideration of normalization takes on the final form
$P(p_{i})=\frac{1}{\sqrt{2\pi}\sigma}\exp{\left(-\mu-\frac{1}{2}\sigma^{2}\right)}\cdot\exp{\left[-\frac{1}{2\sigma^{2}}\left(\ln{p_{i}}-\mu\right)^{2}\right]},$
(13)
where
$\mu=\ln{p_{0}},~{}~{}~{}\sigma^{2}=2\ln{\frac{1}{p_{0}N_{D_{0}}}}.$ (14)
This is the so-called log normal (relative) mass $p_{i}$ distribution. It is
possible to present the first moments of such a kind of distribution for
random variable $p_{i}$ in the following form:
$\left\langle
p\right\rangle=\exp{\left(\mu+\frac{3}{2}\sigma^{2}\right)}=1/p_{0}^{2}N_{D_{0}}^{3}=L_{\varepsilon}^{2\alpha_{0}-3D_{0}},$
(15)
$\mbox{var}(p)=\exp{\left(2\mu\right)}\cdot\left[\exp{\left(4\sigma^{2}\right)}-\exp{\left(3\sigma^{2}\right)}\right]=L_{\varepsilon}^{2\left(2\alpha_{0}-3D_{0}\right)}\left(L_{\varepsilon}^{2\left(2\alpha_{0}-D_{0}\right)}-1\right).$
(16)
At the same time, it is easy to show that the distribution (13) for the random variable $\ln{p_{i}}$ has the classical Gaussian form
$P\left(\ln{p}\right)=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp{\left\\{-\frac{1}{2\sigma^{2}}\left[\ln{p}-\left\langle\ln{p}\right\rangle\right]^{2}\right\\}},$
(17)
where the first moments of this distribution for the random variable
$\ln{p_{i}}$ look like
$\left\langle\ln{p}\right\rangle=\mu+\sigma^{2}=\left(\alpha_{0}-2D_{0}\right)\ln{L_{\varepsilon}},$
(18)
$\mbox{var}\left(\ln{p}\right)=\sigma^{2}=2\left(\alpha_{0}-D_{0}\right)\ln{L_{\varepsilon}}.$
(19)
Thus, according to the known theorem on the shape of the multidimensional normal distribution ref-22 , the normal law of the plane distribution for the two-dimensional random variable $(\ln{p_{1i}},\ln{p_{2i}})$ is written as
$P\left(\ln{p_{1i}},\ln{p_{2i}}\right)=\frac{1}{2\pi\sigma_{1}\sigma_{2}\sqrt{1-r^{2}}}\exp{\left[-\frac{1}{2\left(1-r^{2}\right)}\left(u^{2}+v^{2}-2ruv\right)\right]},$
(20)
where
$u=\frac{\ln{p_{1i}}-\left\langle\ln{p_{1i}}\right\rangle}{\sigma_{1}},~{}~{}~{}v=\frac{\ln{p_{2i}}-\left\langle\ln{p_{2i}}\right\rangle}{\sigma_{2}},$
(21)
$r=\mbox{cov}(\ln{p_{1i}},\ln{p_{2i}})/\sigma_{1}\sigma_{2}$ is the
correlation coefficient between $\ln{p_{1i}}$ and $\ln{p_{2i}}$.
Then, by virtue of the well-known linear correlation theorem ref-6 ; ref-22 , it is easy to show that $\ln{p_{1i}}$ and $\ln{p_{2i}}$ are connected by a linear correlation dependence if the two-dimensional random variable $(\ln{p_{1i}},\ln{p_{2i}})$ is normally distributed. This means that the parameters of the two-dimensional normal distribution of the random values $p_{1i}$ and $p_{2i}$ for the $i$-th component in one aerosol particle, measured in different regions of the Earth (indexes 1 and 2), are connected by the equations of direct linear regression
$\ln{p_{1i}}-\left\langle\ln{p_{1i}}\right\rangle=r\frac{\sigma_{1i}}{\sigma_{2i}}\left[\ln{p_{2i}}-\left\langle\ln{p_{2i}}\right\rangle\right]$
(22)
and inverse linear regression
$\ln{p_{2i}}-\left\langle\ln{p_{2i}}\right\rangle=r\frac{\sigma_{2i}}{\sigma_{1i}}\left[\ln{p_{1i}}-\left\langle\ln{p_{1i}}\right\rangle\right],$
(23)
where $i=1,\ldots,N_{p}$ is the component number.
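The content of (22)-(23) can be illustrated with synthetic data: for a bivariate normal sample, the least-squares slope of $\ln{p_{1i}}$ on $\ln{p_{2i}}$ converges to $r\,\sigma_{1}/\sigma_{2}$. A sketch with made-up parameters (not measured aerosol values):

```python
import numpy as np

# For a bivariate normal (ln p1, ln p2), the OLS slope of ln p1 on
# ln p2 equals r * sigma1 / sigma2, which is the direct regression (22).
rng = np.random.default_rng(1)
sigma1, sigma2, r = 1.5, 0.8, 0.9          # illustrative parameters
cov = [[sigma1**2, r * sigma1 * sigma2],
       [r * sigma1 * sigma2, sigma2**2]]
lnp1, lnp2 = rng.multivariate_normal([-6.0, -4.0], cov, size=200_000).T

slope = np.cov(lnp1, lnp2)[0, 1] / np.var(lnp2, ddof=1)
print(slope, r * sigma1 / sigma2)   # both close to 1.6875
```

The inverse regression (23) follows by exchanging the roles of the two variables.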
Taking into consideration that we experimentally measure the total
concentration $C_{i}$ of the $i$-th component in a unit volume of atmosphere,
the partial concentration $m_{i}$ of the $i$-th component in one aerosol
particle measured in different regions of the Earth (indexes 1 and 2) takes
the form
$m_{1i}=C_{1i}/n_{1},~{}~{}~{}m_{2i}=C_{2i}/n_{2},$ (24)
where $n_{1}$ and $n_{2}$ are the numbers of inoculating centers, the role of
which is played by the primary aerosols $(D_{p}<1\mu m)$.
Here it is necessary to make an important digression concerning the choice of
a quantitative measure for the description of fractal structures. According to
Feder ref-18 , the main difficulty is the determination of the probabilities
appropriate to the chosen measure. In other words, if the choice of measure
determines the procedure for finding the probabilities $\left\\{p_{i}\right\\}$,
which describe the increment of the chosen measure at a given level of
resolution $\varepsilon$, then the probabilities themselves predetermine, in
their turn, the proper method of their measurement. So the general strategy for
the quantitative description of fractal objects should contain the following
direct or reverse procedure: the choice of measure, the set of appropriate
probabilities, and the method of measuring these probabilities.
We choose the reverse procedure. Since in the present work the averaged masses
of the elemental components of the atmospheric PM-multifractal are measured,
the geometrical probabilities that can be constructed from the experimental
data for some fixed $\varepsilon$ have the practically unambiguous form:
$p_{i}\left(\varepsilon=const\right)=\frac{m_{i}/\rho_{i}}{\sum\limits_{i}m_{i}/\rho_{i}}=\frac{C_{i}/\rho_{i}}{\sum\limits_{i}C_{i}/\rho_{i}},$
(25)
where $\rho_{i}$ is the specific gravity of the $i$-th component of secondary
aerosol.
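Computationally, Eq. (25) is a simple normalization of the volume fractions $C_{i}/\rho_{i}$; a sketch with placeholder numbers (hypothetical values, not measurements from this work):

```python
import numpy as np

# Geometric probabilities of Eq. (25): normalize the per-component
# volumes C_i / rho_i so that the p_i sum to unity.
C = np.array([12.0, 3.5, 0.8, 40.0])    # concentrations, hypothetical
rho = np.array([2.2, 7.9, 11.3, 2.6])   # component densities, hypothetical
p = (C / rho) / np.sum(C / rho)
print(p, p.sum())    # the p_i sum to 1 by construction
```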
Since the random nature of atmospheric PM formation is a priori determined by
the random process of multicomponent diffusion-limited aggregation (DLA), we
used the so-called harmonic measure ref-18 to describe quantitatively the
stochastic surface inhomogeneity or, more precisely, to study the evolution of
the possible growth of the cluster PM diameter.
In practice, a harmonic measure is estimated in the following way. Because the
perimeter of clusters formed by DLA is proportional to their mass, the number
of knots $N_{p}$ on the perimeter, i.e., the number of possible growing points,
is proportional to the number of cells $N$ in a cluster. Both of these
magnitudes, $N_{p}$ and $N$, change according to the power law (5) depending on
the cluster diameter $L$. From here it follows that all the knots $N_{p}$
belonging to the perimeter of such clusters have a nonzero probability that a
randomly wandering particle will arrive at them, i.e., they are the carriers of
the harmonic measure $M_{d}(q,\varepsilon_{L})$:
$M_{d}\left(q,\varepsilon_{L}\right)=\sum\limits_{i=1}^{N_{p}}p_{i}^{q}\cdot\left(\frac{\varepsilon}{L}\right)^{d}=Z\left(q,\varepsilon_{L}\right)\cdot\varepsilon_{L}^{d}\xrightarrow[L\rightarrow\infty]{}\begin{cases}0,&d>\tau(q)\\\
\infty,&d<\tau(q)\end{cases},$ (26)
where $Z(q,\varepsilon_{L})$ is the generalized statistical sum in the interval
$-\infty<q<\infty$, and $\tau(q)$ is the mass index, for which the measure
becomes neither zero nor infinite as $L\rightarrow\infty$
($\varepsilon_{L}\rightarrow 0$).
It is obvious that in such a form the harmonic measure is described by the full
index sequence $\tau(q)$, which determines the power law according to which the
probabilities $p_{i}$ change depending on $L$. At the same time, the spectrum
of fractal dimensions for the harmonic measure is calculated in the usual way,
but using “Brownian particle probes” of fixed diameter $\varepsilon$ to study
the possible growth of the cluster diameter $L$. From (26) it follows that in
this case the generalized statistical sum $Z(q,\varepsilon_{L})$ can be
represented in the form
$Z\left(q,\varepsilon_{L}\right)=\sum_{i=1}^{N_{p}}p_{i}^{q}\sim\varepsilon_{L}^{-\tau(q)}.$
(27)
As is known from numerical simulation of the harmonic measure, when the DLA
cluster surface is probed by a large number of randomly wandering particles,
the peaks of “high” asperities in such a fractal aggregate have greater
possibilities than the peaks of “low” asperities. So, if the possible growing
points on the perimeter of our aerosol multifractal are numbered by the index
$i=1,\ldots,N_{p}$, the set of probabilities
$\Re=\left\\{p_{i}\right\\}_{i=1}^{N_{p}},$ (28)
composed of probabilities of the Eq. (25) type will emulate the possible set of
interaction cross-sections between a Brownian particle and the atmospheric PM-
multifractal surface, which consists of $N_{p}$ groups of identical atoms
distributed on the surface. Each of these groups characterizes the $i$-th
elemental component in one atmospheric PM.
The situation is reinforced by the fact that, by virtue of (17), each of the
independent components obeys the Gauss distribution which, as is known ref-22
, belongs to the class of infinitely divisible distributions or, more
specifically, to the class of so-called $\alpha$-stable distributions. This
means that although the Gauss distribution has different parameters (the
average $\mu_{i}=\left\langle\ln{p_{i}}\right\rangle$ and variance
$\sigma_{i}^{2}=\mbox{var}(\ln{p_{i}})$) for each of the components, the final
distribution is a Gauss distribution too, but with the parameters
$\mu=\left\langle\ln{p}\right\rangle=\sum\mu_{i}$ and
$\sigma^{2}=\sum\sigma_{i}^{2}$. From here it follows that the parameters of
the two-dimensional normal distribution of all corresponding components in the
plane $\left\\{p_{1},p_{2}\right\\}$ are connected by the equation of direct
linear regression:
$\ln{p_{1}}-\left\langle\ln{p_{1}}\right\rangle=r\frac{\sigma_{1}}{\sigma_{2}}\left[\ln{p_{2}}-\left\langle\ln{p_{2}}\right\rangle\right]$
(29)
and inverse linear regression:
$\ln{p_{2}}-\left\langle\ln{p_{2}}\right\rangle=r\frac{\sigma_{2}}{\sigma_{1}}\left[\ln{p_{1}}-\left\langle\ln{p_{1}}\right\rangle\right].$
(30)
So, the validity of the assumption (28) for the secondary aerosol or,
equivalently, the validity of applying the harmonic measure to the quantitative
description of the aerosol stochastic fractal surface, will be proven if
simultaneous consideration of (28) and the equation of direct linear regression
(29) results in an equation identical to the equation of direct linear
regression (3).
Therefore, taking into account (28) we write down, for example, the equation
of direct linear regression or, in other words, the condition of linear
correlation between the samples of $i$-th component concentrations
$\left\\{\ln{\left(C_{1i}/\rho_{i}\right)}\right\\}$ and
$\left\\{\ln{\left(C_{2i}/\rho_{i}\right)}\right\\}$ in an atmospheric aerosol
measured in different places (indexes 1 and 2):
$\ln{\left(C_{1i}/\rho_{i}\right)}=a_{12}+b_{12}\ln{\left(C_{2i}/\rho_{i}\right)},~{}~{}~{}a_{12}=\frac{1}{N_{p}}\ln{\frac{\prod\limits_{i=1}^{N_{p}}\left(C_{1i}/\rho_{i}\right)}{\left(\prod\limits_{i=1}^{N_{p}}\left(C_{2i}/\rho_{i}\right)\right)^{b_{12}}}},~{}~{}~{}b_{12}=r\frac{\sigma_{1}}{\sigma_{2}}.$
(31)
It is obvious that this equation completely coincides with the equation of
linear regression (3), but it is obtained theoretically, on the basis of the
Gauss distribution of the random magnitude $\ln{p_{i}}$, and not in an
empirical way. The physical interpretation of the intercept $a_{12}$ is evident
from expression (29), whereas the meaning of the regression coefficient
$b_{12}$ becomes clear, according to (19) and (29), from the following
expression:
$b_{12}=r\left(\frac{\mbox{var}\left(\ln{p_{1}}\right)}{\mbox{var}\left(\ln{p_{2}}\right)}\right)^{1/2}=r\left[\frac{\ln{\left(L_{1}/\varepsilon_{1}\right)}}{\ln{\left(L_{2}/\varepsilon_{2}\right)}}\right]^{1/2},$
(32)
where $L_{1}$ and $L_{2}$ are the average sizes of the separate atmospheric PM-
multifractals typical for the atmosphere of the investigated regions (indexes 1
and 2) of the Earth, and $\varepsilon_{1}$ and $\varepsilon_{2}$ are the sizes
of the cells into which the corresponding atmospheric PM-multifractals are
divided.
Below we give an algorithm of the computational procedure for the
identification of the generalized fractal dimension $D_{q}$ spectra and the
function $f(\alpha)$. It is obvious that such a problem can be solved via the
following redundant system of nonlinear equations of the Eqs. (15), (16), (18)
and (19) type:
$\begin{array}[]{c}\ln{\left\langle
p\right\rangle}=\left(2\alpha_{0}-3D_{0}\right)\ln{\left(L/\varepsilon\right)},\\\
\\\
\mbox{var}(p)=\left(L/\varepsilon\right)^{2\left(2\alpha_{0}-3D_{0}\right)}\left[\left(L/\varepsilon\right)^{2\left(\alpha_{0}-D_{0}\right)}-1\right].\end{array}$
(33)
$\begin{array}[]{c}\left\langle\ln{p}\right\rangle=\mu+\sigma^{2}=\left(\alpha_{0}-2D_{0}\right)\ln{\left(L/\varepsilon\right)},\\\
\\\
\mbox{var}\left(\ln{p}\right)=\sigma^{2}=2\left(\alpha_{0}-D_{0}\right)\ln{\left(L/\varepsilon\right)},\end{array}$
(34)
where $\varepsilon$ is the fixed size of the cubic cells into which the bounded
region £ of size $L$ in Euclidean space of dimension $d$ is divided.
To solve the system of equations (33)-(34) with respect to the variables
$\alpha_{0}$, $D_{0}$ and $\varepsilon$, $\varepsilon_{L}$, it is necessary and
sufficient to measure experimentally the sample of the $i$-th component
concentrations $\left\\{C_{i}\right\\}$ in a unit volume of atmospheric air
(see section 3) and the size distribution of atmospheric PM for the
determination of the average size $L$.
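Note that once $\ln{(L/\varepsilon)}$ is fixed, the pair (34) is linear in $(\alpha_{0},D_{0})$ and can be solved in closed form; the sketch below recovers the parameters from synthetic (not measured) moments:

```python
import numpy as np

# Solving the pair (34) for (alpha_0, D_0) at fixed ln(L/eps):
#   <ln p>    / ln(L/eps) = alpha_0 - 2 D_0
#   var(ln p) / ln(L/eps) = 2 alpha_0 - 2 D_0
log_L_eps = 10.0
alpha0_true, D0_true = 0.9, 0.8            # illustrative "true" values
mean_lnp = (alpha0_true - 2 * D0_true) * log_L_eps  # stands in for a measurement
var_lnp = 2 * (alpha0_true - D0_true) * log_L_eps

A = np.array([[1.0, -2.0],
              [2.0, -2.0]])
b = np.array([mean_lnp, var_lnp]) / log_L_eps
alpha0, D0 = np.linalg.solve(A, b)
print(alpha0, D0)   # recovers 0.9 and 0.8
```

With real data, the remaining equations (33) serve as a redundant consistency check on the fitted $\alpha_{0}$, $D_{0}$ and $L/\varepsilon$.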
It will be recalled that, from the physical standpoint, the so-called box-
counting dimension $D_{0}$, the entropy dimension $D_{1}$ and the correlation
dimension $D_{2}$ are the most interesting ones in the spectrum of generalized
fractal dimensions $D_{q}$ corresponding to different multifractal
inhomogeneities. Within the framework of the notions and definitions of
multifractal theory mentioned above, we describe below a simple procedure for
finding the spectrum of generalized fractal dimensions $D_{q}$, taking into
account the solution of the system of equations (33)-(34).
From (27) it follows that in our case the multifractal is characterized by a
nonlinear function $\tau(q)$ of the moments $q$:
$\tau(q)=\lim_{\varepsilon_{L}\rightarrow
0}\frac{\ln{Z\left(q,\varepsilon_{L}\right)}}{\ln{\varepsilon_{L}}}.$ (35)
As before, we consider a fractal object occupying some bounded region £ of
“running” size $L$ (so that $\varepsilon_{L}\rightarrow 0$) in Euclidean space
of dimension $d$. Then the spectrum of generalized fractal dimensions $D_{q}$
characterizing the multifractal statistical inhomogeneity (the distribution of
points in the region £) is determined by the relation
$D_{q}=\frac{\tau(q)}{q-1},$ (36)
where $\left(q-1\right)$ is the numerical factor that normalizes the function
$\tau(q)$ so that the equality $D_{q}=d$ is fulfilled for a set of constant
density in $d$-dimensional Euclidean space.
Further, we are interested in the connection, well known in multifractal
theory, between the mass index $\tau(q)$ and the multifractal function
$f(\alpha)$, by which the spectrum of generalized fractal dimensions $D_{q}$ is
determined:
$D_{q}=\frac{\tau(q)}{q-1}=\frac{1}{q-1}\left[q\cdot\alpha(q)-f\left(\alpha(q)\right)\right].$ (37)
It is obvious that in our case, when the sample $\left\\{p_{i}\right\\}$ is
experimentally determined and the cell size $\varepsilon\ll L$ is numerically
evaluated (by the system of equations (33)-(34)), the spectrum of generalized
fractal dimensions $D_{q}$ (36) can be obtained from the expression for the
mass index $\tau(q)$ (35):
$D_{q}=\frac{\tau(q)}{q-1}\approx\frac{\ln{\sum\limits_{i=1}^{N_{p}}p_{i}^{q}}}{(q-1)\ln{\varepsilon_{L}}}.$
(38)
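Equation (38) lends itself to direct numerical evaluation once the probability set $\left\\{p_{i}\right\\}$ and the relative cell size $\varepsilon_{L}$ are known; the $q\to 1$ case is taken as the entropy limit. A sketch (the uniform test measure is an illustrative sanity check, not aerosol data):

```python
import numpy as np

# Evaluate Eq. (38); the q -> 1 case is the entropy-dimension limit
# D_1 = sum(p ln p) / ln(eps_L).
def D_q(p, eps_L, q):
    if np.isclose(q, 1.0):
        return np.sum(p * np.log(p)) / np.log(eps_L)
    return np.log(np.sum(p**q)) / ((q - 1) * np.log(eps_L))

# Sanity check: a uniform measure over N cells of relative size 1/N
# must give a flat spectrum, D_q = 1 for every q.
N = 1000
p_uniform = np.full(N, 1.0 / N)
print([D_q(p_uniform, 1.0 / N, q) for q in (0.0, 1.0, 2.0)])  # all 1.0
```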
Finally, the joint use of the Legendre transformation
$\alpha=\frac{d\tau}{dq},$ (39)
$f(\alpha)=q\frac{d\tau}{dq}-\tau,$ (40)
which sets the direct algorithm for the transition from the variables
$\left\\{q,\tau(q)\right\\}$ to the variables
$\left\\{\alpha,f(\alpha)\right\\}$, and the approximate analytical expression
(38) for the function $D_{q}$, makes it possible to determine an expression for
the multifractal function $f(\alpha)$.
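The transition (39)-(40) can be performed numerically with finite differences. The sketch below uses the textbook binomial multifractal (weights $w$ and $1-w$), whose mass exponent $\tau(q)=-\log_{2}\left[w^{q}+(1-w)^{q}\right]$ is known in closed form; this standard example is illustrative only and is not the aerosol measure itself:

```python
import numpy as np

# Numerical Legendre transform (39)-(40) for the binomial multifractal,
# whose mass exponent tau(q) is known in closed form.
w = 0.3
q = np.linspace(-5, 5, 2001)
tau = -np.log2(w**q + (1 - w)**q)

alpha = np.gradient(tau, q)     # Eq. (39): alpha(q) = d tau / d q
f = q * alpha - tau             # Eq. (40): f(alpha) = q alpha - tau

i1 = np.argmin(np.abs(q - 1.0))
# At q = 1, tau = 0, so f(alpha) = alpha there: both equal D_1.
print(alpha[i1], f[i1])
print(f.max())                  # maximum of f(alpha) is D_0 = 1
```

The check $f(\alpha)=\alpha$ at $q=1$ is exactly the property discussed around Eq. (44) below.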
Now we will consider the special case of finding the box-counting dimension
$D_{0}$ and the entropy dimension $D_{1}$. One of the goals of this
consideration is to validate the assumption that the magnitude
$D^{\prime\prime}_{q=0}$ in expression (10) is small, which was used in the
derivation of the log normal distribution of the random magnitude $p_{i}$ (13).
It is easy to show that the combined use of Eq. (9) and the inverse Legendre
transformation, which sets the transition from the variables
$\left\\{\alpha,f(\alpha)\right\\}$ to the variables
$\left\\{q,\tau(q)\right\\}$, gives the following dependence of $\alpha(q)$ on
the moments $q$:
$\alpha(q)=2q\left(\alpha_{0}-D_{0}\right)+\alpha_{0}.$ (41)
Substituting (41) and (9) into (37), we obtain the approximate expression for
the spectrum of generalized fractal dimensions $D_{q}$ ($q=0,1$) depending on
$\alpha_{0}$ and $D_{0}$:
$D_{q}=\frac{1}{q-1}\left[q^{2}\left(\alpha_{0}-D_{0}\right)+q\alpha_{0}-D_{0}\right].$
(42)
Thus, we can write down the expressions for the box-counting dimension $D_{0}$
and the entropy dimension $D_{1}$ depending on $\alpha_{0}$ and $D_{0}$:
$D_{0}=D_{0},$ (43)
$D_{1}=\lim_{q\rightarrow 1}\frac{q\cdot\alpha(q)-f\left(\alpha(q)\right)}{q-1}=2D_{0}-\alpha_{0}.$ (44)
Here it is necessary to make a few remarks. First, it will be recalled that
$f(\alpha_{q=1})=D_{1}$ is the value of the fractal dimension of that subset of
the region £ which makes the largest contribution to the statistical sum (27)
at $q=1$. However, by virtue of the normalizing condition, the statistical sum
(27) is equal to unity at $q=1$ and does not depend on the cell size
$\varepsilon$ into which the region £ is divided. Thus, this largest
contribution is also of order unity. Therefore, in this case (and only in this
case!) the probabilities of cell occupation
$p_{i}\approx\varepsilon_{L}^{\alpha}$ (6) are inversely proportional to the
total number of cells $n(\varepsilon)\approx\varepsilon_{L}^{-f(\alpha)}$ (7),
i.e., the condition $f(\alpha)=\alpha$ is fulfilled.
So, the parameters of the system of equations (33)-(34) obtained via expression
(9) cannot, in essence, contain information about the generalized fractal
dimensions $D_{q}$ for absolute values of the moments $q$ greater than unity
(i.e., $|q|>1$).
Secondly, it is easy to show that the expression (44) for the entropy dimension
$D_{1}$ does not depend on the concrete form of the function $f(\alpha)$, but
is determined by its properties, for example, by the symmetry
$f(\alpha)=\alpha$, $f^{\prime}(\alpha)=1$ and the convexity
$f^{\prime\prime}(\alpha)>0$. The geometrical method for the determination of
the entropy dimension $D_{1}$ shown in Figure 8 is simultaneously a geometrical
proof of assertion (44).
Figure 8: Geometrical method for determination of the entropy dimension
$D_{1}$, which leads to the obvious equality $D_{1}=2D_{0}-\alpha_{0}$.
Thirdly, the expressions for the entropy dimension $D_{1}$ obtained by the
parabolic approximation of the function $f(\alpha)$ and by the geometrical
method (Figure 8) are equivalent. This means that the magnitude
$D^{\prime\prime}_{q=0}$ in expression (10) is equal to zero. Thus, our
assumption that the magnitude $D^{\prime\prime}_{q=0}$ in expression (10) is
small is mathematically valid.
Finally, we note that knowledge of the generalized fractal dimensions $D_{q}$,
the correlation dimension $D_{2}$ and especially $D_{1}$, which describes the
information loss rate during multifractal dynamic evolution, plays a key role
in understanding the mechanism of secondary aerosol formation, since it makes
it possible to simulate the scaling structure of an atmospheric PM with well-
defined typical scales. Returning to the initial problem of the distribution of
points over the fractal set £, it is possible to say that the magnitude $D_{1}$
gives the information necessary for the determination of the location of a
point in some cell, while the correlation dimension $D_{2}$ determines the
probability that the distance between two randomly chosen points is less than
$\varepsilon_{L}$. In other words, when the relative cell size tends to zero
($\varepsilon_{L}\rightarrow 0$), these magnitudes are anticorrelated, i.e.,
the entropy $D_{1}$ decreases, while the multifractal correlation function
$D_{2}$ increases.
## 5 Conclusions
Comparative analysis of different pairs of experimental normalized
concentration values of atmospheric PM elements measured in different regions
of the Earth shows a stable linear (on a logarithmic scale) correlation
($r=1$) dependence on different time scales (from average daily to annual).
This points to a power-law increase of the mass (volume) of every atmospheric
PM element and simultaneously to the cause of this increase: the fractal nature
of the genesis of atmospheric PM.
Within the framework of multifractal geometry it is shown that the mass
(volume) distribution of the atmospheric PM elemental components is a log
normal distribution, which on the logarithmic scale with respect to the random
variable (elemental component mass) is identical to the normal distribution.
This means that the parameters of the two-dimensional normal distribution with
respect to corresponding atmospheric PM-multifractal elemental components,
which are measured in different regions, are a priori connected by equations
of direct and inverse linear regression, and the experimental manifestation of
this fact is the linear (on a logarithmic scale) correlation between the
concentrations of the same elemental components in different sets of
experimental atmospheric PM data.
We would like to note here that the degree of our understanding of the
mechanism of atmospheric PM formation, which, due to aggregation on inoculating
centres (primary aerosols ($D_{p}<1\mu m$)), shows a scaling structure with
well-defined typical scales, can be described by the known phrase: “…we do not
know till now why clusters become fractals; however, we begin to understand how
their fractal structure is realized, and how their fractal dimension is related
to the physical process” ref-23 . This made it possible to show that the
spectrum of fractal dimensions of the multifractal, which is a multicomponent
(by elements) aerosol, always predetermines the log normal type of statistics
or, in other words, the log normal type of the mass (volume) distribution of
the $i$-th component of atmospheric PM.
It is shown theoretically how, by solving the system of nonlinear equations
composed of the first moments (the average and variance) of the log normal and
normal distributions, it is possible to determine the multifractal function
$f(\alpha)$ and the spectrum of fractal dimensions $D_{q}$ for a separate
averaged atmospheric PM, which are the global characteristics of the genesis of
atmospheric PM and do not depend on the local place of registration
(measurement).
We should note here that the results of this work allow an approach to
formulation of the very important problem of aerosol dynamics and its
implications for global aerosol climatology, which is connected with the
global atmospheric circulation and the life cycle of troposphere aerosols
ref-3 ; ref-21 . It is known that the Earth’s absorption of solar short-wave
radiation at a given point is not compensated by the outgoing long-wave
radiation, although the integral heat balance is constant. This constant is
supported by transfer of excess tropical heat energy to high-latitude regions
by the aid of natural oceanic and atmospheric transport, which provides the
stable heat regime of the Earth. It is evident that using data about elemental
and dispersed atmospheric PM composition in different regions of the Earth
which are “broader-based” than today, one can create the map of latitude
atmospheric PM mass and size distribution. This would allow an analysis of the
interconnection between processes of ocean-atmosphere circulation and
atmospheric PM genesis through the surprising ability of atmospheric PM for
long range transfer, in spite of its short “lifetime” (about 10 days) in the
troposphere. If one also takes into consideration the evident possibility of
determining the latitude distribution of inoculating centers (i.e., primary
aerosols), this can lead to a deeper understanding of the details of aerosol
formation and evolution, since the natural heat and dynamic oscillations of the
global ocean and atmosphere are quite significant and should influence the
primary aerosol formation dynamics and the fractal genesis of secondary
atmospheric aerosol, respectively.
It is important to note also that continuous monitoring of the main
characteristics of South Pole aerosols as a standard of relatively pure air,
and the aerosols of large cities, which are powerful sources of anthropogenic
pollution, allows one to determine the change of the chemical and dispersed
compositions of aerosol pollution. Such data are necessary for a
scientifically-founded health evaluation of environmental quality, as well as
for the planning and development of an air pollution decrease strategy in
cities.
## References
* (1) Maenhaut, W.; Zoller, W.H. Determination of the chemical composition of the South Pole aerosol by instrumental neutron activation analysis. J. Radioanal. Chem. 1977, 37, 637-650.
* (2) Pushkin, S.G.; Mihaylov, V.A. Comparative Neutron Activation Analysis: Study of Atmospheric Aerosols; Nauka, Siberian Department: Novosibirsk, 1989.
* (3) Raes, F.; van Dingenen, R.; Vignati, E.; Wilson, J.; Putaud, J.P.; Seinfeld J.H.; Adams, P. Formation and cycling of aerosols in the global troposphere. Atmos. Environ. 2000, 34, 4215-4240.
* (4) Rusov, V.D.; Glushkov, A.V.; Vaschenko, V.N. Astrophysical Model of the Earth Global Climate; Naukova Dumka: Kiev, 2003 (in Russian).
* (5) Figen, Var; Yasushi Narita; Shigeru Tanaka. The concentration, trend and seasonal variation of metals in the atmosphere in 16 Japanese cities shown by the results of National Air Surveillance Network (NASN) from 1974 to 1996. Atmos. Environ. 2000, 34, 2755-2770.
* (6) Brownlee, K.A. Statistical Theory and Methodology in Science and Engineering; Ed. John Wiley & Sons: New York, 1965.
* (7) Bendat, J.S.; Piersol, A.G. Random Data: Analysis and Measurement Procedures; Ed. John Wiley & Sons: New York, 1986.
* (8) Schroeder, M. Fractals, Chaos, Power Laws: Minutes from Infinite Paradise; Ed. W. Freeman and Company: New York, 2000.
* (9) Witten, T.A.; Sander, L.M. Diffusion-limited aggregation: kinetic critical phenomenon. Phys. Rev. Lett. 1981, 47, 1400-1403.
* (10) Zubarev, A.Yu.; Ivanov, A.O. Fractal structure of colloid aggregate. Reports of Russian Academy of Sci. 2002, 383, 472-477.
* (11) Maenhaut, W.; Francos, F.; Cafmeyer, J. The “Gent” Stacked Filter Unit (SFU) Sampler for Collection of Atmospheric Aerosols in Two Size Fractions: Description and Instructions for Installation and Use. Report No.NAHRES-19, IAEA: Vienna, 1993, pp. 249-263.
* (12) Hopke, P.K.; Hie, Y.; Raunemaa, T.; Biegalski, S.; Landsberger, S.; Maenhaut, W.; Artaxo, P.; Cohen, D. Characterization of the Gent stacked filter unit $PM_{10}$ Sampler. Aerosol Sci. Tech. 1997, 27, 726-735.
* (13) Jaćimović, R.; Lazaru, A.; Mihajlović, D.; Ilić, R.; Stafilov, T. Determination of major and trace elements in some minerals by $k_{0}$-instrumental neutron activation analysis. J. Radioanal. Nucl. Chem. 2002, 253, 427-434.
* (14) HYPERMET-PC V5.0, User’s Manual; Institute of Isotopes: Budapest, Hungary, 1997.
* (15) Kayzero/Solcoi® ver. 5a. User’s Manual for Reactor-neutron Activation Analysis (NAA) Using the $k_{0}$-Standardization Method; DSM Research: Geleen, Netherlands, 2003.
* (16) Cronover, R.M. Introduction to Fractals and Chaos; Jones and Bartlett Publishers, 1995.
* (17) Mandelbrot, B.B. The fractal geometry of nature. Updated and Augmented; W.H. Freeman and Company: New York, 2002.
* (18) Feder, J. Fractals; Plenum Press: New York, 1988.
* (19) Bozhokin, S.V.; Parshin, D.A. Fractals and Multifractals; Scientific Publishing Centre "Regular and Chaotic Dynamics": Moscow-Izhevsk, 2001 (in Russian).
* (20) Lai, F.S.; Friedlander, S.K.; Pich, J.; Hidy, G.M. The self-preserving particle size distribution for Brownian coagulation in the free-molecular regime. J. Colloid Interf. Sci. 1972, 39, 395-405.
* (21) Raes, F.; Wilson, J.; van Dingenen, R. Aerosol dynamics and its implication for the global aerosol climatology. In Aerosol Forcing of Climate; Charson, R.J., Heintzenberg, J., Eds.; John Wiley & Sons: New York, 1995.
* (22) Feller, W. An Introduction to Probability Theory and its Applications; John Wiley & Sons: New York, 1971.
* (23) Bote, R.; Julen, R.; Kolb, M. Aggregation of Clusters. In Fractals in Physics; Pietronero, L., Tosatti, E., Eds.; North-Holland: Amsterdam, 1986, pp. 353-359.
# Radiation from the LTB black hole
J. T. Firouzjaee Department of Physics, Sharif University of Technology,
Tehran, Iran firouzjaee@physics.sharif.edu; taghizade.javad@gmail.com Reza
Mansouri Department of Physics, Sharif University of Technology, Tehran, Iran
and
School of Astronomy, Institute for Research in Fundamental Sciences (IPM),
Tehran, Iran mansouri@ipm.ir
###### Abstract
Does a dynamical black hole embedded in a cosmological FRW background emit the
Hawking radiation where a globally defined event horizon does not exist? What
are the differences to the Schwarzschild black hole? What about the first law
of black hole mechanics? We address these questions using the recently
published LTB cosmological black hole model. Using the Hamilton-Jacobi and
radial null geodesic methods suitable for dynamical cases, we show that it is
the apparent horizon which contributes to the Hawking radiation and not the
event horizon. The Hawking temperature is calculated using the two different
methods, giving the same result. The first law of LTB black hole dynamics and
the thermal character of the radiation are also dealt with.
###### pacs:
95.30.Sf, 04.70.-s, 04.70.Dy
## I Introduction
Any black hole in the real universe is necessarily a dynamical one, i.e. it is
neither stationary nor asymptotically flat. Therefore, its horizon has to be
defined locally. The need to understand such dynamical and cosmological black
holes has led to a revival of discussions on the concepts of black hole
itself, its singularity, horizon, and thermodynamics haw-bh . Indeed, the
conventional definition of black holes implies an asymptotically flat space-
time and a global definition of the event horizon. The universe, however, is
not asymptotically flat and a global definition of the horizon is not
possible. The need for local definition of black holes and their horizons has
led to concepts such as Hayward’s trapping horizon Hayward94 , and Ashtekar
and Krishnan’s dynamical horizon ashtekar02 . It is not a trivial matter which,
if any, of these horizons occurs in a specific example of a dynamical black
hole.
In addition, we do not know if and how the Hawking radiation and the laws of
black hole thermodynamics will apply to dynamical and cosmological black
holes. Hence, it is of special interest to find out specific examples of
dynamical black holes as a test bed for horizon problems.
Within a research program, we have already found an exact solution of Einstein
equations based on the LTB solution LTB representing a dynamical black hole
which is asymptotically FRW man . Different horizons and local mass
definitions applied to this cosmological black hole are also reported man-g ;
man-ash . In this paper we look into the question of the Hawking radiation
Hawking2 and the first law of black hole dynamics in such a dynamical LTB
black hole.
A first attempt to look into Hawking radiation from a ’cosmological black
hole’ is reported in saida . There the authors consider the Einstein-Straus
solution and the Sultana-Dyer one as cosmological black holes. The Einstein-
Straus solution, however, is constructed such that it represents a ‘frozen-
out’ Schwarzschild black hole within a FRW universe; it cannot represent a
dynamical black hole. The Sultana-Dyer solution is obtained from the
Schwarzschild metric through a conformal factor; it represents a FRW universe
with a fixed black hole of a certain mass within it. Again, it cannot reflect
the features we expect from a dynamical black hole with mass in-fall within an
asymptotically FRW universe. Therefore, it is still an open question what the
Hawking radiation and black hole dynamical laws will look like for a
dynamical black hole within a FRW universe. We will look into this question in
the case of the LTB cosmological black hole we have found as an exact solution
of Einstein equations with mass in-fall, and without using a cut-and-paste
technology of manifolds man .
Now, Hawking’s approach Hawking2 , based on quantum field theory and applied
to quasi-cosmological black holes saida , is not a suitable method to calculate
the Hawking temperature in the case of proper dynamical black holes, where one
has
to solve the field equations in a dynamical background. In such cases, like
the LTB black hole man , one should look for alternative approaches allowing
to calculate the temperature of the Hawking radiation and the surface gravity.
The so-called Hamilton-Jacobi (H-J) approach svmp has initiated methods
suitable in such cases. The method, however, suffers from not being manifestly
covariant. Using Kodama’s formalism Kodama:1979vn for spherically symmetric
space-times, and based on the Hamilton-Jacobi method, Hayward Hayward:1997jp
has formulated a covariant form of the H-J approach and studied the quantum
instability of a dynamical black hole Hayward:2008jq ; hay0906 .
Another useful method is the semi-classical tunneling approach to Hawking
radiation due to Parikh and Wilczek (PW) parikh ; P ; svmp using radial null
geodesics. We will apply this radial null geodesic method, formulated
originally for the static case, to our dynamic cosmological black hole and
compare the result with that of the Hamilton-Jacobi method. The method,
written in a gauge-invariant form, has led first to a Hawking temperature
twice as large as the correct one Chowdhury-akh06 . It has, however, been
shown that the correct Hawking temperature is regained by taking into account
a contribution from the time coordinate upon crossing the horizon
akh08-pilling .
Among the different definitions for the surface gravity of evolving horizons
proposed in the past, the one formulated by Hayward Hayward:1997jp , based on
Kodama's formalism Kodama:1979vn , is the most suitable one for our dynamic
case. It leads us to a first law of black holes compatible with the Hawking
temperature calculated by the two different methods mentioned above.
The question of thermal character of the Hawking radiation will also be
discussed within this formalism. As Parikh and Wilczek have already pointed
out, the Hawking radiation is non-thermal when the energy conservation is
enforced kraus95 ; parikh . As a result, it has been shown by Zhang et al
cai-inf09 that the total entropy is conserved and the black hole evaporation
process is unitary.
We will introduce the LTB black hole in Section II. In Section III, the
covariant Hamilton-Jacobi tunneling method is introduced and applied to the
LTB black hole. Section IV is devoted to the radial null geodesic method and
its application to the LTB black hole. In Section V, the first law for the LTB
black hole is derived. The thermal character of the radiation is considered in
Section VI. We conclude in Section VII.
## II Introducing the LTB black hole
The cosmological LTB black hole is defined by a spherically symmetric
cosmological solution having an overdense mass distribution within it
man . The overdense region collapses due to the dynamics of the
model, leading to a black hole at the center of the structure. The LTB metric
is the simplest spherically symmetric solution of Einstein equations
representing an inhomogeneous dust distribution LTB . It may be written in
synchronous coordinates as
$\displaystyle ds^{2}=-dt^{2}+\frac{R^{\prime
2}}{1+f(r)}dr^{2}+R(t,r)^{2}d\Omega^{2},$ (1)
representing a pressure-less perfect fluid satisfying
$\displaystyle\rho(r,t)=\frac{2M^{\prime}(r)}{R^{2}R^{\prime}},\hskip
11.38092pt\dot{R}^{2}=f+\frac{2M}{R}.$ (2)
Here dot and prime denote partial derivatives with respect to the parameters
$t$ and $r$ respectively. The angular distance $R$, depending on the value of
$f$, is given by
$\displaystyle R=-\frac{M}{f}(1-\cos\eta(r,t)),$ $\displaystyle\hskip
22.76228pt\eta-\sin\eta=\frac{(-f)^{3/2}}{M}(t-t_{b}(r)),$ (3)
$\displaystyle\dot{R}=(-f)^{1/2}\frac{\sin\eta}{1-\cos\eta},$ (4)
for $f<0$, and
$R=(\frac{9}{2}M)^{\frac{1}{3}}(t-t_{b})^{\frac{2}{3}},$ (5)
for $f=0$, and
$\displaystyle R=\frac{M}{f}(\cosh\eta(r,t)-1),$ $\displaystyle\hskip
22.76228pt\sinh\eta-\eta=\frac{f^{3/2}}{M}(t-t_{b}(r)),$ (6)
for $f>0$.
The metric is invariant in form under the rescaling $r\rightarrow\tilde{r}(r)$.
Therefore, one can fix one of the three free functions of the metric, i.e.
$t_{b}(r)$, $f(r)$, or $M(r)$. The function $M(r)$ corresponds to the Misner-
Sharp mass in general relativity man-g . The $r$ dependence of the bang time
$t_{b}(r)$ corresponds to a non-simultaneous big-bang or big-crunch
singularity.
There are two generic singularities of this metric, where the Kretschmann
and Ricci scalars become infinite: the shell focusing singularity at
$R(t,r)=0$, and the shell crossing one at $R^{\prime}(t,r)=0$. It may happen,
however, that in the case of $R(t,r)=0$ the density
$\rho=\frac{2M^{\prime}}{R^{2}R^{\prime}}$ and the term $\frac{M}{R^{3}}$
remain finite. In this case the Kretschmann scalar remains finite and there is
no shell focusing singularity. Similarly, in the case of vanishing
$R^{\prime}$ the term $\frac{M^{\prime}}{R^{\prime}}$ may remain finite,
leading to a finite density and no shell crossing singularity either. Note
that an expanding universe generally means $\dot{R}>0$. However, in a region
around the center it may happen that $\dot{R}<0$, corresponding to the
collapsing region.
The LTB metric may also be written in a form similar to the Painlevé
form of the Schwarzschild metric. Taking the physical radius as a new
coordinate via the relation $dR=R^{\prime}dr+\dot{R}dt$, one obtains
$\displaystyle
ds^{2}=(\frac{\dot{R}^{2}}{1+f}-1)dt^{2}+\frac{dR^{2}}{1+f}-\frac{2\dot{R}}{1+f}dRdt$
$\displaystyle+R(t,r)^{2}d\Omega^{2}.$ (7)
The $(t,R)$ coordinates are usually called the physical coordinates. In the
case of $f=0$, the metric is quite similar to the Painlevé metric.
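As a consistency check, the change of variables $dR=R^{\prime}dr+\dot{R}dt$ can be verified symbolically. The following sympy sketch (treating $dt$ and $dR$ as formal differentials and omitting the angular part, which is unchanged) confirms that the radial part of (1) becomes that of (7):

```python
import sympy as sp

# formal differentials and metric functions (angular part omitted)
dt, dR, Rdot, Rp, f = sp.symbols('dt dR Rdot Rprime f')

# invert dR = Rprime*dr + Rdot*dt for the old radial differential
dr = (dR - Rdot*dt) / Rp

# radial part of the synchronous LTB metric (1)
ds2_sync = -dt**2 + Rp**2/(1 + f) * dr**2

# radial part of the Painleve-type form (7)
ds2_phys = (Rdot**2/(1 + f) - 1)*dt**2 + dR**2/(1 + f) - 2*Rdot/(1 + f)*dR*dt

assert sp.expand(ds2_sync - ds2_phys) == 0
```

The two radial line elements agree identically, with no condition on $f$, $\dot{R}$, or $R^{\prime}$.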
## III Hamilton-Jacobi method
The Hamilton-Jacobi method to calculate the Hawking radiation uses the fact
that within the WKB approximation the tunneling probability for the
classically forbidden trajectory from inside to outside the horizon is given
by
$\Gamma\propto\exp\left(-\frac{2}{\hbar}\mbox{Im }S\right),$ (8)
where $S$ is the classical action of the (massless) particle to the leading
order in $\hbar$ svmp . Note that the exponent has to be a scalar invariant,
otherwise no physical meaning could be given to $\Gamma$. If, in particular,
it has the form of a thermal emission spectrum with $2\mbox{Im
}\,S=\beta\omega$, then both the inverse temperature $\beta$ and the
particle’s energy $\omega$ have to be scalars; otherwise no invariant meaning
could be given to the horizon temperature, which would then not be an
observable.
Now, let us use the Kodama formalism to introduce the necessary invariant
quantities Hayward:2008jq . Any spherically symmetric metric can be expressed
in the form
$ds^{2}=\gamma_{ij}(x^{i})dx^{i}dx^{j}+R^{2}(x^{i})d\Omega^{2}\,,\qquad
i,j\in\\{0,1\\}\;,$ (9)
where the two-dimensional metric
$d\gamma^{2}=\gamma_{ij}(x^{i})dx^{i}dx^{j}$ (10)
is referred to as the normal metric, $x^{i}$ are associated coordinates, and
$R(x^{i})$ is the areal radius considered as a scalar field in the normal two-
dimensional space. Another relevant scalar quantity on this normal space is
$\chi(x)=\gamma^{ij}(x)\partial_{i}R\partial_{j}R\,.$ (11)
The dynamical trapping horizon, $H$, may be defined by
$\chi(x)\Big{|}_{H}=0\,,\qquad\partial_{i}\chi\Big{|}_{H}\neq 0\,.$ (12)
The Misner-Sharp gravitational mass is then given by
$M(x)=\frac{1}{2}R(x)\left(1-\chi(x)\right)\,,$ (13)
which is an invariant quantity on the normal space. In the special case of the
LTB metric this reduces to the Misner-Sharp mass $M(r)$. Note also that on the
horizon $M|_{H}=m=R_{H}/2$. Now, it is always possible in such a
spherically symmetric space-time to define a preferred observer and its
related invariant energy corresponding to the classical action of a particle.
The Kodama vector Kodama:1979vn , representing a preferred observer
corresponding to the Killing vector in the static case, is given for the
LTB metric (9) by
$K^{i}(x)=\frac{1}{\sqrt{-\gamma}}\varepsilon^{ij}\partial_{j}R\,,\qquad
K^{\theta}=0=K^{\varphi}\;.$ (14)
Using this Kodama vector, we may introduce the invariant energy associated
with a particle by means of its classical action being a scalar quantity on
the normal space
$\omega=K^{i}\partial_{i}S.$ (15)
Note that during the process of horizon tunneling $\omega$ is invariant,
independent of coordinates, and is regular across the horizon. In the
eikonal approximation for a massless wave field (the geometric-optics limit),
which plays an important role in calculating the Hawking radiation via the
tunneling method man-nielsen , the classical action $S$ of the massless
particle satisfies the Hamilton-Jacobi equation
$\gamma^{ij}\partial_{i}S\partial_{j}S=0\,.$ (16)
The relevant imaginary part of the classical action along the $\gamma$ null
curve is calculated in Hayward:2008jq , where it has been shown that the
tunneling rate (8) is valid for the future trapped horizon, and
$\mbox{Im }\,S=\mbox{Im
}\,\left(\int_{\gamma}dx^{i}\partial_{i}S\right)=\frac{\pi\omega_{H}}{\kappa_{H}}\,,$
(17)
where $\omega_{H}$ is the Kodama energy and $\kappa_{H}$ is the dynamical
surface gravity associated with the dynamical horizon:
$\kappa_{H}=\frac{1}{2}\Box_{\gamma}R\Big{|}_{H}=\frac{1}{2\sqrt{-\gamma}}\partial_{i}(\sqrt{-\gamma}\gamma^{ij}\partial_{j}R)\Big{|}_{H}.$
(18)
These are scalar quantities in the normal space. Therefore, the leading term
of the tunneling rate is invariant, as it should be for an observable
quantity. The particle production rate then takes the thermal form $\Gamma\sim
e^{-\frac{\omega}{T}}$ with
$T=\frac{\hbar\kappa_{H}}{2\pi}.$ (19)
### III.1 Application to the LTB black hole
Consider the cosmological LTB black hole, which has an infinite-redshift
surface satisfying the eikonal approximation condition for the Hamilton-Jacobi
equation (16). It has been shown in man-ash that the apparent horizon of this
LTB black hole in its last stages is a slowly evolving horizon, the expanding
background preventing mass in-fall onto the central black hole.
Now, from equations (1) and (2), and the definition of the surface gravity
(18), we obtain
$\kappa_{H}=\frac{m}{R^{2}}-\frac{m^{\prime}}{2RR^{\prime}}=\frac{1}{2R}-\frac{m^{\prime}}{2RR^{\prime}},$
(20)
where $m$ is the Misner-Sharp mass on the horizon. Using this expression for
the surface gravity, we obtain the Hawking temperature according to the
Hamilton-Jacobi tunneling approach:
$\displaystyle T$ $\displaystyle=$
$\displaystyle\frac{\hbar\kappa_{H}}{2\pi}=\frac{\hbar}{4\pi}\frac{\sqrt{1+f}}{R^{\prime}}[-\frac{\partial_{t}(\dot{R}R^{\prime})}{\sqrt{1+f}}+\partial_{r}(\sqrt{1+f})]$
(21) $\displaystyle=$
$\displaystyle\frac{\hbar}{4\pi}(\frac{f^{\prime}}{2R^{\prime}}-\ddot{R}-\frac{\dot{R}\dot{R}^{\prime}}{R^{\prime}})=\frac{\hbar}{4\pi}(\frac{1}{R}-\frac{m^{\prime}}{RR^{\prime}}).$
To relate this result to the temperature seen by the Kodama observer, we
first calculate the Kodama vector (14). It is given by
$K^{i}=\frac{\sqrt{1+f}}{R^{\prime}}(R^{\prime},-\dot{R})$. The relation
$|K|=\sqrt{-K_{i}K^{i}}=\sqrt{1+f-\dot{R}^{2}}=\sqrt{1-\frac{2m}{R}}$ shows
that the Kodama vector becomes null on the horizon. The corresponding
velocity vector is $\hat{K}^{i}=\frac{K^{i}}{|K|}$. We then
obtain the frequency measured by such an observer as
$\hat{\omega}=\hat{K}^{i}\partial_{i}S$. The emission rate then takes the
thermal form
$\Gamma\propto e^{-\frac{\hat{\omega}}{\hat{T}}}$ (22)
which defines the temperature $\hat{T}$. It is then easily seen that the
temperature for this observer is
$\hat{T}=\frac{T}{\sqrt{1-\frac{2m}{R}}},$ (23)
which diverges at the horizon. The invariant redshift factor
$\frac{1}{\sqrt{1-\frac{2m}{R}}}$ is the same factor which appears in the
light frequency on the horizon, showing an infinite redshift for the observer
at infinity. Then $T$ itself, being finite at the horizon, may be interpreted
as the redshift-renormalized temperature Hayward:2008jq .
## IV Radial null geodesic approach
There is another approach to the Hawking radiation as a quantum tunneling
process, using the WKB approximation for radial null geodesics tunneling out
from near the horizon parikh . The imaginary part of the action is defined by
$\displaystyle{\textrm{Im}}S$ $\displaystyle=$
$\displaystyle{\textrm{Im}}\int_{r_{\textrm{in}}}^{r_{\textrm{out}}}p_{r}dr={\textrm{Im}}\int_{r_{\textrm{in}}}^{r_{\textrm{out}}}\int_{0}^{p_{r}}dp^{\prime}_{r}dr$
(24) $\displaystyle=$
$\displaystyle{\textrm{Im}}\int_{r_{\textrm{in}}}^{r_{\textrm{out}}}\int_{0}^{H}\frac{-dH^{\prime}}{\dot{r}}dr,$
using the Hamilton equation $\dot{r}=\frac{dH}{dp_{r}}|_{r}$, with $H$ being
the Hamiltonian of the particle, i.e. the generator of the cosmic time $t$. Now,
taking the tunneling probability as $\Gamma\sim
e^{-\frac{2}{\hbar}{\textrm{Im}}S}$, being proportional to the Boltzmann
factor $e^{-\frac{\omega}{T}}$, we find the Hawking temperature as
$\displaystyle T_{H}=\frac{\omega\hbar}{2{\textrm{Im}}S}.$ (25)
It is easy to show that for a Schwarzschild black hole one obtains the correct
expression $T_{H}=\frac{\hbar}{8\pi M}$.
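This Schwarzschild check can be carried out in a few symbolic lines. The sketch below assumes, for illustration, the outgoing radial null geodesic $\dot{r}=1-\sqrt{2M/r}$ of the static Painlevé form; the simple pole of $1/\dot{r}$ at $r=2M$ then supplies the imaginary part of the action:

```python
import sympy as sp

r, M, omega, hbar = sp.symbols('r M omega hbar', positive=True)

# outgoing radial null geodesic of Schwarzschild in Painleve coordinates
rdot = 1 - sp.sqrt(2*M/r)

# simple-pole residue of 1/rdot at the horizon r = 2M
res = sp.limit((r - 2*M)/rdot, r, 2*M)   # -> 4M

ImS = omega * sp.pi * res                # Im S = 4*pi*M*omega
T = sp.simplify(omega*hbar/(2*ImS))      # equation (25)

assert res == 4*M
assert T == hbar/(8*sp.pi*M)
```

The residue $4M$ gives ${\textrm{Im}}\,S=4\pi M\omega$, and (25) then returns $T_{H}=\hbar/(8\pi M)$ as quoted.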
It has been pointed out in Chowdhury-akh06 that
$2\,{\textrm{Im}}\int_{r_{in}}^{r_{out}}p_{r}dr$ is not canonically invariant
and thus does not represent a proper observable. The object which is
canonically invariant is ${\textrm{Im}}\oint p_{r}dr$, where the closed path
goes across the horizon and back. Using this invariant definition and
$\Gamma\sim e^{\frac{-1}{\hbar}{\textrm{Im}}\oint p_{r}dr}$, the Hawking
temperature is found to be
twice the original temperature. This discrepancy in the temperature has been
resolved by considering a temporal contribution to the tunneling amplitude. In
the case of the Schwarzschild black hole, the temporal contribution to the
action is found by changing Schwarzschild coordinates into Kruskal-Szekeres
coordinates and then matching different Schwarzschild time coordinates across
the horizon akh08-pilling .
### IV.1 Application to the LTB black hole
We use the LTB space-time in physical coordinates. In this case the outgoing
and ingoing null geodesics are given by
$\frac{dR}{dt}=(\dot{R}\pm\sqrt{1+f}),$ (26)
where the plus sign refers to the outgoing null geodesics. Now, expanding the
above equation around the horizon, we obtain
$\displaystyle\frac{dR}{dt}$ $\displaystyle=$
$\displaystyle\sqrt{1+f}-\sqrt{1+f}\sqrt{1-\frac{R-2m}{R(1+f)}}$ (27)
$\displaystyle\cong$ $\displaystyle\frac{R-2m}{2R\sqrt{1+f}},$
where $R-2m$ is assumed to be a non-zero small quantity. Using the results for
the radial null geodesics, we obtain for the imaginary part of the action
corresponding to the LTB metric
$\displaystyle{\textrm{Im}}S={\textrm{Im}}\int_{R_{\textrm{in}}}^{R_{\textrm{out}}}p_{R}dR$
$\displaystyle={\textrm{Im}}\int_{R_{\textrm{in}}}^{R_{\textrm{out}}}\int_{0}^{H}\frac{-dH^{\prime}}{\dot{R}}dR$
$\displaystyle={\textrm{Im}}\int_{R_{\textrm{in}}}^{R_{\textrm{out}}}\int_{0}^{H}\frac{2R\sqrt{1+f}(-dH^{\prime})}{R-2m}dR,$
(28)
where we have used the above expansion for $\frac{dR}{dt}$ up to first
order in $R-R_{H}=R-2m$. The corresponding Kodama invariant energy of the
particle, $\omega=K^{i}\partial_{i}S=\sqrt{1+f}\,\partial_{t}S$, is then
calculated to be $\omega=(\sqrt{1+f})dH^{\prime}$. We then use the expansion
of $R-2m(r)$ in the form
$\displaystyle R-2m(r)$ $\displaystyle=$
$\displaystyle(1-2\frac{dm}{dR})(R-R|_{H})-2\frac{dm(r)}{dt}(t-t|_{H})$
$\displaystyle=$
$\displaystyle((1-2\frac{dm}{dR})-\frac{2}{\frac{dR}{dt}|_{null}}\frac{dm(r)}{dt})(R-R|_{H}).$
(29)
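The near-horizon expansion (27) used above can be verified symbolically. A sympy sketch, writing $\epsilon=R-2m$ and treating $f$ and $m$ as constants near the horizon:

```python
import sympy as sp

m, f, eps = sp.symbols('m f epsilon', positive=True)
R = 2*m + eps                      # eps = R - 2m, small near the horizon

# outgoing null geodesic written as in (27)
dRdt = sp.sqrt(1 + f) - sp.sqrt(1 + f)*sp.sqrt(1 - (R - 2*m)/(R*(1 + f)))

# first-order expansion around the horizon
lead = dRdt.series(eps, 0, 2).removeO()

# claimed result (R - 2m)/(2R*sqrt(1+f)), with R -> 2m at leading order
target = eps/(4*m*sp.sqrt(1 + f))
assert sp.simplify(lead - target) == 0
```

The leading term reproduces $\frac{R-2m}{2R\sqrt{1+f}}$, confirming the expansion entering (28).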
Changing the variables $(t,R)$ to $(t,r)$ makes it easier to use the
expression for the surface gravity in synchronous coordinates. Putting
$\frac{dm}{dR}=\frac{m^{\prime}}{R^{\prime}}$,
$\frac{dm(r)}{dt}=m^{\prime}\frac{\partial r}{\partial t}|_{R=const}$,
$\frac{\partial r}{\partial t}|_{R=const}=-\frac{\dot{R}}{R^{\prime}}$, and
$\frac{dR}{dt}=2\dot{R}$ in (28) and (29), we obtain for the imaginary
part of the action
$\displaystyle
{\textrm{Im}}\,S=\frac{\pi\omega}{(\frac{1}{2R}-\frac{m^{\prime}}{2RR^{\prime}})}=\frac{\pi\omega}{\kappa_{H}},$
(30)
leading, with the surface gravity (20), to the Hawking temperature
$T=\frac{\hbar\kappa_{H}}{2\pi}.$ (31)
This Hawking temperature is the same as the one calculated in the
previous section using the Hamilton-Jacobi approach hay0906 . The definition
of the surface gravity used here has been essential to arrive at this result,
indicating that it is more useful than the other definitions in the literature
nielsen .
Note that according to (2) and (7), the term $R-2m$, and accordingly the
metric coefficient of $dt^{2}$, i.e. $\frac{\dot{R}^{2}}{1+f}-1$, vanishes on
the horizon, leading to $\frac{dR}{dt}=2\dot{R}$. This fact must not be
assumed beforehand while calculating the radial null geodesics, as has been
done in hay0906 ; otherwise it would result in the vanishing of the imaginary
part of the action and therefore no tunneling.
## V First Law of the Dynamical LTB Black Holes
Using the tunneling approach to the Hawking radiation, we now formulate a
first law for the LTB black hole. Consider the following invariant quantity
in the normal space:
$T^{(2)}=\gamma^{ij}T_{ij}\,,$ (32)
where $T_{ij}$ is the normal part of the energy-momentum tensor. Now, using
the invariant surface gravity in the LTB metric given by (20), it is easy to
show that on the dynamical horizon of our LTB black hole we have
$\kappa_{H}=\frac{1}{2R_{H}}+8\pi R_{H}T^{(2)}_{H}\,,$ (33)
where we have used $T_{00}=-\rho$ according to Section III.1 and the Einstein
equations (2). The horizon area, the areal volume associated with the horizon,
and their respective differentials are then given by
$\mathcal{A}_{H}=4\pi R_{H}^{2}\,,\qquad d\mathcal{A}_{H}=8\pi R_{H}dR_{H}\,,$
(34) $V_{H}=\frac{4}{3}\pi R_{H}^{3}\,,\qquad dV_{H}=4\pi R_{H}^{2}dR_{H}\,.$
(35)
Substitution from above leads to
$\frac{\kappa_{H}}{8\pi}d\mathcal{A}_{H}=d\left(\frac{R_{H}}{2}\right)+T_{H}^{(2)}dV_{H}\,.$
(36)
Introducing the Misner-Sharp energy at the horizon, i.e. $m=R_{H}/2$, this can
be recast in the form of a first law:
$dm=\frac{\kappa_{H}}{2\pi}d\left(\frac{\mathcal{A}_{H}}{4}\right)-T_{H}^{(2)}dV_{H}=\frac{\kappa_{H}}{2\pi}ds_{H}-T_{H}^{(2)}dV_{H}\,,$
(37)
where $s_{H}=\mathcal{A}_{H}/4\hbar$ generalizes the Bekenstein-Hawking black
hole entropy.
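The passage from (36) to (37) is pure algebra in $dR_{H}$. A sympy sketch, with all quantities as in (34)-(35) and all differentials taken with respect to $R_{H}$:

```python
import sympy as sp

R = sp.symbols('R_H', positive=True)
kappa, T2 = sp.symbols('kappa_H T2')

A = 4*sp.pi*R**2                  # horizon area (34)
V = sp.Rational(4, 3)*sp.pi*R**3  # areal volume (35)
m = R/2                           # Misner-Sharp energy at the horizon

# impose (36): (kappa/8pi) dA = dm + T2 dV   (coefficients of dR_H)
kappa_36 = sp.solve(sp.Eq(kappa/(8*sp.pi)*sp.diff(A, R),
                          sp.diff(m, R) + T2*sp.diff(V, R)), kappa)[0]

# then (37): dm = (kappa/2pi) d(A/4) - T2 dV holds identically
lhs = sp.diff(m, R)
rhs = kappa_36/(2*sp.pi)*sp.diff(A/4, R) - T2*sp.diff(V, R)
assert sp.simplify(lhs - rhs) == 0
```

Whatever surface gravity satisfies (36), the rearrangement (37) follows with no further input.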
To see how this black hole first law is related to the Hawking radiation, we
concentrate on two conserved currents which can be introduced for the LTB
black hole. The first one is due to the Kodama vector $K^{a}$, with the
corresponding conserved charge given by the areal volume
$V=\int_{\sigma}K^{a}d\sigma_{a}=4\pi R^{3}/3$, where $d\sigma_{a}$ is the
volume form times a future-directed unit normal vector of the space-like
hypersurface $\sigma$. The second one may be defined by the energy-momentum
density $j^{a}=T_{b}^{a}K^{b}$ along the Kodama vector, with the
corresponding conserved charge $E=-\int_{\sigma}j^{a}d\sigma_{a}$ being equal
to the Misner-Sharp energy. The total energy inside the apparent horizon can
then be written as $E_{H}=R|_{H}/2$, which is the Misner-Sharp energy at the
radius $R=R|_{H}$ of our LTB black hole. The energy outside the region can be
expressed as $E_{>H}=-\int_{\sigma}T_{b}^{a}K^{b}d\sigma_{a}$, where the
integration extends from the apparent horizon to infinity. We may therefore
express the total energy of the spacetime as
$E_{t}=R|_{H}/2-\int_{\sigma}T_{b}^{a}K^{b}d\sigma_{a}.$ (38)
Consider now a tunneling process. The initial state before the tunneling is
defined by $R=R|_{H}$ with energy $E_{t}^{i}$, and the final one after the
tunneling by $R=R|_{H}+\delta R|_{H}$ with energy $E_{t}^{f}$. By energy
conservation, the Kodama energy change between the final and initial states
of the tunneling process is then
$\omega=dE_{t}=E_{t}^{f}-E_{t}^{i}=\frac{\delta R|_{H}}{2}-\rho dV.$ (39)
Substituting from (30) and (36) in the above equation, we obtain the tunneling
rate $\Gamma\sim
e^{\frac{-2ImS}{\hbar}}=e^{\frac{-1}{4\hbar}\int_{s_{i}}^{s_{f}}d\mathcal{A}_{H}}=e^{\Delta
s}$, with $\Delta s=s_{f}-s_{i}$ being the entropy change. Our discussion
shows that the tunneling rate arises as a natural consequence of the unified
first law of thermodynamics $dE_{H}=Tds-T_{H}^{(2)}dV_{H}$ at the apparent
horizon.
## VI Non-thermal radiation from the LTB black hole
The question of how the formulas for black hole radiation are modified due to
the self-gravitation of the radiation is dealt with by Kraus and Wilczek in
kraus95 . There it is shown that the Hawking radiation is non-thermal when the
energy conservation is enforced. The particle in the particle-hole system is
treated as a spherical shell, giving a workable model with the fewest degrees
of freedom. The radiating black hole of mass $M$ is modeled as a shell of
energy $\omega$ around a hole of energy $M(r)-\omega$. We adapt this model to
our LTB black hole. Note first that, since our LTB black hole model is
asymptotically Friedmann-like, there is no ADM mass to be fixed at infinity;
we therefore turn to the quasi-local Misner-Sharp mass. Let us then calculate
the tunneling amplitude using this modification:
$\displaystyle{\textrm{Im}}S$ $\displaystyle=$
$\displaystyle{\textrm{Im}}\int_{R_{\textrm{in}}}^{R_{\textrm{out}}}p_{R}dR$
(40) $\displaystyle=$
$\displaystyle{\textrm{Im}}\int_{R_{\textrm{in}}}^{R_{\textrm{out}}}\int_{0}^{H}\frac{-dH^{\prime}}{\dot{R}}dR$
$\displaystyle=$
$\displaystyle{\textrm{Im}}\int_{2M(r)}^{2(M(r)-\omega)}dR\int_{0}^{H}\frac{-dH^{\prime}}{\dot{R}}.$
Carrying out the first integral, we obtain
$\displaystyle{\textrm{Im}}S=\int_{0}^{\omega}\frac{\pi
dH^{\prime}}{(\frac{1}{4M-4H^{\prime}}-\frac{m^{\prime}}{(4M-4H^{\prime})R^{\prime}})}\simeq$
$\displaystyle\frac{\pi
\omega(1-\frac{\omega}{2M})}{(\frac{1}{4M}-\frac{m^{\prime}}{4MR^{\prime}})}=\frac{\pi
\omega(1-\frac{\omega}{2M})}{\kappa_{H}},$ (41)
where we have used the fact that $\frac{\omega}{2M}\ll 1$ in the last step to
expand the integrand and carry out the integral. The result shows the
non-thermal character of the radiation.
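In fact, with $m^{\prime}/R^{\prime}$ treated as constant over the integration range, the integral in (41) can be carried out in closed form. A sympy check, where the symbol $a$ stands for $1-m^{\prime}/R^{\prime}$:

```python
import sympy as sp

M, omega, Hp, a = sp.symbols('M omega Hprime a', positive=True)

# integrand of (41): pi dH' / [ (1/(4M-4H')) * (1 - m'/R') ], with a = 1 - m'/R'
integrand = sp.pi*(4*M - 4*Hp)/a
ImS = sp.integrate(integrand, (Hp, 0, omega))

kappa = a/(4*M)                   # surface gravity (20) at R_H = 2M
target = sp.pi*omega*(1 - omega/(2*M))/kappa
assert sp.simplify(ImS - target) == 0
```

The closed-form value coincides with the quoted $\pi\omega(1-\omega/2M)/\kappa_{H}$, so the $\omega/2M\ll 1$ expansion only enters through evaluating $m^{\prime}/R^{\prime}$ on the horizon.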
It has been shown in man-ash that the boundary of the LTB black hole becomes
a slowly evolving horizon for $R^{\prime}\gg 1$, with the surface gravity
being equal to $\kappa_{H}=\frac{1}{4M}$, and an infinite redshift for the
light coming out of this horizon. Therefore, using the above equation, the
tunneling probability takes the same form as in the Schwarzschild case, i.e.
$\Gamma\sim \exp[-8\pi\omega(M-\frac{\omega}{2})]$. Given this form of the
tunneling probability, we may refer to Zhang et al. cai-inf09 , who have shown
that such non-thermal radiation leads to the conservation of the total
entropy.
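For the Schwarzschild-like limit quoted above, the exponent $-8\pi\omega(M-\omega/2)$ is exactly the Bekenstein-Hawking entropy change $\Delta s=s(M-\omega)-s(M)$ with $s=4\pi M^{2}$ (in units $\hbar=1$), so that $\Gamma\sim e^{\Delta s}$. A one-line sympy check:

```python
import sympy as sp

M, omega = sp.symbols('M omega', positive=True)

s = lambda mass: 4*sp.pi*mass**2          # Bekenstein-Hawking entropy A/4
delta_s = s(M - omega) - s(M)             # entropy change of the hole
log_Gamma = -8*sp.pi*omega*(M - omega/2)  # exponent of the tunneling rate

assert sp.simplify(delta_s - log_Gamma) == 0
```

This is the same $\Gamma\sim e^{\Delta s}$ relation obtained in Section V from the first law.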
## VII Conclusions
Within a research program to understand the LTB cosmological black hole in
more detail, together with its similarities to and differences from the
Schwarzschild black hole, we have calculated the Hawking radiation from this
dynamical black hole using tunneling methods suitable for dynamical cases. It
turns out that for the LTB black hole the Hamilton-Jacobi and the radial null
geodesic methods both lead to the same tunneling rate, and that it is not the
event horizon but the future outer trapping horizon that contributes to this
Hawking radiation.
The formulation of a first law for the LTB black hole and the tunneling
amplitude show that the tunneling rate is directly related to the change of
the LTB black hole entropy. Assuming energy conservation for the LTB black
hole's slowly evolving horizon, we show that the radiation is non-thermal and
that the total entropy is conserved during the radiation.
## References
* (1) Hawking S W, Ellis G F R, _The Large Scale Structure of Space-Time_ (Cambridge University Press, 1973).
* (2) S. A. Hayward, Phys. Rev. D 49, (1994) 6467.
* (3) A. Ashtekar and B. Krishnan, Phys. Rev. Lett., 89 (2002) 261101 .
* (4) R. C. Tolman, Proc. Natl. Acad. Sci. U.S.A. 20 (1934) 410; H. Bondi, Mon. Not. R. Astron. Soc. 107 (1947) 343; G. Lemaître, Ann. Soc. Sci. Brux. I A53 (1933) 51.
* (5) J.T. Firouzjaee, Reza Mansouri, Gen. Relativity Gravitation., 42 (2010) 2431.
* (6) J.T. Firouzjaee, M. Parsi Mood, Reza Mansouri, to be published in Gen. Relativity Gravitation (arXiv:1010.3971).
* (7) J.T. Firouzjaee, arXiv:1102.1062.
* (8) S. Hawking, Commun. Math. Phys., 43 (1975) 199.
* (9) H. Saida, T. Harada and H. Maeda, Class. Quant. Grav. 24 (2007) 4711.
* (10) M. K. Parikh and F. Wilczek, Phys. Rev. Lett. 85, (2000) 5042.
* (11) K. Srinivasan and T. Padmanabhan, Phys. Rev. D60, (1999) 024007; S. Shankaranarayanan, T. Padmanabhan and K. Srinivasan, Class. Quant. Grav. 19,(2002) 2671 ; R. Banerjee and B. R. Majhi, Phys. Lett. B662 (2008) 62 .
* (12) B. D. Chowdhury, Pramana 70 (2008) 593; E. T. Akhmedov, V. Akhmedova, and D. Singleton, Phys. Lett. B 642 (2006) 124.
* (13) E. T. Akhmedov, T. Pilling, and D. Singleton, Int.J.Mod.Phys.D17:2453-2458, (2008) ; V. Akhmedova, T. Pilling, A. de Gill, and D. Singleton, Phys. Lett. B 666 (2008) 269 .
* (14) S. A. Hayward, R. Di Criscienzo, L. Vanzo, M. Nadalini and S. Zerbini, Class. Quant. Grav. 26, (2009) 062001.
* (15) R. Di Criscienzo, S. A. Hayward, M. Nadalini L. Vanzo and S. Zerbini, Class. Quant. Grav. 27, (2010) 015006.
* (16) Alex B. Nielsen, Jong Hyuk Yoon, Class.Quant.Grav.,25 (2008) 085010.
* (17) M. K. Parikh, Int. J. Mod. Phys. D 13, 2351 (2004) [Gen. Rel. Grav. 36, (2004) 2419] .
* (18) S. A. Hayward, Class. Quant. Grav. 15, (1998) 3147.
* (19) H. Kodama, Prog. Theor. Phys. 63, (1980) 1217.
* (20) Alex B. Nielsen, J. T. Firouzjaee in preparation.
* (21) P. Kraus and F. Wilczek, Nucl. Phys. B433, (1995) 403.
* (22) B. Zhang, Q.y. Cai, L. You and M.S. Zhan, Phys. Lett., 98, B (2009) 675; Werner Israel and Zinkoo Yun, Phys.Rev.D., 82 (2010) 124036.
(J. T. Firouzjaee and Reza Mansouri, arXiv:1104.0530)
# Ground-based NIR emission spectroscopy of HD189733b
I. P. Waldmann, G. Tinetti University College London, Dept. Physics &
Astronomy, Gower Street, WC1E 6BT, UK ingo@star.ucl.ac.uk P. Drossart LESIA,
Observatoire de Paris, CNRS, Université Pierre et Marie Curie, Université
Paris-Diderot, 5 place Jules Janssen, 92195 Meudon, France M. R. Swain, P. Deroo
Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove
Drive, Pasadena, California 91109-8099, USA C. A. Griffith University of
Arizona, Dept. of Planetary Sciences, 1629 E. University Blvd, Tucson, AZ,
85721, USA
###### Abstract
We investigate the K and L band dayside emission of the hot-Jupiter HD 189733b
with three nights of secondary eclipse data obtained with the SpeX instrument
on the NASA IRTF. The observations for each of these three nights use
equivalent instrument settings, and the data from one of the nights has
previously been reported by Swain et al. (2010). We describe an improved data
analysis method that, in conjunction with the multi-night data set, allows
increased spectral resolution (R$\sim$175) leading to high-confidence
identification of spectral features. We confirm the previously reported strong
emission at $\sim$3.3 $\mu$m and, by assuming a 5$\%$ vibrational temperature
excess for methane, we show that non-LTE emission from the methane $\nu_{3}$
branch is a physically plausible source of this emission. We consider two
possible energy sources that could power non-LTE emission; additional
modelling is needed to obtain a detailed understanding of the physics of the
emission mechanism. The validity of the data analysis method and the presence
of strong 3.3 $\mu$m emission is independently confirmed by simultaneous,
long-slit, L band spectroscopy of HD 189733b and a comparison star.
techniques: spectroscopic, methods: data analysis, planets and satellites:
atmospheres, planets and satellites: individual (HD 189733b)
## 1 Introduction
The field of extrasolar planets is rapidly evolving, both in terms of number
of planets discovered and techniques employed in the characterisation of these
distant worlds. In recent years, increasing attention has been directed to the
detection and interpretation of spectroscopic signatures of exoplanetary
atmospheres and was mainly pioneered using Spitzer and HST instruments (eg.
Agol et al., 2010; Beaulieu et al., 2008, 2010; Charbonneau et al., 2002,
2005, 2008; Grillmair et al., 2008; Harrington et al., 2006; Knutson et al.,
2007; Snellen et al., 2010b; Swain et al., 2008, 2009a, 2009b; Tinetti et al.,
2007, 2010). With the removal of spectroscopic capabilities from Spitzer at
the end of the Spitzer cold phase, increased efforts need to be undertaken to
ensure spectroscopic capabilities using ground-based observatories.
Inarguably difficult as this is, various groups have succeeded in the
detection of metal lines and complex molecules (Bean et al., 2010; Redfield et
al., 2008; Snellen et al., 2008, 2010a; Swain et al., 2010). In order to
obtain the desired observations, different groups have developed different
techniques. These can be divided into three main categories:
(1) Time-unresolved techniques: here usually one or more high signal-to-noise
(SNR) spectra are taken in and out of transit and both in and out-of-transit
spectra are differenced with the additional use of a telluric model. Care
needs to be taken to not over-correct and remove the exoplanetary signal.
(2) Time-resolved high-resolution: this is sensitive to very thin and strong
emission lines; the exoplanet eclipse is followed with many consecutive
exposures and the emission line is identified by the varying Doppler shift of
the planet as it transits (Snellen et al., 2010a).
(3) Time-resolved mid-resolution: as above, the exoplanetary eclipse is
followed by many consecutive exposures with a mid-resolution spectrograph
making this method sensitive to broad roto-vibrational transitions. The use of
telluric corrections with a synthetic model is not necessary, since we obtain
a normalised lightcurve per spectral channel whose transit depth constitutes
the spectral signature.
Here, we re-analyse the original Swain et al. (2010) data as well as three
additional planetary eclipses observed with the IRTF/SpeX instrument. One
eclipse, in particular, was obtained with a reference star in the slit. We
used the time-resolved mid-resolution method pioneered by Swain et al. (2010)
with an improved methodology and data-preprocessing routine. The additional
data in conjunction with the more advanced techniques adopted, secured results
at higher spectral resolution and smaller error bars. Furthermore, we
thoroughly tested our data to eliminate/quantify the residual telluric
contamination.
## 2 Observations and data reduction
Secondary eclipse data of the hot-Jupiter HD189733b were obtained on the
nights of August 11th 2007 (previously analysed by Swain et al. (2010)),
June 22nd 2009 and July 12th 2009 using the SpeX instrument (Rayner et
al., 2003) on the NASA Infrared Telescope Facility (IRTF). The observations
were timed to start approximately one to two hours before the secondary
eclipse event and to continue until one to two hours post-egress. The
instrumental setup was not changed between these three nights. The raw
detector frames were reduced using
the standard SpeX data reduction package, SpexTool, available for IDL (Cushing
et al., 2004), resulting in sets of 439, 489 and 557 individual stellar
spectra for each secondary eclipse event respectively. The extraction was done
using the aperture photometry setting with a two arc-second aperture.
In addition we have analysed a fourth secondary eclipse of HD189733b observed
on July 3rd 2010 using the same instrument. As opposed to the other three
nights, we observed HD189733b in the L-band only, with a single order, long-
slit setting. The one arc minute slit allowed us to simultaneously observe our
target and a reference star with a K-band magnitude of 8.05 (2MASS
20003818+2242065). To avoid saturating the target star, we kept the exposure
time at 8 seconds and employed the standard ABBA nodding sequence throughout
the night. Each AB set was differenced to remove the background, and the final
spectra were extracted using both a custom-built routine and standard IRAF
routines. We found both extractions to yield the same results, but the custom-
built routine performs better in terms of the final scatter observed. The flux
received from the reference star is on average 27 times less than that of the
target.
The secondary eclipses in the obtained raw spectra (from here onwards, 'raw'
refers to the flat-fielded, background-corrected, wavelength-calibrated and
extracted spectra) are dominated by systematic (telluric and instrumental)
noise. Consequently, the spectral reduction step is followed by data de-
noising and signal amplification steps, as described in the following sections.
## 3 Extraction of the exoplanetary spectrum
We describe in the following subsections how the planetary signal was
extracted from the raw spectra. Since the observations are a combined-light
(planet and stellar flux) measurement, we employ time-differential
spectrophotometry during the time of the secondary eclipse.
Standard photometric calibration routines typically achieve a $\sim$1$\%$
level of photometric accuracy, hence further de-noising is necessary to reach
the required precision. We first removed the instrument systematics in the
data (data cleaning) and then we extracted the planetary signal in the cleaned
data (spectral analysis).
### 3.1 Data-cleaning
To achieve the accuracy we need, a robust cleaning of the data is required.
The cleaning process comprises three main steps: 1) Normalising the spectra,
getting rid of flux offsets in the timeseries and correcting for airmass
variations. 2) Correcting wavelength shifts between spectra by re-aligning all
spectra with respect to one reference spectrum. This step removes $\sim 80\%$
of outliers. 3) Filtering the timeseries of each spectral channel with
adaptive wavelets. This step removes white and pink noise contributions at
multiple passbands without damaging the underlying data structure (Persival &
Walden, 2000).
#### 3.1.1 Normalisation
Firstly, we discarded the spectral information outside the intervals of 2.1 -
2.45$\mu$m and 2.9 - 4.0$\mu$m to avoid the edges of the K and L photometric
bands respectively. Then, we corrected for airmass and instrumental effects.
This was achieved in a two step process. We first calculated a theoretical
airmass function, $AF=\exp(-b\times\mathrm{airmass}(t))$, for each night and divided
the data by this function. However, we found this procedure insufficient since
the baseline curvature is caused not only by the airmass but by other
instrumental effects (e.g. changing gravity vectors of the instrument). We
hence additionally fitted a second order polynomial to the pre- and post-
eclipse baseline of each timeseries and divided each single timeseries by the
polynomial. Furthermore, we normalised each observed spectrum by its mean
calculated in a given wavelength band (equation 1).
$\hat{F}_{n}(\lambda)=\frac{F_{n}(\lambda)}{\bar{F}_{n}}\begin{cases}\lambda=2.1-2.45\mu m&K\text{-band}\\ \lambda=2.9-4.0\mu m&L\text{-band}\end{cases}$ (1)
$\bar{F}_{n}=\frac{\int_{\lambda_{0}}^{\lambda_{1}}F_{n}(\lambda)\,\text{d}\lambda}{\lambda_{1}-\lambda_{0}}$
where $F_{n}(\lambda)$ is the flux as a function of wavelength, $\lambda$, for
each spectrum obtained, $n$, $\bar{F}_{n}$ is the mean flux in the band, and
$\hat{F}_{n}(\lambda)$ is the normalised spectrum. In the case of
an idealised instrument and constant airmass, the normalisation would be
superfluous. However, due to pixel sensitivity variations and bias off-sets on
the CCD chip, the individual spectra need to be normalised to avoid frequent
’jumps’ in the individual timeseries. In the domain of the high-interference
limit (Pagiatakis et al., 2007; Swain et al., 2010), the astrophysical signal
is preserved. We investigated the effects of normalising the spectrum over a
whole wavelength band or smaller sub-sections of the spectrum and various
combinations of both, but found the differences to be negligible.
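The three normalisation steps above can be sketched in a few lines of numpy (an illustrative sketch only, not the authors' pipeline; the airmass coefficient `b`, the array layout and the out-of-eclipse mask are assumptions):

```python
import numpy as np

def normalise_spectra(flux, airmass, t, oot_mask, b=0.1):
    """Sketch of the normalisation: flux is a 2D array with rows =
    spectra (time) and columns = wavelength channels; oot_mask selects
    the pre- and post-eclipse exposures; b is an assumed coefficient."""
    # 1) divide out the theoretical airmass function AF = exp(-b * airmass(t))
    flux = flux / np.exp(-b * airmass)[:, None]
    # 2) fit a 2nd-order polynomial to the out-of-eclipse baseline of each
    #    spectral channel and divide out the residual instrumental trend
    for j in range(flux.shape[1]):
        coeffs = np.polyfit(t[oot_mask], flux[oot_mask, j], 2)
        flux[:, j] = flux[:, j] / np.polyval(coeffs, t)
    # 3) normalise each spectrum by its mean flux in the band (equation 1)
    return flux / flux.mean(axis=1, keepdims=True)
```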
#### 3.1.2 Spectra re-alignment and filtering
After the normalisation, we constructed 2D images with rows representing
spectra of the planet-star system at a specific time, and columns representing
timeseries for specific wavelengths (see figure 1A). As can be seen in figure
1A, the main sources of outliers in individual timeseries are misalignments by
up to 4 pixels along the wavelength axis. We corrected this effect by fitting
Gaussians to thin (FWHM $\sim$ 5px) emission and absorption lines to estimate
the line centres to the closest pixel. When the shift occurred for all the
lines, the spectrum was corrected with respect to a reference spectrum, i.e.
the first spectrum in the series. Cosmic rays were then removed by a 2D median
filter, replacing 5$\sigma$ outliers with the median of their surrounding 8
pixels.
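The cosmic-ray step can be sketched in pure numpy (illustrative only; it treats interior pixels, and the global sigma estimate is an assumption):

```python
import numpy as np

def despike(image, nsigma=5.0):
    """Replace outliers with the median of the 8 surrounding pixels,
    sketching the cosmic-ray removal step above (interior pixels only)."""
    core = image[1:-1, 1:-1]
    # stack the 8 neighbours of every interior pixel
    neigh = np.stack([image[0:-2, 0:-2], image[0:-2, 1:-1], image[0:-2, 2:],
                      image[1:-1, 0:-2],                    image[1:-1, 2:],
                      image[2:,   0:-2], image[2:,   1:-1], image[2:,   2:]])
    med = np.median(neigh, axis=0)
    resid = core - med
    bad = np.abs(resid) > nsigma * np.std(resid)   # flag 5-sigma outliers
    cleaned = image.copy()
    cleaned[1:-1, 1:-1] = np.where(bad, med, core)
    return cleaned
```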
#### 3.1.3 Wavelet de-noising
Due to variations in detector efficiency, the cumulative flux of each spectrum
depends on the exact position of the spectrum on the detector (horizontal
bands in figure 1A), resulting in high frequency scatter in each individual
timeseries. This effect was already attenuated by the normalisation step but
further removal of systematic and white noise is required. Based on the de-
noising approach proposed by Thatte et al. (2010), we have opted for a wavelet
filtering of the individual timeseries using the ’Wavelet Toolbox’ in MATLAB.
There are clear advantages to wavelet de-noising compared to simple smoothing
algorithms. With wavelets we can specifically filter the data for high
frequency ’spikes’ and low frequency trends without affecting the
astrophysical signal or losing temporal phase information. This allows for an
efficient reduction of white and pink noise in the individual timeseries. By
contrast, smoothing algorithms, such as kernel regression, will impact the
desired signal since these algorithms smooth over the entire frequency
spectrum (Donoho, 1995; Persival & Walden, 2000). For a more detailed
discussion see the appendix and Thatte et al. (2010); Donoho (1995); Persival &
Walden (2000); Stein (1981); Sardy (2000). Applying the wavelet filtering to
each individual timeseries yielded a factor of two improvement on the final
error bars. The final results were generated with and without wavelet de-
noising and found to be consistent within the respective errorbars. An example
of the final de-noised data can be seen in figure 1B.
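As an illustration of the principle (the paper used MATLAB's Wavelet Toolbox; this is a minimal one-level Haar stand-in with the threshold as an assumed free parameter), soft-thresholding of the detail coefficients looks like:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold de-noising (a sketch;
    assumes len(x) is even)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (high-frequency) coeffs
    # soft-threshold the detail coefficients: shrink them towards zero
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    # inverse Haar transform
    y = np.empty_like(x)
    y[0::2] = (s + d) / np.sqrt(2.0)
    y[1::2] = (s - d) / np.sqrt(2.0)
    return y
```

With the threshold set to zero the transform is perfectly invertible, which is why the underlying data structure and temporal phase are preserved.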
### 3.2 Measuring the exoplanetary spectrum
After the data were de-noised as described in the previous subsection, we
focused on the extraction of the planetary signal. We based our analysis on
the approach described in Swain et al. (2010). The spectral emission features
of a secondary eclipse event are too small to be statistically significant for
an individual spectral channel. High signal to noise detections require a low
spectral resolution, i.e. binning the data in $\lambda$. This can be done more
efficiently in the frequency domain for reasons discussed below. Each
timeseries $X_{i}(t)$ (here $i$ denotes the spectral channel) was re-
normalised to a zero mean to minimise windowing effects in the frequency
domain. The discrete Fourier transform (DFT) was computed for each
timeseries and, depending on the final binning, $m$ Fourier-transformed
timeseries were multiplied with each other and finally normalised
by taking the geometric mean (equation 2).
$\mathcal{F}[\bar{X}(t)]=\left(\prod_{i=1}^{m}\mathcal{F}[X_{i}(t)]\right)^{1/m}$
(2)
where $\mathcal{F}$ is the discrete Fourier transform and $X_{i}(t)$ is the
timeseries for spectral channel $i$, with $m$ the number of spectral channels
in the Fourier product ($m\in\mathbb{Z}^{+}$). Since the input timeseries are
always real and their Fourier transforms are Hermitian, we can take the $m$-th
root of the real part of the final product without losing information.
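Equation 2 can be sketched with numpy's FFT (illustrative only; the complex $m$-th root here uses the principal branch of the power function):

```python
import numpy as np

def fourier_product(channels):
    """Equation 2: multiply the DFTs of m zero-mean timeseries and take
    the m-th root (a geometric mean in the frequency domain)."""
    X = np.asarray(channels, dtype=float)
    X = X - X.mean(axis=1, keepdims=True)   # re-normalise each channel to zero mean
    F = np.fft.fft(X, axis=1)               # DFT of each spectral channel
    m = X.shape[0]
    return np.prod(F, axis=0) ** (1.0 / m)  # geometric mean of the DFTs
```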
In the time-domain, this operation is equivalent to a consecutive convolution
of $X_{i}$ with $X_{i+1}$, equation 3.
$(X_{i}\ast X_{i+1})[n]\overset{def}{=}\sum_{t=1}^{n}X_{i}[t]X_{i+1}[n-t]$ (3)
We can appreciate from equation 3 that one eclipsing timeseries acts as the
weighting function of the other. The consecutive repetition of this process
for all remaining ($i$-1) timeseries effectively filters each convolved
timeseries with the others acting as weighting functions. This has
the effect of smoothing out noise components whilst preserving the signal
common to all the timeseries sets (Pagiatakis et al., 2007). The final result
of this process is the geometric mean of all timeseries. For an individual
timeseries, the eclipse signal may not be statistically significant but the
simultaneous presence of the eclipse in all the timeseries allows us to
amplify the eclipse signal to a statistical significance by suppressing the
noise. The convolution theorem states that the Fourier transform of a
convolution is equivalent to the dot product of the Fourier transforms
$\mathcal{F}(X_{i}\ast X_{i+1})\equiv
k\otimes\mathcal{F}(X_{i})\otimes\mathcal{F}(X_{i+1})$ (4)
where $\otimes$ signifies multiplication in the Fourier space and $k$ is a
normalisation constant. This process is the basis of our analysis.
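The convolution theorem of equation 4 can be verified numerically in a few lines (a sanity check using circular convolution and numpy's FFT normalisation, i.e. $k=1$):

```python
import numpy as np

# FFT of a circular convolution equals the elementwise product of the FFTs.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = rng.standard_normal(64)
conv = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))  # circular x * y
lhs = np.fft.fft(conv)
rhs = np.fft.fft(x) * np.fft.fft(y)
assert np.allclose(lhs, rhs)
```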
#### 3.2.1 Time-domain analysis
Having calculated the Fourier product, $\mathcal{F}[\bar{X}(t)]$, for $m$
spectral channels, we can take the inverse Fourier transform to obtain the
filtered lightcurve signal.
$\bar{X}(t)=\mathcal{F}^{-1}(\mathcal{F}[\bar{X}(t)])$ (5)
The lightcurves were then re-normalised by fitting a second-order polynomial
to the out-of-transit baseline. We modeled the final lightcurves with equation
8 of Mandel & Agol (2002), using the system parameters reported in Bakos et
al. (2006), with the transit depth as the only free parameter left.
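A minimal depth fit can be sketched as follows; note that a box-shaped eclipse stands in here for the Mandel & Agol (2002) model (their equation 8) actually used:

```python
import numpy as np

def fit_eclipse_depth(lc, in_eclipse):
    """Least-squares eclipse depth with the depth as the only free
    parameter, for a box model (flux = 1 out of eclipse, 1 - delta in).
    For this model the solution is the mean out/in flux difference."""
    return lc[~in_eclipse].mean() - lc[in_eclipse].mean()
```

With the full Mandel & Agol curve the depth would instead be found by a one-parameter numerical least-squares minimisation against the model lightcurve.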
As is clear from the lightcurves presented in section 5, the systematic noise in
the data is higher in areas of low transmissivity. Systematic noise increases
the scatter of the obtained lightcurves as well as the error-bars of the final
spectra and places a lower limit of $m$ = 50 channels ($\sim 2.88$nm) on the
currently achievable spectral bin size. This is a noticeable improvement
compared to the original Swain et al. (2010) analysis which reported a lower
limit of $m$ = 100 and 150 spectral channels for the K and L-bands
respectively.
#### 3.2.2 Frequency-domain analysis
The generated lightcurves are of high quality and ready for accurate
spectroscopic measurements. However, as previously mentioned, a certain amount
of periodic and systematic noise is still present in the timeseries. The noise
residuals are in part generated during the conversion of the data from the
frequency domain to the time domain, and in part due to systematics. We can
remove some of these residuals by measuring the eclipse depth directly in the
frequency domain, assuming that most systematic noise is found at different
frequencies to the eclipse signal.
To first order, we can approximate the eclipse signal as a box-shaped function
or square wave, whose Fourier transform is the well-known sinc function (Riley
et al., 2004). The Fourier series of such a
symmetric square wave is given by equation 6 as a function of the lightcurve’s
transit depth, $\delta$, and transit duration, $\tau$.
$\displaystyle f(t)$
$\displaystyle=4\delta\left(\text{sin}(2t/\tau)+\frac{1}{3}\text{sin}(6t/\tau)+\frac{1}{5}\text{sin}(10t/\tau)+\ldots\right)$
(6)
$\displaystyle=4\delta\sum_{k=1}^{\infty}\frac{\text{sin}\left((2k-1)\,2t/\tau\right)}{2k-1}$
The lightcurve signal is composed of a series of discrete frequencies, $k$,
since the boundary conditions of the function are finite. This series is very
rapidly converging. Figure 2 illustrates this. Here we took the Fourier
transform of the secondary eclipse model shown in the insert. The frequency
spectrum is centred on the first Fourier coefficient. It is clear that most of
the power is contained in the first Fourier coefficient and the series quickly
converges asymptotically to zero after the third coefficient. Taking the
product in equation 2 has the effect of strengthening the eclipse signal,
whilst weakening the noise contribution: the frequencies contributing to the
noise are in fact expected to be different to the ones contributing to the
eclipse signal. In the case of stochastic (Gaussian) noise, wavelength
dependent instrumental noise or scintillation noise, this is obvious.
Following Fourier series properties, the modulus of the amplitude, $|A|$, of
the coefficients in equation 6 is directly proportional to the transit depth
$\delta$ and the transit duration $\tau$, where $\tau=t_{1-4}/t_{s}$ and
$t_{1-4}$ is the transit duration from the first to fourth contact point and
$t_{s}$ is the sampling rate (i.e. exposure time + overheads, fig. 2).
$|A|_{sqrwave}=\frac{\tau\delta}{2}\sum_{k=1}^{\infty}\frac{1}{2k-1}$
(7)
The amplitude of the Fourier coefficients above $k=1$ decreases by $(2k-1)$
for a box-shape function and is an even faster converging series for real
lightcurve shapes which are used in the analysis (see appendix). Following
from equation 7 we see that for the first Fourier coefficient, $k=1$, the
relationship between the transit depth, $\delta$, and the Fourier coeffcient
amplitude, $|A|$, is simply given by $|A_{k=1}|=(\tau/2)\delta$. From the
analytical arguments presented above, we know that $\tau$ is the transit
duration (in units of number of observed spectra). We checked the consistency
of the theory with the data, by calculating the value of $\tau$ numerically.
To calculate $\tau$ we produced secondary eclipse curves with the transit
duration and sampling rate of the original IRTF data sets (Mandel & Agol,
2002, equation 8). We generated 300 curves with transit depths ($\delta$)
ranging from 0.0001 to 0.1 and measured the corresponding amplitude
($|A_{k=1}|$). Here, the derivative, $d(|2A|)/d\delta$, gives us the value of
$\tau$. We find $\tau$ = 116 in-eclipse measurements, which agrees with the
number of in-transit spectra obtained for the real IRTF data-sets.
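The numerical calibration described above can be sketched as follows (a box eclipse stands in for the Mandel & Agol (2002) curves, and the lightcurve length of 400 points is an assumed example value):

```python
import numpy as np

def a1_amplitude(lc):
    """Modulus of the first (k = 1) Fourier coefficient of a zero-mean
    lightcurve."""
    x = lc - np.mean(lc)
    return np.abs(np.fft.fft(x)[1])

def box_eclipse(n, n_in, delta):
    """Unit-baseline lightcurve with a centred box eclipse of depth delta."""
    lc = np.ones(n)
    start = (n - n_in) // 2
    lc[start:start + n_in] -= delta
    return lc

# |A_{k=1}| is linear in the transit depth, so the fitted slope
# d|A|/d(delta) numerically calibrates the proportionality constant
# (the paper's tau, in its normalisation):
depths = np.linspace(1e-4, 0.1, 300)
amps = np.array([a1_amplitude(box_eclipse(400, 116, d)) for d in depths])
slope = np.polyfit(depths, amps, 1)[0]
```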
$N$ spectra were obtained at a constant sampling interval of $t_{s}$, giving
us a sampling rate of $R=1/t_{s}$ in the frequency domain. For a complete
representation of the data, the sampling rate is equal to the Nyquist rate,
$R=2B$, where B is the spectral bandwidth of the Fourier transform. The total
number of Fourier coefficients, $K$, is then given by $K=2BN$. It follows that
the resolution in the frequency domain is determined by $\Delta f=1/N$. In
other words, the more measurements are available the more Fourier coefficients
can be extracted to describe the data and consequently the frequency range
covered by each coefficient is smaller for a fixed sampling rate.
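In numpy conventions this bookkeeping reads (frequencies in cycles per sampling interval $t_s$; the value of $N$ is an arbitrary example):

```python
import numpy as np

N = 512                     # number of spectra in the timeseries
freqs = np.fft.fftfreq(N)   # DFT frequencies in cycles per sampling interval
df = freqs[1] - freqs[0]    # frequency resolution: Delta f = 1/N
# More measurements give finer frequency bins at a fixed sampling rate.
```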
The fact that $\Delta f$ is finite ($\Delta f\rightarrow 0$ for infinitely
sampled data-sets), means that the first Fourier coefficient can be
contaminated by remaining noise signals very similar in frequency. To estimate
the error bar on this contamination, we varied the out of transit (oot) length
$N_{oot}$ by 50$\%$ and calculated the resulting spectrum for each $\Delta f$.
The error is then estimated as the standard deviation about the mean of all
computed spectra.
### 3.3 Application to data
We have applied the same procedure described in sections 3.1 & 3.2 to the four
data sets. In addition to the individual analysis, we also combined in the
frequency domain the three data sets recorded with the same observational
technique. Given that the low-frequency systematics –such as residual airmass
function, telluric water vapour content, seeing, etc.– are significantly
different for each individual night, by combining multiple data sets we can
amplify the lightcurve signal and reduce the systematic noise.
To generate the final K and L-band spectra, we chose $m=100$ spectral
channels in equation 2. From $R_{spectra}=\lambda_{centre}/\Delta\lambda$, we
get a final spectral resolution of R $\sim$ 50. Combining all three data-sets
together ($\sim$33 spectral channels taken from each observed planetary
eclipse) we obtain a spectral resolution of $\sim$170 and $\sim$ 185 for the K
and L-bands respectively. We note that the spectral resolving power for the
SpeX instrument, considering the seeing, is R $\sim$ 800.
## 4 Model
We have simulated planetary emission spectra using line-by-line radiative
transfer models as described in Tinetti et al. (2005, 2006) with updated line
lists at the hot temperatures from UCL ExoMol and new HITEMP (Barber et al.,
2006; Yurchenko et al., 2011; Rothman et al., 2009). Unfortunately accurate
line lists of methane at high temperatures covering the needed spectral range
are not yet available. We combined HITRAN 2008 (Rothman et al., 2009) with the
high temperature measurements of Thiévin et al. (2008). These LTE-models
were fitted to the spectra presented in section 5.
In addition to the standard LTE model, we considered possible non-LTE models
to fit the presented data. The upper layers of planetary atmospheres are
subject to non-LTE emissions; although negligible in most of the near infrared
spectrum, these emissions become dominant in the strongly absorbing vibration
bands of molecular constituents, like CO2 in telluric planets and CH4 in giant
planets (and Titan). A synthetic model of the spectrum in the L band has been
adapted from a model of Giant Planets fluorescence of CH4 developed for
ISO/SWS (Drossart et al., 1999). The main steps involved in the radiative
transfer with redistribution of frequency in non-LTE regime can be summarised
as follows:
* •
We first calculate the solar (stellar) flux absorbed from all bands of CH4.
Although classical, this part of the model can be cumbersome as all the main
absorption bands corresponding to the stellar flux have to be (in principle)
taken into account. Limitations come from the knowledge of the spectroscopy of
the hot bands. In this model, the following bands are taken into account:
Pentad (3.3 micron); Octad (2.3 micron); Tetradecad (1.8 micron). An estimate
of the accuracy of the approximation in neglecting hotter bands will be given
below. Following an approach given by Doyennette et al. (1998), the
spectroscopy of CH4 is simplified by dividing the vibrational levels into
stretching and bending modes: therefore x superlevels (instead of the 29
potential sub-levels of the molecule). It is also assumed that for each super-
level belonging to a polyad, thermal equilibrium is achieved within the
population. This assumption comes from the observation that intra-vibrational
transitions within polyads have a higher transition rate than inter-
vibrational transitions.
* •
The population of the vibrational levels is then calculated within each
”super-level” of CH4. The vibrational de-excitation is assumed to follow the
bending mode de-excitation scheme (Appleby, 1990).
* •
From the population of each super-level, the radiative rate of each level
can be calculated to determine the emission within each of the bands
(fundamental, octad-dyad and tetradecad-pentad) that contribute to the 3.3
micron domain.
* •
While hot band emission can be shown to remain optically thin down to deep
levels of the atmosphere, the same does not hold for the resonant fluorescence,
as self-absorption is an essential ingredient of the fluorescence: photons
absorbed, on average, at a $\tau=1$ level have the same probability of being
re-absorbed as of being re-emitted upwards. The optically thick fluorescence
treatment, including absorption and re-emission, is therefore applied to the
resonant band.
## 5 Results
### 5.1 Validation of the method used
As described in previous sections, we analysed four nights of observations:
three in multi-order mode, with only HD 189733b in the slit (referred to as
’short-slit nights’) and one night in L-band with single order, long-slit set
up, observing HD 189733b and a fainter reference star simultaneously. While
the long-slit observation covers a narrower spectral interval compared to the
other eclipse observations, it is a critical test of the methodology with its
simultaneous observations of the target and the reference star. In figure 11
we present two lightcurves: HD 189733 and the reference star. Both are centred
at 3.31$\mu$m with a binning width of 50 channels ($\sim 2.88$nm). As expected
the HD 189733 timeseries (top) shows the distinctive lightcurve shape whilst
the reference star (bottom) timeseries shows a null result. We have fitted a
Mandel & Agol (2002) secondary eclipse lightcurve to both and found the HD
189733b transit depth to be $\delta_{HD189}=0.0078\pm 0.0003$ and
$\delta_{REF}=0.0\pm 0.0007$ respectively. These results are in good agreement
with the spectra presented below.
### 5.2 K and L-band spectra
The same analysis was undertaken for the three short-slit nights: illustrative
lightcurves are presented in figures 3 & 4. In figure 3 we plot the
lightcurves of the ’three-nights-combined’ analysis for the K and L bands
centred at 2.32, 3.20, 3.31, 3.4 and 3.6 microns, with 50 channel ($\sim 2.88$
nm) bins. The residual systematic noise is most pronounced in the areas of low
atmospheric transmissivity, which is reflected in the error bars of the
lightcurves and of the retrieved spectra. We also show the lightcurves centred
on the methane $\nu_{3}$ branch at $\sim$3.31$\mu$m for all individual nights,
figure 4.
Having verified the detection of HD 189733b eclipse in all data sets, we have
generated K- and L-band spectra for each individual night as well as for all
the three nights combined. The three individual nights are plotted in figures
5 and 7 for K and L bands respectively. All spectra are consistent with each
other and are within the error bars of the initial Swain et al. (2010)
results. This said, we find the nights of August 11th 2007 and July 12th 2009
to be of higher quality and in better agreement. The single night analysis
supports the assumption that intra-night variations are negligible, which
allowed us to average the data sets and hence increase the signal to noise of
the final spectra. We could hence push the resolution to R $\sim$ 170-180 for
the final combined spectra. Figures 8 and 9 are the three-nights-combined K
and L-band spectra respectively. We include in these figures the comparison
with black body emission curves and LTE models. It is clear from the figures
that the strong features observed in the L-band cannot be explained by
standard LTE processes.
### 5.3 Comparison of the observations with atmospheric LTE and non-LTE
models
Although many uncertainties remain on the thermal vertical profile of
HD189733b, the thermal methane emission needed to reproduce the observed
spectrum would lead to brightness temperatures of $\sim$3000 K, which are not
only unlikely given the star-planet configuration, but would also appear in
other bands –e.g. in the $\nu$4 band at 7.8 $\mu$m– a hypothesis ruled out by
Spitzer observations. While LTE models cannot explain such temperatures, non-
LTE models with only stellar photons as pumping mechanism do not supply enough
excess flux. This result is not unexpected since the contribution of stellar
reflection from the planet is smaller in L band than the thermal emission, and
fluorescence is only a redistribution of the stellar flux (even if a small
enhancement comes from the redistribution of frequency in the fluorescence
cascade). However, a good fit can be obtained by assuming a 5$\%$ vibrational
temperature excess for methane, due to an enhancement of the octad level
population higher than expected from stellar flux pumping (figure 9). This
increase is currently an ad-hoc hypothesis and simply
describes the amount of vibrational temperature increase required to explain
the observed feature.
In the case of the K-band spectrum, it is less obvious whether LTE or non-LTE
processes are prevalent. We show in figure 8 a comparison with two LTE
simulations: one including CH4 plus CO2 in absorption, as suggested by other
data sets; the other obtained with LTE emission of methane. However, neither
of the two simulations perfectly captures the spectrum observed. Given
the stronger non-LTE emission features detected at $\sim$3.3 $\mu$m, one can
expect to find non-LTE effects in the K-band as well. Further observations are
required in order to build up the required spectral resolution to decisively
constrain the excitation mechanisms at work.
## 6 Discussion
In figure 5 we present the K-band spectra of the three separate nights. This
plot shows a slight discrepancy between the night of the 22nd of June 2009
compared to the other two nights analysed. We can observe a systematic off-set
in both the K and L-bands (figure 7) with this night giving consistently lower
emission results. We attribute this effect to the poorer observing conditions
and the degraded quality of the data compared to the data obtained in the other
two nights: a very high intrinsic scatter of the data may in fact reduce the
retrieved eclipse depth. We estimated the average spectra excluding the night
of June 22nd 2009 (figure 6) and found the results to be in good agreement
with the 3 nights-combined spectrum. This test demonstrates the robustness of
the final retrieved spectrum. It should be noted that this issue is less
severe in the L-band than in the K-band, since the overall signal strength is
higher.
Whilst the K-band spectra could be explained with LTE models, we encounter a
quite different picture in the L-band. The observed emission around
$\sim$3.3$\mu$m exhibits a very poor match with the predicted LTE scenario. By
contrast, non-LTE emission of methane can capture the behaviour of the
$\nu_{3}$ branch. Similar fluorescence effects have been observed in our own
solar system, mainly CO2 in telluric and CH4 in giant solar system planets
(Barthélemy et al., 2005). In section 4 we outline a plausible model for the
creation of such a prominent feature. As previously mentioned, the increase in
CH4 vibrational temperature of 5$\%$ is presently an ad-hoc hypothesis: it
simply describes the amount of non-LTE population required to fit the
observations, pure LTE populations being insufficient. The source of this
population increase can come for a variety of sources: XUV illumination from
the star, electron precipitations, etc.which are presently not constrained at
all. Such effects are nonetheless known in planetary physics, such as on
Jupiter, where H2 vibrational temperatures in the upper atmosphere have been
demonstrated to be out of equilibrium through Ly-alpha observations
(Barthélemy et al., 2005), with a 1.4-1.5 fold increase in vibrational
temperature.
### 6.1 Validation of observations
The results presented here are found to be consistent with the results
initially presented by Swain et al. (2010), HST/NICMOS data in the K-band
(Swain et al., 2008) and verified in the L-band by the Spitzer/IRAC 3.6$\mu$m
broadband photometry (Charbonneau et al., 2008), see figures 6 $\&$ 7.
However, Mandell et al. (2011), from here M11, recently published a critique
of the original Swain et al. (2010), from here S10, result reporting a non-
detection of any exoplanetary features in their analysis. Since the results of
this publication are in good accord with Swain et al. (2010), the fundamental
discrepancy between the findings presented here and those by M11 needs to be
addressed.
M11 argue that the L-band features reported by S10 were likely due to un-
accounted for telluric water emissions rather than exoplanetary methane. This
hypothesis poses four main questions which will be addressed below: (1) Do the
L-band features look like water emissions? (2) Are the results repeatable? (3)
Do or do we not see similar lightcurve features in the reference star? (4) Can
we quantify the amount of residual telluric contamination in the data?
#### 6.1.1 Do the L-band features look like water?
Here the simple answer is no. As discussed in section 5 and shown in figures 7
$\&$ 9, the improved spectral resolution of these results shows that we are
clearly dealing with methane signatures. As M11 pointed out, a temporary
change in telluric opacity due to atmospheric water (or methane) could mimic a
secondary eclipse event. However, for temporal atmospheric variations to mimic
an eclipse signal in the combined result of all three nights, the opacity
variations, as well as the airmass function, would need to be identical or at
least very similar in all data sets. The likelihood of such a hypothesis is
very small.
In addition, we have retrieved weather recordings from near-by weather
stations. These include periodic temperature, relative humidity and pressure
readings from the CFHT111http://mkwc.ifa.hawaii.edu/ as well as
atmospheric opacity (tau) readings at 225 GHz obtained by the
CSO222http://www.cso.caltech.edu/ (see figure 12). Across all three
eclipsing events, we found no significant correlations between these
parameters and the expected secondary transit shape.
#### 6.1.2 Are the results repeatable?
A main focus throughout this publication is to demonstrate the repeatability
of the observations. In section 5 we present spectra retrieved for each
individual observing run of the three ’short-slit’ nights and found them
consistent with each other within the error-bars. For the methane $\nu_{3}$
band, the most difficult measurement to achieve, we present lightcurves
for all three observing runs considered (figure 4). These do vary in quality
from night to night but are found to be consistent with one another over a
measured timescale ranging from August 11th 2007 to July 12th 2009. This test
of repeatability is of paramount importance in asserting the validity of the
analysis as a whole.
#### 6.1.3 Do or do we not see similar lightcurve features in the reference
star?
We do not see any lightcurve features in the reference star’s timeseries. As
described in previous sections, we have obtained a fourth night in addition to
the three main nights analysed here. This fourth night was taken in the
single-order, L-band only mode with a one arc-minute long slit. This allowed
us to simultaneously observe the target HD189733b and a fainter reference
star, 2MASS 20003818+2242065, over the course of a secondary eclipse on July
3rd 2010. We applied the same routines outlined in section 3 to both the
target and the reference star. In figure 10 we plot the resulting
lightcurves of both stars centred at 3.31$\mu$m using the standard 50 channel
bin. We find the transit depth for HD189733b to be within the error bars of
the other nights analysed, whilst the reference star timeseries is flat.
Hence, the routines used produce a null result where a null detection is
expected.
Furthermore it is important to note that a faulty background subtraction would
have much stronger effects on the fainter reference star than on the target,
as any residual background is a proportionally larger fraction of the stellar
signal. We find the mean observed flux for a single exposure to be
$F_{HD189}\sim$24300 e- and $F_{REF}\sim$900 e- for the target and the
reference stars respectively. We can now state that the observed flux is a sum
of the stellar flux and a background contribution: $F_{observed}=F_{star}+F_{back}$.
We also assume that the background flux, $F_{back}$, is the same for both stars
as they were observed simultaneously on the same detector. Whatever the value
of $F_{back}$ may be, its relative contribution to the overall flux would be
$\sim$27 times higher for $F_{REF}$ than for $F_{HD189}$. Following this
argument, if we now
assume the lightcurve feature to be due to an inadequate background correction
(as postulated by M11), we would expect a $\sim$27 times deeper lightcurve
signal in the reference star timeseries than in HD189733b. To illustrate the
severity of this effect, we re-plotted the timeseries presented in figure 10
with an additional 27 times deeper transit than that of HD189733b underneath.
Given the flat nature of the reference star’s timeseries though, we can
confidently confirm an adequate treatment of telluric and other backgrounds.
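The leverage argument above can be checked with a line of arithmetic (`f_back` is a hypothetical shared background level; its value cancels out of the ratio):

```python
# Mean single-exposure counts quoted in the text, plus an assumed background:
f_hd189, f_ref = 24300.0, 900.0
f_back = 50.0
frac_hd = f_back / f_hd189    # fractional background effect on the target
frac_ref = f_back / f_ref     # fractional background effect on the reference
ratio = frac_ref / frac_hd    # reduces to f_hd189 / f_ref = 27
```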
#### 6.1.4 Can we quantify the residual telluric contamination in the data?
Using the Fourier based techniques described in this paper, we can quantify
the remaining contribution of systematic noise and the residual telluric
components in the spectra shown in sec. 5. As described in section 3.2, we are
mapping individual Fourier coefficients of the lightcurve signal in the
frequency domain. Any systematic noise or telluric contamination can therefore
only contribute to this one frequency bin. The degree of residual
contamination by systematics on that frequency bin can hence be estimated by
running the routine described in section 3.2.2 on only out-of-transit and only
in-transit data, i.e. removing the eclipse signal. Figures 13 and 14 show the
planet signal (black) and the out-of-transit and in-transit measurements of the
contamination in red and green respectively. We conclude that the amplitude of
the systematic noise and the residual telluric component is within the error
bars of the planetary signal.
## 7 Conclusion
In this paper we present new data on the secondary eclipse of HD 189733b
recorded with the SpeX instrument on the IRTF. Our data analysis algorithm for
time-resolved, ground-based spectroscopic data is based on a thorough
pre-cleaning of the raw data and subsequent spectral analysis using Fourier
based techniques. By combining three nights of observations with identical
settings, and a further development of the data analysis methodology presented
in Swain et al. (2010), we could increase the spectral resolution to R
$\sim$ 175.
We confirm the existence of a strong feature at $\sim$ 3.3$\mu$m,
corresponding to the methane $\nu_{3}$ branch, which cannot be explained by
LTE models. Non-LTE processes are most likely the origin of such emission and
we propose a plausible scheme to explain it.
The possibility of telluric contamination of the data is thoroughly tested but
we demonstrate that the residual due to atmospheric leakage is well within the
error-bars, both by using Fourier based techniques and additional observations
with a reference star in the slit. This critical test demonstrates the
robustness of our calibration method and its broad applicability in the future
to other space and ground exoplanet data.
I.P.W. is supported by an STFC Studentship. We would like to thank the IRTF, which is
operated by the University of Hawaii under Cooperative Agreement no.
NNX-08AE38A with the National Aeronautics and Space Administration, Science
Mission Directorate, Planetary Astronomy Program.
Figure 1: Zoomed in fraction of the data prior to the cleaning process (A) and
post cleaning (B). Each column is a timeseries at a specific wavelength and
each row is an individual spectrum ($n$) taken at a specific time. Figure 2:
showing power spectrum of a Mandel & Agol (2002) model lightcurve of HD189733b
(inset). It can clearly be seen that most power of the lightcurve signal is
contained in the first Fourier coefficient. Figure 3: Lightcurves of the
’three-night-combined’ analysis for the K and L bands. Lightcurves are offset
vertically for clarity. Figure 4: Lightcurves centred at 3.31$\mu$m with a bin
size of 50 channels ($\sim 2.88$ nm) for the three individual nights and
’three-nights-combined’. Figure 5: showing the K-band planetary signal for the
three separate nights: August 11th 2007, June 22nd 2009 and July 12th
2009 in blue, red and green respectively. The night of June 22nd 2009 had poor
observing conditions; the data were significantly noisier and the retrieved
planetary emissions are systematically lower for this night in both the K and
L-band. Results from Swain et al. (2010) are shown in black. Figure 6:
showing the combined K-band planetary signal for the nights of August 11th
2007 and July 12th 2009 only (red), excluding the poor data quality of the
June 22nd 2009 night. For comparison the spectrum of all three nights combined
(green) is overplotted. The difference between the two spectra is small and
indicates that the night of June 22nd 2009 has only a small effect on the
overall result. Ground-based results from Swain et al. (2010) and HST/NICMOS data
(Swain et al., 2008), are shown in black and purple respectively. Figure 7:
showing the L-band planetary signal for the three separate nights: August 11th
2007, June 22nd 2009 and July 12th 2009 in blue, red and green
respectively. Similar to figure 5, the night of June 22nd 2009 shows a
systematically lower emission. As described previously, this may be a result of
the poor data quality of this night. Results from Swain et al. (2010) are
shown in black. Figure 8: Three night combined K band spectrum compared with
three black body curves at 1000, 1500, 2000 K. Also shown are two LTE models:
CH4 in emission (turquoise) and CH4 plus CO2 in absorption (orange). Figure
9: Three nights combined L-band spectrum. The blue discontinuous line shows a
comparison of the observations with the ”enhanced fluorescent” model; non-
thermal population enhancement in the octad level with a 5$\%$ increase of
vibrational temperature of CH4. Overlaid are black body curves at 1000, 1500,
2000, 3000 K. Figure 10: showing the lightcurves of the long-slit analysis of
HD189733b and the simultaneously observed fainter reference star beneath,
centred at 3.31$\mu$m with the standard 50 channel binning. Overplotted are
two fitted Mandel & Agol (2002) curves for the secondary eclipse. The
HD189733b lightcurve is in good agreement with the other results of this paper
whilst the reference star’s timeseries is noticeably flat. Figure 11: showing
on the top the observed lightcurve of HD189733b, beneath the simultaneously
observed flat timeseries of the fainter reference star. At the bottom in red
is the simulated reference star lightcurve expected to be observed under the
assumption that the observed signal in HD189733b is due to an imperfect
background subtraction. The flat nature of the observed reference star
lightcurve is a strong indication that the background subtraction was treated
adequately. Figure 12: showing from top to bottom: Temperature (deg. C, CFHT
Weather station), Rel. Humidity ($\%$,CFHT), Pressure (mb,CFHT) and optical
depth, tau (225$\mu$m, CSO) for the 11th Aug. 2007 (blue), 22nd June 2009 (green)
and 12th July 2009 (red). The discontinuous vertical lines mark the secondary
transit duration. Figure 13: showing the three night combined K-band result
(black), in-transit and out-of-transit contamination measures are plotted in
blue (dash-dotted) and red (dashed) respectively. It can clearly be seen that
the contamination by telluric components is much smaller than the planetary
signal and its amplitude lies within the signal’s error bar. Figure 14:
showing the three night combined L-band result (black), in-transit and out-of-
transit contamination measures are plotted in blue (dash-dotted) and red
(dashed) respectively. It can clearly be seen that the contamination by
telluric components is much smaller than the planetary signal and its
amplitude lies within the signal’s error bar.
## Appendix A Additional notes on wavelets
As mentioned in section 3.1.3, wavelet de-noising of timeseries data has
several advantages: 1) wavelet de-composition is a non-parametric algorithm
and hence does not assume prior information on the signal or noise properties,
making it an easy-to-use and objective de-noising routine; 2) contrary to
smoothing algorithms (e.g. kernel regression), high and low signal frequencies
are retained; 3) temporal phase information of the signal is preserved during
the de- and re-construction of the signal. This allows for an optimal white
and systematic noise reduction at varying frequency pass-bands. For our
purposes we use a non-linear wavelet shrinkage by soft-thresholding of the
obtained wavelet coefficients and iterative reconstruction of the data. The
intricacies of such an approach were extensively discussed by Donoho (1995)
and Percival & Walden (2000). Using the ’Wavelet Toolbox’ in MATLAB, each
individual timeseries underwent a 4 level wavelet shrinkage using ”Daubechies
4” wavelets. The wavelet coefficients were estimated for each decomposition
step using a heuristic form of Stein’s Unbiased Risk Estimate (SURE) for
soft-thresholding (Stein, 1981). This allows for a minimax coefficient
estimation (Sardy, 2000) in cases where the signal-to-noise ratio (SNR) is too
low for the SURE algorithm. After thresholding, the timeseries were reconstructed based on
the obtained coefficients for each timeseries.
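A toy version of this shrinkage scheme is sketched below. It assumes a Haar basis and a fixed threshold instead of the Daubechies-4 / SURE combination actually used in the paper; all function names, the threshold value, and the test signal are ours.

```python
import math, random

def haar_forward(x):
    """One level of the orthonormal Haar wavelet transform."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = 1.0 / math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

def soft(w, thr):
    """Soft-thresholding: shrink coefficients towards zero."""
    return [math.copysign(max(abs(v) - thr, 0.0), v) for v in w]

def denoise(x, levels=4, thr=0.5):
    """Multi-level Haar shrinkage (conceptually like the paper's 4-level
    Daubechies-4 + SURE scheme, but with a fixed illustrative threshold)."""
    details = []
    for _ in range(levels):
        x, d = haar_forward(x)
        details.append(soft(d, thr))
    for d in reversed(details):
        x = haar_inverse(x, d)
    return x

random.seed(1)
clean = [math.sin(2 * math.pi * t / 64) for t in range(64)]
noisy = [c + random.gauss(0, 0.3) for c in clean]
rec = denoise(noisy)
err_noisy = sum((n - c) ** 2 for n, c in zip(noisy, clean))
err_rec = sum((r - c) ** 2 for r, c in zip(rec, clean))
# with this threshold the reconstruction should sit closer to the clean signal
print(err_rec < err_noisy)
```

Note that with `thr=0` the forward/inverse pair reconstructs the input exactly, which is the phase-preservation property mentioned above.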
## Appendix B Fourier analysis
In section 3.2.2, we discuss the properties of box-shaped lightcurves in the
frequency domain. Needless to say, this is a gross over-simplification: the
actual secondary eclipse lightcurve is more akin to a trapezoid (equation B1)
than to a square box. In the case of a trapezoid, the power decreases as
$1/k^{2}$ for Fourier coefficients above $k=1$. Hence, the Fourier series for
a trapezoidal shape converges faster (equation B2).
$\displaystyle f_{trap}(t)$
$\displaystyle=8\sqrt{2}\,\delta\left(\sin(1/\tau)+\frac{\sin(3/\tau)}{9}-\frac{\sin(5/\tau)}{25}-...\right)$
(B1)
$\displaystyle=8\sqrt{2}\,\delta\sum_{k=1,3,5...}^{\infty}\left(\frac{\sin(k/4\tau)+\sin(3k/4\tau)}{k^{2}}\right)$
$|A|_{trapez}=\frac{\tau\delta}{2}\sum_{k=1,3,5...}^{\infty}\frac{1}{k^{2}}$
(B2)
The difference between the box-car and trapezoidal shapes does not affect the
linear relationship between spectral amplitude and transit depth. We
furthermore extend the argument to limb-darkened lightcurves, which exhibit a
markedly rounder morphology. These are a natural extension of the trapezoidal
case, and it is generally true that the ’rounder’ the eclipse shape, the less
power is contained in Fourier coefficients above $k$ = 1, and hence the series
converges even faster.
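The faster convergence is easy to check numerically by comparing the fall-off of the odd harmonics of a box and of a trapezoid. This is an illustrative sketch with arbitrary ingress/egress durations, not the paper's actual lightcurve parameters.

```python
import cmath

def dft_amp(series, k):
    """Amplitude of the k-th DFT coefficient (normalised by n)."""
    n = len(series)
    return abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                   for t, x in enumerate(series)) / n)

n = 1024
box = [1.0 if n // 4 <= t < 3 * n // 4 else 0.0 for t in range(n)]

def trap(t):
    """Trapezoid: linear ingress/egress over n//8 samples (arbitrary choice)."""
    ramp, lo, hi = n // 8, n // 4, 3 * n // 4
    if t < lo or t >= hi:
        return 0.0
    return min(1.0, (t - lo) / ramp, (hi - 1 - t) / ramp)

trapezoid = [trap(t) for t in range(n)]

# Ratio of the k=3 to k=1 amplitudes: ~1/3 for the box (coefficients ~1/k),
# much smaller for the trapezoid (coefficients ~1/k^2), so the trapezoid's
# series converges faster.
r_box = dft_amp(box, 3) / dft_amp(box, 1)
r_trap = dft_amp(trapezoid, 3) / dft_amp(trapezoid, 1)
print(r_box, r_trap)
assert r_trap < r_box
```

Rounder, limb-darkened shapes suppress the higher harmonics further still, consistent with the argument above.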
## References
* Agol et al. (2010) Agol, E., Cowan, N. B., Knutson, H. A., Deming, D., Steffen, J. H., Henry, G. W., & Charbonneau, D. 2010, ApJ, 721, 1861
* Appleby (1990) Appleby, J. 1990, Icarus, 85, 355
* Bakos et al. (2006) Bakos, G. Á., et al. 2006, ApJ, 650, 1160
* Barber et al. (2006) Barber, R. J., Tennyson, J., Harris, G. J., & Tolchenov, R. N. 2006, MNRAS, 368, 1087
* Barthélemy et al. (2005) Barthélemy, M., Lilensten, J., & Parkinson, C. 2005, A&A, 437, 329
* Bean et al. (2010) Bean, J. L., Miller-Ricci Kempton, E., & Homeier, D. 2010, Nature, 468, 669
* Beaulieu et al. (2008) Beaulieu, J. P., Carey, S., Ribas, I., & Tinetti, G. 2008, ApJ, 677, 1343
* Beaulieu et al. (2010) Beaulieu, J. P., et al. 2010, MNRAS, 409, 963
* Charbonneau et al. (2002) Charbonneau, D., Brown, T. M., Noyes, R. W., & Gilliland, R. L. 2002, ApJ, 568
* Charbonneau et al. (2008) Charbonneau, D., Knutson, H. A., Barman, T., Allen, L. E., Mayor, M., Megeath, S. T., Queloz, D., & Udry, S. 2008, ApJ, 686
* Charbonneau et al. (2005) Charbonneau, D., et al. 2005, ApJ, 626
* Cushing et al. (2004) Cushing, M., Vacca, W., & Rayner, J. 2004, PASP, 116, 362
* Donoho (1995) Donoho, D. 1995, IEEE Trans. on Inf. Theory, 41, 613
* Doyennette et al. (1998) Doyennette, L., Menard-Bourcin, F., Menard, J., Boursier, C., & Camy-Peyret, C. 1998, J. Phys. Chem. A, 102
* Drossart et al. (1999) Drossart, P., Fouchet, T., Crovisier, J., Lellouch, E., Encrenaz, T., Feuchtgruber, H., & Champion, J. 1999, ESA-SP, 427, 169
* Grillmair et al. (2008) Grillmair, C. J., et al. 2008, Nature, 456, 767
* Harrington et al. (2006) Harrington, J., Hansen, B. M., Luszcz, S. H., Seager, S., Deming, D., Menou, K., Cho, J. Y.-K., & Richardson, L. J. 2006, Science, 314, 623
* Knutson et al. (2007) Knutson, H. A., et al. 2007, Nature, 447, 183
* Mandel & Agol (2002) Mandel, K., & Agol, E. 2002, ApJL, 580, L171
* Mandell et al. (2011) Mandell, M., Deming, D., Blake, G., Knutson, H. A., Mumma, M. J., Villanueva, G. L., & Salyk, C. 2011, ApJ, 728, 18
* Pagiatakis et al. (2007) Pagiatakis, S. D., Yin, H., & El-Gelil, M. A. 2007, Phys. Earth and Planet. Interiors, 160, 108
* Percival & Walden (2000) Percival, D., & Walden, A. 2000, Wavelet Methods for Time Series Analysis (Cambridge University Press)
* Rayner et al. (2003) Rayner, J. T., Toomey, D. W., Onaka, P. M., Denault, A. J., Stahlberger, W. E., Vacca, W. D., Cushing, M. C., & Wang, S. 2003, PASP, 115, 362
* Redfield et al. (2008) Redfield, S., Endl, M., Cochran, W. D., & Koesterke, L. 2008, ApJL, 673, L87
* Riley et al. (2004) Riley, K. F., Hobson, M. P., & Bence, S. J. 2004, Mathematical Methods, 2nd edition (Cambridge University Press)
* Rothman et al. (2009) Rothman, L. S., et al. 2009, JQSRT, 110
* Sardy (2000) Sardy, S. 2000, IEEE Trans. on Sig. Process., 48, 1023
* Snellen et al. (2008) Snellen, I. A. G., Albrecht, S., de Mooij, E. J. W., & Le Poole, R. S. 2008, A&A, 487, 357
* Snellen et al. (2010a) Snellen, I. A. G., de Kok, R. J., de Mooij, E. J. W., & Albrecht, S. 2010a, Nature, 468, 1049
* Snellen et al. (2010b) Snellen, I. A. G., de Mooij, E. J. W., & Burrows, A. 2010b, A&A, 513, 76
* Stein (1981) Stein, C. 1981, Ann.Statist., 9, 1135
* Swain et al. (2008) Swain, M. R., Vasisht, G., & Tinetti, G. 2008, Nature, 452, 329
* Swain et al. (2009a) Swain, M. R., Vasisht, G., Tinetti, G., Bouwman, J., Chen, P., Yung, Y., Deming, D., & Deroo, P. 2009a, ApJL, 690, L114
* Swain et al. (2009b) Swain, M. R., et al. 2009b, ApJ, 704, 1616
* Swain et al. (2010) —. 2010, Nature, 463, 637
* Thatte et al. (2010) Thatte, A., Deroo, P., & Swain, M. R. 2010, A&A, 523, 35
* Thiévin et al. (2008) Thiévin, J., Georges, R., Carles, S., Benidar, A., Rowe, B., & Champion, J. 2008, JQSRT, 109, 2027
* Tinetti et al. (2010) Tinetti, G., Deroo, P., Swain, M. R., Griffith, C. A., Vasisht, G., Brown, L. R., Burke, C., & McCullough, P. 2010, ApJL, 712, L139
* Tinetti et al. (2005) Tinetti, G., Meadows, V. S., Crisp, D., Fong , W., Velusamy, T., & Snively, H. 2005, Astrobiology, 5
* Tinetti et al. (2006) Tinetti, G., Meadows, V. S., Crisp, D., Fong, W., Fishbein, E., Turnbull, M., & Bibring, J.-P. 2006, Astrobiology, 6, 34
* Tinetti et al. (2007) Tinetti, G., et al. 2007, Nature, 448, 169
* Yurchenko et al. (2011) Yurchenko, S. N., Barber, R. J., & Tennyson, J. 2011, MNRAS, submitted
|
arxiv-papers
| 2011-04-04T13:53:16 |
2024-09-04T02:49:18.099260
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "I.P. Waldmann, G. Tinetti, P. Drossart, M. R. Swain, P. Deroo and C.\n A. Griffith",
"submitter": "Ingo Waldmann",
"url": "https://arxiv.org/abs/1104.0570"
}
|
1104.0730
|
# Corrections to the apparent value of the cosmological constant due to local
inhomogeneities
Antonio Enea Romano1,2,5 aer@phys.ntu.edu.tw Pisin Chen1,2,3,4
pisinchen@phys.ntu.edu.tw 1Department of Physics, National Taiwan University,
Taipei 10617, Taiwan, R.O.C.
2Leung Center for Cosmology and Particle Astrophysics, National Taiwan
University, Taipei 10617, Taiwan, R.O.C.
3Graduate Institute of Astrophysics, National Taiwan University, Taipei 10617,
Taiwan, R.O.C.
4Kavli Institute for Particle Astrophysics and Cosmology, SLAC National
Accelerator Laboratory, Menlo Park, CA 94025, U.S.A.
5Instituto de Fisica, Universidad de Antioquia, A.A.1226, Medellin, Colombia
###### Abstract
Supernovae observations strongly support the presence of a cosmological
constant, but its value, which we will call apparent, is normally determined
assuming that the Universe can be accurately described by a homogeneous model.
Nevertheless, even in the presence of a cosmological constant we cannot
exclude a small local inhomogeneity which could affect the apparent
value of the cosmological constant. Neglecting the presence of the
inhomogeneity can in fact introduce a systematic misinterpretation of
cosmological data, leading to the distinction between an apparent and a true
value of the cosmological constant. We establish the theoretical framework to
calculate the corrections to the apparent value of the cosmological constant
by modeling the local inhomogeneity with a $\Lambda LTB$ solution. Our
assumption of being at the center of a spherically symmetric inhomogeneous
matter distribution corresponds to effectively calculating the monopole
contribution of the large scale inhomogeneities surrounding us, which we
expect to be the dominant one, because other observations support a high
level of isotropy of the Universe around us.
By performing a local Taylor expansion we analyze the number of independent
degrees of freedom which determine the local shape of the inhomogeneity, and
consider the issue of central smoothness, showing how the same correction can
correspond to different inhomogeneity profiles. Contrary to previous attempts
to fit data using large void models, our approach is quite general. The
correction to the apparent value of the cosmological constant is in fact
present for local inhomogeneities of any size, and should always be taken
appropriately into account both theoretically and observationally.
## I Introduction
High redshift luminosity distance measurements Perlmutter:1999np ;
Riess:1998cb ; Tonry:2003zg ; Knop:2003iy ; Barris:2003dq ; Riess:2004nr and
the WMAP measurements WMAP2003 ; Spergel:2006hy of cosmic microwave
background (CMB) interpreted in the context of standard FLRW cosmological
models have strongly disfavored a matter dominated universe, and strongly
supported a dominant dark energy component, giving rise to a positive
cosmological acceleration.
As an alternative to dark energy, it has been proposed Nambu:2005zn ;
Kai:2006ws that we may be at the center of an inhomogeneous isotropic
universe without cosmological constant described by a Lemaitre-Tolman-Bondi
(LTB) solution of Einstein’s field equations, where spatial averaging over one
expanding and one contracting region produces a positive averaged
acceleration $a_{D}$, but it has been shown how spatial averaging can give
rise to averaged quantities which are not observable Romano:2006yc . Another
more general approach to map luminosity distance as a function of redshift
$D_{L}(z)$ to LTB models has been recently proposed Chung:2006xh ; Yoo:2008su
, showing that an inversion method can be applied successfully to reproduce
the observed $D_{L}(z)$. Interesting analyses of observational data in
inhomogeneous models without dark energy and of other theoretically related
problems are given, for example, in Alexander:2007xx ; Alnes:2005rw ;
GarciaBellido:2008nz ; GarciaBellido:2008gd ; GarciaBellido:2008yq ;
February:2009pv ; Uzan:2008qp ; Quartin:2009xr ; Quercellini:2009ni ;
Clarkson:2007bc ; Ishibashi:2005sj ; Clifton:2008hv ; Celerier:2009sv ;
Romano:2007zz ; Romano:2009qx ; Romano:2009ej ; Romano:2009mr ;
Mustapha:1998jb .
In this paper we will adopt a different approach. We will consider a
Universe with a cosmological constant and some local large scale inhomogeneity
modeled by a $\Lambda LTB$ solution Romano:2010nc . For simplicity we will
also assume that we are located at its center. In this regard this can be
considered a first attempt to model local large scale inhomogeneities in the
presence of the cosmological constant or, more generally, dark energy. Given
the spherical symmetry of the LTB solution and the assumption that we are
located at the center, our calculation can be interpreted as the monopole contribution of
the large inhomogeneities which surround us. Since we know from other
observations such as CMB radiation that the Universe appears to be highly
isotropic, we can safely assume that the monopole contribution we calculate
should also be the dominant one, making our results even more relevant. After
calculating the null radial geodesics for a central observer we then compute
the luminosity distance and compare it to that of the $\Lambda CDM$ model, finding
the relation between the two different cosmological constants appearing in the
two models, where we call apparent the one in the $\Lambda CDM$ and true the
one in $\Lambda LTB$. Our calculations show that the corrections to
$\Omega^{app}_{\Lambda}$, which is the value of the cosmological constant
obtained from analyzing supernovae data assuming homogeneity, can be important
and should be taken into account.
## II LTB solution with a cosmological constant
The LTB solution can be written Lemaitre:1933qe ; Tolman:1934za ;
Bondi:1947av as
$\displaystyle
ds^{2}=-dt^{2}+\frac{\left(R,_{r}\right)^{2}dr^{2}}{1+2\,E(r)}+R^{2}d\Omega^{2}\,,$
(1)
where $R$ is a function of the time coordinate $t$ and the radial coordinate
$r$, $E(r)$ is an arbitrary function of $r$, and $R_{,r}=\partial_{r}R(t,r)$.
The Einstein equations with dust and a cosmological constant give
$\displaystyle\left({\frac{\dot{R}}{R}}\right)^{2}$ $\displaystyle=$
$\displaystyle\frac{2E(r)}{R^{2}}+\frac{2M(r)}{R^{3}}+\frac{\Lambda}{3}\,,$
(2) $\displaystyle\rho(t,r)$ $\displaystyle=$
$\displaystyle\frac{2M,_{r}}{R^{2}R,_{r}}\,,$ (3)
with $M(r)$ being an arbitrary function of $r$, $\dot{R}=\partial_{t}R(t,r)$
and $c=8\pi G=1$ is assumed throughout the paper. Since Eq. (2) contains
partial derivatives with respect to time only, its general solution can be obtained
from the FLRW equivalent solution by making every constant in the latter one
an arbitrary function of $r$.
The general analytical solution for a FLRW model with dust and cosmological
constant was obtained by Edwards Dilwyn in terms of elliptic functions. By an
appropriate choice of variables and coordinates, we may extend it to the LTB
case thanks to the spherical symmetry of both LTB and FLRW models, and to the
fact that dust follows geodesics without being affected by adjacent regions.
An analytical solution can be found by introducing a new coordinate
$\eta=\eta(t,r)$ and a variable $a$ by
$\displaystyle\left(\frac{\partial\eta}{\partial
t}\right)_{r}=\frac{r}{R}\equiv\frac{1}{a}\,,$ (4)
and new functions by
$\displaystyle\rho_{0}(r)\equiv\frac{6M(r)}{r^{3}}\,,\quad
k(r)\equiv-\frac{2E(r)}{r^{2}}\,.$ (5)
Then Eq. (2) becomes
$\left(\frac{\partial
a}{\partial\eta}\right)^{2}=-k(r)a^{2}+\frac{\rho_{0}(r)}{3}a+\frac{\Lambda}{3}a^{4}\,,$
(6)
where $a$ is now regarded as a function of $\eta$ and $r$, $a=a(\eta,r)$. It
should be noted that the coordinate $\eta$, which is a generalization of the
conformal time in a homogeneous FLRW universe, has been only implicitly
defined by Eq. (4). The actual relation between $t$ and $\eta$ can be obtained
by integration once $a(\eta,r)$ is known:
$t(\eta,r)=\int_{0}^{\eta}{a(x,r)dx}+t_{b}(r)\,,$ (7)
which can be computed analytically, and involves elliptic integrals of the
third kind ellint .
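Eqs. (6) and (7) can also be integrated numerically at fixed $r$. The sketch below, with illustrative values of $\rho_0$, $k$ and $\Lambda$ rather than fitted ones, recovers the familiar Einstein-de Sitter behavior $a\propto\eta^{2}$, $t\propto\eta^{3}$ in the flat, dust-only case.

```python
import math

# RK4 integration of the coupled system a' = da/deta from eq. (6) and
# t' = a from eq. (7), at fixed r, with homogeneous bang t_b = 0.
def evolve(rho0=3.0, k=0.0, lam=0.0, eta_max=1.0, steps=2000):
    def dadeta(a):
        rhs = -k * a * a + (rho0 / 3.0) * a + (lam / 3.0) * a ** 4
        return math.sqrt(max(rhs, 0.0))
    a, t = 1e-8, 0.0   # start just after the bang, where a -> 0
    h = eta_max / steps
    for _ in range(steps):
        k1a, k1t = dadeta(a), a
        k2a, k2t = dadeta(a + 0.5 * h * k1a), a + 0.5 * h * k1a
        k3a, k3t = dadeta(a + 0.5 * h * k2a), a + 0.5 * h * k2a
        k4a, k4t = dadeta(a + h * k3a), a + h * k3a
        a += h / 6.0 * (k1a + 2 * k2a + 2 * k3a + k4a)
        t += h / 6.0 * (k1t + 2 * k2t + 2 * k3t + k4t)
    return a, t

# Dust, flat, no Lambda (Einstein-de Sitter): a ~ eta^2/4, t ~ eta^3/12.
a1, t1 = evolve()
print(a1, t1)  # close to 0.25 and 1/12
```

With $\Lambda\neq 0$ the same routine reproduces the regime where the analytical solution requires the Weierstrass function of eq. (9).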
The function $t_{b}(r)$ plays the role of a constant of integration, and is an
arbitrary function of $r$, sometimes called the bang function, since by
construction at time $t=t_{b}(r)$ we have $a(t_{b}(r),r)=0$; it corresponds to
the fact that the big bang initial singularity can happen at different times
at different positions from the center in an LTB space. In the rest of this
paper we will assume a homogeneous bang, i.e. we will set
$t_{b}(r)=0.$ (8)
Inspired by the construction of the solution for the FLRW case, we get:
$a(\eta,r)=\frac{\rho_{0}(r)}{3\phi\left(\frac{\eta}{2};g_{2}(r),g_{3}(r)\right)+k(r)}\,,$
(9)
where $\phi(x;g_{2},g_{3})$ is the Weierstrass elliptic function satisfying
the differential equation
$\left(\frac{d\phi}{dx}\right)^{2}=4\phi^{3}-g_{2}\phi-g_{3}\,,$ (10)
and
$\displaystyle\alpha=\rho_{0}(r)\,,\quad g_{2}=\frac{4}{3}k(r)^{2}\,,\quad
g_{3}=\frac{4}{27}\left(2k(r)^{3}-\Lambda\rho_{0}(r)^{2}\right)\,.$ (11)
In this paper we will choose the so called FLRW gauge, i.e. the coordinate
system in which $\rho_{0}(r)$ is constant.
## III Geodesic equations and luminosity distance
We adopt the same method developed in Romano:2009xw to solve the null
geodesic equation written in terms of the coordinates $(\eta,r)$. Instead of
integrating differential equations numerically, we perform a local expansion
of the solution around $z=0$ corresponding to the point $(t_{0},0)$, or
equivalently $(\eta_{0},0)$, where $t_{0}=t(\eta_{0},0)$. The change of
variables from $(t,r)$ to $(\eta,r)$ permits us to have the r.h.s. of all
equations in a fully analytical form, in contrast to previous considerations
of this problem which require a numerical calculation of $R(t,r)$ from the
Einstein equation (2). Thus, this formulation is particularly suitable for
derivation of analytical results.
The luminosity distance for a central observer in the LTB space-time as a
function of the redshift $z$ is expressed as
$D_{L}(z)=(1+z)^{2}R\left(t(z),r(z)\right)=(1+z)^{2}r(z)a\left(\eta(z),r(z)\right)\,,$
(12)
where $\Bigl{(}t(z),r(z)\Bigr{)}$ or $\Bigl{(}\eta(z),r(z)\Bigr{)}$ is the
solution of the radial geodesic equation as a function of $z$. The past-
directed radial null geodesic is given by
$\displaystyle\frac{dt}{dr}=-\frac{R_{,r}(t,r)}{\sqrt{1+2E(r)}}\,.$ (13)
In terms of $z$, Eq. (13) takes the form Celerier:1999hp :
$\displaystyle{dr\over dz}$ $\displaystyle=$
$\displaystyle{\sqrt{1+2E(r(z))}\over{(1+z){\dot{R}}_{,r}[r(z),t(z)]}}\,,$
$\displaystyle{dt\over dz}$ $\displaystyle=$
$\displaystyle-{R_{,r}[r(z),t(z)]\over{(1+z){\dot{R}}_{,r}[r(z),t(z)]}}\,.$
(14)
The inconvenience of using the $(t,r)$ coordinates is that there is no exact
analytical solution for $R(t,r)$. So the r.h.s. of Eqs. (14) cannot be
evaluated analytically, but we are required to find a numerical solution for
$R$ first Hellaby:2009vz , and then to integrate numerically the differential
equations, which is quite an inconvenient and cumbersome procedure, and cannot
be used to derive analytical results.
It can be shown Romano:2009xw that in the coordinates $(\eta,r)$ eqs. (14)
take the form:
$\displaystyle\frac{d\eta}{dz}$ $\displaystyle=$
$\displaystyle-\frac{\partial_{r}t(\eta,r)+F(\eta,r)}{(1+z)\partial_{\eta}F(\eta,r)}\equiv
p(\eta,r)\,,$ (15) $\displaystyle\frac{dr}{dz}$ $\displaystyle=$
$\displaystyle\frac{a(\eta,r)}{(1+z)\partial_{\eta}F(\eta,r)}\equiv
q(\eta,r)\,,$ (16)
where
$F(\eta,r)\equiv\frac{\
R_{,r}}{\sqrt{1+2E(r)}}=\frac{1}{\sqrt{1-k(r)r^{2}}}\left[\partial_{r}(a(\eta,r)r)-a^{-1}\partial_{\eta}(a(\eta,r)r)\,\partial_{r}t(\eta,r)\right]\,.$
(17)
It is important to observe that the functions $p,q,F$ have explicit analytical
forms, making them particularly useful to derive analytical results.
## IV Number of independent parameters and Taylor expansion accuracy
In order to find the relation between the apparent and true value of the
cosmological constant we need to match the terms in the redshift expansion:
$\displaystyle D_{i}^{\Lambda CDM}=D^{{\Lambda LTB}}_{i}$ , (18)
Before proceeding to derive this relation we need to understand clearly how
many independent parameters we can solve for at different orders in the Taylor
expansion for $D_{L}(z)$. After defining the expansion of the function $k(r)$
in terms of the dimensionless function $K(r)$:
$k(r)=(a_{0}H_{0})^{2}K(r)=K_{0}+K_{1}r+K_{2}r^{2}+\ldots$ (19)
we have
$\displaystyle D_{1}^{\Lambda LTB}$ $\displaystyle=$
$\displaystyle\frac{1}{H^{\Lambda LTB}_{0}}\,,$ (20) $\displaystyle D^{\Lambda
LTB}_{i}$ $\displaystyle=$ $\displaystyle
f_{i}(\Omega_{\Lambda},K_{0},K_{1},..,K_{i-1}),$ (21)
which implies that if we want to match the coefficients $D_{i}$ up to order $n$,
we will have a total of $n+2$ independent parameters to solve for:
$\\{H^{\Lambda}_{0},\Omega_{\Lambda},K_{0},K_{1},..,K_{n-1}\\}.$ (22)
The matching conditions will imply constraints over the $n+2$ independent
parameters, but this will not be enough to completely determine them, since two
of them will always be free. As a matter of computational convenience we will
choose $K_{0},K_{1}$ as free parameters and express all the others in terms of
them. For example, from
$D_{2}^{\Lambda CDM}=D^{{\Lambda LTB}}_{2},$ (23)
we can get
$\Omega^{app}_{\Lambda}(\Omega_{\Lambda},K_{0},K_{1}),$ (24)
from
$D_{3}^{\Lambda CDM}=D^{{\Lambda LTB}}_{3},$ (25)
we can get
$K_{2}(\Omega_{\Lambda}^{app},K_{0},K_{1}),$ (26)
and in general from
$D_{i}^{\Lambda CDM}=D^{{\Lambda LTB}}_{i},$ (27)
we can get
$K_{i-1}(\Omega_{\Lambda}^{app},K_{0},K_{1}).$ (28)
Since our purpose is to find the corrections to the apparent value of the
cosmological constant, the second order term $D_{2}$ is enough. Higher order
terms in the redshift expansion will provide $K_{2},K_{3},..K_{i-1}$ as
functions of $\\{\Omega_{\Lambda}^{app},K_{0},K_{1}\\}$, but will not change
the analytical relation between $\Omega^{app}_{\Lambda}$ and
$\Omega_{\Lambda}$ which can be derived from eq.(23). For this reason we will
only need the expansion up to second order for the luminosity distance. The
fact that we have more free parameters than constraints implies that the same
correction to the apparent value of the cosmological constant can correspond
to an infinite number of different inhomogeneity profiles.
The corrections we calculate are accurate within the limits of validity of the
Taylor expansion $D^{\Lambda CDM}_{Taylor}$. It turns out that in the flat
case we consider, the error is quite large already at a redshift of about $0.2$,
as shown in Fig. 1. This implies that the corrections should also be valid
only within this low redshift range, since even if we are exactly matching the
coefficients, the Taylor expansion of the $\Lambda CDM$ best fit formula
itself is not very accurate. This could be overcome by implementing other
types of expansions or numerical methods, such as Padé for example, with
better convergence behavior, but we leave this to future work.
Figure 1: The percentage error $\Delta=100\frac{D^{\Lambda CDM}-D^{\Lambda
CDM}_{Taylor}}{D^{\Lambda CDM}}$ for a third order expansion is plotted as a
function of the redshift. As can be seen, the error is already quite large
at redshift 0.1. A higher order expansion does not improve the convergence.
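The degradation of a low-order redshift Taylor expansion can be reproduced with a short numerical check. The sketch below compares a flat $\Lambda CDM$ luminosity distance (in units of $c/H_0$, with an illustrative $\Omega_m=0.3$) against a hand-derived third-order series; this is a generic convergence illustration, not the paper's best-fit formula or figure.

```python
import math

def dl_exact(z, om, n=2000):
    """Flat LambdaCDM D_L in units of c/H0 via Simpson integration."""
    E = lambda x: math.sqrt(om * (1 + x) ** 3 + (1 - om))
    h = z / n
    s = 1 / E(0) + 1 / E(z)
    for i in range(1, n):
        s += (4 if i % 2 else 2) / E(i * h)
    return (1 + z) * s * h / 3

def dl_taylor3(z, om):
    # Series of D_L(z) around z = 0 for flat LambdaCDM, derived by hand:
    # D_L = z + (1 - 3 om/4) z^2 + (9 om^2/8 - 5 om/4) z^3 + O(z^4)
    return z + (1 - 0.75 * om) * z * z + (1.125 * om * om - 1.25 * om) * z ** 3

om = 0.3
for z in (0.1, 0.2, 0.5, 1.0):
    exact = dl_exact(z, om)
    err = 100 * abs(exact - dl_taylor3(z, om)) / exact
    print(z, err)  # percentage error grows with z
```

For $\Omega_m=1$ the series reduces to the known Einstein-de Sitter expansion $z+z^{2}/4-z^{3}/8$, which is a useful sanity check of the coefficients.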
## V Central behavior
A function of the radial coordinate $f(r)$ is smooth at the center $r=0$ only
if all its odd derivatives vanish there. This can be shown easily by looking
at the partial derivatives of even order of this type for example:
$\displaystyle\partial^{2n}_{x}\partial^{2n}_{y}\partial^{2n}_{z}f(\sqrt{x^{2}+y^{2}+z^{2}})\,,$
(29)
where $\\{x,y,z\\}$ are the cartesian coordinates related to $r$ by
$r^{2}=x^{2}+y^{2}+z^{2}$. Quantities of the type above diverge at the center
if $\partial^{2m+1}_{r}f(r)\neq 0$ for $2m+1<2n$. If for example the first
derivative $f^{\prime}(0)$ is not zero, then the Laplacian will diverge. This
implies that by including linear terms in the expansions for $k(r)$ and
$t_{b}(r)$ we are considering models which are not smooth at the center. The
general central smoothness conditions are:
$\displaystyle k_{2m+1}$ $\displaystyle=$ $\displaystyle 0,$ (30)
$\displaystyle t_{b}^{2m+1}$ $\displaystyle=$ $\displaystyle 0\,,$ (31)
$\displaystyle 2m+1$ $\displaystyle<$ $\displaystyle i\,,$ (32)
which must be satisfied for all the relevant odd-power coefficients of the
central Taylor expansion. In our case this implies that if we only want to
consider centrally smooth inhomogeneities then we need to set to zero all the
odd derivatives of $K(r)$:
$K_{2m+1}=0.$ (33)
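The divergence of the Laplacian for a radial profile with $f'(0)\neq 0$ is easy to verify with finite differences; in the sketch below the step size and sample radii are arbitrary choices of ours.

```python
# Finite-difference check: for f(r) = r the 3-D Laplacian is 2/r, which
# diverges at the centre, while the even profile f(r) = r^2 gives 6 everywhere.
def laplacian_at(f, x, y, z, h=1e-4):
    def g(px, py, pz):
        return f((px * px + py * py + pz * pz) ** 0.5)
    c = g(x, y, z)
    return ((g(x + h, y, z) + g(x - h, y, z)
           + g(x, y + h, z) + g(x, y - h, z)
           + g(x, y, z + h) + g(x, y, z - h)) - 6 * c) / (h * h)

linear = lambda r: r       # odd linear term: not smooth at the centre
for r in (0.1, 0.01, 0.001):
    print(r, laplacian_at(linear, r, 0.0, 0.0))  # grows like 2/r

smooth = lambda r: r * r   # even profile: Laplacian stays finite
print(laplacian_at(smooth, 0.001, 0.0, 0.0))  # ~6 near the centre
```

This is exactly why the linear terms $K_1$ and $t_{b,1}$ correspond to a central spike in the energy distribution, as discussed in section VI.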
The consequence of these smoothness conditions is that the exact matching of
the Taylor expansion is possible only up to order five, when we have five
constraint equations
$\displaystyle D_{i}^{\Lambda CDM}=D^{{\Lambda LTB}}_{i}$
$\displaystyle\quad,\quad 1\leq i\leq 5\,,$ (34)
and five free parameters
$\displaystyle{H_{0}^{\Lambda LTB},\Omega_{\Lambda},K_{0},K_{2},K_{4}}$ (35)
implying there is a unique solution. Going to higher order, there will be more
equations than free parameters, making the inversion problem impossible. This
means that the effects of a different value of the cosmological constant
cannot be mimicked by a smooth inhomogeneity, as far as the exact matching of
the Taylor expansion is concerned. From a data analysis point of view this
limitation could be easily circumvented, since these considerations are based
on matching the Taylor expansion of the best $\Lambda CDM$ fit, which is quite
different from fitting the actual data. Also it turns out that the Taylor
expansion $D^{\Lambda CDM}_{Taylor}(z)$ is more accurate at second order than
at any other order as shown in the figure, implying that exact matching beyond
second order is practically irrelevant from a data fitting point of view.
Under these considerations the inversion problem can still be considered
effectively undetermined, since by matching up to second order we have two
equations and three parameters:
$\displaystyle{H_{0}^{\Lambda LTB},\Omega_{\Lambda},K_{0}}$ (36)
For completeness of the analysis we mention that after counting the number of
independent parameters, we can easily conclude that the inversion problem
remains undetermined at third order, and has a unique solution at fourth and
fifth order, as shown above.
## VI Calculating the luminosity distance
In order to obtain the redshift expansion of the luminosity distance we need
to use the following:
$\displaystyle k(r)$ $\displaystyle=$
$\displaystyle(a_{0}H_{0})^{2}K(r)=K_{0}+K_{1}r+K_{2}r^{2}+..$ (37)
$\displaystyle t(\eta,r)$ $\displaystyle=$ $\displaystyle
b_{0}(\eta)+b_{1}(\eta)r+b_{2}(\eta)r^{2}+..$ (38)
It should be noted that linear terms will in fact lead to central divergences
of the Laplacian in spherical coordinates, which correspond to a central spike
of the energy distribution Romano:2009ej ; Romano:2009mr , but an appropriate
local averaging of the solution can easily heal this behavior, and we include
them here because they give the leading-order contribution. Since we are
interested in the effects due to the inhomogeneities, we will neglect $K_{0}$
in the rest of the calculation, because it corresponds to the homogeneous
component of the curvature function $k(r)$.
Following the same approach given in Romano:2010nc , we can find a local
Taylor expansion in redshift for the geodesic equations, and then calculate
the luminosity distance:
$\displaystyle D^{\Lambda LTB}_{L}(z)$ $\displaystyle=$
$\displaystyle(1+z)^{2}r(z)a^{\Lambda LTB}(\eta(z),r(z))=D^{\Lambda
LTB}_{1}z+D^{\Lambda LTB}_{2}z^{2}+D^{\Lambda LTB}_{3}z^{3}+..$ (39)
$\displaystyle D^{\Lambda LTB}_{1}$ $\displaystyle=$
$\displaystyle\frac{1}{H_{0}},$ $\displaystyle D^{\Lambda LTB}_{2}$
$\displaystyle=$
$\displaystyle\frac{1}{{36H_{0}(\Omega_{\Lambda}^{true}-1)}}\bigg{[}54B_{1}(\Omega_{\Lambda}^{true}-1)^{2}+18B^{\prime}_{1}(\Omega_{\Lambda}^{true}-1)-18h_{0,r}(\Omega_{\Lambda}^{true})^{2}$
(40)
$\displaystyle+30h_{0,r}\Omega_{\Lambda}^{true}-12h_{0,r}+6K_{1}\Omega_{\Lambda}^{true}-10K_{1}+27(\Omega_{\Lambda}^{true})^{2}-18\Omega_{\Lambda}^{true}-9\bigg{]},$
where we have introduced the dimensionless quantities
$K_{0},K_{1},B_{1},B^{\prime}_{1},h_{0,r}$ according to
$\displaystyle H_{0}$ $\displaystyle=$
$\displaystyle\frac{\partial_{t}a(t,r)}{a(t,r)}\Biggr{|}_{t=t_{0},r=0}=\frac{\partial_{\eta}a(\eta,r)}{a(\eta,r)^{2}}\Biggr{|}_{\eta=\eta_{0},r=0},$
(41) $\displaystyle B_{1}(\eta)$ $\displaystyle=$ $\displaystyle
b_{1}(\eta)a_{0}^{-1},$ (42) $\displaystyle B_{1}$ $\displaystyle=$
$\displaystyle b_{1}(\eta_{0})a_{0}^{-1},$ (43) $\displaystyle B^{\prime}_{1}$
$\displaystyle=$ $\displaystyle\frac{\partial
B_{1}(\eta)}{\partial\eta}\Biggr{|}_{\eta=\eta_{0}}(a_{0}H_{0})^{-2},$ (44)
$\displaystyle h_{0,r}$ $\displaystyle=$
$\displaystyle\frac{1}{a_{0}H_{0}}\frac{\partial_{r}a(\eta,r)}{a(\eta,r)}\Biggr{|}_{\eta=\eta_{0},r=0},$
(45) $\displaystyle t_{0}$ $\displaystyle=$ $\displaystyle t(\eta_{0},0),$
(46)
and used the Einstein equation at the center $(\eta=\eta_{0},r=0)$
$\displaystyle 1$ $\displaystyle=$
$\displaystyle\Omega_{k}(0)+\Omega_{M}+\Omega_{\Lambda}=-K_{0}+\Omega_{M}+\Omega_{\Lambda},$
(47) $\displaystyle\Omega_{k}(r)$ $\displaystyle=$
$\displaystyle-\frac{k(r)}{H_{0}^{2}a_{0}^{2}},$ (48)
$\displaystyle\Omega_{M}$ $\displaystyle=$
$\displaystyle\frac{\rho_{0}}{3H_{0}^{2}a_{0}^{3}},$ (49)
$\displaystyle\Omega_{\Lambda}$ $\displaystyle=$
$\displaystyle\frac{\Lambda}{3H_{0}^{2}}.$ (50)
Because of our coordinate choice $\Omega_{M}$ is independent of $r$, and all
the radial dependence goes into $\Omega_{k}(r)$. Note that apart from the
central curvature term $K_{0}$, the inhomogeneity of the $LTB$ space is
expressed in $h_{0,r}$, which encodes the radial dependence of the scale
factor. Details of these rather cumbersome calculations are provided in a
separate companion paper, but it should be emphasized that in order to put the
formula for the luminosity distance in this form it is necessary to manipulate
appropriately the elliptic functions and then re-express everything in terms
of physically meaningful quantities such as $H_{0}$.
## VII Calculating $D_{L}(z)$ for $\Lambda CDM$ models.
The metric of a $\Lambda CDM$ model is the FLRW metric, a special case of the
LTB solution, where:
$\displaystyle\rho_{0}(r)$ $\displaystyle\propto$ $\displaystyle const,$ (51)
$\displaystyle k(r)$ $\displaystyle=$ $\displaystyle 0,$ (52) $\displaystyle
t_{b}(r)$ $\displaystyle=$ $\displaystyle 0,$ (53) $\displaystyle a(t,r)$
$\displaystyle=$ $\displaystyle a(t).$ (54)
We will calculate independently the expansion of the luminosity distance and
the redshift spherical shell mass for the case of a flat $\Lambda CDM$, to
clearly show the meaning of our notation, and in particular the distinction
between $\Omega_{\Lambda}^{app}$ and $\Omega_{\Lambda}^{true}$. We can also
use these formulas to check the results derived before, since in the absence
of inhomogeneities they should coincide.
One of the Einstein equations can be expressed as:
$\displaystyle H^{\Lambda CDM}(z)$ $\displaystyle=$ $\displaystyle
H_{0}\sqrt{(1-\Omega_{\Lambda}^{app}){\left(\frac{a_{0}}{a}\right)}^{3}+\Omega_{\Lambda}^{app}}=H_{0}\sqrt{(1-\Omega_{\Lambda}^{app}){(1+z)}^{3}+\Omega_{\Lambda}^{app}}.$
(55)
We can then calculate the luminosity distance using the following relation,
which is only valid assuming flatness:
$\displaystyle D^{\Lambda
CDM}_{L}(z)=(1+z)\int^{z}_{0}{\frac{dz^{\prime}}{H^{\Lambda
CDM}(z^{\prime})}}=D^{\Lambda CDM}_{1}z+D^{\Lambda CDM}_{2}z^{2}+D^{\Lambda
CDM}_{3}z^{3}+...$ (56)
From which we can get:
$\displaystyle D^{\Lambda CDM}_{1}$ $\displaystyle=$
$\displaystyle\frac{1}{H_{0}}\,,$ (57) $\displaystyle D^{\Lambda CDM}_{2}$
$\displaystyle=$ $\displaystyle\frac{3\Omega_{\Lambda}^{app}+1}{4H_{0}}\,.$
(58)
We can check the consistency between these formulae and the ones derived in
the case of LTB by setting:
$\displaystyle K_{1}=B_{1}=B^{\prime}_{1}=K_{0}=h_{0,r}=0\,,$ (59)
which corresponds to the case in which
$\Omega_{\Lambda}^{app}=\Omega_{\Lambda}^{true}$.
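This consistency check, and the expansion of Eq. (56) itself, can be carried out symbolically. The following sketch (using `sympy`; the symbol names are ours) reproduces Eqs. (57)-(58) from Eq. (55), and verifies that Eq. (40) reduces to Eq. (58) in the limit of Eq. (59):

```python
import sympy as sp

z, zp = sp.symbols('z zp', positive=True)
OL, H0 = sp.symbols('Omega_Lambda H_0', positive=True)

# Expand 1/H(z') from Eq. (55) and integrate term by term, Eq. (56)
H = H0*sp.sqrt((1 - OL)*(1 + zp)**3 + OL)
integrand = sp.series(1/H, zp, 0, 3).removeO()
DL = sp.expand((1 + z)*sp.integrate(integrand, (zp, 0, z)))
assert sp.simplify(DL.coeff(z, 1) - 1/H0) == 0               # Eq. (57)
assert sp.simplify(DL.coeff(z, 2) - (3*OL + 1)/(4*H0)) == 0  # Eq. (58)

# Eq. (40) with K1 = B1 = B1' = h_{0,r} = 0, cf. Eq. (59)
D2_LTB = (27*OL**2 - 18*OL - 9)/(36*H0*(OL - 1))
assert sp.simplify(D2_LTB - (3*OL + 1)/(4*H0)) == 0
```

The last assertion works because $27\Omega^{2}-18\Omega-9=9(3\Omega+1)(\Omega-1)$, so the $(\Omega-1)$ factor cancels exactly in the homogeneous limit.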
## VIII Relation between apparent and true value of the cosmological constant
So far we have calculated the first two terms of the redshift expansion of the
luminosity distance for the $\Lambda LTB$ and $\Lambda CDM$ models. Since we
know that the latter provides a good fit to supernovae observations, we can
now look for the $\Lambda LTB$ models which give the same theoretical
prediction. From the above relations we can derive:
$\displaystyle H_{0}^{\Lambda LTB}$ $\displaystyle=$ $\displaystyle H^{\Lambda
CDM}_{0}\,,$ (61) $\displaystyle\Omega_{\Lambda}^{app}$ $\displaystyle=$
$\displaystyle\frac{1}{{27(\Omega_{\Lambda}^{true}-1)}}\Bigg{[}54B_{1}(\Omega_{\Lambda}^{true})^{2}-108B_{1}\Omega_{\Lambda}^{true}+54B_{1}+18B^{\prime}_{1}\Omega_{\Lambda}^{true}-18B^{\prime}_{1}$
(62)
$\displaystyle-18h_{0,r}(\Omega_{\Lambda}^{true})^{2}+30h_{0,r}\Omega_{\Lambda}^{true}-12h_{0,r}+6K_{1}\Omega_{\Lambda}^{true}-10K_{1}$
$\displaystyle+27\Omega_{\Lambda}^{true}(\Omega_{\Lambda}^{true}-1)\Bigg{]}\,,$
$\displaystyle\Omega_{\Lambda}^{true}$ $\displaystyle=$
$\displaystyle-\frac{1}{{6(6B_{1}-2h_{0,r}+3)}}\Bigg{[}\Bigg{(}(36B_{1}-6B^{\prime}_{1}-10h_{0,r}-2K_{1}+9\Omega_{\Lambda}^{app}+9)^{2}+$
(63)
$\displaystyle-4(6B_{1}-2h_{0,r}+3)(54B_{1}-18B^{\prime}_{1}-12h_{0,r}-10K_{1}+27\Omega_{\Lambda}^{app})\Bigg{)}^{1/2}-36B_{1}$
$\displaystyle+6B^{\prime}_{1}+10h_{0,r}+2K_{1}-9(\Omega_{\Lambda}^{app}-1)\Bigg{]}\,.$
We can also expand the above exact relations assuming that all the
inhomogeneities can be treated perturbatively with respect to the $\Lambda
CDM$ solution, i.e. $\\{{K_{1},B_{1},B^{\prime}_{1}\\}}\propto\epsilon$, where
$\epsilon$ stands for a small deviation from the $FLRW$ solution:
$\displaystyle\Omega_{\Lambda}^{true}$ $\displaystyle=$
$\displaystyle\Omega_{\Lambda}^{app}-\frac{2}{{27(\Omega_{\Lambda}^{app}-1)}}(27B_{1}(\Omega_{\Lambda}^{app}-1)^{2}+9B^{\prime}_{1}(\Omega_{\Lambda}^{app}-1)-9h_{0,r}{(\Omega_{\Lambda}^{app})}^{2}+15h_{0,r}\Omega_{\Lambda}^{app}$
(64)
$\displaystyle-6h_{0,r}+3K_{1}\Omega_{\Lambda}^{app}-5K_{1})+O(\epsilon^{2})\,.$
As expected all these relations reduce to
$\displaystyle\Omega_{\Lambda}^{true}$ $\displaystyle=$
$\displaystyle\Omega_{\Lambda}^{app},$ (65)
in the limit in which there is no inhomogeneity, i.e. when
$K_{1}=B_{1}=B^{\prime}_{1}=h_{0,r}=0$.
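As a small numerical sketch (the function and argument names are ours), Eq. (64) and its homogeneous limit, Eq. (65), can be coded directly:

```python
def omega_true_first_order(omega_app, K1=0.0, B1=0.0, B1p=0.0, h0r=0.0):
    """First-order relation between the apparent and true values of
    Omega_Lambda, Eq. (64); all inhomogeneity parameters default to zero."""
    corr = (27*B1*(omega_app - 1)**2 + 9*B1p*(omega_app - 1)
            - 9*h0r*omega_app**2 + 15*h0r*omega_app - 6*h0r
            + 3*K1*omega_app - 5*K1)
    return omega_app - 2*corr/(27*(omega_app - 1))

# Homogeneous limit, Eq. (65): the apparent value equals the true value
assert omega_true_first_order(0.7) == 0.7
```

Any nonzero inhomogeneity parameter shifts the inferred value away from the apparent one, which is the central point of this section.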
## IX Conclusions
We have derived for the first time the correction due to local large scale
inhomogeneities to the value of the apparent cosmological constant inferred
from low redshift supernovae observations. This analytical calculation shows
how the presence of a local inhomogeneity can affect the estimation of the
value of cosmological parameters, such as $\Omega_{\Lambda}$. These effects
should be properly taken into account both theoretically and observationally.
By performing a local Taylor expansion we analyzed the number of independent
degrees of freedom which determine the local shape of the inhomogeneity, and
considered the issue of central smoothness, showing how the same correction
can correspond to different inhomogeneity profiles. We will address in a future
work the estimation of the magnitude of this effect based on experimental
bounds which can be set on the size and shape of a local inhomogeneity and the
fitting of actual supernovae data. It is important to underline here that we
do not need a large void as normally assumed in previous studies of $LTB$
models in a cosmological context. Even a small inhomogeneity could in fact be
important.
In the future it will also be interesting to extend the same analysis to other
observables such as baryonic acoustic oscillations (BAO) or the cosmic
microwave background radiation (CMBR), and we will report about this in
separate papers. Another direction in which the present work could be extended
is modeling the local inhomogeneity in a more general way, for example
considering not spherically symmetric solutions. From this point of view our
calculation could be considered the monopole contribution to the general
effect due to a local large scale inhomogeneity of arbitrary shape. Given the
high level of isotropy of the Universe shown by other observations such as the
CMB radiation, we can expect the monopole contribution we calculated to be the
dominant one.
While this should be considered only as the first step towards a full
inclusion of the effects of large scale inhomogeneities in the interpretation
of cosmological observations, it is important to emphasize that we have
introduced a general definition of the concept of apparent and true value of
cosmological parameters, and shown the general theoretical approach to
calculate the corrections to the apparent values obtained under the standard
assumption of homogeneity.
###### Acknowledgements.
We thank A. Starobinsky and M. Sasaki for useful comments and discussions. Chen
and Romano are supported by the Taiwan NSC under Project No.
NSC97-2112-M-002-026-MY3, by Taiwan’s National Center for Theoretical Sciences
(NCTS). Chen is also supported by the US Department of Energy under Contract
No. DE-AC03-76SF00515.
## References
* (1) S. Perlmutter et al. [Supernova Cosmology Project Collaboration], “Measurements of Omega and Lambda from 42 High-Redshift Supernovae,” Astrophys. J. 517, 565 (1999) [arXiv:astro-ph/9812133].
* (2) A. G. Riess et al. [Supernova Search Team Collaboration], Astron. J. 116, 1009 (1998) [arXiv:astro-ph/9805201].
* (3) J. L. Tonry et al. [Supernova Search Team Collaboration], Astrophys. J. 594, 1 (2003) [arXiv:astro-ph/0305008].
* (4) R. A. Knop et al. [The Supernova Cosmology Project Collaboration], Astrophys. J. 598, 102 (2003) [arXiv:astro-ph/0309368].
* (5) B. J. Barris et al., Astrophys. J. 602, 571 (2004) [arXiv:astro-ph/0310843].
* (6) A. G. Riess et al. [Supernova Search Team Collaboration], Astrophys. J. 607, 665 (2004) [arXiv:astro-ph/0402512].
* (7) C. L. Bennett et al., Astrophys. J. Suppl. 148, 1 (2003) [arXiv:astro-ph/0302207];
* (8) D. N. Spergel et al., arXiv:astro-ph/0603449.
* (9) Y. Nambu and M. Tanimoto, arXiv:gr-qc/0507057.
* (10) T. Kai, H. Kozaki, K. i. Nakao, Y. Nambu and C. M. Yoo, Prog. Theor. Phys. 117, 229 (2007) [arXiv:gr-qc/0605120].
* (11) A. E. Romano, Phys. Rev. D 75, 043509 (2007) [arXiv:astro-ph/0612002].
* (12) D. J. H. Chung and A. E. Romano, Phys. Rev. D 74, 103507 (2006) [arXiv:astro-ph/0608403].
* (13) C. M. Yoo, T. Kai and K. i. Nakao, Prog. Theor. Phys. 120, 937 (2008) [arXiv:0807.0932 [astro-ph]].
* (14) S. Alexander, T. Biswas, A. Notari and D. Vaid, “Local Void vs Dark Energy: Confrontation with WMAP and Type Ia Supernovae,” arXiv:0712.0370 [astro-ph].
* (15) H. Alnes, M. Amarzguioui and O. Gron, Phys. Rev. D 73, 083519 (2006) [arXiv:astro-ph/0512006].
* (16) J. Garcia-Bellido and T. Haugboelle, JCAP 0804, 003 (2008) [arXiv:0802.1523 [astro-ph]].
* (17) J. Garcia-Bellido and T. Haugboelle, JCAP 0809, 016 (2008) [arXiv:0807.1326 [astro-ph]].
* (18) J. Garcia-Bellido and T. Haugboelle, JCAP 0909, 028 (2009) [arXiv:0810.4939 [astro-ph]].
* (19) S. February, J. Larena, M. Smith and C. Clarkson, Mon. Not. Roy. Astron. Soc. 405, 2231 (2010) [arXiv:0909.1479 [astro-ph.CO]].
* (20) J. P. Uzan, C. Clarkson and G. F. R. Ellis, Phys. Rev. Lett. 100, 191303 (2008) [arXiv:0801.0068 [astro-ph]].
* (21) M. Quartin and L. Amendola, Phys. Rev. D 81, 043522 (2010) [arXiv:0909.4954 [astro-ph.CO]].
* (22) C. Quercellini, P. Cabella, L. Amendola, M. Quartin and A. Balbi, Phys. Rev. D 80, 063527 (2009) [arXiv:0905.4853 [astro-ph.CO]].
* (23) C. Clarkson, M. Cortes and B. A. Bassett, JCAP 0708, 011 (2007) [arXiv:astro-ph/0702670].
* (24) A. Ishibashi and R. M. Wald, Class. Quant. Grav. 23, 235 (2006) [arXiv:gr-qc/0509108].
* (25) T. Clifton, P. G. Ferreira and K. Land, Phys. Rev. Lett. 101, 131302 (2008) [arXiv:0807.1443 [astro-ph]].
* (26) M. N. Celerier, K. Bolejko, A. Krasinski arXiv:0906.0905 [astro-ph.CO].
* (27) A. E. Romano, Phys. Rev. D 76, 103525 (2007) [arXiv:astro-ph/0702229].
* (28) A. E. Romano, JCAP 1001, 004 (2010) [arXiv:0911.2927 [astro-ph.CO]].
* (29) A. E. Romano, JCAP 1005, 020 (2010) [arXiv:0912.2866 [astro-ph.CO]].
* (30) A. E. Romano, Phys. Rev. D 82, 123528 (2010) [arXiv:0912.4108 [astro-ph.CO]].
* (31) N. Mustapha, C. Hellaby and G. F. R. Ellis, Mon. Not. Roy. Astron. Soc. 292, 817 (1997) [arXiv:gr-qc/9808079].
* (32) A. E. Romano, M. Sasaki and A. A. Starobinsky, arXiv:1006.4735 [astro-ph.CO].
* (33) G. Lemaitre, Annales Soc. Sci. Brux. Ser. I Sci. Math. Astron. Phys. A 53, 51 (1933).
* (34) R. C. Tolman, Proc. Nat. Acad. Sci. 20, 169 (1934).
* (35) H. Bondi, Mon. Not. Roy. Astron. Soc. 107, 410 (1947).
* (36) D. Edwards, Monthly Notices of the Royal Astronomical Society, 159, 51 (1972).
* (37) P. F. Byrd and M. D. Friedman, Handbook of Elliptic Integrals, Lange, Maxwell and Springer Ltd, London (1954).
* (38) A. E. Romano and M. Sasaki, arXiv:0905.3342 [astro-ph.CO].
* (39) M. N. Celerier, Astron. Astrophys. 353, 63 (2000) [arXiv:astro-ph/9907206].
* (40) C. Hellaby, PoS ISFTG, 005 (2009) [arXiv:0910.0350 [gr-qc]].
# An Off-Axis Relativistic Jet Model for the Type Ic supernova SN 2007gr
M. Xu11affiliation: Department of Astronomy, Nanjing University, Nanjing
210093, China; hyf@nju.edu.cn 22affiliation: Yukawa Institute for Theoretical
Physics, Oiwake-cho, Kitashirakawa, Sakyo-ku, Kyoto 606-8502, Japan
33affiliation: Key Laboratory of Modern Astronomy and Astrophysics (Nanjing
University), Ministry of Education, China , S. Nagataki22affiliation: Yukawa
Institute for Theoretical Physics, Oiwake-cho, Kitashirakawa, Sakyo-ku, Kyoto
606-8502, Japan , and Y. F. Huang11affiliation: Department of Astronomy,
Nanjing University, Nanjing 210093, China; hyf@nju.edu.cn 33affiliation: Key
Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry
of Education, China
###### Abstract
We propose an off-axis relativistic jet model for the Type Ic supernova SN
2007gr. Most of the energy ($\sim 2\times 10^{51}$ erg) in the explosion is
contained in non-relativistic ejecta which produces the supernova. The optical
emission is coming from the decay process of $\rm{}^{56}Ni$ synthesized in the
bulk SN ejecta. Only very little energy ($\sim 10^{48}$ erg) is contained in
the relativistic jet with initial velocity about $0.94$ times the speed of
light. The radio and X-ray emission comes from this relativistic jet. With
some typical parameters of a Wolf-Rayet star (progenitor of Type Ic SN), i.e.,
the mass loss rate $\dot{M}=1.0\times 10^{-5}~{}M_{\odot}~{}\rm yr^{-1}$ and
the wind velocity $v_{\rm w}=1.5\times 10^{3}~{}\rm km~{}s^{-1}$ together with
an observing angle of $\theta_{\rm obs}=63.3^{\circ}$, we can obtain the
multiband light curves that fit the observations well. All the observed data
are consistent with our model. Thus we conclude that SN 2007gr contains a weak
relativistic jet and we are observing the jet from off-axis.
gamma-ray bursts: general - supernovae: individual (SN 2007gr)
## 1 Introduction
Gamma-ray bursts (GRBs) are intense flashes of gamma-ray radiation in the
Universe (for recent reviews, see: Zhang 2007; Gehrels et al. 2009). It is
widely believed that outflows of GRBs are accelerated to ultra-relativistic
speeds (Mészáros 2002) and usually collimated with small jet angles (Frail et
al. 2001). On the other side, core-collapse supernovae (SNe) are the explosive
deaths of massive stars that occur when their iron cores collapse to form
neutron stars or black holes (Wilson 1971; Barkat et al. 1974; Wheeler &
Levreault 1985; Woosley & Janka 2005; Woosley & Bloom 2006; Nagataki et al.
2007; Nagataki 2009; Nagataki 2010). According to their spectra, core-collapse
SNe are classified as Type Ib, Ic or Type II SNe (Wheeler 1993; Wheeler et al.
1993; Filippenko 1997). SN explosions will release massive ejecta and usually
they are isotropic and non-relativistic.
Recent observations have revealed that many nearby GRBs are associated with
core-collapse SNe. Examples of such associations include GRB 980425/SN 1998bw
(Galama et al. 1998), GRB 030329/SN 2003dh (Berger et al. 2003), GRB 031203/SN
2003lw (Cobb et al. 2004), GRB 060218/SN 2006aj (Campana et al. 2006), GRB
091127/SN 2009nz (Cobb et al. 2010), and GRB 100316D/SN 2010bh (Fan et al.
2010) etc. These GRBs are usually soft in $\gamma$-ray spectra and are
ubiquitously longer than $2$ seconds. Thus they belong to the so called
long/soft GRBs.
The observed GRB-connected SNe are all Type Ic SNe. The most favored
progenitors for Type Ic supernovae are Wolf-Rayet stars (Maeder & Lequeux
1982; Begelman & Sarazin 1986; Woosley & Bloom 2006). However, the GRB-
connected Type Ic SNe should be different from ordinary Type Ic SNe (Soderberg
et al. 2006), because they need to launch relativistic jets to produce the
bursts of $\gamma$-rays. The kinetic energy of these SNe appears to be greater
than that of ordinary SNe. In some SNe associated with GRBs, most of the
explosion energy is in non-relativistic ejecta which produces the supernova,
while only a little energy is in the relativistic jets which are responsible for
making GRBs and their afterglows (Woosley & Bloom 2006).
While GRB-connected SN explosions can produce relativistic jets with Lorentz
factor as large as $\sim 100$, we believe that there were still some Type Ic
SNe that could only produce mildly relativistic jets with initial Lorentz
factor of a few (Huang et al. 2002; Granot & Loeb 2003). It has been argued
that this kind of low Lorentz factor jets will produce UV or soft X-ray
transients but not GRBs (Huang et al. 2002; Xu et al. 2010). The interesting
Type Ic supernova SN 2009bb, which was identified as having relativistic ejecta
without a detected GRB (Soderberg et al. 2010a), may be such an event.
Another event, the Type Ic supernova SN 2007gr, is much more controversial.
Soderberg et al. (2010b) proposed that SN 2007gr is an ordinary Type Ic
supernova with an almost constant expansion speed ($v\propto t^{-0.1}$). On
the contrary, Paragi et al.’s (2010) 5 GHz radio observations have revealed a
relativistic jet in SN 2007gr. While the opening angle of the jet is similar
to that of a typical GRB jet, its Lorentz factor seems to be far smaller than
a normal GRB outflow. In view of the non-relativistic expansion of the
photosphere of SN 2007gr (Valenti et al. 2008; Hunter et al. 2009), here, we
propose an off-axis relativistic jet model for SN 2007gr with typical
parameters of circumstellar medium (CSM) of a Wolf-Rayet star. Most of the
energy in the explosion is contained in non-relativistic ejecta which produces
the supernova, while only a small fraction of expansion energy is contained in
the relativistic jet. We first describe our dynamical model in Section 2. In
section 3 we describe the parameters used in our modeling. The model results
are shown in Section 4. Our conclusions and discussion are presented in
Section 5.
## 2 Model
On 2007 August 15.51 UT, SN 2007gr was discovered by the Katzman Automatic
Imaging Telescope in NGC 1058 (Madison & Li 2007), a bright spiral galaxy
belonging to a group of galaxies. The distance of one member of this group,
NGC 935, has been derived as about 9.3 Mpc (Silbermann et al. 1996). In our
study, we adopt this value for the distance of SN 2007gr. The explosion date
was suggested as 2007 August $13\pm 2$ (Soderberg et al. 2010b). Radio
observations with the Very Large Array (Soderberg et al. 2010b) and European
VLBI network (EVN) (Paragi et al. 2010) revealed a radio source in the place
of the supernova. SN 2007gr was classified as a Type Ic supernova according to
its spectra (Crockett et al. 2008; Valenti et al. 2008).
### 2.1 Relativistic jet
Paragi et al. (2010) reported evidence for the existence of a relativistic
jet in SN 2007gr (with expansion speed $v>0.6\rm c$) based on their radio
observations, while Valenti et al. (2008) and Hunter et al. (2009) measured a
non-relativistic velocity (from $\sim 11000~{}\rm km/s$ at 1 week after
explosion to $\sim 4800~{}\rm km/s$ at 50 days after explosion) for the
photospheric expansion of SN 2007gr. These observations suggest that the
average speed of the optical ejecta (i.e., optical expansion velocity) is non-
relativistic while the radio ejecta is much faster and relativistic. This
scenario is similar to that of Soderberg et al. (2005,2010b), but note that
the velocity of the radio ejecta here is relativistic. So, we suggest that SN
2007gr may phenomenologically contain two components: a non-relativistic
component and a relativistic component. The non-relativistic component should
contain most of the explosion energy and account for the photospheric
expansion and the optical emission. The relativistic component should be a jet
that contains only a small fraction of the explosion energy and account for
the radio emission. It is most likely driven by a central engine. Its behavior
should be similar to a GRB jet, but note that its initial Lorentz factor is
significantly smaller. In this paper, we will simulate the evolution of the
relativistic jet numerically and compare the results with observations.
In our framework, the optical emission of supernova should mainly come from
the decay process of $\rm{}^{56}Ni$ synthesized in the SN explosion. On the
contrary, radio and X-ray emission of supernova is explained as synchrotron
radiation from relativistic electrons which are accelerated by the shock
produced in the collision between the jet and circumstellar medium (Chevalier
1998; Soderberg et al. 2005). The shock process is very similar to the
external shock process of GRBs that gives birth to GRB afterglows.
### 2.2 Jet dynamics
The dynamical evolution of a relativistic jet that collides with surrounding
medium can be conveniently described by the equations proposed by Huang et al.
(1999, 2000). Their method can be widely used in both ultra-relativistic and
non-relativistic phases. In this study, we will adopt Huang et al.’s equations
to simulate the evolution of the external shock. The evolution of the bulk
Lorentz factor of the jet ($\gamma$), the swept-up mass of the medium ($m$), the radius
of the shock ($R$), and the half-opening angle of the jet ($\theta$) are
described by the following equations,
$\frac{d\gamma}{dm}=-\frac{\gamma^{2}-1}{M_{\rm ej}+\varepsilon
m+2(1-\varepsilon)\gamma m},$ (1)
$\frac{dm}{dR}=2\pi R^{2}(1-\cos\theta)nm_{\rm p},$ (2)
$\frac{dR}{dt}=\beta c\gamma(\gamma+\sqrt{\gamma^{2}-1}),$ (3)
$\frac{d\theta}{dt}=\frac{c_{\rm s}(\gamma+\sqrt{\gamma^{2}-1})}{R},$ (4)
where $M_{\rm ej}$ is the initial mass of the ejecta, $m_{p}$ is the mass of
the proton, and $\beta=\sqrt{\gamma^{2}-1}/\gamma$. The radiative efficiency
($\varepsilon$) is assumed to be zero because the ejecta becomes adiabatic a
few hours after the burst.
In the above equations, the velocity of the lateral expansion has been assumed
to be the sound speed $c_{\rm s}$, which can be calculated from (Dai et al.
1999)
$c_{\rm
s}^{2}=\frac{\hat{\gamma}(\hat{\gamma}-1)(\gamma-1)c^{2}}{1+\hat{\gamma}(\gamma-1)},$
(5)
where $\hat{\gamma}\approx(4\gamma+1)/(3\gamma)$ is the adiabatic index. The
number density of the circumstellar medium ($n$) is inversely proportional to
the square of the shock radius, i.e.
$n=\frac{\dot{M}}{4\pi m_{\rm p}v_{\rm w}R^{2}},$ (6)
where $\dot{M}$ is the mass-loss rate of the circumstellar wind, and $v_{\rm w}$
is the wind speed.
### 2.3 Synchrotron radiation process
As usual, in the comoving frame, the shock-accelerated electrons are assumed
to follow a power-law distribution according to their Lorentz factors
($\gamma_{e}$)
$\frac{dN_{\rm e}^{\prime}}{d\gamma_{\rm e}}\propto(\gamma_{\rm e}-1)^{-p},$ (7)
where $p$ is the power-law index. Note that in the bracket, 1 is subtracted
from $\gamma_{e}$ to account for the non-relativistic phase (Huang & Cheng
2003). For a single electron with a Lorentz factor of $\gamma_{e}$, the
synchrotron radiation power at frequency $\nu^{\prime}$ is given by
$P(\nu^{\prime},\gamma_{\rm e})=\frac{\sqrt{3}e^{3}B^{\prime}}{m_{\rm
e}c^{2}}F(\frac{\nu^{\prime}}{\nu_{\rm c}^{\prime}}),$ (8)
where $e$ is the electron charge, $B^{\prime}$ is the magnetic intensity,
$m_{\rm e}$ is the mass of electron, $\nu_{c}^{\prime}=3\gamma_{\rm
e}^{2}eB^{\prime}/(4\pi m_{\rm e}c)$ and the function $F(x)$ is defined as
$F(x)=x\int_{x}^{+\infty}K_{5/3}(k)dk,$ (9)
with $K_{5/3}(k)$ being the Bessel function.
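The function $F(x)$ of Eq. (9) is straightforward to evaluate numerically, for example with `scipy`:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def F(x):
    """Synchrotron function of Eq. (9): F(x) = x * int_x^inf K_{5/3}(k) dk."""
    integral, _ = quad(lambda k: kv(5.0 / 3.0, k), x, np.inf)
    return x * integral
```

$F(x)$ peaks at a value of about 0.92 near $x\approx 0.29$ and falls off roughly exponentially at large $x$.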
The magnetic energy density is assumed to be a fraction $\epsilon_{B}^{2}$ of
the total thermal energy density, i.e. (Dai et al. 1999),
$\frac{B^{\prime 2}}{8\pi}=\epsilon_{\rm
B}^{2}\frac{\hat{\gamma}\gamma+1}{\hat{\gamma}-1}(\gamma-1)nm_{\rm p}c^{2}.$
(10)
Therefore, the total synchrotron radiation power from all the shock
accelerated electrons is
$P(\nu^{\prime})=\int_{\gamma_{\rm e,min}}^{\gamma_{\rm e,max}}\frac{dN_{\rm
e}^{\prime}}{d\gamma_{\rm e}}P(\nu^{\prime},\gamma_{\rm e})d\gamma_{\rm e},$
(11)
where $\gamma_{\rm e,max}=10^{8}(B^{\prime}/1~{}\rm G)^{1/2}$ is the maximum
Lorentz factor of the electrons, $\gamma_{\rm e,min}=\epsilon_{\rm
e}(\gamma-1)m_{\rm p}(p-2)/[m_{\rm e}(p-1)]+1$ is the minimum Lorentz factor
of the electrons, and $\epsilon_{\rm e}$ is the electron energy fraction. Then we can
obtain the observed flux density at frequency $\nu$,
$F_{\nu}=\frac{1}{\gamma^{3}(1-\beta\cos\Theta)^{3}}\frac{1}{4\pi
D_{L}^{2}}P^{\prime}[\gamma(1-\beta\cos\Theta)\nu],$ (12)
where $D_{L}$ is the luminosity distance, and $\Theta$ is the angle between
the line of sight and the velocity of emitting material.
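Eq. (12) can be sketched as a small routine (the flat comoving spectrum below is purely illustrative); it makes explicit the strong Doppler suppression of off-axis emission:

```python
import numpy as np

def observed_flux(P_comoving, nu, gamma, Theta, D_L):
    """Observed flux density from the comoving spectral power, Eq. (12).
    P_comoving is a callable giving P'(nu') in the comoving frame."""
    beta = np.sqrt(gamma**2 - 1.0) / gamma
    a = gamma * (1.0 - beta * np.cos(Theta))   # inverse Doppler factor
    return P_comoving(a * nu) / (a**3 * 4.0 * np.pi * D_L**2)

flat = lambda nu: 1.0                          # illustrative flat spectrum
f_on = observed_flux(flat, 1.0, 3.0, 0.0, 1.0)
f_off = observed_flux(flat, 1.0, 3.0, 1.1, 1.0)   # Theta = theta_obs
```

For $\gamma=3$, emission viewed at $\Theta=1.1$ rad is suppressed by roughly three orders of magnitude relative to on-axis viewing, which is why no prompt emission is expected in this geometry.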
The synchrotron self-absorption effect (Rybicki & Lightman 1979) needs to be
considered in calculating the radio flux (Chevalier 1998; Kong et al. 2009).
Self-absorption reduces the synchrotron radiation flux by a factor of
$(1-e^{-\tau_{\nu}})/\tau_{\nu}$, where $\tau_{\nu}$ is the optical depth. The
self-absorption coefficient is given by
$k(\nu^{\prime})=\frac{p+2}{8\pi m_{\rm e}\nu^{\prime 2}}\int_{\gamma_{\rm
e,min}}^{\gamma_{\rm e,max}}\frac{dN_{\rm e}^{\prime}}{d\gamma_{\rm
e}}\frac{1}{\gamma_{\rm e}}P(\nu^{\prime},\gamma_{\rm e})d\gamma_{\rm e}.$
(13)
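For completeness, a numerically stable evaluation of the attenuation factor, written in the standard form $(1-e^{-\tau_{\nu}})/\tau_{\nu}$, which tends to 1 in the optically thin limit:

```python
import numpy as np

def absorption_factor(tau):
    """Self-absorption attenuation (1 - e^{-tau})/tau, stable for small tau."""
    tau = np.asarray(tau, dtype=float)
    small = tau < 1e-8
    safe = np.where(small, 1.0, tau)           # avoid 0/0 in the unused branch
    return np.where(small, 1.0 - tau / 2.0, -np.expm1(-safe) / safe)
```

Using `expm1` avoids catastrophic cancellation when $\tau_{\nu}\ll 1$, the regime relevant at late times when the source becomes optically thin.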
When calculating the observed flux, we integrate the emission over the whole
equal arrival time surface (Waxman 1997; Sari 1997; Panaitescu & Mészáros
1998) determined by
$t=\int\frac{1-\beta\cos\Theta}{\beta c}dR\equiv{\rm const.}$ (14)
## 3 Parameters
Before fitting the observations of SN 2007gr numerically, we first give a
rough estimation of several fundamental parameters of the jet based on
theoretical analysis of some observed facts.
(i) The initial bulk Lorentz factor $\gamma_{0}$. Paragi et al. (2010) have
derived a conservative lower limit of 1.7 mas for the angular diameter from
their observations made with the EVN and the Green Bank Telescope during 2007
November 5-6 ($\sim 85$ days after the supernova explosion). This will
correspond to an average expansion speed faster than 0.6 times the speed of
light ($\beta>0.6$). In view of the deceleration of the ejecta, we assume that
the initial bulk Lorentz factor is 3, i.e., $\beta_{0}\approx 0.94$.
(ii) The initial half-opening angle $\theta_{0}$. The typical half-opening
angle of GRB jets is about $0.1$ rad (Zhang 2007; Gao & Dai 2010), while
supernova outflows are much more isotropic. Since the Lorentz factor of the
jet involved in SN 2007gr is much smaller than a typical GRB jet, its opening
angle should be correspondingly much larger. Here we assume that the initial
half-opening angle of the jet in our model is $\theta_{0}=0.6$ rad ($\sim
34.4^{\circ}$).
(iii) The observing angle $\theta_{\rm obs}$, i.e. the angle between the line
of sight and the symmetry axis of the jet. The ten hours of EVN observations
of SN 2007gr during 2007 November 5 — 6 restored an elliptical beam with the
size of $15.26\times 6.85$ mas (Paragi et al. 2010). In view of the none
detection of prompt emission in the early stage of SN 2007gr, we suggest that
the jet is not pointed toward us, but off-axis. In this paper, the inclination
angle between the jet axis and our line of sight is taken as $\theta_{\rm
obs}=1.1~{}\rm rad$ ($\sim 63.3^{\circ}$). Such an observing angle, together
with the smaller initial half-opening angle of $\theta_{0}=0.6~{}\rm rad$
($\sim 34.4^{\circ}$), means that our line of sight is completely outside the
jet boundry, i.e. we are observing the mildly relativistic jet off-axisly.
(iv) The initial kinetic energy of the shock $E$. The total kinetic energy and
ejected mass of the SN ejecta are $\sim 1.5-3\times 10^{51}$ erg and $\sim
1.5-3.5~{}M_{\odot}$, respectively (Valenti et al. 2008; Hunter et al. 2009).
The total internal energy for the observed radio emission is $\sim
0.7-4.5\times 10^{46}$ erg (Soderberg et al. 2010b; Paragi et al. 2010).
Because most of the energy in the explosion is contained in non-relativistic
ejecta and only very little energy is contained in the relativistic jet, we
assume that the initial kinetic energy of the relativistic shock wave is
$E=1.1\times 10^{48}$ erg.
(v) The number density of the circumstellar medium ($n$). SN 2007gr is
classified as Type Ic SN according to its spectra (Crockett et al. 2008;
Valenti et al. 2008). The most favored progenitors for Type Ic supernovae are
Wolf-Rayet stars (Maeder & Lequeux 1982; Begelman & Sarazin 1986; Woosley &
Bloom 2006). For Wolf-Rayet stars, the typical mass loss rate is $\dot{M}\sim
0.6-17.1\times 10^{-5}~{}M_{\odot}~{}\rm yr^{-1}$, and the wind speed is
typically $v_{\rm w}\sim 0.7-5.5\times 10^{3}~{}\rm km~{}s^{-1}$ (Eenens &
Williams 1994; Cappa et al. 2004). In our calculations, we assume the
following parameters for the progenitor of SN 2007gr, i.e., $\dot{M}=1.0\times
10^{-5}~{}M_{\odot}~{}\rm yr^{-1}$ and $v_{\rm w}\sim 1.5\times 10^{3}~{}\rm
km~{}s^{-1}$. Then the number density of CSM can be calculated from Eq. (6).
The mass loss rate in our modeling is more than 10 times higher than the ones
adopted in previous studies (Paragi et al. 2010; Soderberg et al. 2010b).
(vi) The electron energy fraction $\epsilon_{\rm e}$. In our model, we assume
an evolving electron energy fraction $\epsilon_{\rm e}=\epsilon_{\rm
e,0}\times(R/R_{0})^{\alpha}$, so the parameter $\epsilon_{\rm e}$ evolves
with time. This is slightly different from a normal GRB model. In our
calculations, we assume $\epsilon_{\rm e,0}=0.1$, $R_{0}=3.0\times 10^{16}$
cm, $\alpha=5/4$, i.e.,
$\epsilon_{\rm e}=0.1\times(\frac{R}{3.0\times 10^{16}~{}\rm cm})^{5/4}.$ (15)
Note that our $\epsilon_{\rm e}$ remains smaller than unity throughout the
observing period.
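As a quick consistency check (our own sketch, not part of the paper), Eq. (15) can be evaluated directly; with these parameters $\epsilon_{\rm e}$ reaches unity only beyond $R\approx 1.9\times 10^{17}$ cm, so it stays below unity at smaller radii (the example radii below are illustrative):

```python
def epsilon_e(r_cm, eps0=0.1, r0=3.0e16, alpha=1.25):
    """Evolving electron energy fraction, eps_e = eps0 * (R / R0)^alpha."""
    return eps0 * (r_cm / r0) ** alpha

# eps_e equals eps0 at the reference radius R0 and grows as the shock expands;
# with these parameters it crosses unity only at R ~ 1.9e17 cm.
radii = [3.0e16, 1.0e17, 1.8e17]   # illustrative radii (cm)
values = [epsilon_e(r) for r in radii]
```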
For other parameters such as the magnetic energy fraction ($\epsilon_{\rm B}$)
and the power-law index of the energy distribution function of electrons
($p$), we take $\epsilon_{\rm B}=0.1$ and $p=3.3$, respectively, which are
similar to those adopted by Soderberg et al. (2010b).
## 4 Results
Using our off-axis relativistic jet model and the parameters above, we have
numerically calculated the multi-band emission of the jet in SN 2007gr. Here
we compare the theoretical light curves with observations.
Fig. 1 shows the light curves of the radio emission. From this figure, we see
that the theoretical curves at 1.4 GHz (Fig. 1a), 4.9 GHz (Fig. 1b), and 8.5
GHz (Fig. 1c) fit the observational data well (note that the last data point
in Fig. 1c is an upper limit). The curve at 22.5 GHz (Fig. 1d) is also
consistent with the observational upper limit.
Figure 1: Observed radio light curves of SN 2007gr and our best fit by using
the off-axis relativistic jet model. Panels (a), (b), (c), and (d) are light
curves at 1.4, 4.9, 8.5 and 22.5 GHz respectively. Observed data points are
taken from Soderberg et al. (2010b).
Fig. 2 illustrates the theoretical optical and X-ray light curves based on our
model. Our X-ray light curve is consistent with the upper limit of the
$Chandra$ observation (Soderberg 2007). The predicted optical emission is
significantly lower than the observed flux. This is reasonable: we believe the
observed optical emission should mainly come from the decay of $\rm{}^{56}Ni$
synthesized in the bulk SN ejecta (Arnett 1982; Sutherland & Wheeler 1984), as
typically happens in ordinary SN explosions.
Figure 2: Light curves of SN 2007gr in optical and X-ray bands. The solid and
dashed curves correspond to optical and X-ray emission from our off-axis
relativistic jet, respectively. The circular points are the observed R-band
data taken from Valenti et al. (2008). The triangle point is the upper limit
observed by $Chandra~{}X-ray~{}Observatory$ (Soderberg 2007).
Figs. 1 and 2 show our best fits to the observations, obtained by adopting
optimal values for the parameters involved. Finding these optimal parameters
required extensive trial and error. In Figs. 3 and 4, we go further and give
some examples of radio light curves to illustrate the effects of the various
parameters. In these figures, the observed 1.4 GHz data points are taken from
Soderberg et al. (2010b). The solid curve corresponds to our best fit using
the parameters described in Section 3. The dashed and dotted curves are drawn
with only one parameter altered. From these figures, we see that the
theoretical radio fluxes depend sensitively on $\theta$, $\dot{M}$, and
$v_{\rm w}$. The flux of a more isotropic outflow is higher than that of a
narrower outflow. A fast wind with a low mass loss rate tends to increase the
emission. The parameters $\gamma,~{}E,~{}\epsilon_{\rm B}$, and $\epsilon_{\rm
e}$ affect not only the intensity, but also the shape and peak time of the
light curve, which means that the radiation spectrum depends sensitively on
these parameters.
As is shown in Fig. 4b, the light curve is flattened by a positive $\alpha$.
If $\epsilon_{\rm e}$ is constant, as normally assumed in GRB modeling (i.e.,
$\alpha=0$), then the flux decreases much faster than in the positive-$\alpha$
scenario. A positive $\alpha$ is necessary in our fitting, which indicates
that $\epsilon_{\rm e}$ increases with time, i.e., more and more kinetic
energy is gradually transformed into internal energy.
Figure 3: Effects of the parameters $\gamma$, $\theta$, $E$, and
$\epsilon_{\rm B}$ on the 1.4 GHz radio light curve. Observed data points of
SN 2007gr are again taken from Soderberg et al. (2010b). The solid curves are
our best fit, while the dashed and dotted curves are light curves with only
one parameter altered, as marked in the panels. Figure 4: Effects of the
parameters $\epsilon_{\rm e}$, $\dot{M}$, and $v_{\rm w}$ on the 1.4 GHz radio
light curve. Observed data points and the solid curves are the same as in Fig.
3. The dashed and dotted curves are light curves with only one parameter
altered, as marked in the panels.
## 5 Discussion and Conclusions
In this paper, we propose an off-axis relativistic jet model to explain the
multiband emission of the Type Ic supernova SN 2007gr. Motivated by the EVN
observations of SN 2007gr (Paragi et al. 2010), we adopted a model in which
the outflow contains a jet component and the line of sight is off-axis, with
an observing angle of about $\theta_{\rm obs}=1.1$ rad. We suggest that the
radio emission comes from synchrotron radiation of relativistic electrons
accelerated by the shock produced by the collision between the jet and the
circumstellar medium. Our calculations show that a jet with an initial
half-opening angle of $\theta_{0}=0.6$ rad and an initial Lorentz factor of
$\gamma_{0}=3$ can reproduce the radio emission well. The optical and X-ray
emission is also consistent with the observational constraints.
The jet invoked in our modeling of SN 2007gr differs significantly from a
normal GRB jet. First, the Lorentz factor of normal GRB ejecta is typically
several hundred, while it is only a few here for SN 2007gr. Second, the
opening angle of our SN jet ($0.6~{}\rm rad$) is much larger than that of a
GRB jet ($\sim 0.1~{}\rm rad$). We note that the lateral expansion speed of
the jet is a relatively complicated factor; in our calculations, we have
assumed it to be the comoving sound speed. Although this is a reasonable
assumption, deviations are still possible and the dynamics may be affected.
Third, the initial energy of our SN jet (isotropically $\sim 10^{48}~{}\rm
erg$) is much smaller than that of a GRB jet ($\sim 10^{52}~{}\rm erg$). We
believe that the majority of the energy is deposited into the supernova
component in the SN 2007gr explosion. The above characteristics are thus not
difficult to understand.
Given that Wolf-Rayet stars are the progenitors of Type Ic SNe, we take the
CSM to be the typical wind of a Wolf-Rayet star, with mass-loss rate
$\dot{M}=1.0\times 10^{-5}~{}M_{\odot}~{}\rm yr^{-1}$ and $v_{\rm w}=1.5\times
10^{3}~{}\rm km~{}s^{-1}$. A fast wind with a low mass loss rate tends to
increase the flux of the radio emission. The mass loss rate in our model is
more than 10 times higher than that in previous studies, but it is more
typical for a Wolf-Rayet star. Note that in our modeling we have assumed the
CSM to be composed purely of protons and electrons. In reality, other nuclei
such as helium may also be present in the wind of Wolf-Rayet stars. Although
their presence in the CSM should not change the final results significantly,
this factor may need to be considered in the future, when observations
improve.
For the electron energy fraction ($\epsilon_{\rm e}$), one usually assumes a
constant value ($\alpha=0$) in GRB models. In the current study, however, a
positive $\alpha$ is necessary for our fitting. This means that $\epsilon_{\rm
e}$ increases with time, i.e., the fraction of energy going into internal
energy increases as the shock decelerates. Although a varying $\epsilon_{\rm
e}$ has also been employed in modeling some special GRBs and other transient
objects (Rossi & Rees 2003; Ioka et al. 2006; Kong et al. 2010), the
underlying physical mechanism that leads to the variation of $\epsilon_{\rm
e}$ is still quite uncertain.
The multiband emission and spectra of SN 2007gr are quite different from those
of normal GRBs. For GRBs, we can detect the prompt gamma-ray emission and
multiband afterglows. The peak frequency of the prompt emission of GRBs is
typically in the gamma-ray band (several hundred keV). The peak frequency of
GRB afterglows is typically in the UV or X-ray band, and the X-ray afterglow
can usually be observed. However, the emission from the jet of SN 2007gr is
more like a failed GRB than a normal GRB, since the outflow is only mildly
relativistic. It has been proposed that the prompt emission of a mildly
relativistic SN jet comes from the photosphere and is bright in the UV or soft
X-ray band (Xu et al. 2010). The peak frequency of the afterglow should then
be in the near-infrared or radio band. The X-ray emission should be weak and
hard to detect.
In the future, more and more relativistic SN ejecta will be detected. Such
relativistic ejecta are considered to be driven by a central engine. In fact,
supernova 2009bb was recently observed as another example of relativistic
supernova ejecta (Soderberg et al. 2010a). If the very early emission of
supernovae can be detected in the future, it will help determine the speed of
the supernova ejecta directly.
We thank the anonymous referee for helpful comments and suggestions. This work
was supported by the National Natural Science Foundation of China (Grant No.
10625313 and 11033002), the National Basic Research Program of China (973
Program, Grant No. 2009CB824800) and the Grant-in-Aid for the ‘Global COE
Bilateral International Exchange Program’ of Japan, Grant-in-Aid for
Scientific Research on Priority Areas No. 19047004 and Scientific Research on
Innovative Areas No. 21105509 by Ministry of Education, Culture, Sports,
Science and Technology (MEXT), Grant-in-Aid for Scientific Research (S) No.
19104006 and Scientific Research (C) No. 21540404 by Japan Society for the
Promotion of Science (JSPS).
## References
* Arnett (1982) Arnett, W. D. 1982, ApJ, 253, 785
* Barkat et al. (1974) Barkat, Z., Wheeler, J. C., Buchler, J.-R., & Rakavy, G. 1974, Ap&SS, 29, 267
* Begelman & Sarazin (1986) Begelman, M. C., & Sarazin, C. L. 1986, ApJ, 302, L59
* Berger et al. (2003) Berger, E. et al. 2003, Nature, 426, 154
* Campana et al. (2006) Campana, S. et al. 2006, Nature, 442, 1008
* Cappa et al. (2004) Cappa, C., Goss, W. M., & van der Hucht, K. A. 2004, AJ, 127, 2885
* Chevalier (1998) Chevalier, R. A. 1998, ApJ, 499, 810
* Cobb et al. (2004) Cobb, B. E., Bailyn, C. D., van Dokkum, P. G., Buxton, M. M., & Bloom, J. S. 2004, ApJ, 608, L93
* Cobb et al. (2010) Cobb, B. E., Bloom, J. S., Perley, D. A., Morgan, A. N., Cenko, S. B., & Filippenko, A. V. 2010, ApJ, 718, 150
* Crockett et al. (2008) Crockett, R. M. et al. 2008, ApJ, 672, L99
* Dai et al. (1999) Dai, Z. G., Huang, Y. F., & Lu, T. 1999, ApJ, 520, 634
* Eenens & Williams (1994) Eenens, P. R. J., & Williams, P. M. 1994, MNRAS, 269, 1082
* Fan et al. (2010) Fan, Y. Z., Zhang, B. B., Xu, D., Liang, E. W., & Zhang, B. 2010, ApJ, 762, 32
* Filippenko (1997) Filippenko, A. V. 1997, ARA&A, 35, 309.
* Frail et al. (2001) Frail, D. A. et al. 2001. ApJ, 562, L55
* Gehrels et al. (2009) Gehrels, N., Ramirez-Ruiz, E., & Fox, D. B. 2009, ARA&A, 47, 567
* Galama et al. (1998) Galama, T. J. et al. 1998, Nature 395, 670
* Gao & Dai (2010) Gao, Y., & Dai, Z.G. 2010, RAA, 10, 142
* Granot & Loeb (2003) Granot, J., & Loeb, A. 2003, ApJ, 593, L81
* Huang & Cheng (2003) Huang, Y. F., & Cheng, K. S. 2003, MNRAS, 341, 263
* Huang et al. (1999) Huang, Y. F., Dai, Z. G., & Lu, T. 1999, MNRAS, 309, 513
* Huang et al. (2000) Huang, Y. F., Gou, L. J., Dai, Z. G., & Lu, T. 2000, ApJ, 543, 90
* Hunter et al. (2009) Hunter, D. J. et al. 2009, A&A, 508, 371
* Ioka et al. (2006) Ioka, K., Toma, K., Yamazaki, R., & Nakamura, T. 2006, A&A, 458, 7
* Kong et al. (2009) Kong, S. W., Huang, Y. F., Cheng, K. S., & Lu T. 2009, Science in China Series G, 52, 2047
* Kong et al. (2010) Kong, S. W., Wong, A. Y. L., Huang, Y. F., & Cheng, K. S. 2010, MNRAS, 402, 409
* Madison & Li (2007) Madison, D., & Li, W. 2007, CBET, 1034, 1
* Maeder & Lequeux (1982) Maeder, A., & Lequeux, J. 1982, A&A, 114, 409
* Mészáros (2002) Mészáros, P. 2002, ARA&A, 40,137
* Nagataki et al. (2007) Nagataki, S., Takahashi, R., Mizuta, A., Takiwaki, T. 2007, ApJ, 659, 512
* Nagataki (2009) Nagataki, S. 2009, ApJ, 704, 937
* Nagataki (2010) Nagataki, S. 2010, PASJ, submitted (arXiv:1010.4964)
* Paragi et al. (2010) Paragi, Z. et al. 2010, Nature, 463, 516
* Rossi & Rees (2003) Rossi, E., & Rees, M. J. 2003, MNRAS, 339, 881
* Rybicki & Lightman (1979) Rybicki, G. B., & Lightman, A. P. 1979, _Radiative Processes in Astrophysics_ , New York: Wiley
* Silbermann et al. (1996) Silbermann, N. A. et al. 1996, ApJ, 470, 1
* Soderberg (2007) Soderberg, A. M. 2007, ATel, 1205, 1
* Soderberg et al. (2005) Soderberg, A. M. et al. 2005, ApJ, 621, 908
* Soderberg et al. (2006) Soderberg, A. M., Nakar, E., Berger, E., & Kulkarni, S. R. 2006, ApJ, 638, 930
* Soderberg et al. (2010) Soderberg, A. M. et al. 2010a, Nature, 463, 513
* Soderberg et al. (2010) Soderberg, A. M., Brunthaler, A., Nakar, E., & Chevalier, R. A. 2010b, ApJ, 725, 922
* Sutherland & Wheeler (1984) Sutherland, P. G., & Wheeler, J. C. 1984, ApJ, 280, 282
* Valenti et al. (2008) Valenti, S. et al. 2008, ApJ, 673, L155
* Wheeler (1993) Wheeler, J. C. 1993, PKAS, 8, 169
* Wheeler & Levreault (1985) Wheeler, J. C., & Levreault, R. 1985, ApJ, 294, L17
* Wheeler et al. (1993) Wheeler, J. C., Swartz, D. A., & Harkness, R. P. 1993, PhR, 227, 113
* Wilson (1971) Wilson, J. R. 1971, ApJ, 163, 209
* Woosley & Bloom (2006) Woosley, S. E., & Bloom, J. S. 2006, ARA&A, 44, 507
* Woosley & Janka (2005) Woosley, S. E., & Janka, T. 2005, Nat. Phys., 1, 147
* Xu et al. (2010) Xu, M., Nagataki, S., & Huang, Y. F. 2010, ApJ, submitted
* Zhang (2007) Zhang, B. 2007, ChJAA, 7, 1
# Underlying events in $p+p$ collisions at LHC energies

_Hot and Cold Baryonic Matter – HCBM 2010_

András G. Agócs$^{1,2}$ (agocs@rmki.kfki.hu), Gergely G. Barnaföldi$^{1}$, Péter Lévai$^{1}$

$^{1}$KFKI Research Institute for Particle and Nuclear Physics of the HAS, 29-33 Konkoly-Thege M. Str., H-1121 Budapest, Hungary
$^{2}$Eötvös University, 1/A Pázmány Péter Sétány, H-1117 Budapest, Hungary
###### Abstract
General properties of hadron production are investigated in proton-proton
collisions at LHC energies. We are interested in the characteristics of hadron
production outside the identified jet cones. We improve earlier definitions
and introduce surrounding rings/belts around the cones of identified jets. In
this way even multiple-jet events can be studied in detail. We define the
underlying event as the hadrons collected from outside the jet cones and
outside the surrounding belts, and investigate the features of these hadrons.
We use a PYTHIA-generated data sample of proton-proton collisions at
$\sqrt{s}=7$ TeV. This data sample is analysed by our new method and by the
widely applied CDF method. Angular correlations and momentum distributions
have been studied, and the obtained results are compared and discussed.
## 1 Introduction
The production of hadron showers with large transverse energy (known as
"jets") is one of the most interesting phenomena in high-energy proton-proton
(and hadron-hadron) collisions. Jet production is explained by the interaction
of the constituents of protons, namely partons: quarks, antiquarks, and
gluons. The perturbative quantum chromodynamics (pQCD) improved parton model
FieldQCD can describe the momentum distribution of the produced secondary
partons very successfully. The hadronization of the secondary partons is
described by parton fragmentation functions, and the resulting hadron showers
carry the energy and momenta of the original partons. Jet analysis aims at
identifying these original partons and determining their properties. However,
in proton-proton ($pp$) collisions the leading jet production is only one part
of the activity; in parallel, many partonic subprocesses take place with
smaller interaction energy.
First of all, multiple parton-parton collisions with relatively large momentum
exchange can happen with reasonable probability, generating the problem of
multiple-jet identification. At LHC energies these multiple-jet events have a
sizable yield, and their investigation will become one of the focal studies of
the strong interaction in forthcoming LHC experiments. These interactions form
a background event, and it is very important to separate these jets properly
from the leading jets, which is not an easy task. The basic problem is the
overlap of two (or more) hadron showers and the uncertainties in jet
identification. Below a limiting momentum the separation cannot be done;
moreover, the reconstruction depends slightly on the applied method,
especially with decreasing parton energies.
If the momentum exchange becomes small, the background events cannot be
separated into jets and must be considered together. This is the "underlying
event" (UE), where the application of pQCD is ambiguous and phenomenological
descriptions appear to catch certain features of the hadron ensemble. The
underlying event is especially important in cases where large hadron
multiplicities appear in the final state, leaving open the question of whether
these extra hadrons are produced during jet fragmentation or at the center of
the collision with small momentum exchange. In the latter case we expect
increasing hadron multiplicities with increasing energy even in $pp$
collisions, which indicates an enhancement of entropy production connected to
low-energy parton-parton interactions. This hadron ensemble could display
collective behaviours, including thermalization or the appearance of different
flow phenomena. The characteristics of the UE in $pp$ collisions can be
connected to the study of heavy-ion collisions, where the UE state is
dominantly collective and jet-matter effects can be investigated. Our aim is
to collect information about the jets and the UE, and to study their
measurable features if jet-UE interactions appear.
The underlying event was originally defined by the CDF Collaboration CDFUE at
Tevatron energies. Since multiple-jet events were very rare, the UE denoted
the hadrons of a proton-antiproton ($p{\bar{p}}$) collision remaining after
the leading jets were identified and the hadrons in the jet cones were removed
from the whole event. The CDF definition of the underlying event is a simple
tool. However, with this definition we are unable to analyze multiple-jet
production. We have increased the quality of the extracted information with a
modified definition introducing multiple surrounding belts (SB) around the
identified jets Agocs:2009 ; Agocs:2010 . This new definition immediately
leads to a more sophisticated analysis of the UE.
We have already performed an analysis with the new SB-method and summarized
our results in Ref. bggmex:2010 . However, many questions remained open; in
particular, a discussion of the quantitative differences between the
consequences of the CDF definition and the SB-method was missing.
Here we focus on this comparison. We recall the original CDF-based and our
SB-based definitions of the UE and compare them qualitatively. We perform
quantitative calculations and compare several physical quantities: (i) the
average hadron multiplicities within the defined areas relative to the total
event multiplicities; (ii) simple geometrical properties, such as azimuthal
angle distributions and pseudorapidity distributions in the close-to-
midrapidity region; (iii) transverse momentum distributions in the discussed
regions.
We are interested in these physical quantities because we want to use them to
look for jet-matter interactions and other collective effects in
nucleus-nucleus ($AA$) collisions. Such an investigation demands a better
understanding of the UE in $pp$ collisions, since the UE can serve as a
baseline for the detailed study of $AA$ results.
We have performed our analysis on a data sample generated by PYTHIA 6.2
pythia62 for $pp$ collisions at 7 TeV. The jets have been identified by the
cone-based UA1 jet finder algorithm UA1 , setting the jet cone angle to $R=0.4$.
## 2 Generalized definition of the UE
The basic definition of the underlying event is very simple: remove all
particles connected to identified jets from the total event, and the UE
consists of the remaining particles. This kind of definition may have some
dependence on the applied jet-finding algorithm (and its parameters),
especially on the momentum threshold of identified jets. However, if we trust
the correctness of recent state-of-the-art jet-finding algorithms (see Refs.
Salam:2009 ; Salam:2010 ), then the remaining particles are unambiguously
defined and we can focus on the properties of this particle ensemble.
Historically, the CDF Collaboration was the first to establish a plausible
definition of the UE CDFUE , which was commonly used for decades in various
analyses. This CDF definition corresponds to jet identification in one-jet
events, where the second jet is assumed to move automatically into the away
side. The CDF event geometry can be fixed easily, since the position of the
leading jet defines the toward region, and the away region is chosen
accordingly. The left side of Fig. 1 displays this definition.
Our concept was to improve the CDF-based definition on the following points:

* •

to develop a new UE definition that is strongly connected to the identified
jets (excluding all hadrons from all identified jets), independently of the
number of jets. Thus it can also be used for multijet events.

* •

to gain the capability of investigating the surrounding areas around
identified jets without major changes in the jet-finding parameters. Thus we
can study $pp$ and $AA$ collisions in the same frame and compare the obtained
results directly.
These requirements led us to the definition of surrounding belts (SB) around
the jet cones. The UE is thus defined by excluding hadrons both from the jet
cones and from the SBs. We use jet-finding algorithms to define the jets and
jet cones, then the concentric surrounding belts are selected by fixing the
width of the belts. The correctness of the jet-finding algorithm can be easily
checked by comparing the properties of the hadrons inside the jet cones and
the surrounding belts.
Figure 1: (Color online) The schematic view of the underlying event (UE)
defined by the CDF-method (left panel) and the SB-method with surrounding
belts (right panel). Detailed explanation can be found in the text and Ref.
bggmex:2010 .
Fig. 1 serves for the visual comparison of the two definitions. The CDF-based
definition is shown on the left side, the new SB-based definition on the right
side. The two definitions can be summarized by means of the azimuthal angle,
$\Phi$, and the (pseudo)rapidity, $\eta$ ($y$), plane:
CDF-based definition

of the UE is based on the subtraction of two areas from the whole measured
acceptance: one around the identified near jet (toward region) and another in
the opposite (away) direction. Both regions are
$\Delta\Phi\times\Delta\eta$-slices of the measured acceptance around the near
jet and opposite to it, with the full $\Delta\eta$ range and a limited
$\Phi$-range, namely $\Delta\Phi=\pm 60^{o}$ in azimuth.
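The CDF partition can be expressed as a simple function of the azimuthal distance from the leading jet; the sketch below (the function name is ours) uses the $\pm 60^{o}$ boundaries quoted above:

```python
import math

def cdf_region(phi_hadron, phi_leading_jet):
    """Classify a hadron into the CDF toward/transverse/away regions
    by its azimuthal distance from the leading jet."""
    dphi = abs(phi_hadron - phi_leading_jet) % (2.0 * math.pi)
    if dphi > math.pi:                   # fold into [0, pi]
        dphi = 2.0 * math.pi - dphi
    if dphi < math.pi / 3.0:             # |dPhi| < 60 degrees
        return "toward"
    elif dphi < 2.0 * math.pi / 3.0:     # 60 < |dPhi| < 120 degrees
        return "transverse"              # the CDF underlying event
    return "away"
```

The "transverse" branch is the CDF-defined UE; by construction it covers 1/3 of the acceptance.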
SB-based definition

means the subtraction of all hadrons connected to the identified jets and
their SB areas. Each jet is characterized by an approximately disc-like area,
around which concentric bands (or rings) are defined. If the jet cone angle
$R=\sqrt{\Delta\Phi^{2}+\Delta\eta^{2}}$ is given, then a first ('$SB_{1}$')
and a second ('$SB_{2}$') surrounding belt are defined for all jets, with
thicknesses $\delta R_{SB1}$ and $\delta R_{SB2}$. Usually $\delta
R_{SB1}=\delta R_{SB2}=0.1$ at jet-cone values $R=[0.5,1]$. This UE definition
is independent of the number of identified jets, and thus applies even when
there are several jets in one collision.
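Classifying a hadron then reduces to computing its $\eta$-$\Phi$ distance to the nearest jet axis; a sketch under the quoted settings ($\delta R_{SB1}=\delta R_{SB2}=0.1$; the function names, the jet list, and the cone value $R=0.4$ used in our analysis are assumptions):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Distance in the (eta, Phi) plane, with dPhi wrapped to [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def sb_region(eta, phi, jets, r_cone=0.4, d_sb1=0.1, d_sb2=0.1):
    """Classify a hadron as jet cone, SB1, SB2, or the UE2 underlying
    event, using its distance to the nearest identified jet axis."""
    d = min(delta_r(eta, phi, j_eta, j_phi) for j_eta, j_phi in jets)
    if d < r_cone:
        return "jet"
    if d < r_cone + d_sb1:
        return "SB1"
    if d < r_cone + d_sb1 + d_sb2:
        return "SB2"
    return "UE2"
```

Because the minimum runs over all identified jets, the same classification works unchanged for one-jet and multijet events.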
Furthermore, by increasing the $\delta R_{SBi}$ values, a similar (but not
identical) area can be covered by the SB-method as by the original CDF
definition. In this way the covered areas of the two methods can become close
to identical at large SB thickness.
Figure 2: (Color online.) The charged hadron multiplicities, $N_{i}$, of the
selected areas depending on the total multiplicities of the events, $N_{tot}$.
The CDF-based results are displayed on the left panel, SB-based results can be
found on the right panel. Details can be found in the text.
## 3 Qualitative comparison of different UE sets
We have already performed an SB-based analysis of a simulated data set for
$pp$ collisions at 7 TeV and published the results in Ref. bggmex:2010 . Here
we extend this early investigation to a larger data set of 500 000 $pp$ events
created by the PYTHIA-6 simulation pythia62 , applying the Perugia tune
Perugia0 . This sample is similar to the LHC10e14 sample created at the ALICE
experiment for 7 TeV $pp$ collisions. In the available sample the jets are
identified by the UA1 method UA1 . We restrict our analysis to a limited
sample, where cuts of $p_{THardMin}=10$ GeV/c and $p_{THardMax}=20$ GeV/c have
been applied. After these cuts we still had around 156 000 events containing
at least one jet. All quantitative results displayed in this paper are
extracted from this data sample.
Figure 3: (Color online.) The azimuthal angle ($\Phi$) dependence of particle
production in the CDF-based (left panel) and SB-based (right panel) analysis
of UE. Details can be found in the text.
Figure 4: (Color online.) The pseudorapidity ($\eta$) dependence in the CDF-
based (left panel) and SB-based (right panel) analysis of UE. Details can be
found in the text.
### 3.1 Multiplicity dependences in both analysis
First we analyse the charged hadron multiplicities in the various regions of
both geometrical definitions. We apply the UA1 jet-finding algorithm to
identify jets, then we collect the multiplicities in the different regions of
the CDF-method and the SB-method. Since the statistics are higher than
previously, we expect results with higher precision than in Ref. bggmex:2010 .
In Fig. 2 we display the multiplicities, $N_{i}$, of the different areas as
functions of the total multiplicities of the events. In the left panel the
displayed results are obtained by the CDF-method. Here $N_{i}$ refers to the
following contributions: the multiplicities of the identified 'leading/near
jet' (blue squares), the jet-excluded 'toward' area (green dots), the
'away'-side area in the opposite direction (purple dots), and the CDF-defined
UE yield (pink dots).
The right panel of Fig. 2 displays the multiplicities corresponding to the
SB-based definition of the underlying event and contains detailed information
on more areas: the multiplicities of the identified leading jet (blue
squares), the away-side jet (blue dots), and the multiplicities of the
surrounding belts $SB_{lead,1}$, $SB_{lead,2}$, $SB_{away,1}$, and
$SB_{away,2}$ (open red squares, open purple triangles, open red circles, and
open purple diamonds, respectively). Finally, orange crosses denote
multiplicities for the newly defined underlying event, which collects hadrons
outside all identified jets. We denote this quantity by $UE_{2}$. (Note that
all color codes correspond to the areas of Fig. 1.)
One can see in Fig. 2 that all multiplicities, $N_{i}$, increase linearly with
the total multiplicity in the region $N_{tot}<120$ in both cases. Applying the
CDF-based definition, the away region gives the largest contribution, and the
leading-jet contribution is the smallest. The transverse area (named the UE)
yields an intermediate-size contribution. Moreover, it is interesting to see
that after excluding the jet from the toward region, the remaining area has
almost the same multiplicity as the underlying event. This shows the
correctness of the jet-finding algorithm and the "safety" of the CDF-based
underlying event definition (e.g. 1/3 of the whole acceptance is far from any
jet-contaminated areas).
The multiplicities with the SB-based definition differ from the results of the
CDF-based analysis. The near-side jet and the away-side jet have similar
contributions, since the jet-finding algorithm works properly. The
contributions from the belts, $SB_{i}$, have small multiplicities, since they
cover very small areas. On the other hand, the contribution from the newly
defined underlying event, $UE_{2}$, dominates the plot, since it covers almost
the whole acceptance.
In general, the multiplicity fractions of the defined areas are almost
proportional to their geometrical surfaces; only the jet-content part violates
this dependence, as Fig. 2 displays. Thus, the SB-based $UE_{2}$ is
characterized by a larger multiplicity than the CDF-based UE.
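This near-proportionality can be checked with a back-of-the-envelope estimate (our own sketch; it treats each jet area as a full disc in the $\eta$-$\Phi$ plane and neglects acceptance-edge effects and overlaps):

```python
import math

def area_fractions(n_jets=2, r_cone=0.4, d_sb=0.1, eta_max=0.9):
    """Rough area fractions of jet cones, belts, and UE2 within the
    acceptance |eta| < eta_max, 0 <= Phi < 2*pi (edge effects neglected)."""
    total = 2.0 * eta_max * 2.0 * math.pi
    cone = n_jets * math.pi * r_cone**2
    sb1 = n_jets * math.pi * ((r_cone + d_sb)**2 - r_cone**2)
    sb2 = n_jets * math.pi * ((r_cone + 2 * d_sb)**2 - (r_cone + d_sb)**2)
    ue2 = total - cone - sb1 - sb2
    areas = {"jet": cone, "SB1": sb1, "SB2": sb2, "UE2": ue2}
    return {k: v / total for k, v in areas.items()}
```

With two jets, $R=0.4$, and $|\eta|<0.9$, the cones and belts together take up roughly 20% of the acceptance, so $UE_{2}$ indeed covers most of it, consistent with its dominant multiplicity in Fig. 2.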
Figure 5: (Color online) The transverse momentum spectra ($p_{T}$) for the
CDF-based (left panel) and SB-based (right panel) underlying event and related
geometrical regions. More details can be found in the text.
### 3.2 Test of geometry
In Fig. 3 we plot the azimuthal-angle ($\Phi$) distributions of the selected
areas for the CDF-based (left panel) and SB-based (right panel) definitions.
Here we use the color codes of Fig. 1 and Fig. 2.
Due to the geometrical roots of the underlying event definitions, the
azimuthal-angle distribution is clearly separated for the CDF-based definition
but rather complicated in the SB-based case. The CDF-based results display a
clear envelope curve including red "wings". The UE yield (indicated by
magenta) is almost flat. Opposite the leading jet, at $\Phi\approx\pi$, the
away-side region is characterized by a well-defined Gaussian-like
distribution.
One can see that the SB-based analysis is more sophisticated and carries more
information. The jet "wings" at the sides and the envelope curve are the same
as in the CDF-based analysis. However, the yields from the away-side jet and
the surrounding belts $SB_{away,1}$ and $SB_{away,2}$ show a complex structure
(blue dots, purple diamonds, and orange circles). The validity of our
underlying event definition ($UE_{2}$) is supported by the appearance of a
wide flat area in $\Phi$ (indicated by orange crosses), clearly separated from
the jet cone and SB contributions.
In Fig. 4 the pseudorapidity distributions are shown for the CDF-based (left
panel) and SB-based (right panel) analyses. Since we have limited acceptance
in the rapidity direction, we focus on the close-to-midrapidity region
$\eta\in\left[-0.9;0.9\right]$. Within this region, both underlying event
definitions yield flat rapidity distributions. Since the particle yields
correspond to the surfaces of the defined areas, the $SB$ yields are small,
while the $UE$ and $UE_{2}$ contributions dominate.
### 3.3 The $p_{T}$-distribution for the selected areas
We are interested in the transverse momentum spectra of the charged hadrons
detected in the different areas of the CDF- and SB-setups. Fig. 5 displays
results for $p_{T}\leq 60$ GeV/c. We can see relatively large differences
between the CDF and SB cases. The left panel shows the CDF-based results; the
color coding corresponds to Fig. 1. The obtained spectra are quite similar for
the different regions; a close-to-linear shift characterizes the differences,
which is connected to the respective geometrical surfaces. Thus the
jet-excluded toward and UE areas (pink and green dots) display the smallest
yields at high $p_{T}$. The spectra of the away-side hadrons (purple dots)
dominate the yields, but without any structure. It seems to us that the
CDF-based underlying event definition generates some separation between the
spectra of the selected areas.
On the other hand, the SB-based definition results in a better separation of
the different momentum distributions. The transverse momentum spectra within
the identified jet cones are plotted with blue squares and points, and
correspond to those of the CDF-based definition. The underlying event has a
similar spectrum; however, the SB-based definition results in a steeper
distribution at lower $p_{T}$ values. The spectra of the surrounding belts
have the lowest yields and start together with the SB-defined underlying event
spectrum, but with a cut-off at $p_{T}\approx 10$ GeV/c.
## 4 Conclusions
We have studied underlying event definitions in proton-proton collisions at
$\sqrt{s}=7$ TeV. We have investigated and compared the multiplicities,
azimuth-angle and pseudorapidity distributions, and transverse momentum
spectra for the CDF-based and SB-defined regions.
We tested the multiplicity fraction of the defined regions and repeated our
earlier study bggmex:2010 . Using a larger data sample, a clear separation of
the underlying event has been found. Comparing the CDF-based and SB-based
underlying event definitions, the latter gives a better separation in the
sense of the highest multiplicity relative to the total multiplicity of the
event, $N_{tot}$. This originates from the more sophisticated definition and
the larger geometrical surface considered. In contrast, the original
CDF-based definition does not give a good multiplicity separation, since the
away, transverse, and jet-excluded toward regions have similar multiplicity
content.
A clearer picture of the geometrical distribution emerges from the azimuth
angle and pseudorapidity distributions. The CDF-based azimuth angle
distribution is quite clear: it results in two small, almost flat $\Phi$
distributions of the UE. In the SB setup, the obtained azimuth angle
distributions overlap, but the whole underlying event region becomes a
well-defined background. Moreover, in the investigated close-to-midrapidity
region, $\left|\eta\right|\lesssim 1$, the magnitude of the constant $\eta$
distribution is proportional to the area of the CDF- or SB-defined regions.
Finally, we compared the transverse momentum spectra for the different
regions. The SB method gives a more refined separation of the charged hadron
yields from the different regions, and its general use is better suited to
studying the properties of the UE and any jet-matter interactions inside the
surrounding belts.
## Acknowledgments
This work was supported by Hungarian OTKA NK77816, PD73596 and Eötvös
University. One of the authors (GGB) acknowledges the support of the János
Bolyai Research Scholarship of the Hungarian Academy of Sciences.
## References
* (1) R.D. Field, Applications of Perturbative QCD, Addison-Wesley Publishing, 1989.
* (2) A. A. Affolder et al. [CDF Collaboration], Phys. Rev. D65, (2002) 092002
* (3) P. Lévai and A. G. Agócs, PoS EPS-HEP2009, (2009) 472
* (4) A. G. Agócs, G. G. Barnaföldi, and P. Lévai, J. Phys. Conf. Ser. 270, (2011) 012017.
* (5) G. G. Barnaföldi, A. G. Agócs, and P. Lévai, arXiv:1101.4155 [hep-ph], accepted in the Proc. of 5th Int. Workshop on High-$p_{T}$ Physics at LHC, Mexico City, 2010.
* (6) T. Sjöstrand, S. Mrenna, and P. Z. Skands, JHEP 0605, (2006) 026.
* (7) G. Arnison et al. [UA1 Collaboration], CERN-EP/83-118; Phys. Lett. B132, (1983) 214.
* (8) G. P. Salam, Eur. Phys. J. C67, (2010) 637.
* (9) M. Cacciari, G. P. Salam, S. Sapeta, JHEP 1004, (2010) 065.
* (10) P. Z. Skands, MCNET-10-08; CERN-PH-TH-2010-113; Phys. Rev. D82, (2010) 074018
* (11) S. Salur, Nucl. Phys. A830, (2009) 139.
|
arxiv-papers
| 2011-04-05T09:43:49 |
2024-09-04T02:49:18.118678
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "A. G. Ag\\'ocs, G. G. Barnaf\\\"oldi and P. L\\'evai",
"submitter": "Andr\\'as G. Ag\\'ocs",
"url": "https://arxiv.org/abs/1104.0787"
}
|
1104.0862
|
# Causal Rate Distortion Function and Relations to Filtering Theory
Photios A. Stavrou Ph.D. student at ECE Department, University of Cyprus,
Green Park, Aglantzias 91, P.O. Box 20537, 1687, Nicosia, Cyprus
(photios.stavrou@ucy.ac.cy). Charalambos D. Charalambous Professor at ECE
Department, University of Cyprus, Green Park, Aglantzias 91, P.O. Box 20537,
1687, Nicosia, Cyprus (chadcha@ucy.ac.cy).
###### Abstract
A causal rate distortion function (RDF) is defined, the existence of an
extremum solution is established via weak∗-convergence, and its relation to
filtering theory is discussed. The relation to filtering is obtained via a
causal constraint imposed on the reconstruction kernel so that it is
realizable, while the extremum solution is given for the stationary case.
###### keywords:
causal rate distortion function (RDF), realization, causal filter,
weak∗-convergence, optimal reconstruction kernel
###### AMS:
28A33, 46E15, 60G07
## 1 Introduction
Shannon’s information theory for reliable communication evolved over the
years without much emphasis on real-time realizability or causality imposed on
the communication sub-systems. In particular, the classical rate distortion
function (RDF) for source data compression deals with the characterization of
the optimal reconstruction conditional distribution subject to a fidelity
criterion [1], without regard for realizability. Hence, coding schemes which
achieve the RDF are not realizable.
On the other hand, filtering theory is developed by imposing real-time
realizability on estimators with respect to measurement data. Although both
reliable communication and filtering (state estimation for control) are
concerned with reconstruction of processes, the main underlying assumptions
characterizing them are different.
In this paper, the intersection of rate distortion function (RDF) and
realizable filtering theory is established by invoking the additional
assumption that the reconstruction kernel is realizable via causal operations,
while the optimal causal reconstruction kernel is derived. Consequently, the
connection between causal RDF, its characterization via the optimal
reconstruction kernel, and realizable filtering theory are established under
very general conditions on the source (including Markov sources). The
fundamental advantage of the new filtering approach based on causal RDF, is
the ability to ensure average or probabilistic bounds on the estimation error,
which is a non-trivial task when dealing with Bayesian filtering techniques.
The first relation between information theory and filtering via distortion
rate function is discussed by R. S. Bucy in [2], by carrying out the
computation of a realizable distortion rate function with square criteria for
two samples of the Ornstein-Uhlenbeck process. The earlier work of A. K.
Gorbunov and M. S. Pinsker [7] on $\epsilon$-entropy defined via a causal
constraint on the reproduction distribution of the RDF, although not directly
related to the realizability question pursued by Bucy, computes the causal RDF
for stationary Gaussian processes via power spectral densities. The
realizability constraints imposed on the reproduction conditional distribution
in [2] and [7] are different. The actual computation of the distortion rate or
RDF in these works is based on the Gaussianity of the process, while no
general theory is developed to handle arbitrary processes.
The main results described are the following.
1. 1)
Existence of the causal RDF using the topology of weak∗-convergence.
2. 2)
Closed form expression of the optimal reconstruction conditional distribution
for stationary processes, which is realizable via causal operations.
3. 3)
Realization procedure of the filter based on the causal RDF.
Next, we give a high-level discussion of Bayesian filtering theory and
present some aspects of the problem and the results pursued in this paper.
Consider a discrete-time process
$X^{n}\triangleq\\{X_{0},X_{1},\ldots,X_{n}\\}\in{\cal
X}_{0,n}\triangleq\times_{i=0}^{n}{\cal X}_{i}$, and its reconstruction
$Y^{n}\triangleq\\{Y_{0},Y_{1},\ldots,Y_{n}\\}\in{\cal
Y}_{0,n}\triangleq\times_{i=0}^{n}{\cal Y}_{i}$, where ${\cal X}_{i}$ and
${\cal Y}_{i}$ are Polish spaces (complete separable metric spaces). The
objective is to reconstruct $X^{n}$ by $Y^{n}$ causally subject to a
distortion or fidelity criterion.
In classical filtering, one is given a mathematical model that generates the
process $X^{n}$, $\\{P_{X_{i}|X^{i-1}}(dx_{i}|x^{i-1}):i=0,1,\ldots,n\\}$
often induced via discrete-time recursive dynamics, a mathematical model that
generates observed data obtained from sensors, say, $Z^{n}$,
$\\{P_{Z_{i}|Z^{i-1},X^{i}}$ $(dz_{i}|z^{i-1},x^{i}):i=0,1,\ldots,n\\}$ while
$Y^{n}$ are the causal estimates of some function of the process $X^{n}$ based
on the observed data $Z^{n}$. Thus, in classical filtering theory both models
which generate the unobserved and observed processes, $X^{n}$ and $Z^{n}$,
respectively, are given a priori. Fig. 1 illustrates the cascade block diagram
of the filtering problem.
Fig. 1: Block Diagram of Filtering Problem
In causal rate distortion theory one is given the process $X^{n}$, which
induces $\\{P_{X_{i}|X^{i-1}}(dx_{i}|x^{i-1}):~{}i=0,1,\ldots,n\\}$, and
determines the causal reconstruction conditional distribution
$\\{P_{Y_{i}|Y^{i-1},X^{i}}(dy_{i}|y^{i-1},x^{i}):~{}i=0,1,\ldots,n\\}$ which
minimizes the mutual information between $X^{n}$ and $Y^{n}$ subject to a
distortion or fidelity constraint, via a causal (realizability) constraint.
The filter $\\{Y_{i}:~{}i=0,1,\ldots,n\\}$ of $\\{X_{i}:~{}i=0,1,\ldots,n\\}$
is found by realizing the reconstruction distribution
$\\{P_{Y_{i}|Y^{i-1},X^{i}}(dy_{i}|y^{i-1}$,
$x^{i}):~{}i=0,1,\ldots,n\\}$ via a cascade of sub-systems as shown in Fig. 2.
Fig. 2: Block Diagram of Filtering via Causal Rate Distortion Function
The distortion function or fidelity constraint between $x^{n}$ and its
reconstruction $y^{n}$, is a measurable function defined by
$\displaystyle d_{0,n}:{\cal X}_{0,n}\times{\cal
Y}_{0,n}\mapsto[0,\infty],\>\>d_{0,n}(x^{n},y^{n})\triangleq\sum^{n}_{i=0}\rho_{0,i}(x^{i},y^{i})$
The mutual information between $X^{n}$ and $Y^{n}$, for a given distribution
${P}_{X^{n}}(dx^{n})$, and conditional distribution
$P_{Y^{n}|X^{n}}(dy^{n}|x^{n})$, is defined by
${I}(X^{n};Y^{n})\triangleq\int_{{\cal X}_{0,n}\times{\cal
Y}_{0,n}}\log\Big{(}\frac{P_{Y^{n}|X^{n}}(dy^{n}|x^{n})}{{P}_{Y^{n}}(dy^{n})}\Big{)}P_{Y^{n}|X^{n}}(dy^{n}|x^{n})\otimes{P}_{X^{n}}(dx^{n})$
(1)
Define the $(n+1)-$fold causal convolution measure
${\overrightarrow{P}}_{Y^{n}|X^{n}}(dy^{n}|x^{n})\triangleq\otimes^{n}_{i=0}P_{Y_{i}|Y^{i-1},X^{i}}(dy_{i}|y^{i-1},x^{i})-a.s.$
(2)
The realizability constraint for a causal filter is defined by
$\displaystyle{\overrightarrow{Q}}_{ad}\triangleq\Big{\\{}P_{Y^{n}|X^{n}}(dy^{n}|x^{n}):~{}P_{Y^{n}|X^{n}}(dy^{n}|x^{n})={\overrightarrow{P}}_{Y^{n}|X^{n}}(dy^{n}|x^{n})-a.s.\Big{\\}}$
(3)
The realizability condition (3) is necessary, otherwise the connection between
filtering and realizable rate distortion theory cannot be established. This is
due to the fact that
$P_{Y^{n}|X^{n}}(dy^{n}|x^{n})=\otimes_{i=0}^{n}{P}_{Y_{i}|Y^{i-1},X^{n}}(dy_{i}|y^{i-1},x^{n})-a.s.$,
and hence in general, for each $i=0,1,\ldots,n$, the conditional distribution
of $Y_{i}$ depends on future symbols $\\{X_{i+1},X_{i+2},\ldots,X_{n}\\}$ in
addition to the past and present symbols $\\{Y^{i-1},X^{i}\\}$.
Causal Rate Distortion Function. The causal RDF is defined by
${R}^{c}_{0,n}(D)\triangleq\inf_{{P}_{Y^{n}|X^{n}}(dy^{n}|x^{n})\in\overrightarrow{Q}_{ad}:~{}{E}\big{\\{}d_{0,n}(X^{n},Y^{n})\leq{D}\big{\\}}}I(X^{n};Y^{n})$
(4)
Note that realizability condition (3) is different from the realizability
condition in [2], which is defined under the assumption that $Y_{i}$ is
independent of $X_{j|i}^{*}\triangleq
X_{j}-\mathbb{E}\Big{(}X_{j}|X^{i}\Big{)},j=i+1,i+2,\ldots,$. The claim here
is that realizability condition (3) is more natural and applies to processes
which are not necessarily Gaussian with a squared-error distortion function.
Realizability condition (3) is weaker than the causality condition in [7],
defined by requiring that $X_{n+1}^{\infty}\leftrightarrow
X^{n}\leftrightarrow Y^{n}$ forms a Markov chain.
The point to be made regarding (4) is that (see also Lemma 3):
$\displaystyle{P}_{Y^{n}|X^{n}}(dy^{n}|x^{n})={\overrightarrow{P}}_{Y^{n}|X^{n}}(dy^{n}|x^{n})-a.s.{\Longleftrightarrow}$
$\displaystyle
I(X^{n};Y^{n})=\int\log\Big{(}\frac{{\overrightarrow{P}}_{Y^{n}|X^{n}}(dy^{n}|x^{n})}{{P}_{Y^{n}}(dy^{n})}\Big{)}{\overrightarrow{P}}_{Y^{n}|X^{n}}(dy^{n}|x^{n}){P}_{X^{n}}(dx^{n})\equiv{\mathbb{I}}(P_{X^{n}},{\overrightarrow{P}}_{Y^{n}|X^{n}})$
(5)
where ${\mathbb{I}}(P_{X^{n}},{\overrightarrow{P}}_{Y^{n}|X^{n}})$ points out
the functional dependence of $I(X^{n};{Y^{n}})$ on $\\{P_{X^{n}}$,
${\overrightarrow{P}}_{Y^{n}|X^{n}}\\}$.
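For intuition about the extremization in (4), it may help to recall how the classical (non-causal) RDF is computed numerically on finite alphabets. The sketch below is the standard Blahut-Arimoto fixed-point iteration with Lagrange parameter $s\leq 0$; it is background material rather than this paper's construction, since the causal RDF additionally imposes the realizability constraint (3). The function name and the particular example are illustrative assumptions.

```python
import math

def blahut_arimoto(p_x, rho, s, iters=300):
    """Standard Blahut-Arimoto iteration for the classical (non-causal)
    rate distortion function with Lagrange parameter s <= 0.  Alternates
    between the exponentially tilted kernel q(y|x) ~ nu(y) e^{s rho(x,y)}
    and its output marginal nu(y)."""
    nx, ny = len(p_x), len(rho[0])
    nu = [1.0 / ny] * ny                      # initial output marginal
    q = []
    for _ in range(iters):
        q = []
        for x in range(nx):                   # tilted-kernel update
            w = [nu[y] * math.exp(s * rho[x][y]) for y in range(ny)]
            z = sum(w)                        # normalizing constant
            q.append([wi / z for wi in w])
        nu = [sum(p_x[x] * q[x][y] for x in range(nx))
              for y in range(ny)]             # marginal update
    return q, nu

# Binary uniform source with Hamming distortion:
q, nu = blahut_arimoto([0.5, 0.5], [[0.0, 1.0], [1.0, 0.0]], s=-2.0)
```

By symmetry of this example the output marginal stays uniform, and the optimal kernel concentrates on the undistorted symbol as $s$ becomes more negative.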
The paper is organized as follows. Section 2 discusses the formulation on
abstract spaces. Section 3 establishes existence of the optimal minimizing
kernel, and Section 4 derives the stationary solution. Section 5 describes the
realization of the causal RDF. Throughout the manuscript, proofs are omitted
due to space limitations.
## 2 Problem Formulation
Let $\mathbb{N}^{n}\triangleq\\{0,1,\ldots,n\\}$,
$n\in\mathbb{N}\triangleq\\{0,1,2,\ldots\\}$. The source and reconstruction
alphabets, respectively, are sequences of Polish spaces $\\{{\cal
X}_{t}:t\in\mathbb{N}\\}$ and $\\{{\cal Y}_{t}:t\in\mathbb{N}\\}$, associated
with their corresponding measurable spaces $({\cal X}_{t},{\cal B}({\cal
X}_{t}))$ and $({\cal Y}_{t},{\cal B}({\cal Y}_{t}))$, $t\in\mathbb{N}$.
Sequences of alphabets are identified with the product spaces $({\cal
X}_{0,n},{\cal B}({\cal X}_{0,n}))\triangleq{\times}_{k=0}^{n}({\cal
X}_{k},{\cal B}({\cal X}_{k}))$, and $({\cal Y}_{0,n},{\cal B}({\cal
Y}_{0,n}))\triangleq\times_{k=0}^{n}({\cal Y}_{k},{\cal B}({\cal Y}_{k}))$.
The source and reconstruction are processes denoted by
$X^{n}\triangleq\\{X_{t}:t\in\mathbb{N}^{n}\\}$,
$X:\mathbb{N}^{n}\times\Omega\mapsto{\cal X}_{t}$, and by
$Y^{n}\triangleq\\{Y_{t}:t\in\mathbb{N}^{n}\\}$,
$Y:\mathbb{N}^{n}\times\Omega\mapsto{\cal Y}_{t}$, respectively. Probability
measures on any measurable space $({\cal Z},{\cal B}({\cal Z}))$ are denoted
by ${\cal M}_{1}({\cal Z})$. It is assumed that the $\sigma$-algebras
$\sigma\\{X^{-1}\\}=\sigma\\{Y^{-1}\\}=\\{\emptyset,\Omega\\}$.
###### Definition 1.
Let $({\cal X},{\cal B}({\cal X})),({\cal Y},{\cal B}({\cal Y}))$ be
measurable spaces in which $\cal Y$ is a Polish Space. A stochastic kernel on
$\cal Y$ given $\cal X$ is a mapping $q:{\cal B}({\cal Y})\times{\cal
X}\rightarrow[0,1]$ satisfying the following two properties:
1) For every $x\in{\cal X}$, the set function $q(\cdot;x)$ is a probability
measure (possibly finitely additive) on ${\cal B}({\cal Y}).$
2) For every $F\in{\cal B}({\cal Y})$, the function $q(F;\cdot)$ is ${\cal
B}({\cal X})$-measurable.
The set of all such stochastic kernels is denoted by ${\cal Q}({\cal Y};{\cal
X})$.
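As a concrete illustration of Definition 1: on finite alphabets, a stochastic kernel reduces to a row-stochastic matrix, where property 1) says each $q(\cdot;x)$ is a probability measure and property 2) (measurability) holds automatically. The sketch below is only for finite sets, not the general Polish-space setting of the paper, and its names are hypothetical.

```python
def is_stochastic_kernel(q, tol=1e-12):
    """Check the finite-alphabet version of Definition 1: for every x,
    q[x] must be a probability distribution on Y (nonnegative entries
    summing to 1).  Measurability (property 2) is trivial here."""
    for row in q:
        if any(p < 0 for p in row):
            return False
        if abs(sum(row) - 1.0) > tol:
            return False
    return True

# A valid kernel on Y = {0,1} given X = {0,1} (a binary symmetric channel):
q_bsc = [[0.9, 0.1],
         [0.1, 0.9]]
print(is_stochastic_kernel(q_bsc))          # True
print(is_stochastic_kernel([[0.5, 0.6]]))   # False: row sums to 1.1
```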
###### Definition 2.
Given measurable spaces $({\cal X}_{0,n},{\cal B}({\cal X}_{0,n}))$, $({\cal
Y}_{0,n},{\cal B}({\cal Y}_{0,n}))$, then
1) A Non-Causal Data Compression Channel is a stochastic kernel
$q_{0,n}(dy^{n};x^{n})\in{\cal Q}({\cal Y}_{0,n};{\cal X}_{0,n})$ which admits
a factorization into a non-causal sequence
$\displaystyle
q_{0,n}(dy^{n};x^{n})=\otimes_{i=0}^{n}q_{i}(dy_{i};y^{i-1},x^{n})$
where $q_{i}(dy_{i};y^{i-1},x^{n})\in{\cal Q}({\cal Y}_{i};{\cal
Y}_{0,i-1}\times{\cal X}_{0,n}),i=0,\ldots,n,~{}n\in\mathbb{N}$.
2) A Causally Restricted Data Compression Channel is a stochastic kernel
$q_{0,n}(dy^{n}$
$;x^{n})\in{\cal Q}({\cal Y}_{0,n};{\cal X}_{0,n})$ which admits a
factorization into a causal sequence
$\displaystyle
q_{0,n}(dy^{n};x^{n})=\otimes_{i=0}^{n}q_{i}(dy_{i};y^{i-1},x^{i})-a.s.,$
where $q_{i}\in{\cal Q}({\cal Y}_{i};{\cal Y}_{0,i-1}\times{\cal
X}_{0,i}),i=0,\ldots,n,~{}n\in\mathbb{N}$.
### 2.1 Causal Rate Distortion Function
In this subsection the causal RDF is defined. Given a source probability
measure ${\cal\mu}_{0,n}\in{\cal M}_{1}({\cal X}_{0,n})$ (possibly finitely
additive) and a reconstruction kernel $q_{0,n}\in{\cal Q}({\cal Y}_{0,n};{\cal
X}_{0,n})$, one can define three probability measures as follows.
(P1): The joint measure $P_{0,n}\in{\cal M}_{1}({\cal Y}_{0,n}\times{\cal
X}_{0,n})$:
$\displaystyle P_{0,n}(G_{0,n})$ $\displaystyle\triangleq$
$\displaystyle(\mu_{0,n}\otimes q_{0,n})(G_{0,n}),\>G_{0,n}\in{\cal B}({\cal
X}_{0,n})\times{\cal B}({\cal Y}_{0,n})$ $\displaystyle=$
$\displaystyle\int_{{\cal
X}_{0,n}}q_{0,n}(G_{0,n,x^{n}};x^{n})\mu_{0,n}(d{x^{n}})$
where $G_{0,n,x^{n}}$ is the $x^{n}-$section of $G_{0,n}$ at point ${x^{n}}$
defined by $G_{0,n,x^{n}}\triangleq\\{y^{n}\in{\cal Y}_{0,n}:(x^{n},y^{n})\in
G_{0,n}\\}$ and $\otimes$ denotes the convolution.
(P2): The marginal measure $\nu_{0,n}\in{\cal M}_{1}({\cal Y}_{0,n})$:
$\displaystyle\nu_{0,n}(F_{0,n})$ $\displaystyle\triangleq$ $\displaystyle
P_{0,n}({\cal X}_{0,n}\times F_{0,n}),~{}F_{0,n}\in{\cal B}({\cal Y}_{0,n})$
$\displaystyle=$ $\displaystyle\int_{{\cal X}_{0,n}}q_{0,n}(({\cal
X}_{0,n}\times F_{0,n})_{{x}^{n}};{x}^{n})\mu_{0,n}(d{x^{n}})=\int_{{\cal
X}_{0,n}}q_{0,n}(F_{0,n};x^{n})\mu_{0,n}(dx^{n})$
(P3): The product measure $\pi_{0,n}:{\cal B}({\cal X}_{0,n})\times{\cal
B}({\cal Y}_{0,n})\mapsto[0,1]$ of $\mu_{0,n}\in{\cal M}_{1}({\cal X}_{0,n})$
and $\nu_{0,n}\in{\cal M}_{1}({\cal Y}_{0,n})$ for $G_{0,n}\in{\cal B}({\cal
X}_{0,n})\times{\cal B}({\cal Y}_{0,n})$:
$\displaystyle\pi_{0,n}(G_{0,n})\triangleq(\mu_{0,n}\times\nu_{0,n})(G_{0,n})=\int_{{\cal
X}_{0,n}}\nu_{0,n}(G_{0,n,x^{n}})\mu_{0,n}(dx^{n})$
The mutual information between two sequences of random variables $X^{n}$ and
$Y^{n}$, denoted $I(X^{n};Y^{n})$, is defined via the Kullback-Leibler
distance (or relative entropy) between the joint probability distribution of
$(X^{n},Y^{n})$ and the product of the marginal probability distributions of
$X^{n}$ and $Y^{n}$, using the Radon-Nikodym derivative.
Hence, by the chain rule of relative entropy:
$\displaystyle I(X^{n};Y^{n})$ $\displaystyle\triangleq$
$\displaystyle\mathbb{D}(P_{0,n}||\pi_{0,n})=\int_{{\cal X}_{0,n}\times{\cal
Y}_{0,n}}\log\Big{(}\frac{d(\mu_{0,n}\otimes
q_{0,n})}{d(\mu_{0,n}\times\nu_{0,n})}\Big{)}d(\mu_{0,n}\otimes q_{0,n})$ (6)
$\displaystyle=$ $\displaystyle\int_{{\cal X}_{0,n}\times{\cal
Y}_{0,n}}\log\Big{(}\frac{q_{0,n}(dy^{n};x^{n})}{\nu_{0,n}(dy^{n})}\Big{)}q_{0,n}(dy^{n};dx^{n})\otimes\mu_{0,n}(dx^{n})$
$\displaystyle=$ $\displaystyle\int_{{\cal
X}_{0,n}}\mathbb{D}(q_{0,n}(\cdot;x^{n})||\nu_{0,n}(\cdot))\mu_{0,n}(dx^{n})\equiv\mathbb{I}(\mu_{0,n},q_{0,n})$
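The finite-alphabet analogue of this definition is easy to compute directly. The sketch below (illustrative only; the function name is an assumption) evaluates $I(X;Y)$ in nats as the relative entropy between a joint distribution and the product of its marginals, exactly as in (6).

```python
import math

def mutual_information(p_xy):
    """I(X;Y) in nats: Kullback-Leibler distance between the joint
    distribution p_xy[i][j] and the product of its marginals."""
    nx, ny = len(p_xy), len(p_xy[0])
    p_x = [sum(p_xy[i]) for i in range(nx)]
    p_y = [sum(p_xy[i][j] for i in range(nx)) for j in range(ny)]
    mi = 0.0
    for i in range(nx):
        for j in range(ny):
            if p_xy[i][j] > 0:  # terms with zero mass contribute nothing
                mi += p_xy[i][j] * math.log(p_xy[i][j] / (p_x[i] * p_y[j]))
    return mi

# Perfectly correlated bits give I(X;Y) = log 2; independent bits give 0.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))
```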
The next lemma relates causal product reconstruction kernels and conditional
independence.
###### Lemma 3.
The following are equivalent for each $n\in\mathbb{N}$.
1. 1)
$q_{0,n}(dy^{n};x^{n})={\overrightarrow{q}}_{0,n}(dy^{n};x^{n})$-a.s., defined
in Definition 2-2).
2. 2)
For each $i=0,1,\ldots,n-1$,
$Y_{i}\leftrightarrow(X^{i},Y^{i-1})\leftrightarrow(X_{i+1},X_{i+2},\ldots,X_{n})$,
forms a Markov chain.
3. 3)
For each $i=0,1,\ldots,n-1$, $Y^{i}\leftrightarrow X^{i}\leftrightarrow
X_{i+1}$ forms a Markov chain.
According to Lemma 3, for causally restricted kernels
$\displaystyle I(X^{n};Y^{n})$ $\displaystyle=$ $\displaystyle\int_{{\cal
X}_{0,n}\times{\cal
Y}_{0,n}}\log\Big{(}\frac{\overrightarrow{q}_{0,n}(dy^{n};x^{n})}{\nu_{0,n}(dy^{n})}\Big{)}{\overrightarrow{q}}_{0,n}(dy^{n};dx^{n})\otimes\mu_{0,n}(dx^{n})$
(7) $\displaystyle\equiv$
$\displaystyle{\mathbb{I}}(\mu_{0,n},\overrightarrow{q}_{0,n})$
where (7) states that $I(X^{n};Y^{n})$ is a functional of
$\\{\mu_{0,n},{\overrightarrow{q}}_{0,n}\\}$. Hence, the causal RDF is defined
by optimizing ${\mathbb{I}}(\mu_{0,n},{q}_{0,n})$ over
${q}_{0,n}{\in}Q_{0,n}(D)$, where $Q_{0,n}(D)=\\{q_{0,n}\in{\cal Q}({\cal
Y}_{0,n};{\cal X}_{0,n}):\int_{{\cal X}_{0,n}}\int_{{\cal
Y}_{0,n}}d_{0,n}(x^{n},y^{n})q_{0,n}(dy^{n};x^{n})\otimes\mu_{0,n}(dx^{n})\leq
D\\}$ is the distortion constraint set, subject to the realizability
constraint $q_{0,n}(dy^{n};x^{n})={\overrightarrow{q}}_{0,n}(dy^{n};x^{n})-a.s.$;
equivalently, via (7).
###### Definition 4.
$($Causal Rate Distortion Function$)$ Suppose
$d_{0,n}(x^{n},y^{n})\triangleq\sum^{n}_{i=0}\rho_{0,i}(x^{i},y^{i})$, where
$\rho_{0,i}:{\cal X}_{0,i}\times{\cal Y}_{0,i}\rightarrow[0,\infty)$, is a
sequence of ${\cal B}({\cal X}_{0,i})\times{\cal B}({\cal
Y}_{0,i})$-measurable distortion functions, and let
$\overrightarrow{Q}_{0,n}(D)$ (assumed to be non-empty) denote the average
distortion or fidelity constraint defined by
$\displaystyle\overrightarrow{Q}_{0,n}(D)\triangleq
Q_{0,n}(D)\bigcap{\overrightarrow{Q}}_{ad},~{}D\geq 0$
The causal RDF associated with the causally restricted kernel is defined by
$\displaystyle{R}^{c}_{0,n}(D)\triangleq\inf_{{{q}_{0,n}\in\overrightarrow{Q}_{0,n}(D)}}{\mathbb{I}}(\mu_{0,n},{q}_{0,n})$
(8)
## 3 Existence of Optimal Causal Reconstruction Kernel
In this section, appropriate topologies and function spaces are introduced and
existence of the minimizing causal product kernel in $(\ref{ex12})$ is shown.
### 3.1 Abstract Spaces
Let $BC({\cal Y}_{0,n})$ denote the vector space of bounded continuous real
valued functions defined on the Polish space ${\cal Y}_{0,n}$. Furnished with
the sup norm topology, this is a Banach space. The topological dual of
$BC({\cal Y}_{0,n})$ denoted by $\Big{(}BC({\cal Y}_{0,n})\Big{)}^{*}$ is
isometrically isomorphic to the Banach space of finitely additive regular
bounded signed measures on ${\cal Y}_{0,n}$ [5], denoted by $M_{rba}({\cal
Y}_{0,n})$. Let $\Pi_{rba}({\cal Y}_{0,n})\subset M_{rba}({\cal Y}_{0,n})$
denote the set of regular bounded finitely additive probability measures on
${\cal Y}_{0,n}$. Clearly if ${\cal Y}_{0,n}$ is compact, then
$\Big{(}BC({\cal Y}_{0,n})\Big{)}^{*}$ will be isometrically isomorphic to the
space of countably additive signed measures, as in [4]. Denote by
$L_{1}(\mu_{0,n},BC({\cal Y}_{0,n}))$ the space of all $\mu_{0,n}$-integrable
functions defined on ${\cal X}_{0,n}$ with values in $BC({\cal Y}_{0,n}),$ so
that for each $\phi\in L_{1}(\mu_{0,n},BC({\cal Y}_{0,n}))$ its norm is
defined by
$\displaystyle\parallel\phi\parallel_{\mu_{0,n}}\triangleq\int_{{\cal
X}_{0,n}}||\phi(x^{n},\cdot)||_{BC({\cal Y}_{0,n})}\mu_{0,n}(dx^{n})<\infty$
The norm topology $\parallel{\phi}\parallel_{\mu_{0,n}}$, makes
$L_{1}(\mu_{0,n},BC({\cal Y}_{0,n}))$ a Banach space, and it follows from the
theory of “lifting” [8] that the dual of this space is
$L_{\infty}^{w}(\mu_{0,n},M_{rba}({\cal Y}_{0,n}))$, denoting the space of all
$M_{rba}({\cal Y}_{0,n})$ valued functions $\\{q\\}$ which are
weak∗-measurable in the sense that for each $\phi\in BC({\cal Y}_{0,n}),$
$x^{n}\rightarrow q_{x^{n}}(\phi)\triangleq\int_{{\cal
Y}_{0,n}}\phi(y^{n})q(dy^{n};x^{n})$ is $\mu_{0,n}$-measurable and
$\mu_{0,n}$-essentially bounded.
### 3.2 Weak∗-Compactness and Existence
Define an admissible set of stochastic kernels associated with classical RDF
by
$\displaystyle Q_{ad}\triangleq L_{\infty}^{w}(\mu_{0,n},\Pi_{rba}({\cal
Y}_{0,n}))\subset L_{\infty}^{w}(\mu_{0,n},M_{rba}({\cal Y}_{0,n}))$
Clearly, $Q_{ad}$ is a unit sphere in $L_{\infty}^{w}(\mu_{0,n},M_{rba}({\cal
Y}_{0,n}))$. For each $\phi{\in}L_{1}(\mu_{0,n},BC({\cal Y}_{0,n}))$ we can
define a linear functional on $L_{\infty}^{w}(\mu_{0,n},M_{rba}({\cal
Y}_{0,n}))$ by
$\displaystyle\ell_{\phi}(q_{0,n})\triangleq\int_{{\cal
X}_{0,n}}\Big{(}\int_{{\cal
Y}_{0,n}}\phi(x^{n},y^{n})q_{0,n}(dy^{n};x^{n})\Big{)}\mu_{0,n}(dx^{n})$
This is a bounded, linear and weak∗-continuous functional on
$L_{\infty}^{w}(\mu_{0,n},M_{rba}({\cal Y}_{0,n}))$.
For $d_{0,n}:{\cal X}_{0,n}\times{\cal Y}_{0,n}\mapsto[0,\infty)$ measurable
and $d_{0,n}{\in}L_{1}(\mu_{0,n},BC({\cal Y}_{0,n}))$ the distortion
constraint set of the classical RDF is
$Q_{0,n}(D)\triangleq\\{q{\in}Q_{ad}:\ell_{d_{0,n}}(q_{0,n}){\leq}D\\}$.
###### Lemma 5.
For $\ell_{d_{0,n}}{\in}L_{1}(\mu_{0,n},BC({\cal Y}_{0,n}))$, the set
$Q_{0,n}(D)$ is weak∗-bounded and weak∗-closed subset of $Q_{ad}$.
Hence $Q_{0,n}(D)$ is weak∗-compact (compactness of $Q_{ad}$ follows from
Alaoglu’s Theorem [5]).
###### Lemma 6.
Let ${\cal X}_{0,n},{\cal Y}_{0,n}$ be two Polish spaces and $d_{0,n}:{\cal
X}_{0,n}\times{\cal Y}_{0,n}\mapsto[0,\infty]$, a measurable, nonnegative,
extended real valued function, such that for a fixed $x^{n}\in{\cal X}_{0,n}$,
$y^{n}\rightarrow d(x^{n},\cdot)$ is continuous on ${\cal Y}_{0,n}$, for
$\mu_{0,n}$-almost all $x^{n}\in{\cal X}_{0,n}$, and $d_{0,n}\in
L_{1}(\mu_{0,n},BC({\cal Y}_{0,n}))$. For any $D\in[0,\infty)$, introduce the
set
$\displaystyle{Q}_{0,n}(D)\triangleq\\{q_{0,n}\in{Q}_{ad}:\int_{{\cal
X}_{0,n}}\biggl{(}\int_{{\cal
Y}_{0,n}}d_{0,n}(x^{n},y^{n}){q}_{0,n}(dy^{n};x^{n})\biggr{)}\mu_{0,n}(dx^{n})\leq
D\\}$
and suppose it is nonempty.
Then ${Q}_{0,n}(D)$ is a weak∗-closed subset of $Q_{ad}$ and hence
weak∗-compact.
Next, we define the realizability constraint via causally restricted kernels
as follows
$\displaystyle{\overrightarrow{Q}}_{ad}=\Big{\\{}q_{0,n}\in{Q_{ad}}:q_{0,n}(dy^{n};x^{n})={\overrightarrow{q}}_{0,n}(dy^{n};x^{n})-a.s.\Big{\\}}$
which satisfies an average distortion constraint as follows:
$\displaystyle{\overrightarrow{Q}_{{0,n}}(D)}$ $\displaystyle\triangleq$
$\displaystyle Q_{0,n}(D)\bigcap{\overrightarrow{Q}}_{ad}$ $\displaystyle=$
$\displaystyle\Big{\\{}{q}_{0,n}\in{\overrightarrow{Q}}_{ad}:\ell_{d_{0,n}}({\overrightarrow{q}}_{0,n})\triangleq\int_{{\cal
X}_{0,n}}\biggr{(}\int_{{\cal
Y}_{0,n}}d_{0,n}(x^{n},y^{n}){\overrightarrow{q}}_{0,n}(dy^{n};x^{n})\biggr{)}\otimes\mu_{0,n}(dx^{n})\leq
D\Big{\\}}$
The following is assumed.
###### Assumption 7.
Let ${\cal X}_{0,n}$ and ${\cal Y}_{0,n}$ be Polish spaces and
$\overrightarrow{Q}_{ad}$ weak∗-closed.
###### Remark 8.
The conditions 1) ${\cal Y}_{0,n}$ is a compact Polish space, and 2) for all
$h(\cdot){\in}BC({\cal Y}_{n})$, the function $(x^{n},y^{n-1})\in{\cal
X}_{0,n}\times{\cal Y}_{0,n-1}\mapsto\int_{{\cal
Y}_{n}}h(y)q_{n}(dy;y^{n-1},x^{n})\in\mathbb{R}$ is continuous jointly in the
variables $(x^{n},y^{n-1})\in{\cal X}_{0,n}\times{\cal Y}_{0,n-1}$ are
sufficient for ${\overrightarrow{Q}}_{ad}$ to be weak∗-closed.
###### Theorem 9.
Suppose Assumption 7 and the conditions of Lemma 6 hold. For any
$D\in[0,\infty)$, introduce the set
$\displaystyle{\overrightarrow{Q}}_{0,n}(D)\triangleq\\{q_{0,n}\in{\overrightarrow{Q}}_{ad}:\int_{{\cal
X}_{0,n}}\biggl{(}\int_{{\cal
Y}_{0,n}}d(x^{n},y^{n}){\overrightarrow{q}}_{0,n}(dy^{n};x^{n})\biggr{)}\mu_{0,n}(dx^{n})\leq
D\\}$
and suppose it is nonempty.
Then ${\overrightarrow{Q}}_{0,n}(D)$ is a weak∗-closed subset of
${\overrightarrow{Q}}_{ad}$ and hence weak∗-compact.
###### Theorem 10.
Under the conditions of Theorem 9, $R^{c}_{0,n}(D)$ has a minimum.
###### Proof.
Follows from weak∗-compactness of $\overrightarrow{Q}_{ad}$ and lower
semicontinuity of $\mathbb{I}(\mu_{0,n},q_{0,n})$ with respect to $q_{0,n}$
for a fixed $\mu_{0,n}$.
## 4 Necessary Conditions of Optimality of Causal Rate Distortion Function
In this section the form of the optimal causal product reconstruction kernels
is derived under a stationarity assumption. The method is based on calculus of
variations on the space of measures [9].
###### Assumption 11.
The family of measures
$\overrightarrow{q}_{0,n}(dy^{n};x^{n})=\otimes^{n}_{i=0}q_{i}(dy_{i};y^{i-1},x^{i})-a.s.$,
is the convolution of stationary conditional distributions.
Assumption 11 holds for stationary process
$\\{(X_{i},Y_{i}):i\in\mathbb{N}\\}$ and
$\rho_{0,i}(x^{i},y^{i})\equiv\rho(T^{i}{x^{n}},T^{i}{y^{n}})$, where
$T^{i}{x^{n}}$ is the shift operator on $x^{n}$. Utilizing Assumption 11,
which holds for stationary processes and a single-letter distortion function,
the Gateaux differential of $\mathbb{I}(\mu_{0,n},{q}_{0,n})$ needs to be
taken in only one direction $\big{(}$since the $q_{i}(dy_{i};y^{i-1},x^{i})$
are stationary$\big{)}$.
The constrained problem defined by (8) can be reformulated using Lagrange
multipliers as follows (equivalence of constrained and unconstrained problems
follows similarly as in [9]).
${R}_{0,n}^{c}(D)=\inf_{{q}_{0,n}\in{\overrightarrow{Q}}_{ad}}\Big{\\{}{{\mathbb{I}}}(\mu_{0,n},{q}_{0,n})-s(\ell_{{d}_{0,n}}({q}_{0,n})-D)\Big{\\}}$
(9)
and $s\in(-\infty,0]$ is the Lagrange multiplier.
Note that ${\overrightarrow{Q}}_{ad}$ is a proper subset of the vector space
$L_{\infty}^{w}(\mu_{0,n},M_{rba}({\cal Y}_{0,n}))$, which represents the
realizability constraint. Therefore, one should introduce another set of
Lagrange multipliers to obtain an optimization on the vector space
$L_{\infty}^{w}(\mu_{0,n},M_{rba}({\cal Y}_{0,n}))$ without constraints.
###### Theorem 12.
Suppose $d_{0,n}(x^{n},y^{n})=\sum_{i=0}^{n}\rho(T^{i}{x^{n}},T^{i}{y^{n}})$
and Assumption 7 holds. The infimum in $(\ref{ex13})$ is attained at
${q}^{*}_{0,n}\in L_{\infty}^{w}(\mu_{0,n},{\Pi}_{rba}({\cal
Y}_{0,n}))\cap{\overrightarrow{Q}}_{ad}$ given by
$\displaystyle{q}_{0,n}^{*}(dy^{n};x^{n})$ $\displaystyle=$
$\displaystyle\overrightarrow{q}^{*}_{0,n}(dy^{n};x^{n})-a.s.$
$\displaystyle=$
$\displaystyle\otimes_{i=0}^{n}q_{i}^{*}(dy_{i};y^{i-1},x^{i})-a.s$
$\displaystyle=$
$\displaystyle\otimes_{i=0}^{n}\frac{e^{s\rho(T^{i}{x^{n}},T^{i}{y^{n}})}\nu^{*}_{i}(dy_{i};y^{i-1})}{\int_{{\cal
Y}_{i}}e^{s\rho(T^{i}{x^{n}},T^{i}{y^{n}})}\nu^{*}_{i}(dy_{i};y^{i-1})},~{}s\leq{0}$
and $\nu^{*}_{i}(dy_{i};y^{i-1})\in{\cal Q}({\cal Y}_{i};{\cal Y}_{0,{i-1}})$.
The causal RDF is given by
$\displaystyle{R}_{0,n}^{c}(D)$ $\displaystyle=sD-\sum_{i=0}^{n}\int_{{{\cal
X}_{0,i}}\times{{\cal Y}_{0,i-1}}}\log\Big{(}\int_{{\cal
Y}_{i}}e^{s\rho(T^{i}{x^{n}},T^{i}{y^{n}})}\nu^{*}_{i}(dy_{i};y^{i-1})\Big{)}$
$\displaystyle\quad\times{{\overrightarrow{q}}^{*}_{0,i-1}}(dy^{i-1};x^{i-1})\otimes\mu_{0,i}(dx^{i})$
If ${R}_{0,n}^{c}(D)>0$ then $s<0$ and
$\displaystyle\sum_{i=0}^{n}\int_{{\cal X}_{0,i}}\int_{{\cal
Y}_{0,i}}\rho(T^{i}{x^{n}},T^{i}{y^{n}}){\overrightarrow{q}}^{*}_{0,i}(dy^{i};x^{i})\mu_{0,i}(dx^{i})=D$
###### Remark 13.
Note that if the distortion function satisfies
$\rho(T^{i}{x^{n}},T^{i}{y^{n}})=\rho(x_{i},T^{i}{y^{n}})$ then
${q}^{*}_{i}(dy_{i};y^{i-1},x^{i})=q_{i}^{*}(dy_{i};y^{i-1},x_{i})-a.s.,~{}i\in{\mathbb{N}^{n}}$,
that is, the reconstruction kernel is Markov in $X^{n}$.
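A finite-alphabet sketch of the exponentially tilted extremum form in Theorem 12 may help fix ideas: each output is weighted by $e^{s\rho}$ against a reference measure $\nu$ and renormalized. The function name is hypothetical, and, as a simplification of this sketch, $\nu$ is taken independent of the past $y^{i-1}$.

```python
import math

def tilted_kernel(nu, rho, s):
    """q*(y|x) = e^{s rho(x,y)} nu(y) / Z(x): finite-alphabet analogue of
    the extremum form in Theorem 12, with nu independent of the past (a
    simplifying assumption of this sketch).  Requires s <= 0."""
    q = []
    for x in range(len(rho)):
        w = [math.exp(s * rho[x][y]) * nu[y] for y in range(len(nu))]
        z = sum(w)                 # the normalizing denominator in Theorem 12
        q.append([wi / z for wi in w])
    return q

# Hamming distortion on binary alphabets with a uniform reference nu:
rho = [[0.0, 1.0], [1.0, 0.0]]
q = tilted_kernel([0.5, 0.5], rho, s=-2.0)
# Rows are normalized; as s -> 0 the kernel relaxes back to nu itself.
```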
## 5 Realization of Causal Rate Distortion Function
Fig. 3 illustrates a cascade of sub-systems which realizes the causal RDF.
This is called source-channel matching in information theory [6]. It is also
described in [3] and [11] and is essential in control applications since this
technique allows us to design encoding/decoding schemes without delays.
Fig. 3: Block Diagram of Realizable Causal Rate Distortion Function
Examples to illustrate the concepts can be found in [3, 10].
## References
* [1] T. Berger, Rate Distortion Theory: A Mathematical Basis for Data Compression, Englewood Cliffs, NJ: Prentice-Hall, 1971.
* [2] R. S. Bucy, Distortion rate theory and filtering, IEEE Transactions on Information Theory, 28 (1982), pp. 336–340.
* [3] C. D. Charalambous and A. Farhadi, LQG optimality and separation principle for general discrete time partially observed stochastic systems over finite capacity communication channels, Automatica, 44 (2008), pp. 3181–3188.
* [4] I. Csiszár, On an extremum problem of information theory, Studia Scientiarum Mathematicarum Hungarica, 9 (1974), pp. 57–71.
* [5] N. Dunford and J. T. Schwartz, Linear Operators Part I: General Theory, John Wiley & Sons, Inc., Hoboken, New Jersey, 1988.
* [6] M. Gastpar, B. Rimoldi, and M. Vetterli, To code, or not to code: Lossy source-channel communication revisited, IEEE Transactions on Information Theory, 49 (2003), pp. 1147–1158.
* [7] A. K. Gorbunov and M. S. Pinsker, Asymptotic behavior of nonanticipative epsilon-entropy for Gaussian processes, Problems of Information Transmission, 27 (1991), pp. 361–365.
* [8] A. Ionescu Tulcea and C. Ionescu Tulcea, Topics in the Theory of Lifting, Springer-Verlag, Berlin, Heidelberg, New York, 1969.
* [9] D. G. Luenberger, Optimization by Vector Space Methods, John Wiley & Sons, Inc., New York, 1969.
* [10] P. A. Stavrou, C. D. Charalambous, and C. K. Kourtellaris, Realizable rate distortion function and Bayesian filtering theory, submitted to IEEE Information Theory Workshop (ITW), abs/1204.2980 (2012).
* [11] S. C. Tatikonda, Control Over Communication Constraints, PhD thesis, Mass. Inst. of Tech. (M.I.T.), Cambridge, MA, 2000.
|
arxiv-papers
| 2011-04-05T14:45:08 |
2024-09-04T02:49:18.130334
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Photios A. Stavrou and Charalambos D. Charalambous",
"submitter": "Photios Stavrou",
"url": "https://arxiv.org/abs/1104.0862"
}
|
1104.0900
|
# Power and momentum dependent soliton dynamics in lattices with longitudinal
modulation
Panagiotis Papagiannis, Yannis Kominis and Kyriakos Hizanidis School of
Electrical and Computer Engineering, National Technical University of Athens,
157 73 Athens, Greece
###### Abstract
Soliton dynamics in a large variety of longitudinally modulated lattices are
studied in terms of phase space analysis for an effective particle approach
and direct numerical simulations. Complex soliton dynamics are shown to depend
strongly on both their power/width and their initial momentum as well as on
lattice parameters. A rich set of qualitatively distinct dynamical features of
soliton propagation that have no counterpart in longitudinally uniform
lattices is illustrated. This set includes cases of enhanced soliton mobility,
dynamical switching, extended trapping in several transverse lattice periods,
and quasiperiodic trapping, which are promising for soliton control
applications.
###### pacs:
42.65.Sf, 42.65.Tg, 03.75.Lm, 05.45.Yv
## I Introduction
The propagation of spatially localized waves in inhomogeneous nonlinear media
is a subject of general interest that arises in many branches of physics, such
as light propagation in photonic lattices, matter-wave formation in Bose-
Einstein condensates and solid state physics. Periodic modulation of the
material properties in the form of a photonic lattice configuration has been
shown to alter solitary wave formation and dynamics so that lattice solitons
have features and properties with no counterpart in homogeneous media Chris .
The inhomogeneity of the medium can be either written in the form of waveguide
arrays and optical networks or dynamically formed and leads to additional
functionality with respect to wave control allowing for a variety of
interesting applications. In such lattices stable solitons can be formed in
specific positions determined by the lattice geometry. For the case of
photonic structures transversely modulated by monochromatic (single
wavenumber) variations of the linear or nonlinear refractive index stable
solitons are always formed in the positions corresponding to the minima of the
respective potential mono_static ; OpEx_08 . In the case of polychromatic
lattices the effective potential is different for solitons having different
power and spatial width so that the number and the positions of stable
solitons depend on their properties OpEx_08 . Therefore, the increased
complexity of the medium modulation results in additional functionality of the
photonic structure with respect to soliton discrimination.
An additional degree of freedom in lattice modulation is related to the
longitudinal periodic modulation of the properties of the medium. Such cases
have been studied in the motion of charged-particle wave packets along
periodic potentials in an a.c. electric field DuKe_86 ; Holthaus_92 ; DiSt_02
and in matter-wave dynamics in Bose-Einstein condensates under the influence
of a time-periodic potential MaMa_06 ; PoAlOst_08 ; StLo_08 . In the context
of light propagation in photonic lattices, previous studies include
diffraction managed solitons dm as well as Rabi-like oscillations and
stimulated mode transitions with linear and nonlinear wave states in properly
modulated waveguides and lattices rabi . Longitudinally modulated lattices
have been considered for soliton steering in dynamically induced lattices
Kominis_steering ; Tsopelas_steering ; Other_dragging . Dynamical localization
and control has also been studied in longitudinally modulated waveguide arrays
including periodically curved wgd_curved waveguides and waveguides with
periodically varying width wgd_varwidth as well as lattices with
longitudinally varying refractive index var_refractive .
In this work we investigate the effect of a wide variety of different
types of longitudinal lattice modulation on soliton dynamics and explore the
additional functionality of the corresponding photonic structure in terms of
soliton control based on both their power/width and momentum. Strong momentum
dependence of soliton dynamics results from the fact that longitudinally
modulated lattices actually carry momentum if seen as wave modulations of the
medium Kominis_steering . The two types of longitudinal modulations considered
are amplitude and transverse wavenumber modulation along the propagation
distance, being capable of describing several realistic configurations as well
as providing fundamental understanding of the dynamical features related to
even more general modulations. Soliton propagation in such inhomogeneous media
is modeled by a NonLinear Schrödinger (NLS) equation with transversely and
longitudinally varying linear refractive index. Utilization of this continuous
model allows for the study of soliton dynamical trapping and detrapping
dynamics in contrast to discrete models where discrete solitons are locked at
high powers to their input waveguides and are not allowed to travel sideways.
In contrast to many previous studies where longitudinal and transverse length
scales are well separated, our study focuses on cases where length scale
interplay takes place. Therefore, the periods of longitudinal lattice
modulation are comparable to periods of soliton oscillation in a
longitudinally homogeneous lattice. Resonances between these periodicities
result in drastic modification of soliton dynamics and give rise to a
plethora of novel dynamical features depending on soliton power and momentum,
including enhanced mobility, dynamical switching and trapping in extended
areas of the lattice, periodic and quasiperiodic oscillations. An effective
particle approach Kaup_Newell is utilized in order to obtain a nonintegrable
Hamiltonian system describing soliton "center of mass" motion. The complex
dynamics of the system are analyzed in terms of Poincare surfaces of section
providing a comprehensive illustration of soliton dynamical features. The
remarkable agreement of direct numerical simulations with the phase space
analysis of the effective particle dynamics suggests that the latter provides a
useful tool for analyzing dynamics of different solitons propagating in
lattices with different configurations as well as for designing lattices with
desirable features.
## II Model and effective particle approach
The wave propagation in an inhomogeneous medium with Kerr-type nonlinearity is
described by the perturbed NonLinear Schrödinger (NLS) equation
$i\frac{\partial u}{\partial z}+\frac{\partial^{2}u}{\partial
x^{2}}+2|u|^{2}u+\epsilon n(x,z)u=0$ (1)
where $z$ and $x$ are the normalized propagation distance and transverse
coordinates respectively. We consider relatively weakly inhomogeneous media
where the potential function $n(x,z)$ models longitudinally modulated
lattices. $\epsilon$ is a small dimensionless parameter indicating the
strength of the potential and $n(x,z)$ is periodic in $x$ and $z$.
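As a rough illustration of how Eq. (1) can be integrated numerically, the sketch below uses a standard Strang split-step Fourier scheme. The paper does not specify its numerical method, so this is only a minimal sketch; the function and parameter names are illustrative, and `n_xz` stands in for the lattice potential $n(x,z)$.

```python
import numpy as np

def split_step_nls(u0, n_xz, x, dz, nsteps, eps=1e-2):
    """Integrate i u_z + u_xx + 2|u|^2 u + eps*n(x,z) u = 0 (Eq. (1))
    with a Strang split-step Fourier scheme (illustrative sketch).
    n_xz(x, z) returns the lattice potential on the grid."""
    k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    half_linear = np.exp(-1j * k**2 * dz / 2.0)  # exact linear half-step in Fourier space
    u = u0.astype(complex)
    for step in range(nsteps):
        z = step * dz
        u = np.fft.ifft(half_linear * np.fft.fft(u))
        # nonlinear + potential phase rotation over a full step
        u *= np.exp(1j * (2.0 * np.abs(u)**2 + eps * n_xz(x, z)) * dz)
        u = np.fft.ifft(half_linear * np.fft.fft(u))
    return u
```

Because both sub-steps are pure phase rotations (in Fourier and physical space respectively), the scheme conserves the power $\int|u|^2dx$ to machine precision, which is a useful sanity check.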
The lattice potential has a fundamental transverse ($x$) periodicity
$(K_{0}^{-1})$ and we consider two types of longitudinal ($z$) modulations.
The first one will be called AM modulation, as we modulate the amplitude of
the potential function according to
$n_{AM}(x,z)=[A_{0}+\alpha\sin(K_{1}x+\phi)\sin(\Omega z)]\sin(K_{0}x)$ (2)
In Figs. 1(a)-(c) we demonstrate three examples of such lattice patterns in
the $(x,z)$ plane. For the second type of modulation, which we call WM, the
transverse wavenumber of the lattice varies periodically with $z$ as follows
$n_{WM}(x,z)=A_{0}\sin[K_{0}x+\alpha\sin(K_{1}x+\phi)\sin(\Omega z)]$ (3)
and the corresponding patterns are shown in Figs. 1(d)-(f). $A_{0}$ is the
strength of the unperturbed transverse lattice which is set to unity without
loss of generality, and $\alpha$ can be seen as the relative strength of the
AM or WM modulation. The wavenumber of the longitudinal modulation is $\Omega$
and the secondary transverse wavenumber $K_{1}$ corresponds to a nonuniform
longitudinal modulation depending also on the transverse coordinate, thus
allowing for the description of a very wide variety of lattice patterns: We
are able to model modulations where the positions of the potential maxima and
minima are either varying in-phase or out-of-phase as in Figs. 1(a)-(c) or are
curved as in Figs. 1(d)-(f), with respect to $z$.
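The two lattice families of Eqs. (2) and (3) are straightforward to evaluate on a grid; a minimal sketch follows, with parameter names taken from the text and default values chosen purely for illustration.

```python
import numpy as np

def n_am(x, z, A0=1.0, alpha=0.5, K0=1.0, K1=1.0, phi=0.0, Omega=0.1):
    """AM lattice of Eq. (2): amplitude modulated along z."""
    return (A0 + alpha * np.sin(K1 * x + phi) * np.sin(Omega * z)) * np.sin(K0 * x)

def n_wm(x, z, A0=1.0, alpha=0.5, K0=1.0, K1=1.0, phi=0.0, Omega=0.1):
    """WM lattice of Eq. (3): transverse wavenumber modulated along z."""
    return A0 * np.sin(K0 * x + alpha * np.sin(K1 * x + phi) * np.sin(Omega * z))
```

For $\alpha=0$ both expressions reduce to the unmodulated transverse lattice $A_{0}\sin(K_{0}x)$, as expected.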
The unperturbed NLS ($\epsilon=0$) has a fundamental single-soliton solution of
the form
$u(x,z)=\eta\,\mathrm{sech}[\eta(x-x_{0})]e^{i(\frac{v}{2}x+2\sigma)}$
where $\dot{x}_{0}=v$, $\dot{\sigma}=-v^{2}/8+\eta^{2}/2$. $\eta$ is the
amplitude or the inverse width of the soliton solution, $x_{0}$ its center
(called sometimes the center of mass due to the effective-particle analogy of
the solitons), $v$ the velocity, and $\sigma$ the nonlinear phase shift.
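A field of this form is the natural initial condition for simulations of Eq. (1). The sketch below assumes the standard $vx/2$ carrier phase of the NLS soliton (consistent with $\dot{\sigma}=-v^{2}/8+\eta^{2}/2$); the function name is illustrative.

```python
import numpy as np

def nls_soliton(x, eta=1.0, x0=0.0, v=0.0, sigma=0.0):
    """Fundamental soliton of the unperturbed NLS at one z-slice.
    eta: amplitude / inverse width, x0: center, v: velocity,
    sigma: nonlinear phase shift (names follow the text)."""
    return eta / np.cosh(eta * (x - x0)) * np.exp(1j * (0.5 * v * x + 2.0 * sigma))
```

The peak amplitude is $\eta$ and the power is $\int|u|^{2}dx=2\eta$, the effective-particle "mass" used below.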
The longitudinal evolution of the center $x_{0}$ under the lattice
perturbation is obtained by applying the effective-particle method. According
to this method we assume that the functional form and the properties (width,
power) of the soliton are conserved in the case of the weakly perturbed NLS.
This assumption has to be verified through numerical integration of Eq. (1) at
least to a good approximation. The equation for $x_{0}$ is equivalent to an
equation that describes the motion of a particle under the influence of an
effective potential (periodic in our case) and is given by
$m\ddot{x}_{0}=-\frac{\partial V_{eff}}{\partial x_{0}}$ (4)
where $m=\int{|u|^{2}dx}$ is the non-dimensional soliton power,
equivalent to the particle mass, and
$V_{eff}(x_{0})=2\int{n(x,z)|u(x;x_{0})|^{2}dx}$ is the aforementioned effective
potential determining soliton dynamics. By substituting the expressions of
fundamental soliton and lattice potential Eq. (2) or Eq. (3), we obtain
$m=2\eta$ and
$\displaystyle V_{eff}^{(AM)}$ $\displaystyle=$ $\displaystyle
2A_{0}\pi\epsilon\frac{K_{0}\sin(K_{0}x_{0})}{\sinh(\pi
K_{0}/2\eta)}+\pi\epsilon\alpha\sin(\Omega z)$ (5)
$\displaystyle\times\sum_{j=\pm}\frac{K_{j}\cos(K_{j}x_{0}+\phi)}{\sinh(\pi
K_{j}/2\eta)}$
with $K_{\pm}=K_{1}\pm K_{0}$, and
$V_{eff}^{(WM)}=2A_{0}\pi\epsilon\sum_{m=0}^{\infty}J_{m}(\alpha\sin(\Omega
z))\frac{K_{m}\cos(K_{m}x_{0}+\phi)}{\sinh(\pi K_{m}/2\eta)}$ (6)
with $K_{m}=mK_{1}+K_{0}$, for the two modulation types respectively.
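The AM effective potential of Eq. (5) can be evaluated directly. The sketch below uses illustrative defaults ($K_{1}=1/2$, as in the pattern of Fig. 1(c)) and assumes $K_{1}\neq K_{0}$ so that neither $\sinh(\pi K_{\pm}/2\eta)$ factor vanishes.

```python
import numpy as np

def v_eff_am(x0, z, eta, eps=1e-2, A0=1.0, alpha=0.5,
             K0=1.0, K1=0.5, phi=0.0, Omega=0.1):
    """Effective potential of Eq. (5) for the AM lattice.
    Assumes K1 != K0 (so K_- = K1 - K0 != 0); m = 2*eta is the mass."""
    static = (2.0 * A0 * np.pi * eps * K0 * np.sin(K0 * x0)
              / np.sinh(np.pi * K0 / (2.0 * eta)))
    modul = 0.0
    for Kj in (K1 + K0, K1 - K0):  # K_+ and K_-
        modul += Kj * np.cos(Kj * x0 + phi) / np.sinh(np.pi * Kj / (2.0 * eta))
    return static + np.pi * eps * alpha * np.sin(Omega * z) * modul
```

Setting $\alpha=0$ recovers the static potential of the unmodulated lattice, whose amplitude decays with $K_{0}/2\eta$ through the $\sinh$ factor, in line with the power dependence discussed below.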
The soliton dynamics, as described by Eq. (4), with the above effective
potentials are determined by a nonautonomous Hamiltonian system
$H=\frac{mv^{2}}{2}+V_{eff}(x,z)$ (7)
where $z$ is considered as "time" and $v=\dot{x_{0}}$ is the velocity of the
soliton center of mass.
For the case of zero longitudinal modulation ($\alpha=0$) the system is
integrable and soliton dynamics are completely determined by the form of the
$z$-independent effective potential. In this case solitons are either trapped,
oscillating between two maxima of the effective potential, or traveling along
the transversely inhomogeneous medium. For trapped solitons the oscillation
frequency varies from a maximum frequency
$\omega_{0}=K_{0}\sqrt{2\epsilon A_{0}\frac{\pi K_{0}/2\eta}{\sinh(\pi
K_{0}/2\eta)}}$ (8)
corresponding to small harmonic oscillations around the minimum of the
effective potential to a minimum zero frequency (infinite period)
corresponding to a heteroclinic orbit connecting the unstable saddle points
located at the maxima of the effective potential. The heteroclinic orbit is
the separatrix between trapped and traveling solitons. Conditions for soliton
trapping are determined by the initial soliton energy $H$, which depends on
both soliton position and velocity, and occurs when
$-\omega_{0}^{2}/K_{0}^{2}\leq 2\eta H<\omega_{0}^{2}/K_{0}^{2}$. Moreover,
solitons located at the stable points (corresponding to the minima of the
effective potential) can be detrapped if their initial velocities exceed a
critical value
$v_{cr}=\pm 2\sqrt{max(V_{eff})/m}=\pm 2\omega_{0}/K_{0}$ (9)
and travel in a direction determined by the sign of their initial velocity.
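Equations (8) and (9) are simple closed forms; a minimal sketch, with illustrative function names:

```python
import numpy as np

def omega0(eta, K0=1.0, eps=1e-2, A0=1.0):
    """Maximum trapped-oscillation frequency, Eq. (8)."""
    r = np.pi * K0 / (2.0 * eta)
    return K0 * np.sqrt(2.0 * eps * A0 * r / np.sinh(r))

def v_critical(eta, K0=1.0, eps=1e-2, A0=1.0):
    """Magnitude of the detrapping velocity threshold, Eq. (9)."""
    return 2.0 * omega0(eta, K0, eps, A0) / K0
```

Since $r/\sinh(r)\to 1$ as $r\to 0$, $\omega_{0}$ saturates to the upper bound $\sqrt{2\epsilon A_{0}}\,K_{0}$ for narrow solitons ($\eta\to\infty$), as shown in Fig. 2.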
The presence of an explicit $z$ dependence in the effective potential
($\alpha\neq 0$) results in the nonintegrability of the Hamiltonian system
which describes the soliton motion and allows for a plethora of qualitatively
different soliton evolution scenarios. The corresponding richness and
complexity of soliton dynamics opens a large range of possibilities for
interesting applications where the underlying inhomogeneity results in
advanced functionality of the medium. The nonintegrability results in the
destruction of the heteroclinic orbit (separatrix), allowing for dynamical
trapping and detrapping of solitons. Therefore, we can have conditions for
enhanced soliton mobility: Solitons with small initial energy can travel
through the lattice as well as hop between adjacent potential wells and
dynamically be trapped in a much wider area, including several potential
minima. Moreover, resonances between the frequencies of the internal
unperturbed soliton motion and the $z$-modulation frequencies result in
quasiperiodic trapping and symmetry breaking with respect to the velocity
sign. It is worth mentioning that all these properties depend strongly on
soliton characteristics, namely $\eta$, so that different solitons undergo
qualitatively and quantitatively distinct dynamical evolution in the same
inhomogeneous medium.
## III Results and Discussion
In the following, we consider relatively weakly modulated lattices being of
interest in most applications and set $\epsilon=10^{-2}$. For example, in
typical optical lattices consisting of nonlinear material of AlGaAs type
(refractive index $n_{0}\simeq 3.32$ at $\lambda_{0}\simeq 1.53\mu m$), when
the transverse dimension $x$ is normalized to $1\div 3\mu m$,
$\epsilon=10^{-2}$ corresponds to an actual refractive index contrast $\Delta
n_{0}=10^{-4}\div 10^{-5}$ which is relevant for experimental configurations.
For such cases a normalized propagation distance $z=500$, which is large
enough in order to observe the dynamical features presented in the following,
corresponds to an actual length of $13\div 123mm$. Moreover, as we show in the
following, weak lattices are characterized by complex but not completely
chaotic soliton dynamics, allowing for controllable evolution features. It is
expected that interesting soliton dynamics occur when the spatial scales of
the system, namely transverse and longitudinal modulation periods and soliton
width, become comparable, so that length scale competition takes place. As
shown from Eq. (8), the maximum oscillation frequency of a trapped soliton in
an unmodulated lattice depends strongly on the ratio $K_{0}/2\eta$, so that
solitons with different $\eta$ can have different $\omega_{0}$ as depicted in
Fig. 2, where it is shown that the value of $\omega_{0}$ saturates quickly to
an upper bound $\omega_{0}(\eta\rightarrow\infty)=\sqrt{2\epsilon A_{0}}K_{0}$
as the width $(\eta^{-1})$ of the soliton becomes comparable or smaller than
the fundamental transverse period ($2\pi/K_{0}$) of the unmodulated lattice.
The value of $\omega_{0}$ for each soliton and $K_{0}$ is crucial since by
introducing the longitudinal modulation, we expect that the frequencies $\Omega$
that have a significant effect on the soliton behavior are those that fulfill a
resonance condition with the unperturbed frequencies ($\alpha=0$) ranging from
0 to $\omega_{0}$. These resonant interactions give rise to qualitatively and
quantitatively different evolution of $(x_{0},v)$. In order to investigate the
new features the longitudinal modulation induces, we use the phase space
representation of (4) on a Poincare surface of section stroboscopically
produced with period $\Omega$. The phase space topology visualizes all the
qualitative features of soliton dynamics and provides a significant amount of
comprehensive information which is useful for categorizing and analyzing
characteristic cases and conceptual design of potential applications.
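The stroboscopic construction described above can be sketched as follows: integrate the effective-particle equation (4) with a fixed-step RK4 scheme and record one $(x_{0},v)$ point per modulation period $T=2\pi/\Omega$. The caller supplies the gradient $\partial V_{eff}/\partial x_{0}$; all names here are illustrative.

```python
import numpy as np

def poincare_section(x0, v0, eta, dv_eff_dx, Omega=0.1,
                     n_periods=100, steps_per_period=400):
    """Stroboscopic Poincare surface of section for Eq. (4):
    m*x0'' = -dV_eff/dx0 with m = 2*eta, sampled once per
    modulation period T = 2*pi/Omega (fixed-step RK4; a sketch).
    dv_eff_dx(x, z) must return the effective-potential gradient."""
    m = 2.0 * eta
    dz = (2.0 * np.pi / Omega) / steps_per_period

    def rhs(state, z):
        x, v = state
        return np.array([v, -dv_eff_dx(x, z) / m])

    state = np.array([x0, v0], dtype=float)
    z = 0.0
    points = []
    for _ in range(n_periods):
        for _ in range(steps_per_period):
            k1 = rhs(state, z)
            k2 = rhs(state + 0.5 * dz * k1, z + 0.5 * dz)
            k3 = rhs(state + 0.5 * dz * k2, z + 0.5 * dz)
            k4 = rhs(state + dz * k3, z + dz)
            state = state + (dz / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            z += dz
        points.append(state.copy())  # one (x0, v) point per modulation period
    return np.array(points)
```

Sweeping the initial conditions $(x_{0},v_{0})$ and overplotting the returned point sets produces surfaces of section of the kind shown in Figs. 3 and 4.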
As shown in Figs. 3 and 4, in both AM and WM lattices the phase space topology
changes significantly from the corresponding case with no longitudinal
modulation (depicted with solid curves). The topology depends strongly on both
the characteristics of the lattice and the characteristics of the soliton
($\eta$), so that not only different lattices but also different solitons on
the same lattice can have drastically different dynamical features. This is
shown in different columns and rows of Figs. 3 and 4, for transverse lattice
modulations of period $\Lambda_{0}=2\pi$ ($K_{0}=1$) and soliton parameter
$\eta=0.5\div 2.5$ corresponding to a FWHM value range $5.3\div 1$ in
normalized transverse dimensions. Before proceeding to the investigation of
specific characteristic cases of soliton evolution we discuss the qualitative
topological characteristics of phase spaces corresponding to different
lattices and soliton power ($\eta$). As expected for a nonintegrable system,
the typical Poincare surface of section consists of regular curves
corresponding to perturbed tori of nonresonant quasiperiodic orbits, resonant
islands around periodic closed orbits and densely filled chaotic regions
related to complex motion. In all cases, the separatrix between bounded and
unbounded motion has been destroyed and replaced by a chaotic region while the
regions corresponding to bounded motion in longitudinally modulated lattices
are downsized. The extent of these areas depends strongly on the amplitude of
the effective potential, which depends exponentially on the ratio $K_{m}/\eta$
as shown in Eqs. (5) and (6). Therefore, solitons having different power
($\eta$) have completely different trapping conditions in the same lattice,
while soliton mobility can be drastically enhanced. In addition, changing the
soliton power for a specific lattice configuration results in a drastic change
on the frequency spectrum of the unperturbed soliton oscillations through
$\omega_{0}(\eta)$. This leads to the possibility of fulfilling resonance
conditions with the frequency of the longitudinal lattice modulation $\Omega$
and to the appearance of periodic orbits and surrounding resonant islands.
Another qualitative differentiation between the phase spaces is the
bifurcation of the stable center corresponding to the minimum of the effective
potential in the unmodulated lattice to a saddle point seen, for example, in
Fig. 3(c),(d) for $\eta=1.75,2.5$ ($\Omega/\omega_{0}=1,0.97$
correspondingly). In addition to the power dependence of the phase space
structure, a common characteristic of all phase spaces for longitudinally
modulated lattices is the symmetry breaking of the Poincare surface of section
with respect to $v=0$ and $K_{0}x_{0}=-\pi/2$. This feature reveals a
selectivity of the lattice on initial velocity (momentum) and displacement:
Solitons with opposite velocities, or placed symmetrically about the former
minimum of the effective potential (which in the unmodulated case would remain
on the same orbit), now undergo qualitatively distinct evolution. In such
cases one of the solitons can be trapped while the other is detrapped. The
velocity selectivity is a direct consequence of the momentum that is
incorporated in the lattice pattern due to the biperiodic lattice potential
and, although it occurs in both AM and WM lattices, it appears more
prominent in the latter, as seen from the Poincare surfaces of section. The
dependence of motion on its initial displacement is related to the fact that
the local minima of the transverse lattice profile change periodically with
$z$.
Having discussed the topological features of phase spaces corresponding to
different combinations of lattice configurations and soliton powers ($\eta$),
we now show specific characteristic cases of soliton motion having
qualitatively distinct properties and being promising for potential
applications. Although similar behavior can be found in many other cases, we
focus our analysis on the cases depicted in Figs. 3(c) and 4(h) with parts of
them illustrated in more detail in Figs. 5(a) and (b). In the following
figures white thick lines and black dashed lines depict soliton center motion
as obtained from numerical simulation of the original perturbed NLS Eq. (1)
and the effective particle approach respectively, showing a remarkable
agreement. The thick solid lines depict numerical simulations for the case of
an unmodulated lattice for comparison.
In Fig. 6(a) we illustrate soliton propagation for an initial center position
and velocity corresponding to point (i) of Fig. 5(a). It is shown that the
soliton undergoes a complex evolution, being dynamically trapped between several
transverse lattice periods, in contrast to the case of the same initial conditions
in an unmodulated lattice. Soliton mobility is even more pronounced in the
case shown in Fig. 6(b) where complete dynamical detrapping takes place
allowing for a soliton to travel across the lattice. The latter is the typical
case of soliton evolution for initial conditions located outside the region
occupied by regular orbits in the corresponding phase spaces. It is worth
mentioning that the degree of mobility enhancement is not the same for
different solitons in the same lattice, as can be seen by comparing
the size of the regular area in Figs. 3(a)-(d). Initial conditions located on
the regular or island curves of the phase space, as point (iii) in Fig. 5(a),
correspond to quasiperiodic soliton oscillations as shown in Fig. 6(c). In
Fig. 6(d) the soliton evolution for an initial condition close to the saddle
point (iv) of Fig. 5(a) is depicted exhibiting an unstable (hyperbolic) type
of periodic orbit.
A very interesting evolution scenario is illustrated in Fig. 7(a) for initial
conditions depicted by points (i) and (ii) of Fig. 5(b), corresponding to
solitons having the same initial positions but opposite initial velocities:
The soliton with the positive velocity remains trapped and periodically
oscillating (since it corresponds to a center of a resonant island - exact
resonance) while the soliton with the negative velocity undergoes a dynamical
switching to a neighboring lattice position, where it undergoes persistent trapping.
This feature is promising for power- and velocity-dependent soliton
switching applications. In the same fashion, the effect of
symmetry breaking with respect to $v=0$ can lead to velocity sign dependent
soliton trapping or traveling across the lattice, as shown in Fig. 7(b). A
trapped soliton propagation having the form of a beat is depicted in Fig. 7(c)
for an initial condition corresponding to a resonant island.
As a final case we consider soliton dynamical trapping within an extended area
including many transverse periods of the lattice in a persistent periodic
fashion, as shown in Figs. 8(b) and (c). The initial conditions leading to
such evolution correspond to two families of interconnected resonant islands
(1) and (2) shown in Fig. 8(a). These islands are located outside the
separatrix of the respective unmodulated lattice showing that outside but
close to the separatrix the longitudinal modulation can induce interesting
persistent dynamical trapping for initial conditions for which traveling
solitons are expected in the unmodulated lattice. In accordance with previous
cases, the lattice possesses a selectivity property with respect to initial
soliton velocity direction, since there is no symmetry with respect to the
initial velocity axis. The type of evolution depicted in Figs. 8(b) and (c) is
interesting for power and momentum dependent multi-port soliton switching
applications, where different input/output ports correspond to different
potential minima.
## IV Conclusions
We have studied soliton dynamics in a large variety of longitudinally
modulated lattices in terms of direct numerical simulations as well as phase
space analysis for an effective particle approach. The remarkable agreement of
the results suggests that the effective particle approach and the phase space
analysis, with the utilization of Poincare surfaces of section, provide a
useful tool for studying complex soliton dynamics in such lattices as well as
analyzing and/or designing lattices having desirable properties. It is shown
that soliton dynamics depend strongly on their power and the corresponding
spatial width through its relation with the transverse lattice period as well
as on both magnitude and direction of their initial velocity. A large variety
of qualitatively distinct dynamical features of soliton propagation have been
shown that have no counterpart in longitudinally uniform lattices. In particular,
cases of enhanced soliton mobility, dynamical switching and trapping in
several transverse lattice periods as well as quasiperiodic and periodic
trapping have been shown, suggesting that the corresponding complexity of the
effective particle phase space gives rise to a plethora of dynamical features
which are promising for applications.
## References
* (1) D.N. Christodoulides and R.I. Joseph, Opt. Lett. 13, 794 (1988); H.S. Eisenberg, Y.Silberberg, R. Morandotti, and J.S. Aitchison, Phys. Rev. Lett. 81, 3383 (1998); J.W. Fleischer, M. Segev, N.K. Efremidis, and D.N. Christodoulides, Nature 422, 147 (2003); D. N. Christodoulides, F. Lederer, and Y. Silberberg, Nature 424, 817 (2003).
* (2) N.K. Efremidis and D.N. Christodoulides, Phys. Rev. A 67, 063608 (2003); P.J.Y. Louis, E.A. Ostrovskaya, C.M. Savage, and Y.S. Kivshar, Phys. Rev. A 67, 013602 (2003); D.E. Pelinovsky, A.A. Sukhorukov and Y.S. Kivshar, Phys. Rev. E 70, 036618 (2004).
* (3) Y. Kominis and K. Hizanidis, Optics Express 16, 12124 (2008).
* (4) D. H. Dunlap, and V. M. Kenkre, Phys. Rev. B 34, 3625 (1986).
* (5) M. Holthaus, Phys. Rev. Lett. 69, 351 (1992).
* (6) M. M. Dignam, and C. M. de Sterke, Phys. Rev. Lett. 88, 046806 (2002).
* (7) T. Mayteevarunyoo and B.A. Malomed, Phys. Rev. A 74, 033616 (2006).
* (8) D. Poletti, T.J. Alexander, E.A. Ostrovskaya, B. Li, and Yu.S. Kivshar, Phys. Rev. Lett. 101, 150403 (2008); D. Poletti, E.A. Ostrovskaya, T.J. Alexander, B. Li, Y.S. Kivshar, Physica D 238, 1338 (2009); J. Abdullaev, D. Poletti, E.A. Ostrovskaya,and Y.S. Kivshar, Phys. Rev. Lett. 105, 090401 (2010).
* (9) K. Staliunas, S. Longhi, Phys. Rev. A 78, 33606(2008).
* (10) H. S. Eisenberg, Y. Silberberg, R. Morandotti and J. S. Aitchison, Phys. Rev. Lett. 85, 1863 (2000); M. J. Ablowitz and Z. H. Musslimani, Phys. Rev. Lett. 87, 254102 (2001); A. Szameit, I.L. Garanovich, M. Heinrich, A. Minovich, F. Dreisow, A.A. Sukhorukov, T. Pertsch, D.N. Neshev, S. Nolte, W. Krolikowski, A. Tünnermann, A. Mitchell, and Y.S. Kivshar, Phys. Rev. A 78, R031801 (2008).
* (11) Y.V. Kartashov, V.A. Vysloukh and L. Torner, Phys. Rev. Lett. 99, 233903 (2007); K.G. Makris, D.N. Christodoulides, O. Peleg, M. Segev and D. Kip, Optics Express 16, 10309 (2008); K. Shandarova, C. E. Ruter, D. Kip, K. G. Makris, D. N. Christodoulides, O. Peleg, and M. Segev, Phys. Rev. Lett. 102, 123905 (2009).
* (12) Y. Kominis and K. Hizanidis, J. Opt. Soc Am. B 21, 562 (2004); J. Opt. Soc Am. B 22, 1360 (2005).
* (13) I. Tsopelas, Y. Kominis and K. Hizanidis K, Phys. Rev. E 74, 036613 (2006); Phys. Rev. E 76, 046609 (2007).
* (14) Y.V. Kartashov, L. Torner and D.N. Christodoulides, Opt. Lett. 30, 1378 (2005); C.R. Rosberg, I.L. Garanovich, A.A. Sukhorukov, D.N. Neshev, W. Krolikowski and Y.S. Kivshar, Opt. Lett. 31, 1498 (2006); G. Assanto, L.A. Cisneros, A.A. Minzoni, B.D. Skuse, N.F. Smyth and A.L. Worthy, Phys. Rev. Lett. 104, 053903 (2010).
* (15) S. Longhi, M. Marangoni, M. Lobino, R. Ramponi, P. Laporta, E. Cianci and V. Foglietti, Phys. Rev. Lett. 96, 243901 (2006); R. Iyer, J.S. Aitchison, J. Wan, M.M. Dignam and C. M. de Sterke, Optics Express 15, 3212 (2007); I.L. Garanovich, A. Szameit, A.A. Sukhorukov, T. Pertsch, W. Krolikowski, S. Nolte, D. Neshev, A. Tünnermann and Y.S. Kivshar, Optics Express 15, 9737 (2007); A. Szameit, I.L. Garanovich, M. Heinrich, A.A. Sukhorukov, F. Dreisow, T. Pertsch, S. Nolte, A. Tünnermann and Y.S. Kivshar, Nature Physics 5, 271 (2009).
* (16) K. Staliunas and C. Masoller, Optics Express 14, 10669 (2006); S. Longhi and K. Staliunas, Opt. Commun. 281, 4343 (2008).
* (17) A. Szameit, Y.V. Kartashov, F. Dreisow, M. Heinrich, T. Pertsch, S. Nolte, A. Tünnermann, V. A. Vysloukh, F. Lederer, and L. Torner, Phys. Rev. Lett. 102, 153901 (2009); A. Szameit, Y.V. Kartashov, M. Heinrich, F. Dreisow, R. Keil, S. Nolte, A. Tünnermann, V.A. Vysloukh, F. Lederer and L. Torner, Opt. Lett. 34, 2700 (2009); Y.V. Kartashov, A. Szameit, V.A. Vysloukh and L. Torner, Opt. Lett. 34, 2906 (2009).
* (18) D.J. Kaup and A.C. Newell, Proc. R. Soc. London, Ser. A 361, 413 (1978).
Figure 1: (Color online) Lattice patterns in ($x,z$) plane for AM (a-c) and WM
(d-f) modulation. $K_{0}=1$ and $\Omega=0.1$ in all cases and
$K_{1}=0,\phi=\pi/2$ (a,d), $K_{1}=1,\phi=0$ (b,e), $K_{1}=1/2,\phi=0$ (c,f).
Blue (light gray) colored areas indicate negative values and contain the
minima of the lattice potential, while red colored areas indicate positive
values where the corresponding maxima are located.
Figure 2: The spatial frequency of trapped soliton small oscillations
$\omega_{0}$ as a function of inverse width $\eta$ for different values of the
transverse wavenumber $K_{0}$ in an unmodulated lattice.
Figure 3: (Color online) Poincare surfaces of section of the effective-
particle system for AM lattice modulations. Each row corresponds to the same
lattice pattern (Figs.1(a)-(c)) and each column to the same soliton width with
values $\eta=0.5,1,1.75,2$ from left to right. In all cases $K_{0}=1$. From
top row to bottom, we have $\Omega=0.1325$, $\Omega=0.4$, $\Omega=0.15$
correspondingly. All surfaces of section are superimposed on the corresponding
phase space without longitudinal modulation (continuous, red curves). The
separatrix between the trapped and untrapped soliton motion in the unmodulated
lattice (blue curve) is also shown.
Figure 4: (Color online) Poincare surfaces of sections of the effective-
particle system for WM lattice modulations. Each row corresponds to the same
lattice pattern (Figs.1(d)-(f)) and each column to the same soliton width with
values $\eta=0.5,1,1.75,2$ from left to right. From top row to bottom, we
have $\Omega=0.1$, $\Omega=0.4$, $\Omega=0.175$ correspondingly. All surfaces
of section are superimposed on the corresponding phase space without
longitudinal modulation (continuous, red curves). The separatrix between the
trapped and untrapped soliton motion in the unmodulated lattice (blue curve)
is also shown.
Figure 5: (Color online) (a) Detail of the Poincare surface of section from
Fig.3(c) (AM modulation). The values of the initial parameters $(x_{0},u)$ at
the points depicted in figure are $(i)(-\pi/2,0.202)$, $(ii)(-1.57,-0.1425)$,
$(iii)(-1,-0.21)$, $(iv)(-1.5308,-0.005)$.(b) Detail of the Poincare surface
of section from Fig.4(h), (WM modulation). The values of the initial
parameters $(x_{0},u)$ at the points depicted in figure are $(i),(ii)(-2.5,\pm
0.07)$, $(iii),(iv)(-0.34,\pm 0.085)$, $(v)(-1.58,0.046)$.
Figure 6: (Color online) Evolution of the soliton center in an AM lattice
pattern with initial conditions taken from the corresponding points $(i)-(iv)$
of Fig. 5(a). (a) point $(i)$: enhanced mobility with dynamic trapping, (b)
point $(ii)$: enhanced mobility with complete detrapping (c) point $(iii)$:
quasiperiodic oscillations and (d) point $(iv)$: hyperbolic periodic
oscillations. Thick white curves correspond to results from direct numerical
simulations, dashed black curves from the effective particle model and solid
black ones from direct simulations for the unmodulated lattice. The
corresponding lattice pattern is shown in the background.
Figure 7: (Color online) Evolution of the soliton center in an WM lattice
pattern with initial conditions taken from the corresponding points $(i)-(v)$
of Fig. 5(b). (a) point $(i)$ (upper, yellow curve): Trapping with periodic
oscillations for the soliton with the positive initial velocity, point $(ii)$
(lower, white curve): Dynamic switching for the soliton with the negative
velocity. (b) point $(iii)$ (upper, yellow curve): Trapping with quasiperiodic
oscillations for the soliton with positive initial velocity, point $(iv)$
(lower, white curve): Detrapped motion for the soliton with the negative
velocity. (c) point $(v)$ (white curve): Beat oscillations. Thick white/yellow
curves correspond to results from direct numerical simulations, dashed black
curves from the effective particle model and solid black ones from direct
simulations for the unmodulated lattice. The corresponding lattice pattern is
shown in the background.
Figure 8: (Color online) (a) Poincaré surface of section for a soliton with
$\eta=5$ at a WM lattice potential of Fig. 1(d), superimposed on the phase
space of the unmodulated lattice. For illustration purposes, the surface of
section is produced with initial values at one of the four periods of the
phase space. (1) and (2) are two different families (belonging to different
tori) of interconnected resonant islands. (b) Soliton center trajectories of
each family with initial conditions (1) $(x_{0},v)=(0.05,0.178)$ and (2)
$(x_{0},v)=(-\pi/2,-0.28)$. Thick white curves correspond to results obtained
from direct numerical simulations, dashed black curves are obtained from the
effective particle model.
|
arxiv-papers
| 2011-04-05T17:33:30 |
2024-09-04T02:49:18.135672
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Panagiotis Papagiannis, Yannis Kominis and Kyriakos Hizanidis",
"submitter": "Panagiotis Papagiannis",
"url": "https://arxiv.org/abs/1104.0900"
}
|
1104.1124
|
(1) MPIfR, Auf dem Hügel 69, 53121 Bonn, Germany
(2) Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008, China
(3) National Radio Astronomy Observatory, 520 Edgemont Rd., Charlottesville, VA 22903, USA
(4) Joint Institute for VLBI in Europe, Postbus 2, 7990 AA Dwingeloo, The Netherlands
# Ammonia ($J$,$K$) = (1,1) to (4,4) and (6,6) inversion lines detected in the
Seyfert 2 galaxy NGC 1068
Y. Ao (1,2; email: ypao@mpifr-bonn.mpg.de), C. Henkel (1), J.A. Braatz (3), A. Weiß (1),
K. M. Menten (1), and S. Mühle (4)
We present the detection of the ammonia (NH3) ($J$,$K$) = (1,1) to (4,4) and
(6,6) inversion lines toward the prototypical Seyfert 2 galaxy NGC 1068, made
with the Green Bank Telescope (GBT). This is the first detection of ammonia in
a Seyfert galaxy. The ortho-to-para-NH3 abundance ratio suggests that the
molecule was formed in a warm medium of at least 20 K. For the NH3 column
density and fractional abundance, we find (1.09$\pm$0.23)$\times 10^{14}$ cm$^{-2}$ and
(2.9$\pm$0.6)$\times 10^{-8}$, respectively, from the inner $\sim$1.2 kpc of NGC
1068. The kinetic temperature can be constrained to 80$\pm$20 K for the bulk
of the molecular gas, while some fraction has an even higher temperature of
140$\pm$30 K.
###### Key Words.:
galaxies:individual: NGC 1068 – galaxies: Seyfert – galaxies: ISM – ISM:
molecules – radio lines: galaxies
## 1 Introduction
Studies of molecular gas provide information about the gas density, the
temperature, and kinematics within galaxies, help us to understand chemical
evolution over cosmic time and allow us to study the triggering and fueling
mechanisms of star formation and active galactic nuclei (AGN). Most stars are
formed in dense gas cores, which are embedded in giant molecular clouds
(GMCs). The star-forming activity is related to the dense gas and not to the
bulk of the GMC’s material (Gao & Solomon 2004). Determining the physical
properties of the dense gas in galaxies is therefore of fundamental importance
for our understanding of star formation and the evolution of galaxies. Among
the most commonly observed species are CO, CS, HCN, and HCO+. In local dark
clouds, the temperature can be constrained by observations of the $J$ = 1$-$0
transition of CO, both because this transition is opaque and thermalized and
because the emission is often extended enough to fill the beam of a single-
dish telescope. However, in external galaxies, filling factors are much less
than unity. Furthermore, CO, CS, HCN, and HCO+ suffer from a coupled
sensitivity to the kinetic temperature and spatial density, making an observed
line ratio consistent with both a high density at a low temperature and a low
density at a high temperature. Specific information about the individual
physical parameters therefore requires a molecular tracer that possesses a
singular sensitivity to either density or temperature. Ammonia (NH3) is such a
molecule.
Ammonia is widespread and its level scheme contains inversion doublets owing
to the tunneling of the nitrogen atom through the plane defined by the three
hydrogen atoms. The metastable ($J,K\thinspace=\thinspace J$) rotational
levels, for which the total angular momentum quantum number $J$ is equal to
its projection on the molecule’s symmetry axis, are not radiatively but
collisionally coupled. NH3 is therefore a good tracer of the gas kinetic
temperature. Another advantage is that the inversion lines are quite close in
frequency, thus allowing us to measure sources with similar beam sizes and to
use the same telescope-receiver combination, which minimizes calibration
uncertainties of the measured line ratios. As a consequence, NH3 is widely
studied to investigate the physical properties of dark clouds and massive
star-forming regions in our Galaxy (e.g., Ho & Townes 1983; Walmsley &
Ungerechts 1983; Bourke et al. 1995; Ceccarelli et al. 2002; Pillai et al.
2006; Wilson et al. 2006). However, in spite of the high sensitivity and
stable baselines of present state-of-the-art facilities, ammonia multilevel
studies in extragalactic sources are still rare and are limited to the Large
Magellanic Cloud, IC 342, NGC 253, M 51, M 82, Maffei 2, and Arp 220 (Martin &
Ho 1986; Henkel et al. 2000; Weiß et al. 2001; Takano et al. 2002;
Mauersberger et al. 2003; Ott et al. 2005, 2010; Takano et al. 2005 ) in the
local Universe, and in absorption to the gravitationally lensed B0218+357 and
PKS 1830$-$211 at redshifts of $z\sim$ 0.7 and 0.9 (Henkel et al. 2005, 2008).
NGC 1068, a prototypical Seyfert 2 galaxy (Antonucci & Miller 1985), is
located at a distance of 15.5 Mpc (a heliocentric systemic velocity $cz$ =
1137 km s-1 is used throughout this paper, adopted from the NASA/IPAC
Extragalactic Database), making it one of the nearest Seyfert galaxies and
thus an ideal target to investigate the physical properties of the molecular
gas in the vicinity of an AGN. Here we report the first detection of ammonia
in this prominent Seyfert 2 galaxy to evaluate the kinetic temperature of its
dense molecular gas.
## 2 Observations
We observed ammonia inversion lines toward the nucleus of NGC 1068 with the
Green Bank Telescope (GBT) of the National Radio Astronomy Observatory (a
facility of the National Science Foundation operated under cooperative
agreement with Associated Universities, Inc.) on 2005 October 14 and October
19. We configured the telescope to observe
both circular polarizations simultaneously using a total-power nodding mode in
which the target position was placed alternately in one of the two beams of
the 22$-$26 GHz K-band receiver. The full-width-at-half-maximum (FWHM) beam
size was approximately 31$\arcsec$ in the observed frequency range, between
23.6 and 25.0 GHz. The pointing accuracy was $\sim$ 5$\arcsec$. To observe the
NH3(1,1) through (4,4) lines, we configured the spectrometer with an 800 MHz
spectral window centered on the NH3(3,3) line. The channel spacing was 390
kHz, corresponding to $\sim$ 5 km s-1. We averaged the data from the two dates
to produce the final spectrum, which has an on-source integration time of 4
hours. We observed the NH3(6,6) line only on the second date, 2005 October 19.
Here we used a 200 MHz spectral window observed with a channel spacing of 24
kHz, and then averaged channels in post-processing to achieve 390 kHz channel
spacing. The total integration time for the NH3(6,6) line was 1 hour.
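As a quick sanity check (not part of the paper), the quoted $\sim$5 km s-1 channel spacing follows directly from the Doppler relation $\Delta v = c\,\Delta\nu/\nu$ at the observed frequencies; a minimal sketch:

```python
# Channel spacing in velocity units: dv = c * dnu / nu.
# Checks the quoted ~5 km/s for 390 kHz channels near 23.7 GHz.
C_KMS = 2.998e5  # speed of light, km/s

def channel_width_kms(dnu_hz, nu_ghz):
    """Velocity width of a frequency channel, in km/s."""
    return C_KMS * dnu_hz / (nu_ghz * 1e9)

w = channel_width_kms(390e3, 23.7)
print(f"{w:.1f} km/s")  # ~4.9 km/s, i.e. the quoted ~5 km/s
```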
The data were calibrated and averaged in GBTIDL. For calibration, we used an
estimate of the atmospheric opacity obtained from a weather model. We
subtracted a polynomial baseline fitted to the line-free channels adjacent to
each ammonia line. For the (1,1) to (4,4) lines observed simultaneously, the
relative calibration is very good and is limited primarily by the uncertainty
in the baseline fits, which is 10% for the (1,1) to (3,3) lines, 15% for the
(4,4) line, and 27% for the (6,6) line, respectively. The (1,1) and (2,2) line
profiles lie close to each other but are marginally separated, so the Gaussian
fit for each line is not compromised by blending. Adding the 15% calibration
error of the GBT telescope itself, we estimate the absolute calibration
uncertainties, including the error from the baseline fits, to be $\sim$ 18-31%
for the ammonia lines.
## 3 Results
### 3.1 NH3 lines
This is the first detection of ammonia in NGC 1068, making it the first
Seyfert galaxy detected in this molecule. Moreover, because Arp 220,
B0218+357, and PKS 1830-211 were detected in absorption, NGC 1068 is so far
the most distant galaxy in which ammonia has been detected in emission. Three para,
($J,K$) = (1,1), (2,2), and (4,4), and two ortho, ($J,K$) = (3,3) and (6,6),
transitions are covered by the observations, and all five are detected. Figure
1 shows the spectra before and after subtraction of a polynomial baseline
fitted to the line-free channels. The line parameters are listed in Table 1, where
integrated flux, $I$, peak flux density, $S$, central velocity, $V$, and line
width, $\Delta V_{\rm 1/2}$, are obtained from the results of Gaussian
fitting.
The line profiles are similar for the (1,1), (2,2) and (3,3) lines, which
share a similar central velocity of around $-$35 km s-1, relative to a
heliocentric systemic velocity of $cz$ = 1137 km s-1, and a FWHM line width of
220$-$274 km s-1. The similar central velocities and the comparable widths of
the (1,1), (2,2) and (3,3) lines suggest that the line emission comes from the
same region. The (4,4) and (6,6) lines have different central velocities and a
narrower line width. The (4,4) line is located at a frequency where the
baseline is particularly steep. The (6,6) line is weak and falls into a window
with a slightly distorted baseline, making it difficult to determine accurate
line parameters. The comparatively narrow line widths of the (4,4) and (6,6)
lines may suggest that their emission originates from a less extended region
than that of the lower excitation lines.
Our measured continuum flux density of NGC 1068 ranges from 0.33 to 0.37 Jy at
frequencies of 23.4 to 25.1 GHz. This is consistent with the value of
0.342$\pm$0.034 Jy at 22 GHz reported by Ricci et al. (2006).
Figure 1: Metastable ammonia inversion lines observed toward NGC 1068
($\alpha_{\rm 2000}$ = 02h42m40.7s, $\delta_{\rm 2000}$ =
$-$00o00$\arcmin$48$\arcsec$). Vertical lines mark a heliocentric systemic
velocity of $cz$ = 1137 km s-1. Velocities are displayed relative to this
value (in the upper left panel, the velocity scale refers to the (1,1) line).
Left: Spectra without baseline subtraction, overlaid with a polynomial
baseline fit to the line-free channels. Right: Final baseline-subtracted
spectra overlaid with Gaussian fits. Table 1: NH3 line parameters
(J,K) | $I$a | $S$a | $V^{a}$ | $\Delta V_{\rm 1/2}$a | $N(J,K)$b
---|---|---|---|---|---
| (Jy km s$^{-1}$) | (mJy) | (km s$^{-1}$) | (km s$^{-1}$) | ($10^{13}$ cm$^{-2}$)
(1,1) | 0.89( 0.07) | 3.26( 0.13) | $-$30(6) | 274(17) | 2.52(0.24)
(2,2) | 0.79( 0.06) | 3.10( 0.14) | $-$37(6) | 254(19) | 1.67(0.19)
(3,3) | 1.19( 0.10) | 5.40( 0.14) | $-$37(3) | 220( 8) | 2.22(0.13)
(4,4) | 0.55( 0.05) | 3.72( 0.14) | $-$2(3) | 149( 7) | 0.96(0.07)
(6,6) | 0.28( 0.04) | 1.88( 0.23) | $-$56(9) | 147(27) | 0.43(0.12)
(5,5) | | | | | 0.53
(0,0) | | | | | 2.55
$N_{\rm total}$ | | | | | 10.9(2.3)
* a The values are derived from Gaussian fits to the spectra. Given velocities are relative to a heliocentric velocity of $cz$ = 1137 km s-1.
* b The column density for the (5,5) state is extrapolated from the (2,2) and (4,4) states using $T_{\rm rot}$ = 119 K, while the value for the (0,0) state is extrapolated from the (3,3) state by adopting $T_{\rm rot}$ = 44 K. The total column density, $N_{\rm total}$, includes the populations of the levels from (0,0) to (6,6). Because relative calibration is excellent up to at least the (4,4) transition, the errors given in parenthesis refer exclusively to the Gaussian fits. For the cumulative column density, $N_{\rm total}$, however, the given error accounts for the absolute calibration uncertainty ($\S$ 2).
### 3.2 NH3 column density and rotation temperature
Figure 2: Rotation diagram of metastable ammonia transitions toward NGC 1068
(see § 3.2). The open squares show the normalized column densities determined
from the integrated line intensities. The numbers mark the rotational
temperatures in K. The absolute calibration uncertainties, including the
dominant contribution from the baseline fits as well as uncertainties in the
Gaussian fits and in the overall calibration uncertainty, have been taken as
error bars ($\S$ 2). Note, however, that relative calibration is excellent up
to at least the (4,4) transition.
Assuming that the line emission is optically thin and the contribution from
the cosmic background is negligible, the sum of the beam-averaged column
densities of the two states of an inversion doublet can be calculated using
$N(J,K)\thinspace=\thinspace\frac{\rm 1.55\times
10^{14}}{\nu}\frac{J(J+1)}{K^{2}}\int T_{\rm mb}{\rm dv}$ (1)
(e.g., Mauersberger et al. 2003), where the column density $N(J,K)$, the
frequency $\nu$, and the integrated line intensity $\int T_{\rm mb}{\rm dv}$,
based on the main beam brightness temperature, $T_{\rm mb}$, are in units of
cm-2, GHz, and K km s-1, respectively. The calculated column densities (1 Jy
corresponds to 2.16 K on a $T_{\rm mb}$ scale; for details see the memo
“Calibration of GBT Spectral Line Data in GBTIDL”,
http://www.gb.nrao.edu/GBT/DA/gbtidl/gbtidl_calibration.pdf) are
given in Table 1.
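Equation (1) can be checked against Table 1. The sketch below is illustrative only: the (1,1) rest frequency of $\approx$23.6945 GHz is an assumed textbook value, while the 2.16 K/Jy conversion is the one quoted above.

```python
# Beam-averaged NH3 column density from Eq. (1):
#   N(J,K) = 1.55e14 / nu * J(J+1)/K^2 * Int T_mb dv   [cm^-2],
# with nu in GHz and the integrated intensity in K km/s.

def nh3_column_density(J, K, integrated_flux_jy, nu_ghz, k_per_jy=2.16):
    """Column density of an inversion doublet, cm^-2.

    integrated_flux_jy : integrated line flux in Jy km/s (Table 1)
    k_per_jy           : GBT Jy -> T_mb conversion quoted in the paper
    """
    integral_kelvin = integrated_flux_jy * k_per_jy  # K km/s
    return 1.55e14 / nu_ghz * J * (J + 1) / K**2 * integral_kelvin

# (1,1) line: I = 0.89 Jy km/s; rest frequency ~23.6945 GHz assumed here
N11 = nh3_column_density(1, 1, 0.89, 23.6945)
print(f"N(1,1) = {N11:.2e} cm^-2")  # ~2.5e13, matching Table 1
```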
Following the analysis described by Henkel et al. (2000), the rotational
temperature ($T_{\rm rot}$) between different energy levels can be determined
from the slope, $a$, of a linear fit in the rotation diagram (i.e., Boltzmann
plot, which is normalized column density against energy above the ground state
expressed in $E/k$) by $T_{\rm rot}=-{\rm log}\,e/a\approx-0.434/a$. Figure 2
shows the rotation diagram
including the five measured metastable NH3 inversion lines. Ignoring
differences in line shape, we obtain $T_{\rm rot}$ = 92${}^{+29}_{-18}$ K by
fitting the para-NH3 species, i.e. the (1,1), (2,2) and (4,4) transitions, and
$T_{\rm rot}$ = 125${}^{+11}_{-8}$ K for the (3,3) and (6,6) ortho-NH3
transitions. As discussed in $\S$ 3.1, the
(1,1), (2,2) and (3,3) line emission may originate in a region that is
different from that of the (4,4) and (6,6) lines. The former three lines can
be fitted by $T_{\rm rot}$ = 61${}^{+12}_{-9}$ K, and the latter two by
$T_{\rm rot}$ = 111${}^{+12}_{-9}$ K. The rotation temperature between the
lowest inversion doublets of para-ammonia, (1,1) and (2,2), is $T_{\rm rot}$ =
44${}^{+6}_{-4}$ K. Without the (1,1) state, the other two para-NH3
transitions, (2,2) and (4,4), yield $T_{\rm rot}$ = 119${}^{+15}_{-11}$ K.
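The two-point rotation temperatures quoted above can be reproduced from the Table 1 column densities. The energy levels and statistical weights in the sketch below are approximate textbook NH3 values assumed for illustration, not taken from the paper:

```python
import math

# Two-point rotation temperature from a Boltzmann plot:
# the slope a of log10[N(J,K)/g(J,K)] versus E/k gives T_rot ~ -0.434/a.

def t_rot(N1, g1, E1, N2, g2, E2):
    """Rotation temperature (K) between two metastable levels."""
    a = (math.log10(N2 / g2) - math.log10(N1 / g1)) / (E2 - E1)
    return -0.434 / a

# (1,1) and (2,2) columns from Table 1 (cm^-2); energies above ground
# E/k ~ 23.4 K and ~64.9 K and weights g = 2J+1 are assumed values.
T12 = t_rot(2.52e13, 3, 23.4, 1.67e13, 5, 64.9)
print(f"T_rot(1,1;2,2) ~ {T12:.0f} K")  # close to the quoted 44 K
```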
### 3.3 NH3 abundance
To estimate the populations of the $(J,K)$ = (0,0) and (5,5) states, we assume
the rotation temperature $T_{\rm rot}$ = 44 K, derived from the two lowest
inversion doublets, for the (0,0) ground state, and $T_{\rm rot}$ = 119 K from
the fit to the (2,2) and (4,4) lines, for the (5,5) state. The derived column
densities are $2.55\times 10^{13}$ cm$^{-2}$ for the (0,0) state, which is not a
doublet, and $0.53\times 10^{13}$ cm$^{-2}$ for the (5,5) state, respectively. The
former is extrapolated from the (3,3) state and the latter from the (2,2) and
(4,4) states. The resulting total column density of ammonia is
(1.09$\pm$0.23)$\times 10^{14}$ cm$^{-2}$ when considering only the populations in the
metastable states up to ($J,K$) = (6,6). This yields an ammonia gas mass of
68$\pm$14 M⊙ within a beam size of 31$\arcsec$, which corresponds to a
physical size of 2.3 kpc. To estimate the mass of molecular gas in NGC 1068,
we convolved the 17$\arcsec$ resolution map in CO $J$ = 1$-$0 by Kaneko et al.
(1989) to the GBT’s resolution of 31$\arcsec$ and derived an integrated
intensity of 103 K km s-1. Adopting a conversion factor of 0.8 M⊙ ${(\rm
K\thinspace km\thinspace s^{-1}\thinspace pc^{2})^{-1}}$, which describes
ultraluminous galaxies (Downes & Solomon 1998) and less conspicuous nuclear
starbursts (Mauersberger et al. 2003), we obtain a molecular gas mass of
$3.5\times 10^{8}$ M⊙. Therefore, the ammonia abundance relative to molecular
hydrogen is estimated to be (2.9$\pm$0.6)$\times 10^{-8}$ within a radius of
$\sim$1.2 kpc around the center of NGC 1068.
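The quoted ammonia gas mass can be roughly reproduced from the total column density. The sketch below treats the 2.3 kpc region as a uniform circular aperture, which is an assumption made here; the paper's 68$\pm$14 M⊙ will differ slightly depending on the actual beam weighting:

```python
import math

# Order-of-magnitude check of the NH3 gas mass: total column density
# spread over the ~2.3 kpc region sampled by the 31" beam, treated as a
# uniform circular aperture (an assumption, not the paper's method).

KPC_CM = 3.086e21            # cm per kpc
M_NH3_G = 17 * 1.6605e-24    # ammonia molecular mass, grams
MSUN_G = 1.989e33            # solar mass, grams

N_total = 1.09e14            # cm^-2, from Sect. 3.3
radius_cm = 1.15 * KPC_CM    # half of the 2.3 kpc diameter
area_cm2 = math.pi * radius_cm**2

mass_msun = N_total * area_cm2 * M_NH3_G / MSUN_G
print(f"M(NH3) ~ {mass_msun:.0f} Msun")  # ~60 Msun, consistent with 68 +/- 14
```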
## 4 Discussion
Figure 3: NH3 inversion line ratios as a function of the H2 density ($n_{\rm
H_{2}}$) and kinetic temperature ($T_{\rm kin}$). The red and blue lines mark
the (2,2)/(1,1) and (6,6)/(4,4) line ratios, respectively, obtained from the
large velocity gradient model outlined in § 4. Dashed lines show the line
ratios measured with the GBT. [NH3]/(d$v$/d$r$) = $10^{-8}$ pc (km s$^{-1}$)$^{-1}$ ([NH3] =
N(NH3)/N(H2) is the fractional abundance of ammonia). At high densities the
(1,1) and (2,2) lines saturate, which causes the steeply declining $T_{\rm
kin}$ values with rising density for a given (2,2)/(1,1) line ratio.
The ammonia abundance in NGC 1068 is consistent with the values of
(1.3$-$2.0)$\times 10^{-8}$ reported by Mauersberger et al. (2003) for the central
regions of several nearby galaxies such as NGC 253, Maffei 2 and IC 342. It is
lower than the value of a few $\times 10^{-7}$ for the central $\sim$ 500 pc of the
Milky Way (Rodríguez-Fernández et al. 2001), but higher than the extremely low
value of $5\times 10^{-10}$ determined for M 82 by Weiß et al. (2001). M 82 is a
special case, because it contains a starburst in a late stage of its evolution
leaving NH3, which is a molecule particularly sensitive to UV radiation (e.g.,
Suto & Lee 1983), in only a few molecular cores that are still sufficiently
shielded against the intense radiation field.
With the three para- and two ortho-NH3 lines observed, we estimate an
ortho-to-para abundance ratio of $R$ $\sim$ 0.9 (no error is given because of
the extrapolated (0,0) column density). This value should be taken as an upper
limit, because the (0,0) column density might be overestimated: it is
extrapolated from the (3,3) column density, which could be affected by maser
activity (e.g., Walmsley & Ungerechts 1983; Ott et al. 2005). A value near or
below unity is expected if ammonia was formed in a warm medium at a gas
kinetic temperature of at least 20 K; only $R$ values above $\sim$1.5 would
hint at a cool formation temperature of $\sim$ 20 K (e.g., Takano et al. 2002).
The rotation temperatures from the multilevel study of the ammonia inversion
lines are a good approximation to the kinetic gas temperature only for low
($T_{\rm kin}$ $\leq$ 20 K) temperatures (Walmsley & Ungerechts 1983; Danby et
al. 1988). At higher temperatures, the rotation temperature provides a robust
lower limit to the kinetic gas temperature as long as saturation effects do
not play a role.
To determine the kinetic temperature itself, we here use a one-component large
velocity gradient (LVG) analysis adopting a spherical cloud geometry as
described by Ott et al. (2005). Dahmen et al. (1998) estimated that the
velocity gradient ranges from 3 to 6 km s-1 pc-1 for Galactic center clouds
and Meier et al. (2008) found a typical value of 1 to 2 km s-1 pc-1 for GMCs.
Here we adopt a median value of 3 km s-1 pc-1 for the LVG models presented in
this paper. The ammonia abundance derived in $\S$ 3.3 is adopted, which yields
an NH3 abundance per velocity gradient of [NH3]/(d$v$/d$r$) $\sim$ 10-8 pc (km
s-1)-1 ([NH3] = N(NH3)/N(H2) is the fractional abundance of ammonia). Changing
the velocity gradient by a factor of 3 will affect the derived kinetic
temperature by less than 10 K. Figure 3 shows NH3 inversion line ratios as a
function of the H2 density ($n_{\rm H_{2}}$) and kinetic temperature ($T_{\rm
kin}$). As long as the NH3 lines are optically thin, the ratios are almost
independent of the gas density and are therefore a good indicator for gas
kinetic temperature. At high densities the (1,1) and (2,2) lines saturate,
which causes the steeply declining $T_{\rm kin}$ values with rising density
for a given (2,2)/(1,1) line ratio. While the optical depth may be well above
unity in molecular cores (see, Güsten et al. 1981 for the Galactic center
region), the bulk of the ammonia emission should be optically thin and arises
from gas densities $n_{\rm H_{2}}<10^{5}$ cm$^{-3}$ (e.g., Tieftrunk et al. 1998).
For a typical gas density, $n_{\rm H_{2}}$, of $10^{3.0-4.8}$ cm$^{-3}$, the observed
(2,2)/(1,1) line ratio constrains the kinetic temperature $T_{\rm kin}$ to
80$\pm$20 K, which characterizes the bulk of the molecular gas. The
(6,6)/(4,4) line ratio yields a gas temperature of 140$\pm$30 K for a gas
density within the range of $10^{3.0-5.5}$ cm$^{-3}$. The latter indicates the existence
of a hotter component of the gas, as is also suggested by the high rotational
temperature revealed by the rotation diagram (Fig. 2). Here the (6,6)/(3,3)
line ratio is not used to estimate the temperature because the (3,3) line
emission is partly affected by the maser activity in some parameter ranges
(Walmsley & Ungerechts 1983), and because the (6,6) line was not observed
simultaneously with the (3,3) line and the uncertainty of its baseline fit is
the largest among all spectra, as described in $\S$ 2.
The GBT beam size covers the two inner spiral arms and the circumnuclear disk
(CND) of NGC 1068 (see, e.g., Schinnerer et al. 2000). Our observed ammonia
line emission likely has contributions from the spiral arms and the CND. The
CND may be warmer because it is heated not only by young stars but also by the
AGN. There are observations that directly support higher temperatures in the
CND. Infrared rotational transitions of molecular hydrogen (Lutz et al. 1997)
indicate a wide range of gas temperatures between 100 and 800 K. The dust near
the AGN has been particularly thoroughly studied (e.g., Tomono et al. 2006;
Poncelet et al. 2007; Raban et al. 2009). Because densities may be high within
the central arcsecond ($\ga 10^{5}$ cm$^{-3}$), the dust and the gas phase may be
coupled and thus these results may be relevant for the kinetic temperature of
the gas component. With mid-infrared (MIR) multi-filter data, Tomono et al.
(2006) obtained a dust temperature of $\sim$200 K within a 1.0$\arcsec$-sized
region ($\sim$80 pc).
Our data provide a direct estimate of the gas properties for molecular gas in
the inner 1.2 kpc of the galaxy. However, higher angular resolution data are
needed to separate the different components and to isolate the CND in this
galaxy.
## 5 Conclusions
Our main results from the ammonia observations toward the prototypical Seyfert
2 galaxy NGC 1068 are:
(1) The metastable NH3 ($J$,$K$) = (1,1) to (4,4) and (6,6) inversion lines
are for the first time detected in emission toward NGC 1068. This opens up a
new avenue to determine kinetic temperatures of the dense gas in nearby highly
obscured AGN.
(2) For the NH3 column density and fractional abundance, we find
(1.09$\pm$0.23)$\times 10^{14}$ cm$^{-2}$ and (2.9$\pm$0.6)$\times 10^{-8}$ in the inner
$\sim$1.2 kpc of NGC 1068.
(3) With an ortho-to-para-NH3 abundance ratio of $\sim$0.9, the ammonia should
have been formed in a warm medium of at least 20 K.
(4) The kinetic temperature can be constrained to 80$\pm$20 K for the bulk of
the molecular gas, while some gas fraction has an even higher temperature of
140$\pm$30 K.
###### Acknowledgements.
We thank the referee for thoughtful comments that improved this paper and the
staff at the GBT for their support during the observations. Y.A.
acknowledges support by the CAS/SAFEA International Partnership Program for
Creative Research Teams (No. KTCX2-YW-T14), grant 11003044 from the National
Natural Science Foundation of China, and 2009’s President Excellent Thesis
Award of the Chinese Academy of Sciences. This research has made use of NASA’s
Astrophysical Data System (ADS).
## References
* (1) Antonucci, R. R. J., & Miller, J. S. 1985, ApJ, 297, 621
* (2) Bourke, T. L., Hyland, A. R., Robinson, G., James, S. D., & Wright, C. M. 1995, MNRAS, 276, 1067
* (3) Ceccarelli, C., Baluteau, J.-P., Walmsley, M., et al. 2002, A&A, 383, 603
* (4) Dahmen, G., Huttemeister, S., Wilson, T. L., & Mauersberger, R. 1998, A&A, 331, 959
* (5) Danby, G., Flower, D. R., Valiron, P., Schilke, P., & Walmsley, C. M. 1988, MNRAS, 235, 229
* (6) Downes, D., & Solomon, P. M. 1998, ApJ, 507, 615
* (7) Gao, Y., & Solomon, P. M. 2004, ApJ, 606, 271
* (8) Guesten, R., Walmsley, C. M., & Pauls, T. 1981, A&A, 103, 197
* (9) Henkel, C., Braatz, J. A., Menten, K. M., & Ott, J. 2008, A&A, 485, 451
* (10) Henkel, C., Jethava, N., Kraus, A., et al. 2005, A&A, 440, 893
* (11) Henkel, C., Mauersberger, R., Peck, A. B., Falcke, H., & Hagiwara, Y. 2000, A&A, 361, L45
* (12) Ho, P. T. P., & Townes, C. H. 1983, ARA&A, 21, 239
* (13) Kaneko, N., Morita, K., Fukui, Y., et al. 1989, ApJ, 337, 691
* (14) Lutz, D., Sturm, E., Genzel, R., Moorwood, A. F. M., & Sternberg, A. 1997, Ap&SS, 248, 217
* (15) Martin, R. N., & Ho, P. T. P. 1986, ApJ, 308, L7
* (16) Mauersberger, R., Henkel, C., Weiß, A., Peck, A. B., & Hagiwara, Y. 2003, A&A, 403, 561
* (17) Meier, D. S., Turner, J. L., & Hurt, R. L. 2008, ApJ, 675, 281
* (18) Ott, J., Henkel, C., Staveley-Smith, L., & Weiß, A. 2010, ApJ, 710, 105
* (19) Ott, J., Weiss, A., Henkel, C., & Walter, F. 2005, ApJ, 629, 767
* (20) Pillai, T., Wyrowski, F., Carey, S. J., & Menten, K. M. 2006, A&A, 450, 569
* (21) Poncelet, A., Doucet, C., Perrin, G., Sol, H., & Lagage, P. O. 2007, A&A, 472, 823
* (22) Raban, D., Jaffe, W., Röttgering, H., Meisenheimer, K., & Tristram, K. R. W. 2009, MNRAS, 394, 1325
* (23) Ricci, R., Prandoni, I., Gruppioni, C., Sault, R. J., & de Zotti, G. 2006, A&A, 445, 465
* (24) Schinnerer, E., Eckart, A., Tacconi, L. J., Genzel, R., & Downes, D. 2000, ApJ, 533, 850
* (25) Suto, M., & Lee, L. C. 1983, J. Chem. Phys., 78, 4515
* (26) Takano, S., Hofner, P., Winnewisser, G., Nakai, N., & Kawaguchi, K. 2005, PASJ, 57, 549
* (27) Takano, S., Nakai, N., & Kawaguchi, K. 2002, PASJ, 54, 195
* (28) Tieftrunk, A. R., Megeath, S. T., Wilson, T. L., & Rayner, J. T. 1998, A&A, 336, 991
* (29) Tomono, D., Terada, H., & Kobayashi, N. 2006, ApJ, 646, 774
* (30) Walmsley, C. M., & Ungerechts, H. 1983, A&A, 122, 164
* (31) Weiß, A., Neininger, N., Henkel, C., Stutzki, J., & Klein, U. 2001, ApJ, 554, L143
* (32) Wilson, T. L., Henkel, C., & Hüttemeister, S. 2006, A&A, 460, 533
|
arxiv-papers
| 2011-04-06T15:04:39 |
2024-09-04T02:49:18.145043
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Y. Ao, C. Henkel, J.A. Braatz, A. Weiss, K. M. Menten and S. Muehle",
"submitter": "Yiping Ao",
"url": "https://arxiv.org/abs/1104.1124"
}
|
1104.1414
|
The first draft
# Some Fractional Functional Inequalities and Applications to Some
Constrained Minimization Problems involving a local linearity
Hichem Hajaiej Hichem Hajaiej: King Saud University, P.O. Box 2455, 11451
Riyadh, Saudi Arabia hichem.hajaiej@gmail.com
###### Abstract.
….
###### Key words and phrases:
…
###### 2010 Mathematics Subject Classification:
….
## 1\. Introduction
\- introduction will be added -
The fractional Laplacian is defined via the Fourier transform as
$\sqrt{-\Delta}^{\,s}\phi:={\mathcal{F}}^{-1}(|\cdot|^{s}{\mathcal{F}}(\phi)),$
where $\hat{u}={\mathcal{F}}(u)$ represents the Fourier transform of $u$ on
${\mathbb{R}}^{n}$ defined by
$\hat{f}(\xi)={\mathcal{F}}(f)(\xi)=\int_{{\mathbb{R}}^{n}}f(x)e^{-ix\cdot\xi}\,dx,$
if $f\in L^{1}({\mathbb{R}}^{n})\cap L^{2}({\mathbb{R}}^{n})$.
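Since the operator is defined through a Fourier multiplier, a small self-contained numerical sketch (an illustration added here, not part of the paper) can make the definition concrete: on a periodic grid, multiplying by $|\xi|^{s}$ in Fourier space sends $\cos(k_{0}x)$ to $k_{0}^{s}\cos(k_{0}x)$.

```python
import cmath, math

# Discrete sketch of sqrt(-Delta)^s phi = F^{-1}(|xi|^s F(phi)) on [0, 2*pi):
# for phi(x) = cos(k0 x), the multiplier simply scales the mode by k0**s.

def dft(v):
    N = len(v)
    return [sum(v[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(V):
    N = len(V)
    return [sum(V[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def frac_laplacian(v, s):
    """Apply the |xi|^s Fourier multiplier on a 2*pi-periodic grid."""
    N = len(v)
    V = dft(v)
    xi = [k if k <= N // 2 else k - N for k in range(N)]  # integer frequencies
    return [w.real for w in idft([abs(x)**s * Vk for x, Vk in zip(xi, V)])]

N, k0, s = 64, 3, 0.5
u = [math.cos(k0 * 2 * math.pi * n / N) for n in range(N)]
v = frac_laplacian(u, s)
# every sample should equal k0**s * u[n], up to round-off
err = max(abs(v[n] - k0**s * u[n]) for n in range(N))
print(f"max error: {err:.2e}")  # numerically zero
```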
## 2\. Fractional integral inequalities and compact embedding
In this section, we establish the fractional Polya-Szegö inequality and
present a fractional version of the Gagliardo-Nirenberg inequality. As an
application, we show that the fractional Sobolev space
$W^{s,p}(\mathbb{R}^{n})$ is compactly embedded into Lebesgue spaces
$L^{q}(\Omega)$.
### 2.1. Fractional Polya-Szegö inequality
We investigate the nonexpansivity of Schwarz symmetric decreasing
rearrangement of functions with respect to the fractional actions
$(-\Delta)^{s/2}$ for $s\geq 0$. For the basic terminology and some properties
of Schwarz symmetric decreasing rearrangement, we refer to Chapter 3 of [9]
and to [4].
###### Theorem 2.1.
Let $0\leq s\leq 1$. Let $u^{*}$ denote the Schwarz symmetric radial
decreasing rearrangement of $u$. Then we have
(2.1)
$\displaystyle\int_{\mathbb{R}^{n}}|\sqrt{-\Delta}^{\,s}u^{*}(x)|^{2}dx\leq\int_{\mathbb{R}^{n}}|\sqrt{-\Delta}^{\,s}u(x)|^{2}dx,$
in the sense that the finiteness of the right side implies the finiteness of
the left side.
###### Proof.
When $s=0$, equality holds in (2.1). We now show how the kinetic energy
decreases under the symmetric radial decreasing rearrangement for positive
differential index $s$.
To show (2.1), it is enough to prove the following:
(2.2)
$\displaystyle\int_{{\mathbb{R}}^{n}}|\xi|^{2s}|{\mathcal{F}}[f^{*}](\xi)|^{2}d\xi\leq\int_{{\mathbb{R}}^{n}}|\xi|^{2s}|\hat{f}(\xi)|^{2}d\xi.$
The main idea of the proof is that inequality (2.2) follows from the
assertion that for any $\varepsilon>0$,
(2.3)
$\displaystyle\int_{{\mathbb{R}}^{n}}\left(\frac{|\eta|^{2}}{1+\varepsilon^{2}|\eta|^{2}}\right)^{s}|{\mathcal{F}}[u^{*}](\eta)|^{2}d\eta\leq\int_{{\mathbb{R}}^{n}}\left(\frac{|\eta|^{2}}{1+\varepsilon^{2}|\eta|^{2}}\right)^{s}|\hat{u}(\eta)|^{2}d\eta.$
With the change of variables $\xi=\varepsilon\eta$, (2.3) becomes
(2.4)
$\displaystyle\frac{1}{\varepsilon^{2n}}\int_{{\mathbb{R}}^{n}}\left(\frac{|\xi|^{2}}{1+|\xi|^{2}}\right)^{s}|{\mathcal{F}}[u^{*}](\xi/\varepsilon)|^{2}d\xi\leq\frac{1}{\varepsilon^{2n}}\int_{{\mathbb{R}}^{n}}\left(\frac{|\xi|^{2}}{1+|\xi|^{2}}\right)^{s}|\hat{u}(\xi/\varepsilon)|^{2}d\xi.$
Replacing $u(x)$ by $u(x/\varepsilon)$, and noting that
$[u(x/\varepsilon)]^{*}=u^{*}(x/\varepsilon)$ since rearrangement commutes
with uniform dilation, we see that (2.4) is equivalent to
(2.5)
$\displaystyle\int_{{\mathbb{R}}^{n}}\left(\frac{|\xi|^{2}}{1+|\xi|^{2}}\right)^{s}|{\mathcal{F}}[u^{*}](\xi)|^{2}d\xi\leq\int_{{\mathbb{R}}^{n}}\left(\frac{|\xi|^{2}}{1+|\xi|^{2}}\right)^{s}|\hat{u}(\xi)|^{2}d\xi.$
So it suffices to prove (2.5). Incorporating the following expression
$\displaystyle\left(\frac{|\xi|^{2}}{1+|\xi|^{2}}\right)^{s}=\left(1-\frac{1}{1+|\xi|^{2}}\right)^{s}=1-\sum_{k=1}^{\infty}(-1)^{k+1}\left(\begin{array}[]{c}s\\\
k\end{array}\right)\left(\frac{1}{1+|\xi|^{2}}\right)^{k}$
with ${s\choose k}=\frac{s(s-1)\cdots(s-(k-1))}{k!}$ into each side of
inequality (2.5) yields
$\displaystyle\int_{{\mathbb{R}}^{n}}|{\mathcal{F}}[u^{*}](\xi)|^{2}d\xi-\sum_{k=1}^{\infty}(-1)^{k+1}\left(\begin{array}[]{c}s\\\
k\end{array}\right)\int_{{\mathbb{R}}^{n}}\frac{1}{(1+|\xi|^{2})^{k}}|{\mathcal{F}}[u^{*}](\xi)|^{2}d\xi$
and
$\displaystyle\int_{{\mathbb{R}}^{n}}|{\mathcal{F}}[u](\xi)|^{2}d\xi-\sum_{k=1}^{\infty}(-1)^{k+1}\left(\begin{array}[]{c}s\\\
k\end{array}\right)\int_{{\mathbb{R}}^{n}}\frac{1}{(1+|\xi|^{2})^{k}}|{\mathcal{F}}[u](\xi)|^{2}d\xi.$
Since $(-1)^{k+1}{s\choose k}>0$ for $0<s<1$, and since
$\int_{{\mathbb{R}}^{n}}|{\mathcal{F}}[u^{*}](\xi)|^{2}d\xi=\int_{{\mathbb{R}}^{n}}|\hat{u}(\xi)|^{2}d\xi$
(rearrangement preserves the $L^{2}$ norm, so Plancherel's theorem equates the
leading terms), it remains to show that for each positive integer $k$
$\int_{{\mathbb{R}}^{n}}\frac{1}{(1+|\xi|^{2})^{k}}|{\mathcal{F}}[u^{*}](\xi)|^{2}d\xi\geq\int_{{\mathbb{R}}^{n}}\frac{1}{(1+|\xi|^{2})^{k}}|{\mathcal{F}}[u](\xi)|^{2}d\xi.$
We consider a Bessel kernel $G_{2k}$ of order $2k$:
$(1+|\xi|^{2})^{-k}={\mathcal{F}}[G_{2k}](\xi).$ Therefore with
$\tilde{u}(x)=u(-x)$, we arrive at
$\displaystyle\int_{{\mathbb{R}}^{n}}\frac{1}{(1+|\xi|^{2})^{k}}|{\mathcal{F}}[u](\xi)|^{2}d\xi$
$\displaystyle=\int_{{\mathbb{R}}^{n}}{\hat{G}_{2k}}(\xi)\overline{{\hat{u}}}(\xi){\hat{u}}(\xi)d\xi$
$\displaystyle=(2\pi)^{n}\int_{{\mathbb{R}}^{n}}G_{2k}(-x)(u*\tilde{u})(x)dx$
$\displaystyle=(2\pi)^{n}[G_{2k}(x)*(u*\tilde{u})(x)](0)$
$\displaystyle=(2\pi)^{n}\int_{{\mathbb{R}}^{n}\times{\mathbb{R}}^{n}}G_{2k}(y-z)\bar{u}(z)u(y)dydz$
(2.6)
$\displaystyle\leq(2\pi)^{n}\int_{{\mathbb{R}}^{n}\times{\mathbb{R}}^{n}}G_{2k}(y-z)\bar{u}^{*}(z)u^{*}(y)dydz$
$\displaystyle=\int_{{\mathbb{R}}^{n}}\frac{1}{(1+|\xi|^{2})^{k}}|{\mathcal{F}}[u^{*}](\xi)|^{2}d\xi.$
The inequality (2.6) follows from the symmetrization lemma in [3, 9]; this is
the only step where an inequality occurs. The proof is now complete. $\Box$
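As a numerical sanity check (not part of the original argument; the function names are ours), the following Python sketch verifies the binomial expansion used in the proof for a sample order $0<s<1$, together with the positivity of the coefficients $(-1)^{k+1}\binom{s}{k}$ that makes the term-by-term comparison possible:

```python
def gen_binom(s: float, k: int) -> float:
    """Generalized binomial coefficient C(s,k) = s(s-1)...(s-(k-1))/k!,
    accumulated as a product of ratios to avoid overflow."""
    c = 1.0
    for j in range(1, k + 1):
        c *= (s - j + 1) / j
    return c

def truncated_series(s: float, t: float, K: int) -> float:
    """Partial sum of (1-t)^s = 1 - sum_{k>=1} (-1)^{k+1} C(s,k) t^k."""
    return 1.0 - sum((-1) ** (k + 1) * gen_binom(s, k) * t ** k
                     for k in range(1, K + 1))

s, xi2 = 0.5, 2.0          # sample order 0 < s < 1 and sample |xi|^2
t = 1.0 / (1.0 + xi2)      # t = 1/(1+|xi|^2) lies in (0,1), so the series converges
assert abs((1.0 - t) ** s - truncated_series(s, t, 200)) < 1e-10

# each coefficient (-1)^{k+1} C(s,k) is positive when 0 < s < 1
assert all((-1) ** (k + 1) * gen_binom(s, k) > 0 for k in range(1, 50))
```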
### 2.2. Fractional Gagliardo-Nirenberg Inequality
In this subsection, the Gagliardo-Nirenberg inequality for the fractional
Laplacian is presented, and a sharp form of the fractional Sobolev inequality
is obtained as a corollary.
Throughout this paper, $C$ denotes various real positive constants which do
not depend on functions in discussion.
###### Theorem 2.2.
Let $m,q,\theta\in\mathbb{R}\setminus\\{0\\}$ with $q\neq m\theta>0$, $0<s<n$,
$1<p<\frac{n}{s}$ and $1<\frac{r}{q-m\theta}$. Then the inequality
(2.7) $\displaystyle\int_{\mathbb{R}^{n}}|u(x)|^{q}dx\leq
C\left(\int_{\mathbb{R}^{n}}\left(\sqrt{-\Delta}^{\,s}u(x)\right)^{p}dx\right)^{\frac{m\theta}{p}}\left(\int_{\mathbb{R}^{n}}|u(x)|^{r}dx\right)^{\frac{q-m\theta}{r}}$
holds for the indices with the relation
(2.8) $\displaystyle
m\theta\left(\frac{1}{p}-\frac{s}{n}\right)+\frac{q-m\theta}{r}=1.$
In particular, when $m=q$, we have a fractional version of the Gagliardo-
Nirenberg inequality:
(2.9)
$\displaystyle\left(\int_{\mathbb{R}^{n}}|u(x)|^{q}dx\right)^{\frac{1}{q}}\leq
C\left(\int_{\mathbb{R}^{n}}|\sqrt{-\Delta}^{\,s}u(x)|^{p}dx\right)^{\frac{\theta}{p}}\left(\int_{\mathbb{R}^{n}}|u(x)|^{r}dx\right)^{\frac{1-\theta}{r}}$
for the indices with the relation
(2.10)
$\displaystyle\theta\left(\frac{1}{p}-\frac{s}{n}\right)+\frac{1-\theta}{r}=\frac{1}{q}.$
###### Proof.
For convenience, we use the notation
$\|u\|_{L^{t}}:=\left(\int_{\mathbb{R}^{n}}|u(x)|^{t}dx\right)^{\frac{1}{t}}$,
and $L^{t}(\mathbb{R}^{n})$ for any $t\in\mathbb{R}\setminus\\{0\\}$. First we
point out that by the standard dilation argument the index relation (2.8) is
necessary. In fact, by replacing $u(\cdot)$ with $u(\delta\;\cdot)$ in (2.7), we can
observe
$\delta^{-n}\|u\|_{L^{q}}^{q}\leq
C\delta^{(s-\frac{n}{p})m\theta+\frac{n(m\theta-q)}{r}}\left\|\sqrt{-\Delta}^{\,s}u\right\|_{L^{p}}^{m\theta}\|u\|_{L^{r}}^{q-m\theta},$
for all $\delta>0$, which implies that
$-n=(s-\frac{n}{p})m\theta+\frac{n(m\theta-q)}{r}$, i.e. the relation (2.8).
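The dilation argument can be checked numerically. The sketch below (the sample index values are arbitrary, and it assumes only the homogeneity $\sqrt{-\Delta}^{\,s}[u(\delta\,\cdot)]=\delta^{s}(\sqrt{-\Delta}^{\,s}u)(\delta\,\cdot)$) solves the index relation (2.8) for $r$ and confirms that both sides of (2.7) then scale with the same power of $\delta$:

```python
# sample indices (arbitrary, chosen so that 0 < s < n and 1 < p < n/s)
n, s, p, m, theta, q = 3.0, 0.5, 2.0, 1.0, 0.4, 2.5

# solve the index relation (2.8), m*theta*(1/p - s/n) + (q - m*theta)/r = 1, for r
r = (q - m * theta) / (1.0 - m * theta * (1.0 / p - s / n))

# under u(x) -> u(delta x):  the left side of (2.7) picks up delta^{-n},
# while the right side picks up delta^{(s - n/p) m theta - n (q - m theta)/r}
lhs_exp = -n
rhs_exp = (s - n / p) * m * theta - n * (q - m * theta) / r
assert abs(lhs_exp - rhs_exp) < 1e-12
```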
Now, for any $u\in{\mathcal{S}}(\mathbb{R}^{n})$, we have
$\displaystyle\int_{\mathbb{R}^{n}}|u(x)|^{q}dx$
$\displaystyle=\int_{\mathbb{R}^{n}}|u(x)|^{m\theta}|u(x)|^{q-m\theta}dx$
(2.11)
$\displaystyle\leq\||u|^{m\theta}\|_{L^{\bar{p}}}\||u|^{q-m\theta}\|_{L^{\bar{r}}},\qquad\frac{1}{\bar{p}}+\frac{1}{\bar{r}}=1$
$\displaystyle=\|u\|_{L^{m\theta\bar{p}}}^{m\theta}\|u\|_{L^{(q-m\theta)\bar{r}}}^{q-m\theta}.$
We set $m\theta\bar{p}:=p_{0}$ and $(q-m\theta)\bar{r}:=r$ to have
(2.12)
$\displaystyle\int_{\mathbb{R}^{n}}|u(x)|^{q}dx\leq\|u\|_{L^{p_{0}}}^{m\theta}\|u\|_{L^{r}}^{q-m\theta},$
and $\frac{m\theta}{p_{0}}+\frac{q-m\theta}{r}=1$. Let
$\sqrt{-\Delta}^{\,s}u:=f$, and we have
$\displaystyle u(x)$
$\displaystyle=\frac{c_{n-s}}{c_{s}}\int_{\mathbb{R}^{n}}\frac{f(y)}{|x-y|^{n-s}}dy=\frac{c_{n-s}}{c_{s}}\int_{\mathbb{R}^{n}}\frac{\sqrt{-\Delta}^{\,s}u(y)}{|x-y|^{n-s}}dy,$
where $c_{s}=\frac{\Gamma(s/2)}{\pi^{s/2}}.$ Indeed, taking the Fourier
transform of $\sqrt{-\Delta}^{\,s}u=f$, solving for $\widehat{u}$, and
inverting the transform recovers $u$. Therefore the Hardy-Littlewood-Sobolev inequality
yields
(2.13) $\displaystyle\|u\|_{L^{p_{0}}}$
$\displaystyle\leq\frac{c_{n-s}}{c_{s}}C_{1}\left\|\sqrt{-\Delta}^{\,s}u\right\|_{L^{p}},$
where $C_{1}$ is a positive constant (see the remark after the proof) and $p$
satisfies
(2.14) $\displaystyle\frac{1}{p}+\frac{n-s}{n}=1+\frac{1}{p_{0}}.$
Combining this index relation with the one obtained at (2.12) implies (2.8),
and (2.12) together with (2.13) implies (2.7). $\square$
The best constant $C_{1}$ and the extremals of the Hardy-Littlewood-Sobolev
inequality are known in some special cases (see [8] or Section 4.3 in [9]).
Thanks to those cases, we have a sharp form of the fractional Sobolev
inequality:
###### Corollary 2.3 (Fractional Sobolev inequality).
For $0<s<n$, $1<p<\frac{n}{s}$ and $q=\frac{pn}{n-sp}$, we have
$\|u\|_{L^{q}}\leq C_{0}\left\|\sqrt{-\Delta}^{\,s}u\right\|_{L^{p}}.$
The sharp constant for the inequality is
$\pi^{s/2}\frac{\Gamma(\frac{n-s}{2})}{\Gamma(\frac{n+s}{2})}\left\\{\frac{\Gamma(n)}{\Gamma(\frac{n}{2})}\right\\}^{s/n}.$
As a special case, we emphasize the $L^{2}$-estimate of the fractional
Gagliardo-Nirenberg inequality, which is applied in Section 3.
###### Corollary 2.4.
For $0<s<\frac{n}{2}$, $0<\theta<1$ and $\theta=\frac{n(q-2)}{2qs}$, we have
$\|u\|_{L^{q}}\leq
C\left\|\nabla_{s}u\right\|_{L^{2}}^{\theta}\left\|u\right\|_{L^{2}}^{1-\theta}$
with the notation
$\|\nabla_{s}u\|_{L^{2}}:=\left(\int_{\mathbb{R}^{n}}|(-\Delta)^{\frac{s}{2}}u(x)|^{2}dx\right)^{\frac{1}{2}}$.
### 2.3. Fractional Rellich-Kondrachov Compactness theorem
The following theorem illustrates that the fractional Sobolev space
$W^{s,p}(\mathbb{R}^{n})$ is compactly embedded into Lebesgue spaces
$L^{q}(\Omega)$, where $\Omega$ is bounded.
###### Theorem 2.5.
Let $0<s<n$, $1\leq p<\frac{n}{s}$ and $1\leq q<\frac{np}{n-sp}$. Also, let
$\\{u_{m}\\}$ be a sequence in $L^{q}(\mathbb{R}^{n})$ and $\Omega$ be a
bounded open set with smooth boundary. Suppose that the integrals
$\displaystyle\int_{\mathbb{R}^{n}}|\sqrt{-\Delta+1}^{\,s}u_{m}(x)|^{p}dx$
are uniformly bounded in $m$. Then $\\{u_{m}\\}$ has a convergent subsequence in
$L^{q}(\Omega)$.
###### Proof.
Let $\phi$ be a smooth non-negative function with support in $\\{x:|x|\leq
1\\}$ and with $\int_{|x|\leq 1}\phi(x)dx=1$. We also define
$\phi^{\ell}(x):={\ell}^{n}\phi(\ell x)$. By virtue of the fractional Sobolev
inequality (Corollary 2.3), it can be observed that
(2.15) $\displaystyle\|u_{m}\|_{L^{q}(\mathbb{R}^{n})}\leq
C\left\|\sqrt{-\Delta}^{\,s}u_{m}\right\|_{L^{p}(\mathbb{R}^{n})}\leq
C\left\|\sqrt{-\Delta+1}^{\,s}u_{m}\right\|_{L^{p}(\mathbb{R}^{n})}\leq\widetilde{C}$
for some $\widetilde{C}>0$. Hence, in the spirit of the Fréchet-Kolmogorov
theorem, it suffices to show the following (see page 50 in [11]): for any
$\varepsilon>0$ and any compact subset $K$ of $\Omega$, there is a constant
$L>0$ such that for $\ell\geq L$,
$\|\phi^{\ell}*u-u\|_{L^{q}(K)}<\varepsilon,$
for all $u\in\mathcal{S}(\mathbb{R}^{n})$ with
$\|\sqrt{-\Delta+1}^{\,s}u\|_{L^{p}(\mathbb{R}^{n})}\leq\widetilde{C}/C$. Then
using the interpolation inequality (2.12), we have
$\displaystyle\|\phi^{\ell}*u-u\|_{L^{q}(K)}\leq
C2^{1-\theta}\|u\|_{L^{r}(\mathbb{R}^{n})}^{1-\theta}\|\phi^{\ell}*u-u\|_{L^{1}(K)}^{\theta},$
with $\frac{1-\theta}{r}+\theta=\frac{1}{q}$, $r=\frac{np}{n-sp}$.
Consequently, (2.15) and the fractional Gagliardo-Nirenberg
inequality (Theorem 2.2) imply that
$\displaystyle\|\phi^{\ell}*u-u\|_{L^{q}(K)}\leq
C\|\phi^{\ell}*u-u\|_{L^{1}(K)}^{\theta}.$
Now we define $f:=\sqrt{-\Delta+1}^{\,s}u$ to have $u=G_{s}*f$ and
$\|f\|_{L^{p}}\leq\widetilde{C}/C$, where $G_{s}$ is the Bessel kernel of
order $s$. Therefore we obtain
$\|\phi^{\ell}*u-u\|_{L^{1}(K)}\leq
C\|(\phi^{\ell}*G_{s}-G_{s})*f\|_{L^{p}(\mathbb{R}^{n})}\leq
C\|\phi^{\ell}*G_{s}-G_{s}\|_{L^{1}(\mathbb{R}^{n})}\to 0$
as $\ell\to\infty$. $\Box$
## 3\. Ground state solution of fractional Schrödinger flows
We consider the following variational problem:
(3.1) $\displaystyle I_{c}$ $\displaystyle=\inf\left\\{E(u):u\in
S_{c}\right\\}$
where $c$ is a prescribed number, $0<s<1$ and $E$ is the energy functional
$E(u)=\int_{{\mathbb{R}}^{n}}|\sqrt{-\Delta}^{\,s}u(x)|^{2}dx-\int_{{\mathbb{R}}^{n}}F(|x|,u(x))dx$
on an admissible collection $S_{c}:=\left\\{u\in
H^{s}(\mathbb{R}^{n}):\int_{\mathbb{R}^{n}}{u^{2}}(x)\,dx=c^{2}\right\\}$.
The aim of this work is to study the symmetry properties of minimizers of
(3.1). We also note that the minimizers of (3.1) satisfy the Euler-Lagrange equation
(3.2) $\displaystyle(-{\Delta})^{s}u+f(|x|,u)+\lambda u=0,$
where $\lambda$ is a Lagrange multiplier and $F(r,s)=\int_{0}^{s}f(r,t)dt.$ It
will be interesting to study the above identity (3.2) and to find suitable
assumptions on $f$ for which all the solutions of (3.2) are radial and
radially decreasing. Note that for the classical Laplacian, L. Jeanjean and
C.A. Stuart have completely solved the problem. It is also worthwhile to study a
fractional Schrödinger equation
(3.5)
$\displaystyle\left\\{\begin{array}[]{cc}i\partial_{t}\Phi+(-{\Delta})^{s}\Phi+f(|x|,\Phi)=0\\\
\Phi(x,0)=\Phi_{0}(x)\end{array}\right.$
for which ground state solutions $u$ of (3.2) give rise to ground state
solitary wave $\Phi$ of (3.5). The minimizing problem (3.1) that we are going
to look at imposes the following assumptions on the function $F$:
$(F_{0})$ $F:[0,\infty)\times\mathbb{R}\rightarrow\mathbb{R}$ is a
Carathéodory function, that is to say:
$\bullet$ $F(\cdot,s):[0,\infty)\rightarrow\mathbb{R}$ is measurable for all
$s\in\mathbb{R}$ and
$\bullet$ $F(r,\cdot):\mathbb{R}\rightarrow\mathbb{R}$ is continuous for
almost every $r\in[0,\infty)$.
$(F_{1})$ $F(r,s)\leq F(r,|s|)$ for almost every $r\geq 0$ and all
$s\in\mathbb{R}$.
$(F_{2})$ There are $K>0$ and $0<l<\frac{4s}{n}$ satisfying for any $r,s\geq
0$,
$0\leq F(r,s)\leq K(s^{2}+s^{l+2}).$
$(F_{3})$ For every $\varepsilon>0$, there exist $R_{0},s_{0}>0$ such that
$F(r,s)\leq\varepsilon|s|^{2}$ for almost every $r\geq R_{0}$ and all $0\leq
s<s_{0}$.
$(F_{4})$ The mapping $(t,y)\mapsto F(\frac{1}{t},y)$ is super-modular on
$\mathbb{R}_{+}\\!\\!\\!\times\mathbb{R}_{+}$, in other words,
$F(r,a)+F(R,A)\geq F(r,A)+F(R,a)$
for all $r<R$ and $a<A$.
###### Theorem 3.1.
Under the conditions $(F_{0})$-$(F_{4})$, the minimizing problem (3.1) admits a
Schwarz symmetric minimizer for any fixed constant $c$. Moreover, if $(F_{4})$
holds with strict inequality, then for any $c$, all minimizers of (3.1) are
Schwarz symmetric.
A Schwarz symmetric function is a radially decreasing function. For more
detailed accounts, we refer to [4].
###### Proof.
1\. Well-posedness of the problem (3.1) (that is, $I_{c}>-\infty$): We first
show that all minimizing sequences are bounded in $H^{s}(\mathbb{R}^{n})$. By
$(F1)$ and $(F2)$, we can write
$\displaystyle\int F(|x|,u(x))dx\leq\int F\left(|x|,|u(x)|\right)dx\leq
Kc^{2}+K\int|u(x)|^{l+2}dx.$
By virtue of the fractional Gagliardo-Nirenberg inequality (Corollary 2.4) and
Young's inequality, there exists a constant $K^{\prime}$ such that
(3.6) $\displaystyle\int_{\mathbb{R}^{n}}|u(x)|^{l+2}dx$ $\displaystyle\leq
K^{\prime}\biggl{\\{}\int_{\mathbb{R}^{n}}u^{2}(x)dx\biggl{\\}}^{(1-\theta)\frac{(l+2)}{2}}\|\nabla_{s}u\|_{L^{2}}^{\theta(l+2)}.$
$\displaystyle\leq
K^{\prime}\frac{\varepsilon^{p}}{p}\biggl{\\{}\|\nabla_{s}u\|_{L^{2}}^{2}\biggl{\\}}^{p\theta\frac{(l+2)}{2}}\\!\\!+\frac{K^{\prime}}{q\varepsilon^{q}}\biggl{\\{}\int_{\mathbb{R}^{n}}u^{2}(x)dx\biggl{\\}}^{q(1-\theta)\frac{(l+2)}{2}}$
for any $\varepsilon>0$, $p>1$, $\frac{1}{p}+\frac{1}{q}=1$ and
$\theta=\frac{nl}{2s(l+2)}$. We choose $p=\frac{2}{\theta(l+2)}=\frac{4s}{nl}$
to get
$\displaystyle\int_{\mathbb{R}^{n}}|u(x)|^{l+2}dx$
$\displaystyle\leq\frac{K^{\prime}}{p}\varepsilon^{p}\biggl{\\{}\|\nabla_{s}u\|_{L^{2}}^{2}\biggl{\\}}+\frac{K^{\prime}}{q\varepsilon^{q}}\biggl{\\{}\int_{\mathbb{R}^{n}}u^{2}(x)dx\biggl{\\}}^{q(1-\theta)\frac{l+2}{2}}$
$\displaystyle=\frac{K^{\prime}}{p}\varepsilon^{p}\|\nabla_{s}u\|_{L^{2}}^{2}+\frac{K^{\prime}}{q\varepsilon^{q}}c^{q(1-\theta)(l+2)}.$
Therefore applying $(F2)$, we conclude
$\displaystyle E(u)$
$\displaystyle\geq\frac{1}{2}\|\nabla_{s}u\|_{L^{2}}^{2}-Kc^{2}-\frac{K^{\prime}K}{p}\varepsilon^{p}\|\nabla_{s}u\|_{L^{2}}^{2}-\frac{K^{\prime}K}{q\varepsilon^{q}}c^{q(1-\theta)(l+2)}$
$\displaystyle=\left(\frac{1}{2}-\frac{K^{\prime}K}{p}\varepsilon^{p}\right)\|\nabla_{s}u\|_{L^{2}}^{2}-Kc^{2}-\frac{K^{\prime}K}{q\varepsilon^{q}}c^{q(1-\theta)(l+2)}.$
Choosing $\varepsilon>0$ small enough that
$\frac{1}{2}-\frac{K^{\prime}K}{p}\varepsilon^{p}>0$, we conclude that
$I_{c}>-\infty$ and that every minimizing sequence is bounded in
$H^{s}(\mathbb{R}^{n})$.
###### Remark 3.2.
1\. If we allow $l=\frac{4s}{n}$ in $(F2)$, the problem (3.1) still makes
sense for sufficiently small values of $c$. In fact, with
$\theta=\frac{2}{l+2}$ and in view of (3.6) we have
$\int_{\mathbb{R}^{n}}|u(x)|^{l+2}dx\leq
K^{\prime}c^{\frac{4s}{n}}\|\nabla_{s}u\|_{L^{2}}^{2}$
for $u\in S_{c}$. Hence we get
$\displaystyle E(u)$
$\displaystyle\geq\frac{1}{2}\|\nabla_{s}u\|_{L^{2}}^{2}-Kc^{2}-K^{\prime}Kc^{\frac{4s}{n}}\|\nabla_{s}u\|_{L^{2}}^{2}$
$\displaystyle=\left(\frac{1}{2}-K^{\prime}Kc^{\frac{4s}{n}}\right)\|\nabla_{s}u\|_{L^{2}}^{2}-Kc^{2}.$
Thus $I_{c}>-\infty$ and all minimizing sequences are bounded in
$H^{s}(\mathbb{R}^{n})$ provided that
$0<c<(\frac{1}{2KK^{\prime}})^{\frac{n}{4s}}$.
2\. We can prove that $I_{c}=-\infty$ for $l>\frac{4s}{n}$.
2\. Existence of a Schwarz symmetric minimizing sequence. First note that if
$u\in H^{s}(\mathbb{R}^{n})$, then $|u|\in H^{s}(\mathbb{R}^{n})$. In view of
$(F1)$, we certainly have that
$E(|u|)\leq E(u),\quad\mbox{for all }u\in H^{s}(\mathbb{R}^{n}).$
Now by virtue of the fractional Polya-Szegö inequality (Theorem 2.1):
$\|\nabla_{s}|u|^{*}\|_{L^{2}}\leq\|\nabla_{s}|u|\|_{L^{2}}$
and Theorem 1 of [4], we can observe that
$\int_{\mathbb{R}^{n}}F(|x|,|u|(x))dx\leq\int_{\mathbb{R}^{n}}F(|x|,|u|^{*}(x))dx.$
Thus, without loss of generality, we may assume that (3.1) always admits a
Schwarz symmetric minimizing sequence.
3\. Let $\\{u_{m}\\}=\\{u_{m}^{*}\\}$ be a Schwarz symmetric minimizing
sequence. If $\\{u_{m}\\}$ converges weakly to $u$ in $H^{s}(\mathbb{R}^{n})$,
then
$E(u)\,\,\leq\,\,\liminf_{m\to\infty}E(u_{m}).$
The weak lower semi-continuity of $L^{2}$-norm yields
$\|\nabla_{s}u\|_{L^{2}}\,\,\leq\,\,\liminf_{m\to\infty}\|\nabla_{s}u_{m}\|_{L^{2}}.$
Hence the assertion will follow by showing that
$\lim_{m\rightarrow\infty}\int_{\mathbb{R}^{n}}F(|x|,u_{m}(x))dx=\int_{\mathbb{R}^{n}}F(|x|,u(x))dx.$
For $R>0$, let us first prove that:
$\lim_{m\rightarrow\infty}\int_{|x|\leq R}F(|x|,u_{m}(x))dx=\int_{|x|\leq
R}F(|x|,u(x))\,dx.$
By the fractional Rellich-Kondrachov theorem (Theorem 2.5), $\\{u_{m}\\}$
converges strongly to $u$ in $L^{l+2}(\\{x:|x|\leq R\\})$. Thus there exists a
subsequence $\\{u_{m_{k}}\\}$ of $\\{u_{m}\\}$ such that
$u_{m_{k}}(x)\rightarrow u(x)$ for almost every $|x|\leq R$ and there is $h\in
L^{l+2}(\\{x:|x|\leq R\\})$ satisfying $|u_{m_{k}}|\leq h$. We apply $(F2)$ to
have
$F(|x|,u_{m_{k}}(x))\leq K(h^{2}(x)+h^{l+2}(x)).$
Noticing that $h^{2}+h^{l+2}\in L^{1}(\\{x:|x|\leq R\\})$, the dominated
convergence theorem gives
$\lim_{m\rightarrow\infty}\int_{|x|\leq R}F(|x|,u_{m}(x))dx=\int_{|x|\leq
R}F(|x|,u(x))\,dx.$
Since $u_{m}=u_{m}^{*}$, we now have
$\omega_{n}|x|^{n}u^{2}_{m}(x)\leq\int_{|y|\leq|x|}u^{2}_{m}(y)dy\leq c^{2},$
where $\omega_{n}$ is the measure of the $n$-dimensional unit ball. Thus we
get
$u_{m}(x)\leq\frac{c}{{\omega_{n}^{\frac{1}{2}}}|x|^{\frac{n}{2}}}\leq\frac{c}{{\omega_{n}^{\frac{1}{2}}}R^{\frac{n}{2}}},\,\,\,\,\,\,\,\mbox{for
all }|x|>R.$
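This Chebyshev-type decay bound can be illustrated numerically in dimension $n=1$, where $\omega_{1}=2$. The sketch below (the test function and helper are ours, not from the paper) checks the chain $\omega_{n}|x|^{n}u(x)^{2}\leq\int_{|y|\leq|x|}u(y)^{2}dy\leq c^{2}$ for a sample Schwarz symmetric function:

```python
# 1-D illustration (n = 1, so omega_1 = 2): for a Schwarz symmetric u with
# ∫ u² = c², monotonicity gives omega_n |x|^n u(x)² ≤ ∫_{|y|≤|x|} u(y)² dy ≤ c²,
# hence u(x) ≤ c / (omega_n^{1/2} |x|^{n/2}).
u = lambda x: 1.0 / (1.0 + abs(x))   # radial, decreasing; ∫ u² = 2 exactly

def l2_mass(R: float, steps: int = 100000) -> float:
    """Trapezoid approximation of the integral of u(y)^2 over [-R, R]."""
    h = 2.0 * R / steps
    vals = [u(-R + i * h) ** 2 for i in range(steps + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

c2, omega = 2.0, 2.0
for x in [0.5, 1.0, 3.0, 10.0]:
    assert omega * x * u(x) ** 2 <= l2_mass(x) <= c2   # the two inequalities
    assert u(x) <= c2 ** 0.5 / (omega * x) ** 0.5      # the resulting decay bound
```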
Therefore for $\varepsilon>0$ and $R$ sufficiently large, we obtain by using
$(F3)$ that
$\int_{|x|>R}F(|x|,u_{m}(x))dx\leq\varepsilon\int_{|x|>R}u^{2}_{m}(x)dx<\varepsilon
c^{2},$
which in turn implies that
$\lim_{R\rightarrow\infty}\lim_{m\rightarrow\infty}\int_{|x|>R}F(|x|,u_{m}(x))dx=0.$
Since $u$ inherits all the properties used to get the above limit, it follows
also that
$\lim_{R\rightarrow\infty}\int_{|x|>R}F(|x|,u(x))\,dx=0.$
4\. We claim that $u\in S_{c}$. Notice that
$S_{c}=H^{s}(\mathbb{R}^{n})\cap\Lambda^{-1}(\\{c\\})$, where $\Lambda$ is
defined by $\Lambda(u):=\|u\|_{L^{2}}$ for $u\in L^{2}(\mathbb{R}^{n})$. We
choose a Schwarz symmetric minimizing sequence $\\{u_{m}\\}\subset S_{c}$
converging weakly to $u$ in $H^{s}(\mathbb{R}^{n})$, and so it converges
strongly to $u$ in $L^{2}(\mathbb{R}^{n})$. Hence we have that $u\in
H^{s}(\mathbb{R}^{n})$ and $u\in\Lambda^{-1}(\\{c\\})$. Indeed, since
$\Lambda$ is continuous, $\Lambda^{-1}(\\{c\\})$ is a closed set in
$L^{2}(\mathbb{R}^{n})$.
## References
* [1] W. Beckner, _Inequalities in Fourier analysis,_ Ann. Math., 102 : 159-182, 1975.
* [2] W. Beckner, _Geometric inequalities in Fourier analysis, Essays on Fourier Analysis in honor of Elias M. Stein,_ Princeton University Press, 36-68, 1995.
* [3] W. Beckner, _Sobolev inequalities, the Poisson semigroup, and analysis on the sphere on ${\bf S}^{n}$, _ Proc. Natl. Acad. Sci., 89 : 4816-4819, 1992.
* [4] A. Burchard, H. Hajaiej, _Rearrangement inequalities for functional with monotone integrands_ , J. Funct. Anal. 233, 561-582, 2006.
* [5] H. Hajaiej and C.A. Stuart, _Existence and non-existence of Schwarz symmetric ground states for elliptic eigenvalue problems,_ Ann. Mat. Pura Appl., 184 : 297-314, 2005
* [6] B. Kawohl, _Rearrangements and convexity of level sets in PDE,_ Springer-Verlag, 1985.
* [7] E.H. Lieb, _Existence and uniqueness of the minimizing solution of Choquard's nonlinear equation,_ Studies in Appl. Math. 57 : 93-105, 1976/77.
* [8] E. H. Lieb, _Sharp constants in the Hardy-Littlewood-Sobolev and related inequalities_ , Ann. of Math. 118 : 349-374, 1983.
* [9] E.H. Lieb and M. Loss, _Analysis,_ Volume 14 of Graduate Studies in Mathematics, AMS, 1997.
* [10] G. Polya and G. Szegö, _Inequalities for the capacity of a condenser,_ Amer. J. Math. 67: 1-32, 1945.
* [11] R. E. Showalter, _Monotone Operators in Banach Space and Nonlinear Partial Differential Equations_ , volume 49 of Math. Surveys and Monographs, AMS, 1997.
|
arxiv-papers
| 2011-04-07T19:43:49 |
2024-09-04T02:49:18.156244
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Hichem Hajaiej",
"submitter": "Hichem Hajaiej",
"url": "https://arxiv.org/abs/1104.1414"
}
|
1104.1501
|
# Some results for the Apostol-Genocchi polynomials of higher order
1∗Hassan Jolany and †Hesam Sharifi
###### Abstract.
The present paper deals with multiplication formulas for the Apostol-Genocchi
polynomials of higher order and deduces some explicit recursive formulas. Some
earlier results of Carlitz and Howard in terms of Genocchi numbers can be
deduced. We introduce the 2-variable Apostol-Genocchi polynomials and then we
consider the multiplication theorem for 2-variable Genocchi polynomials. Also
we introduce generalized Apostol-Genocchi polynomials with $a,b,c$ parameters
and we obtain several identities on generalized Apostol-Genocchi polynomials
with $a,b,c$ parameters .
_Keywords and Phrases_ : Apostol-Genocchi numbers and polynomials (of higher
order), generalizations of Genocchi numbers and polynomials, Raabe's
multiplication formula, multiplication formulas, Bernoulli numbers and
polynomials, Euler numbers and polynomials, Stirling numbers
###### 2000 Mathematics Subject Classification:
11B68, 05A10, 05A15
1 Corresponding author.
*School of Mathematics, Statistics and Computer Science, University of Tehran, Iran. E-mail: hassan.jolany@khayam.ut.ac.ir
†Department of Mathematics, Faculty of Science, University of Shahed, Tehran,
Iran. E-mail: hsharifi@shahed.ac.ir
## 1\. Preliminaries and motivation
The classical Genocchi numbers can be defined in a number of ways, and the
choice of definition is usually dictated by the intended application. The
Genocchi numbers have wide-ranging applications, from number theory and
combinatorics to numerical analysis and other fields of applied mathematics.
There exist two important definitions of
the Genocchi numbers: the generating function definition, which is the most
commonly used definition, and a Pascal-type triangle definition, first given
by Philip Ludwig von Seidel, and discussed in [29]. This triangle definition is
very appealing for combinatorial applications: the idea, as in Pascal's
triangle, is to use a recurrence together with some initial conditions to
generate the Genocchi numbers. The
combinatorics of the Genocchi numbers were developed by Dumont in [4] and
various co-authors in the 70s and 80s. Dumont and Foata introduced in 1976 a
three-variable symmetric refinement of Genocchi numbers, which satisfies a
simple recurrence relation. A six-variable generalization with many similar
properties was later considered by Dumont. In [30] Jang et al. defined a new
generalization of Genocchi numbers, poly Genocchi numbers. Kim in [10] gave a
new concept for the q-extension of Genocchi numbers and gave some relations
between q-Genocchi polynomials and q-Euler numbers. In [31], Simsek et al.
investigated the q-Genocchi zeta function and L-function by using generating
functions and Mellin transformation. Genocchi numbers are known to count a
large variety of combinatorial objects, among which numerous sets of
permutations. One of the applications of Genocchi numbers that was
investigated by Jeff Remmel in [32] is counting the number of up-down ascent
sequences. Another application of Genocchi numbers is in Graph Theory. For
instance, Boolean numbers of the associated Ferrers Graphs are the Genocchi
numbers of the second kind [33]. A third application of Genocchi numbers is in
Automata Theory. One of the generalizations of Genocchi numbers that was first
proposed by Han in [34] proves useful in enumerating the class of
deterministic finite automata (DFA) that accept a finite language and in
enumerating a generalization of permutations counted by Dumont. Recently S.
Herrmann in [6], presented a relation between the $f$ -vector of the boundary
and the interior of a simplicial ball directly in terms of the $f$-vectors.
The most interesting point about this equation is the occurrence of the
Genocchi numbers $G_{2n}$. In the last decade, a surprising number of papers
appeared proposing new generalizations of the classical Genocchi polynomials
to real and complex variables or treating other topics related to Genocchi
polynomials. Qiu-Ming Luo in [19] introduced new generalizations of Genocchi
polynomials, he defined the Apostol-Genocchi polynomials of higher order and
q-Apostol-Genocchi polynomials and he obtained a relationship between Apostol-
Genocchi polynomials of higher order and Goyal-Laddha-Hurwitz-Lerch Zeta
function. Next Qiu-Ming Luo and H.M. Srivastava in [35] by Apostol-Genocchi
polynomials of higher order derived various explicit series representations in
terms of the Gaussian hypergeometric function and the Hurwitz (or generalized)
zeta function which yields a deeper insight into the effectiveness of this
type of generalization. Also, the Apostol-Genocchi polynomials of higher order
belong to a class of special polynomial sequences (of Appell type) for which
multiplication theorems are typical, so in the present paper we show that this
property holds for the Apostol-Genocchi polynomials of higher order.
The study of Genocchi numbers and their combinatorial relations has received
much attention [2, 4, 6, 10, 13, 19, 22, 23, 26, 27, 29, 37]. In this paper we
consider some combinatorial relationships of the Apostol-Genocchi numbers of
higher order.
The unsigned Genocchi numbers $\\{G_{2n}\\}_{n\geqslant 1}$ can be defined
through their generating function:
$\sum_{n=1}^{\infty}G_{2n}\frac{x^{2n}}{(2n)!}=x\tan\Big{(}\frac{x}{2}\Big{)}$
and also
$\sum_{n\geqslant
1}(-1)^{n}G_{2n}\frac{t^{2n}}{(2n)!}=-t\tanh\Big{(}\frac{t}{2}\Big{)}$
So, by simple computation
$\displaystyle\tanh\Big{(}\frac{t}{2}\Big{)}$ $\displaystyle=$
$\displaystyle\sum_{s\geqslant
0}\frac{(\frac{t}{2})^{2s+1}}{(2s+1)!}.\sum_{m\geqslant
0}(-1)^{m}E_{2m}\frac{(\frac{t}{2})^{2m}}{(2m)!}$ $\displaystyle=$
$\displaystyle\sum_{s,m\geqslant
0}\frac{(-1)^{m}}{2^{2m+2s+1}}\frac{E_{2m}t^{2m+2s+1}}{(2m)!(2s+1)!}$
$\displaystyle=$ $\displaystyle\sum_{n\geqslant 1}\sum_{m=0}^{n-1}{2n-1\choose
2m}\frac{(-1)^{m}E_{2m}t^{2n-1}}{2^{2n-1}(2n-1)!},$
we obtain for $n\geqslant 1$,
$G_{2n}=\sum_{k=0}^{n-1}(-1)^{n-k-1}(n-k){2n\choose
2k}\frac{E_{2k}}{2^{2n-2}}$
where $E_{k}$ are Euler numbers. Also the Genocchi numbers $G_{n}$ are defined
by the generating function
$G(t)=\frac{2t}{e^{t}+1}=\sum_{n=0}^{\infty}G_{n}\frac{t^{n}}{n!},(|t|<\pi).$
In general, $G_{0}=0$, $G_{1}=1$, $G_{3}=G_{5}=G_{7}=\cdots=G_{2n+1}=\cdots=0$,
and the even-indexed values are given by
$G_{2n}=2(1-2^{2n})B_{2n}=2nE_{2n-1}(0)$, where
$B_{n}$ are the Bernoulli numbers and $E_{n}(x)$ are the Euler polynomials. The first few
Genocchi numbers for even integers are -1,1,-3,17,-155,2073,… . The first few
prime Genocchi numbers are -3 and 17, which occur at n=6 and 8. There are no
others with $n<10^{5}$. For $x\in\mathbb{R}$, we consider the Genocchi
polynomials as follows
$G(x,t)=G(t)e^{xt}=\frac{2t}{e^{t}+1}e^{xt}=\sum_{n=0}^{\infty}G_{n}(x)\frac{t^{n}}{n!}.$
In the special case $x=0$, we have $G_{n}(0)=G_{n}$. Since
$G_{n}(x)=\sum_{k=0}^{n}{n\choose k}G_{k}x^{n-k}$
and $G_{0}=0$, it is easy to deduce that $G_{n}(x)$ is a polynomial of degree
$n-1$. Here, we present some of the first Genocchi polynomials:
$G_{1}(x)=1,G_{2}(x)=2x-1,G_{3}(x)=3x^{2}-3x,G_{4}(x)=4x^{3}-6x^{2}+1,$
$G_{5}(x)=5x^{4}-10x^{3}+5x,G_{6}(x)=6x^{5}-15x^{4}+15x^{2}-3,\ ...$
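These values can be reproduced from the generating function. Equating coefficients of $t^{n}/n!$ in $2t=(e^{t}+1)\sum_{n}G_{n}t^{n}/n!$ gives the recurrence $2G_{n}=2\delta_{n,1}-\sum_{k=0}^{n-1}\binom{n}{k}G_{k}$; the following Python sketch (helper names are ours) computes the Genocchi numbers and polynomials exactly:

```python
from fractions import Fraction
from math import comb

def genocchi_numbers(N: int) -> list:
    """G_0..G_N from 2t/(e^t+1) = sum G_n t^n/n!, via the recurrence
    2*G_n = 2*[n==1] - sum_{k<n} C(n,k) G_k."""
    G = [Fraction(0)]
    for n in range(1, N + 1):
        s = sum(comb(n, k) * G[k] for k in range(n))
        G.append(((2 if n == 1 else 0) - s) / Fraction(2))
    return G

def genocchi_poly(n: int, x: Fraction, G: list) -> Fraction:
    """G_n(x) = sum_{k=0}^n C(n,k) G_k x^{n-k}."""
    return sum(comb(n, k) * G[k] * x ** (n - k) for k in range(n + 1))

G = genocchi_numbers(12)
print([int(G[2 * n]) for n in range(1, 7)])   # [-1, 1, -3, 17, -155, 2073]
```

The exact rational arithmetic reproduces the listed even-indexed values and, for instance, $G_{4}(x)=4x^{3}-6x^{2}+1$.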
The classical Bernoulli polynomials (of higher order) $B_{n}^{(\alpha)}(x)$
and Euler polynomials (of higher order)
$E_{n}^{(\alpha)}(x),(\alpha\in\mathbb{C})$, are usually defined by means of
the following generating functions [11, 13, 15, 21, 24, 25, 28].
$\Big{(}\frac{z}{e^{z}-1}\Big{)}^{\alpha}e^{xz}=\sum_{n=0}^{\infty}B_{n}^{(\alpha)}(x)\frac{z^{n}}{n!},(|z|<2\pi)$
and
$\Big{(}\frac{2}{e^{z}+1}\Big{)}^{\alpha}e^{xz}=\sum_{n=0}^{\infty}E_{n}^{(\alpha)}(x)\frac{z^{n}}{n!},(|z|<\pi).$
So that, obviously,
$B_{n}(x):=B_{n}^{(1)}(x)\ \ \ \text{and}\ \ \ E_{n}(x):=E_{n}^{(1)}(x).$
In 2002, Q.-M. Luo et al. (see [5, 17, 18]) defined the following
generalizations of the Bernoulli polynomials and Euler numbers:
$\frac{tc^{xt}}{b^{t}-a^{t}}=\sum_{n=0}^{\infty}\frac{B_{n}(x;a,b,c)}{n!}t^{n},(|t\ln\frac{b}{a}|<2\pi)$
$\frac{2}{b^{t}+a^{t}}=\sum_{n=0}^{\infty}E_{n}(a,b)\frac{t^{n}}{n!},(|t\ln\frac{b}{a}|<\pi).$
Here, we give an analogous definition for generalized Apostol-Genocchi
polynomials. Let $a,b>0$. The generalized Apostol-Genocchi numbers and
Apostol-Genocchi polynomials with $a,b,c$ parameters are defined by
$\frac{2t}{\lambda
b^{t}+a^{t}}=\sum_{n=0}^{\infty}G_{n}(a,b;\lambda)\frac{t^{n}}{n!}$
$\frac{2t}{\lambda
b^{t}+a^{t}}e^{xt}=\sum_{n=0}^{\infty}G_{n}(x,a,b;\lambda)\frac{t^{n}}{n!}$
$\frac{2t}{\lambda
b^{t}+a^{t}}c^{xt}=\sum_{n=0}^{\infty}G_{n}(x,a,b,c;\lambda)\frac{t^{n}}{n!}$
respectively.
For a real or complex parameter $\alpha$, the Apostol-Genocchi polynomials
with $a,b$ parameters of order $\alpha$, $G_{n}^{(\alpha)}(x;a,b;\lambda)$,
each of degree $n$ in $x$ as well as in $\alpha$, are defined by the following
generating function
$\Big{(}\frac{2t}{\lambda
b^{t}+a^{t}}\Big{)}^{\alpha}e^{xt}=\sum_{n=0}^{\infty}G_{n}^{(\alpha)}(x,a,b;\lambda)\frac{t^{n}}{n!}.$
Clearly, we have $G_{n}^{(1)}(x,a,b;\lambda)=G_{n}(x;a,b;\lambda)$.
Now, we introduce the 2-variable Apostol-Genocchi polynomials and then we
consider the multiplication theorem for the 2-variable Apostol-Genocchi
polynomials. We start with the definition of the Apostol-Genocchi polynomials
$G_{n}(x;\lambda)$, which are defined in the variable $x$ by means of the
generating function
$\frac{2ze^{xz}}{\lambda
e^{z}+1}=\sum_{n=0}^{\infty}G_{n}(x;\lambda)\frac{z^{n}}{n!}\ \ \ (|z|<\pi\
\text{when}\ \lambda=1;\ |z|<|\log(-\lambda)|\ \text{when}\ \lambda\neq 1),$
with, of course,
$G_{n}(\lambda):=G_{n}(0;\lambda),$
where $G_{n}(\lambda)$ denotes the so-called Apostol-Genocchi numbers.
Also (see [1, 14, 16, 19, 20, 24, 28]), the Apostol-Genocchi polynomials
$G_{n}^{(\alpha)}(x;\lambda)$ of order $\alpha$ in the variable $x$ are defined
by means of the generating function:
$\Big{(}\frac{2z}{\lambda
e^{z}+1}\Big{)}^{\alpha}e^{xz}=\sum_{n=0}^{\infty}G_{n}^{(\alpha)}(x;\lambda)\frac{z^{n}}{n!}$
with, of course, $G_{n}^{(\alpha)}(\lambda):=G_{n}^{(\alpha)}(0;\lambda)$,
where $G_{n}^{(\alpha)}(\lambda)$ denotes the so-called Apostol-Genocchi
numbers of higher order. If we set
$\phi(x,t;\alpha)=\Big{(}\frac{2t}{e^{t}+1}\Big{)}^{\alpha}e^{xt}$
Then
$\frac{\partial\phi}{\partial x}=t\phi$
and
$t\frac{\partial\phi}{\partial t}-\Big{\\{}\frac{\alpha+tx}{t}-\frac{\alpha
e^{t}}{e^{t}+1}\Big{\\}}\frac{\partial\phi}{\partial x}=0.$
Next, we introduce the class ${}_{H}G_{n}(\lambda)$ of Apostol-Genocchi numbers
as follows (for more information see [38]):
${}_{H}G_{n}(\lambda)=\sum_{s=0}^{[\frac{n}{2}]}\frac{n!G_{n-2s}(\lambda)G_{s}(\lambda)}{s!(n-2s)!}$
The generating function of ${}_{H}G_{n}(\lambda)$ is provided by
$\frac{4t^{3}}{(\lambda e^{t}+1)(\lambda e^{t^{2}}+1)}=\sum_{n=0}^{\infty}\
{}_{H}G_{n}(\lambda)\frac{t^{n}}{n!}$
and the generalization of ${}_{H}G_{n}(\lambda)$ for $(a,b)\neq 0$, is
$\frac{4t^{3}}{(\lambda e^{at}+1)(\lambda e^{bt^{2}}+1)}=\sum_{n=0}^{\infty}\
{}_{H}G_{n}(a,b;\lambda)\frac{t^{n}}{n!}$
where
${}_{H}G_{n}(a,b;\lambda)=\frac{1}{ab}\sum_{s=0}^{[\frac{n}{2}]}\frac{n!a^{n-2s}b^{s}G_{n-2s}(\lambda)G_{s}(\lambda)}{s!(n-2s)!}$
The main object of the present paper is to investigate the multiplication
formulas for the Apostol-type polynomials.
Luo in [16] defined the multiple alternating sums as
$Z_{k}^{(l)}(m;\lambda)=(-1)^{l}\sum_{\substack{0\leq v_{1},v_{2},...,v_{m}\leq l\\ v_{1}+v_{2}+...+v_{m}=l}}{l\choose v_{1},v_{2},...,v_{m}}(-\lambda)^{v_{1}+2v_{2}+...+mv_{m}},$
$Z_{k}(m;\lambda)=\sum_{j=1}^{m}(-1)^{j+1}\lambda^{j}j^{k}=\lambda-\lambda^{2}2^{k}+...+(-1)^{m+1}\lambda^{m}m^{k},$
$Z_{k}(m)=\sum_{j=1}^{m}(-1)^{j+1}j^{k}=1-2^{k}+...+(-1)^{m+1}m^{k},\
(m,k,l\in\mathbb{N}_{0};\lambda\in\mathbb{C}),$
where $\mathbb{N}_{0}:=\mathbb{N}\cup\\{0\\}$ and $\mathbb{N}:=\\{1,2,3,...\\}$.
## 2\. The multiplication formulas for the Apostol-Genocchi polynomials of
higher order
In this section, we obtain some interesting new relations and properties
associated with the Apostol-Genocchi polynomials of higher order, and then
derive several elementary properties including recurrence relations for
Genocchi numbers. First of all, we prove the multiplication theorem of these
polynomials.
###### Theorem 2.1.
For $m\in\mathbb{N}$, $n\in\mathbb{N}_{0}$, $\alpha,\lambda\in\mathbb{C}$, the
following multiplication formula of the Apostol-Genocchi polynomials of higher
order holds true:
(1)
$G_{n}^{(\alpha)}(mx;\lambda)=m^{n-\alpha}\sum_{v_{1},v_{2},...,v_{m-1}\geq
0}{\alpha\choose
v_{1},v_{2},...,v_{m-1}}(-\lambda)^{r}G_{n}^{(\alpha)}\Big{(}x+\frac{r}{m};\lambda^{m}\Big{)}$
where $r=v_{1}+2v_{2}+...+(m-1)v_{m-1}$ ($m$ is odd).
###### Proof.
It is easy to observe that
$\frac{1}{\lambda e^{t}+1}=-\frac{1-\lambda
e^{t}+\lambda^{2}e^{2t}+...+(-\lambda)^{m-1}e^{(m-1)t}}{(-\lambda)^{m}e^{mt}-1}.\qquad(\ast)$
But we have, if $x_{i}\in\mathbb{C}$
$(x_{1}+x_{2}+...+x_{m})^{n}=\sum_{\substack{a_{1},a_{2},...,a_{m}\geqslant 0\\ a_{1}+a_{2}+...+a_{m}=n}}{n\choose a_{1},a_{2},...,a_{m}}x_{1}^{a_{1}}x_{2}^{a_{2}}...x_{m}^{a_{m}}.\qquad(\ast\ast)$
The last summation takes place over all non-negative integers $a_{i}\geqslant
0$ such that $a_{1}+a_{2}+...+a_{m}=n$, where
${n\choose a_{1},a_{2},...,a_{m}}:=\frac{n!}{a_{1}!a_{2}!...a_{m}!}.$
So by applying $(\ast,\ast\ast)$, we get
$\displaystyle\sum_{n=0}^{\infty}G_{n}^{(\alpha)}(mx;\lambda)\frac{t^{n}}{n!}$
$\displaystyle=$ $\displaystyle\Big{(}\frac{2t}{\lambda
e^{t}+1}\Big{)}^{\alpha}e^{mxt}$ $\displaystyle=$
$\displaystyle\Big{(}\frac{2t}{\lambda^{m}e^{mt}+1}\Big{)}^{\alpha}\Big{(}\sum_{k=0}^{m-1}(-\lambda)^{k}e^{kt}\Big{)}^{\alpha}e^{mxt}$
$\displaystyle=$ $\displaystyle\sum_{v_{1},v_{2},...,v_{m-1}\geqslant
0}{\alpha\choose
v_{1},v_{2},...,v_{m-1}}(-\lambda)^{r}\Big{(}\frac{2t}{\lambda^{m}e^{mt}+1}\Big{)}^{\alpha}e^{(x+\frac{r}{m})mt}$
$\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\bigg{(}m^{n-\alpha}\sum_{v_{1},v_{2},...,v_{m-1}\geqslant
0}{\alpha\choose
v_{1},v_{2},...,v_{m-1}}(-\lambda)^{r}G_{n}^{(\alpha)}\Big{(}x+\frac{r}{m};\lambda^{m}\Big{)}\bigg{)}\frac{t^{n}}{n!}.$
By comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the last
equation, the proof is complete. ∎
By setting $\lambda=1$ in Theorem 2.1, we obtain the following explicit
formula, which is called the multiplication theorem for the Genocchi
polynomials of higher order.
###### Corollary 2.2.
For $m\in\mathbb{N}$, $n\in\mathbb{N}_{0}$, $\alpha\in\mathbb{C}$, we have
$G_{n}^{(\alpha)}(mx)=m^{n-\alpha}\sum_{v_{1},v_{2},...,v_{m-1}\geqslant
0}{\alpha\choose
v_{1},v_{2},...,v_{m-1}}(-1)^{r}G_{n}^{(\alpha)}\Big{(}x+\frac{r}{m}\Big{)}\ \
\text{(m is odd)}.$
Setting $\alpha=1$ in Corollary 2.2, we get Corollary 2.3, which is the main
result of [36] and is called the multiplication theorem for the Genocchi
polynomials.
###### Corollary 2.3.
For $m\in\mathbb{N}$, $n\in\mathbb{N}_{0}$, we have
$G_{n}(mx)=m^{n-1}\sum_{k=0}^{m-1}(-1)^{k}G_{n}\Big{(}x+\frac{k}{m}\Big{)}\ \
\text{(m is odd)}.$
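As an illustrative numerical check (not part of the original derivation; it assumes a Python environment with sympy), one can generate the Genocchi polynomials from their generating function $2te^{xt}/(e^{t}+1)$ and verify the multiplication formula of Corollary 2.3 symbolically for a sample odd $m$:

```python
import sympy as sp

t, x = sp.symbols('t x')
N = 6  # highest degree checked

def genocchi_polys(nmax):
    # G_n(x) from the generating function 2*t*exp(x*t)/(exp(t)+1)
    gf = 2*t*sp.exp(x*t)/(sp.exp(t) + 1)
    ser = sp.series(gf, t, 0, nmax + 1).removeO()
    return [sp.expand(sp.factorial(n)*ser.coeff(t, n)) for n in range(nmax + 1)]

G = genocchi_polys(N)

m = 3  # m must be odd for this multiplication theorem
for n in range(1, N + 1):
    lhs = G[n].subs(x, m*x)
    rhs = sp.Integer(m)**(n - 1)*sum((-1)**k*G[n].subs(x, x + sp.Rational(k, m))
                                     for k in range(m))
    assert sp.expand(lhs - rhs) == 0  # G_n(mx) = m^(n-1) sum_k (-1)^k G_n(x+k/m)
```

The check passes identically in $x$, not merely at sample points, since the comparison is done on expanded polynomials.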
Now, we consider the multiplication formula for the Apostol-Genocchi
polynomials when $m$ is even.
###### Theorem 2.4.
For $m\in\mathbb{N}$ (m even), $n\in\mathbb{N}$,
$\alpha,\lambda\in\mathbb{C}$, the following multiplication formula of the
Apostol-Genocchi polynomials of higher order holds true:
$G_{n}^{(\alpha)}(mx;\lambda)=(-2)^{\alpha}m^{n-\alpha}\sum_{v_{1},v_{2},...,v_{m-1}\geqslant
0}{\alpha\choose
v_{1},v_{2},...,v_{m-1}}(-\lambda)^{r}B_{n}^{(\alpha)}\Big{(}x+\frac{r}{m},\lambda^{m}\Big{)},$
where $r=v_{1}+2v_{2}+...+(m-1)v_{m-1}.$
###### Proof.
It is easy to observe that
$\frac{1}{\lambda e^{t}+1}=-\frac{1-\lambda
e^{t}+\lambda^{2}e^{2t}+...+(-\lambda)^{m-1}e^{(m-1)t}}{(-\lambda)^{m}e^{mt}-1}.$
So, we obtain
$\displaystyle\sum_{n=0}^{\infty}G_{n}^{(\alpha)}(mx;\lambda)\frac{t^{n}}{n!}$
$\displaystyle=$ $\displaystyle\Big{(}\frac{2t}{\lambda
e^{t}+1}\Big{)}^{\alpha}e^{mxt}$ $\displaystyle=$ $\displaystyle
2^{\alpha}\Big{(}\frac{t}{\lambda e^{t}+1}\Big{)}^{\alpha}e^{mxt}$
$\displaystyle=$
$\displaystyle(-2)^{\alpha}\Big{(}\frac{t}{\lambda^{m}e^{mt}-1}\Big{)}^{\alpha}\Big{(}\sum_{k=0}^{m-1}(-\lambda
e^{t})^{k}\Big{)}^{\alpha}e^{mxt}$ $\displaystyle=$
$\displaystyle(-2)^{\alpha}\sum_{v_{1},v_{2},...,v_{m-1}\geqslant
0}{\alpha\choose
v_{1},v_{2},...,v_{m-1}}(-\lambda)^{r}\Big{(}\frac{t}{\lambda^{m}e^{mt}-1}\Big{)}^{\alpha}e^{(x+\frac{r}{m})mt}$
$\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\Big{(}(-2)^{\alpha}m^{n-\alpha}\sum_{v_{1},v_{2},...,v_{m-1}\geqslant
0}{\alpha\choose v_{1},v_{2},...,v_{m-1}}(-\lambda)^{r}$ $\displaystyle\times$
$\displaystyle
B_{n}^{(\alpha)}(x+\frac{r}{m};\lambda^{m})\Big{)}\frac{t^{n}}{n!}$
By comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides, the proof
is complete. ∎
Next, setting $\lambda=1$ in Theorem 2.4, we find that the Genocchi
polynomials of higher order can be expressed in terms of the Bernoulli
polynomials of higher order when $m$ is even.
###### Corollary 2.5.
For $m\in\mathbb{N}$ (m even), $n\in\mathbb{N}_{0}$, $\alpha\in\mathbb{C}$, we
get
$G_{n}^{(\alpha)}(mx)=(-2)^{\alpha}m^{n-\alpha}\sum_{v_{1},v_{2},...,v_{m-1}\geqslant
0}{\alpha\choose
v_{1},v_{2},...,v_{m-1}}(-1)^{r}B_{n}^{(\alpha)}\Big{(}x+\frac{r}{m}\Big{)}.$
Also, by setting $\alpha=1$ in Corollary 2.5, we obtain the following
assertion, which is one of the most remarkable identities in the area of
Genocchi polynomials.
###### Corollary 2.6.
For $m\in\mathbb{N}$, $n\in\mathbb{N}_{0}$, we obtain
$G_{n}(mx)=-2m^{n-1}\sum_{k=0}^{m-1}(-1)^{k}B_{n}\Big{(}x+\frac{k}{m}\Big{)}\
\ \text{($m$ is even)}.$
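Corollary 2.6 can likewise be checked numerically (an illustrative sketch, assuming sympy; `sp.bernoulli(n, x)` gives the classical Bernoulli polynomials):

```python
import sympy as sp

t, x = sp.symbols('t x')

def genocchi(n):
    # G_n(x) from the generating function 2*t*exp(x*t)/(exp(t)+1)
    ser = sp.series(2*t*sp.exp(x*t)/(sp.exp(t) + 1), t, 0, n + 1).removeO()
    return sp.expand(sp.factorial(n)*ser.coeff(t, n))

m = 4  # m must be even here
for n in range(1, 6):
    lhs = genocchi(n).subs(x, m*x)
    rhs = -2*sp.Integer(m)**(n - 1)*sum((-1)**k*sp.bernoulli(n, x + sp.Rational(k, m))
                                        for k in range(m))
    assert sp.expand(lhs - rhs) == 0
```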
Obviously, the result of Corollary 2.6 is analogous to the well-known
Raabe multiplication formula. Now, we present explicit evaluations of
$Z_{n}^{(l)}(m;\lambda)$, $Z_{n}^{(l)}(m)$, and $Z_{n}(m)$ in terms of
Apostol-Genocchi polynomials.
###### Theorem 2.7.
For $m,n,l\in\mathbb{N}_{0},\lambda\in\mathbb{C}$, we have
$Z_{n}^{(l)}(m;\lambda)=2^{-l}\sum_{j=0}^{l}{l\choose
j}\frac{(-1)^{j(m+1)}\lambda^{mj+l}}{(n+1)_{l}}\sum_{k=0}^{n+l}{n+l\choose
k}G_{k}^{(j)}(mj+l;\lambda)G_{n+l-k}^{(l-j)}(\lambda)$
where $(n)_{0}=1,(n)_{k}=n(n+1)...(n+k-1).$
###### Proof.
By definition of $Z_{n}^{(l)}(m;\lambda)$, we calculate the following sum
$\sum_{n=0}^{\infty}Z_{n}^{(l)}(m;\lambda)\frac{t^{n}}{n!}=$
$\sum_{n=0}^{\infty}\Big{[}(-1)^{l}\sum_{{}^{0\leqslant
v_{1},v_{2},...,v_{m}\leqslant l}_{v_{1}+v_{2}+...+v_{m}=l}}{l\choose
v_{1},v_{2},...,v_{m}}(-\lambda)^{v_{1}+2v_{2}+...+mv_{m}}(v_{1}+2v_{2}+...+mv_{m})^{n}\Big{]}\frac{t^{n}}{n!}$
$\displaystyle=$ $\displaystyle(-1)^{l}\sum_{{}^{0\leqslant
v_{1},v_{2},...,v_{m}\leqslant l}_{v_{1}+v_{2}+...+v_{m}=l}}{l\choose
v_{1},v_{2},...,v_{m}}(-\lambda
e^{t})^{v_{1}+2v_{2}+...+mv_{m}}$ $\displaystyle=$
$\displaystyle(\lambda
e^{t}-\lambda^{2}e^{2t}+...+(-1)^{m+1}\lambda^{m}e^{mt})^{l}$ $\displaystyle=$
$\displaystyle\bigg{(}\frac{(-1)^{m+1}\lambda^{m+1}e^{(m+1)t}}{\lambda
e^{t}+1}+\frac{\lambda e^{t}}{\lambda e^{t}+1}\bigg{)}^{l}$ $\displaystyle=$
$\displaystyle(2t)^{-l}\sum_{j=0}^{l}{l\choose
j}\Big{[}\frac{2t(-1)^{m+1}\lambda^{m+1}e^{(m+1)t}}{\lambda
e^{t}+1}\Big{]}^{j}\Big{[}\frac{2t\lambda e^{t}}{\lambda
e^{t}+1}\Big{]}^{l-j}$ $\displaystyle=$
$\displaystyle(2t)^{-l}\sum_{j=0}^{l}{l\choose
j}(-1)^{j(m+1)}\lambda^{mj+l}\sum_{n=0}^{\infty}G_{n}^{(j)}(mj+l;\lambda)\frac{t^{n}}{n!}\sum_{n=0}^{\infty}G_{n}^{(l-j)}(\lambda)\frac{t^{n}}{n!}$
$\displaystyle=$ $\displaystyle
2^{-l}\sum_{n=0}^{\infty}\bigg{[}\sum_{j=0}^{l}{l\choose
j}\frac{(-1)^{j(m+1)}\lambda^{mj+l}}{(n+1)_{l}}\sum_{k=0}^{n+l}{n+l\choose
k}G_{k}^{(j)}(mj+l;\lambda)G_{n+l-k}^{(l-j)}(\lambda)\bigg{]}\frac{t^{n}}{n!}$
By comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides, the proof
is complete. ∎
As a direct result, using $\lambda=1$ in Theorem 2.7, we derive an explicit
representation of multiple alternating sums $Z^{(l)}_{n}(m)$, in terms of the
Genocchi polynomials of higher order. We also deduce their special cases and
applications which lead to the corresponding results for the Genocchi
polynomials.
###### Corollary 2.8.
For $m,n,l\in\mathbb{N}_{0}$, the following formula holds true in terms of
the Genocchi polynomials:
$Z_{n}^{(l)}(m)=2^{-l}\sum_{j=0}^{l}{l\choose
j}\frac{(-1)^{j(m+1)}}{(n+1)_{l}}\sum_{k=0}^{n+l}{n+l\choose
k}G_{k}^{(j)}(mj+l)G_{n+l-k}^{(l-j)}$
where $(n)_{0}=1,(n)_{k}=n(n+1)...(n+k-1)$.
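A direct numerical check of Corollary 2.8 is possible (an illustrative sketch, assuming sympy): compute $Z_{n}^{(l)}(m)$ from the generating function $(e^{t}-e^{2t}+...+(-1)^{m+1}e^{mt})^{l}$ obtained in the proof of Theorem 2.7, and compare with the right-hand side built from higher-order Genocchi polynomials:

```python
import sympy as sp

t = sp.symbols('t')

def G(k, order, X):
    # higher-order Genocchi polynomial G_k^{(order)}(X), via its GF
    gf = (2*t/(sp.exp(t) + 1))**order * sp.exp(X*t)
    ser = sp.series(gf, t, 0, k + 1).removeO()
    return sp.factorial(k)*ser.coeff(t, k)

def Z_direct(n, l, m):
    # Z_n^{(l)}(m) from its generating function (e^t - e^{2t} + ...)^l
    gf = sum((-1)**(j + 1)*sp.exp(j*t) for j in range(1, m + 1))**l
    ser = sp.series(gf, t, 0, n + 1).removeO()
    return sp.factorial(n)*ser.coeff(t, n)

def Z_formula(n, l, m):
    # right-hand side of Corollary 2.8; (n+1)_l is the rising factorial sp.rf
    tot = sp.Integer(0)
    for j in range(l + 1):
        inner = sum(sp.binomial(n + l, k)*G(k, j, m*j + l)*G(n + l - k, l - j, 0)
                    for k in range(n + l + 1))
        tot += sp.binomial(l, j)*(-1)**(j*(m + 1))*inner
    return tot/(2**l*sp.rf(n + 1, l))

for n in range(4):
    assert sp.simplify(Z_direct(n, 2, 3) - Z_formula(n, 2, 3)) == 0
```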
Next, we investigate some recursive formulas for the Apostol-Genocchi
numbers of higher order that are analogous to the results of Howard [7, 8, 9]
and contain them as useful special cases.
###### Theorem 2.9.
For $m$ odd, $n,l\in\mathbb{N}_{0}$, $\lambda\in\mathbb{C}$, we have
$m^{n}G_{n}^{(l)}(\lambda^{m})-m^{l}G_{n}^{(l)}(\lambda)=(-1)^{l-1}\sum_{k=0}^{n}{n\choose
k}m^{k}G_{k}^{(l)}(\lambda^{m})Z_{n-k}^{(l)}(m-1;\lambda).$
###### Proof.
By taking $x=0,\alpha=l$ in (1), where $r=v_{1}+2v_{2}+...+(m-1)v_{m-1}$ we
obtain
$m^{l}G_{n}^{(l)}(\lambda)=m^{n}\sum_{v_{1},v_{2},...,v_{m-1}\geqslant
0}{l\choose
v_{1},v_{2},...,v_{m-1}}(-\lambda)^{r}G_{n}^{(l)}(\frac{r}{m},\lambda^{m}).$
But we know
$G_{n}^{(l)}(x;\lambda)=\sum_{k=0}^{n}{n\choose
k}G_{k}^{(l)}(\lambda)x^{n-k}.$
So, we obtain
$\displaystyle m^{l}G_{n}^{(l)}(\lambda)$ $\displaystyle=$ $\displaystyle
m^{n}\sum_{v_{1},v_{2},...,v_{m-1}\geqslant 0}{l\choose
v_{1},v_{2},...,v_{m-1}}(-\lambda)^{r}\sum_{k=0}^{n}{n\choose
k}G_{k}^{(l)}(\lambda^{m})\Big{(}\frac{r}{m}\Big{)}^{n-k}$ $\displaystyle=$
$\displaystyle\sum_{k=0}^{n}{n\choose
k}m^{k}G_{k}^{(l)}(\lambda^{m})\sum_{0\leqslant
v_{1},v_{2},...,v_{m-1}\leqslant l}{l\choose
v_{1},v_{2},...,v_{m-1}}(-\lambda)^{r}r^{n-k}$ $\displaystyle=$
$\displaystyle\sum_{k=0}^{n}{n\choose
k}m^{k}G_{k}^{(l)}(\lambda^{m})\sum_{{}^{0\leqslant
v_{1},v_{2},...,v_{m-1}\leqslant l}_{v_{1}+v_{2}+...+v_{m-1}=l}}{l\choose
v_{1},v_{2},...,v_{m-1}}(-\lambda)^{r}r^{n-k}+m^{n}G_{n}^{(l)}(\lambda^{m})$
$\displaystyle=$ $\displaystyle(-1)^{l}\sum_{k=0}^{n}{n\choose
k}m^{k}G_{k}^{(l)}(\lambda^{m})Z_{n-k}^{(l)}(m-1;\lambda)+m^{n}G_{n}^{(l)}(\lambda^{m})$
So the proof is complete. ∎
Furthermore, we derive some well-known results (see [10]) involving Genocchi
polynomials of higher order and Genocchi polynomials, which we state here. By
setting $\lambda=1$ or $l=1$ in Theorem 2.9, we get Corollaries 2.10 and 2.11,
respectively.
###### Corollary 2.10.
For $m$ odd and $n,l\in\mathbb{N}_{0}$, we have
$(m^{n}-m^{l})G_{n}^{(l)}=(-1)^{l-1}\sum_{k=0}^{n}{n\choose
k}m^{k}G_{k}^{(l)}Z_{n-k}^{(l)}(m-1).$
###### Corollary 2.11.
For $m$ odd, $n\in\mathbb{N}_{0}$, $\lambda\in\mathbb{C}$, we have
$m^{n}G_{n}(\lambda^{m})-mG_{n}(\lambda)=\sum_{k=0}^{n}{n\choose
k}m^{k}G_{k}(\lambda^{m})Z_{n-k}(m-1;\lambda).$
Also, by setting $\lambda=1$ in Corollary 2.11, we get the following
assertion, which is analogous to the formula of Howard in terms of Genocchi
numbers.
###### Corollary 2.12.
For $m$ odd and $n\in\mathbb{N}_{0}$, we obtain
$(m^{n}-m)G_{n}=\sum_{k=0}^{n}{n\choose k}m^{k}G_{k}Z_{n-k}(m-1).$
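This Howard-type recurrence is easy to verify for small cases (an illustrative sketch, assuming sympy), with the Genocchi numbers taken from the generating function $2t/(e^{t}+1)$ and $Z_{k}(m)$ computed directly as the alternating power sum:

```python
import sympy as sp

t = sp.symbols('t')

def genocchi_number(n):
    # G_n from the generating function 2*t/(exp(t)+1)
    ser = sp.series(2*t/(sp.exp(t) + 1), t, 0, n + 1).removeO()
    return sp.factorial(n)*ser.coeff(t, n)

def Z(k, m):
    # alternating power sum Z_k(m) = sum_{j=1}^m (-1)^(j+1) j^k
    return sum((-1)**(j + 1)*j**k for j in range(1, m + 1))

m = 5  # odd
for n in range(8):
    lhs = (m**n - m)*genocchi_number(n)
    rhs = sum(sp.binomial(n, k)*m**k*genocchi_number(k)*Z(n - k, m - 1)
              for k in range(n + 1))
    assert lhs == rhs
```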
Next, we investigate the generalization of Howard’s formula in terms of
Apostol-Genocchi numbers, when $m$ is even.
###### Theorem 2.13.
For $m$ even, $n,l\in\mathbb{N}_{0},\ \lambda\in\mathbb{C}$, the following
formula
$m^{l}G_{n}^{(l)}(\lambda)-(-2)^{l}m^{n}B_{n}^{(l)}(\lambda^{m})=2^{l}\sum_{k=0}^{n}{n\choose
k}m^{k}B_{k}^{(l)}(\lambda^{m})Z_{n-k}^{(l)}(m-1;\lambda)$
holds true, where $r=v_{1}+2v_{2}+...+(m-1)v_{m-1}$.
###### Proof.
We have
$G_{n}^{(l)}(\lambda)=(-2)^{l}m^{n-l}\sum_{v_{1},v_{2},...,v_{m-1}\geqslant
0}{l\choose
v_{1},v_{2},...,v_{m-1}}(-\lambda)^{r}B_{n}^{(l)}(\frac{r}{m},\lambda^{m})$
But we know
$B_{n}^{(l)}(x;\lambda)=\sum_{k=0}^{n}{n\choose
k}B_{k}^{(l)}(\lambda)x^{n-k}.$
So we get
$\displaystyle m^{l}G_{n}^{(l)}(\lambda)$ $\displaystyle=$
$\displaystyle(-2)^{l}m^{n}\sum_{v_{1},v_{2},...,v_{m-1}\geqslant 0}{l\choose
v_{1},v_{2},...,v_{m-1}}(-\lambda)^{r}\sum_{k=0}^{n}{n\choose
k}B_{k}^{(l)}(\lambda^{m})\Big{(}\frac{r}{m}\Big{)}^{n-k}$ $\displaystyle=$
$\displaystyle(-2)^{l}\sum_{k=0}^{n}{n\choose
k}m^{k}B_{k}^{(l)}(\lambda^{m})\sum_{v_{1},v_{2},...,v_{m-1}\geqslant
0}{l\choose v_{1},v_{2},...,v_{m-1}}(-\lambda)^{r}r^{n-k}$ $\displaystyle=$
$\displaystyle 2^{l}\sum_{k=0}^{n}{n\choose
k}m^{k}B_{k}^{(l)}(\lambda^{m})Z_{n-k}^{(l)}(m-1;\lambda)+(-2)^{l}m^{n}B_{n}^{(l)}(\lambda^{m})$
So we obtain
$m^{l}G_{n}^{(l)}(\lambda)-(-2)^{l}m^{n}B_{n}^{(l)}(\lambda^{m})=2^{l}\sum_{k=0}^{n}{n\choose
k}m^{k}B_{k}^{(l)}(\lambda^{m})Z_{n-k}^{(l)}(m-1;\lambda)$
So the proof is complete. ∎
Also, by letting $\lambda=1$ in Theorem 2.13, we obtain the following assertion.
###### Corollary 2.14.
For $m$ even and $n,l\in\mathbb{N}_{0}$, we get
$m^{l}G_{n}^{(l)}-(-2)^{l}m^{n}B_{n}^{(l)}=2^{l}\sum_{k=0}^{n}{n\choose
k}m^{k}B_{k}^{(l)}Z_{n-k}^{(l)}(m-1).$
Here we present an order-lowering formula for the Apostol-Genocchi numbers of
higher order.
###### Theorem 2.15.
(Lowering orders) For $n,k\geqslant 1$,
$G_{k}^{(n+1)}(\lambda)=2kG_{k-1}^{(n)}(\lambda)-\Big{(}2-\frac{2k}{n}\Big{)}G_{k}^{(n)}(\lambda)$
###### Proof.
Let us put $G_{n}(t;\lambda)=\Big{(}\frac{2t}{\lambda e^{t}+1}\Big{)}^{n}$.
Then $G_{n}(t;\lambda)$ is the generating function of higher order Apostol-
Genocchi numbers. The derivative
$G_{n}^{\prime}(t;\lambda)=\frac{d}{dt}G_{n}(t;\lambda)$ is equal to
$n\Big{(}\frac{1}{t}-\frac{\lambda e^{t}}{\lambda
e^{t}+1}\Big{)}G_{n}(t;\lambda)=\frac{n}{t}G_{n}(t;\lambda)-nG_{n}(t;\lambda)+\frac{n}{\lambda
e^{t}+1}G_{n}(t;\lambda)$
and
$tG_{n}^{\prime}(t;\lambda)=nG_{n}(t;\lambda)-ntG_{n}(t;\lambda)+\frac{n}{2}G_{n+1}(t;\lambda)$
so we obtain
$\frac{G_{k}^{(n)}(\lambda)}{(k-1)!}=n\frac{G_{k}^{(n)}(\lambda)}{k!}-n\frac{G_{k-1}^{(n)}(\lambda)}{(k-1)!}+\frac{n}{2}\frac{G_{k}^{(n+1)}(\lambda)}{k!}$
for $k\geqslant 1$. This formula can be written as
$G_{k}^{(n+1)}(\lambda)=2kG_{k-1}^{(n)}(\lambda)-\Big{(}2-\frac{2k}{n}\Big{)}G_{k}^{(n)}(\lambda)$
so the proof is complete. ∎
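The order-lowering recurrence of Theorem 2.15 can be tested symbolically in $\lambda$ (an illustrative sketch, assuming sympy), with $G_{k}^{(n)}(\lambda)$ read off from the generating function $(2t/(\lambda e^{t}+1))^{n}$:

```python
import sympy as sp

t, lam = sp.symbols('t lambda')

def G(k, n):
    # k-th Apostol-Genocchi number of order n, from (2t/(lam*e^t + 1))^n
    gf = (2*t/(lam*sp.exp(t) + 1))**n
    ser = sp.series(gf, t, 0, k + 1).removeO()
    return sp.simplify(sp.factorial(k)*ser.coeff(t, k))

for n in range(1, 4):
    for k in range(1, 5):
        lhs = G(k, n + 1)
        rhs = 2*k*G(k - 1, n) - (2 - sp.Rational(2*k, n))*G(k, n)
        assert sp.simplify(lhs - rhs) == 0
```

Each check is an identity in $\lambda$, so the recurrence is confirmed for the full Apostol family, not just at $\lambda=1$.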
## 3\. generalized apostol-genocchi polynomials with $a,b,c$ parameters
In this section we investigate some recurrence formulas for generalized
Apostol-Genocchi polynomials with $a,b,c$ parameters. In 2003, Cheon [3]
rederived several known properties and relations involving the classical
Bernoulli polynomials $B_{n}(x)$ and the classical Euler polynomials
$E_{n}(x)$ by making use of some standard techniques based upon series
rearrangement as well as matrix representation. Srivastava and Pinter [36]
followed Cheon’s work [3] and established two relations involving the
generalized Bernoulli polynomials $B^{(\alpha)}_{n}(x)$ and the generalized
Euler polynomials $E^{(\alpha)}_{n}(x)$. So, we will study further the
relations between generalized Bernoulli polynomials with $a,b$ parameters and
Genocchi polynomials with the methods of generating function and series
rearrangement.
###### Theorem 3.1.
Let $x\in\mathbb{R}$ and $n\geqslant 0$. For all positive real numbers $a$,
$b$, and $c$ such that $a\neq b$, we have
$G_{n}^{(\alpha)}(a,b;\lambda)=G_{n}^{(\alpha)}\Big{(}\frac{\alpha\ln a}{\ln
a-\ln b};\lambda\Big{)}(\ln b-\ln a)^{n-\alpha}$
###### Proof.
We know
$\displaystyle\Big{(}\frac{2t}{\lambda b^{t}+a^{t}}\Big{)}^{\alpha}$
$\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}G_{n}^{(\alpha)}(a,b;\lambda)\frac{t^{n}}{n!}$
$\displaystyle=$ $\displaystyle\frac{1}{a^{\alpha t}}\Big{(}\frac{2t}{\lambda
e^{t(\ln b-\ln a)}+1}\Big{)}^{\alpha}$ $\displaystyle=$ $\displaystyle
e^{-t\alpha\ln a}\Big{(}\frac{2t(\ln b-\ln a)}{\lambda e^{t(\ln b-\ln
a)}+1}\Big{)}^{\alpha}\times\frac{1}{(\ln b-\ln a)^{\alpha}}$ $\displaystyle=$
$\displaystyle\frac{1}{(\ln b-\ln
a)^{\alpha}}\sum_{n=0}^{\infty}G_{n}^{(\alpha)}\Big{(}\frac{\alpha\ln a}{\ln
a-\ln b};\lambda\Big{)}(\ln b-\ln a)^{n}\frac{t^{n}}{n!}$
So by comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides, we get
$G_{n}^{(\alpha)}(a,b;\lambda)=G_{n}^{(\alpha)}\Big{(}\frac{\alpha\ln a}{\ln
a-\ln b};\lambda\Big{)}(\ln b-\ln a)^{n-\alpha}.$
∎
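Theorem 3.1 can be checked on a concrete instance (an illustrative sketch, assuming sympy; the parameter values $a=e$, $b=e^{2}$, $\alpha=2$, $\lambda=1$ are arbitrary sample choices): here $\ln b-\ln a=1$ and $\alpha\ln a/(\ln a-\ln b)=-2$.

```python
import sympy as sp

t = sp.symbols('t')

def G_poly(n, order, X):
    # Apostol-Genocchi polynomial at lambda = 1, via (2t/(e^t+1))^order * e^{Xt}
    gf = (2*t/(sp.exp(t) + 1))**order * sp.exp(X*t)
    ser = sp.series(gf, t, 0, n + 1).removeO()
    return sp.factorial(n)*ser.coeff(t, n)

alpha = 2
x0 = -2     # alpha*ln(a)/(ln(a) - ln(b)) for a = e, b = e^2
scale = 1   # ln(b) - ln(a)

# left-hand side generating function (2t/(b^t + a^t))^alpha with a = e, b = e^2
lhs_gf = (2*t/(sp.exp(2*t) + sp.exp(t)))**alpha
for n in range(alpha, 6):
    ser = sp.series(lhs_gf, t, 0, n + 1).removeO()
    lhs = sp.factorial(n)*ser.coeff(t, n)
    rhs = G_poly(n, alpha, x0)*scale**(n - alpha)
    assert sp.simplify(lhs - rhs) == 0
```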
###### Theorem 3.2.
Suppose that the conditions of Theorem 3.1 hold true. Then
$G_{n}^{(\alpha)}(x;a,b,c;\lambda)=G_{n}^{(\alpha)}\Big{(}\frac{-\alpha\ln
a+x\ln c}{\ln b-\ln a},\lambda\Big{)}(\ln b-\ln a)^{n-\alpha}$
###### Proof.
We have
$\displaystyle\sum_{n=0}^{\infty}G_{n}^{(\alpha)}(x;a,b,c;\lambda)\frac{t^{n}}{n!}$
$\displaystyle=$ $\displaystyle\Big{(}\frac{2t}{\lambda
b^{t}+a^{t}}\Big{)}^{\alpha}c^{xt}$ $\displaystyle=$
$\displaystyle\frac{1}{a^{\alpha t}}\Big{(}\frac{2t}{\lambda e^{t(\ln b-\ln
a)}+1}\Big{)}^{\alpha}c^{xt}$ $\displaystyle=$ $\displaystyle e^{t(-\alpha\ln
a+x\ln c)}\Big{(}\frac{2t}{\lambda e^{t(\ln b-\ln a)}+1}\Big{)}^{\alpha}$
$\displaystyle=$ $\displaystyle\frac{1}{(\ln b-\ln
a)^{\alpha}}\sum_{n=0}^{\infty}G_{n}^{(\alpha)}\Big{(}\frac{-\alpha\ln a+x\ln
c}{\ln b-\ln a},\lambda\Big{)}(\ln b-\ln a)^{n}\frac{t^{n}}{n!}.$
So by comparing the coefficient of $\frac{t^{n}}{n!}$ on both sides, we get
$G_{n}^{(\alpha)}(x;a,b,c;\lambda)=G_{n}^{(\alpha)}\Big{(}\frac{-\alpha\ln
a+x\ln c}{\ln b-\ln a},\lambda\Big{)}(\ln b-\ln a)^{n-\alpha}.$
Therefore the proof is complete. ∎
The generalized Apostol-Genocchi polynomials of higher order
$G_{n}^{(\alpha)}(x;a,b,c;\lambda)$ possess a number of interesting
properties, which we state here.
###### Theorem 3.3.
Let $a,b,c\in\mathbb{R}^{+}\ (a\neq b)$ and $x\in\mathbb{R}$, then
(2)
$G_{n}^{(\alpha)}(x+1;a,b,c;\lambda)=\sum_{k=0}^{n}{n\choose k}(\ln
c)^{n-k}G_{k}^{(\alpha)}(x;a,b,c;\lambda)$
(3)
$G_{n}^{(\alpha)}(x+\alpha;a,b,c;\lambda)=G_{n}^{(\alpha)}\Big{(}x;\frac{a}{c},\frac{b}{c},c;\lambda\Big{)}$
(4)
$G_{n}^{(\alpha)}(\alpha-x;a,b,c;\lambda)=G_{n}^{(\alpha)}\Big{(}-x;\frac{a}{c},\frac{b}{c},c;\lambda\Big{)}$
(5)
$G_{n}^{(\alpha+\beta)}(x+y;a,b,c;\lambda)=\sum_{r=0}^{n}{n\choose
r}G_{n-r}^{(\alpha)}(x;a,b,c;\lambda)G_{r}^{(\beta)}(y;a,b,c;\lambda)$
(6)
$\frac{\partial^{\ell}}{\partial
x^{\ell}}\\{G_{n}^{(\alpha)}(x;a,b,c;\lambda)\\}=\frac{n!}{(n-\ell)!}(\ln
c)^{\ell}G_{n-\ell}^{(\alpha)}(x;a,b,c;\lambda)$
(7)
$\int^{t}_{s}G_{n}^{(\alpha)}(x;a,b,c;\lambda)dx=\frac{1}{(n+1)\ln
c}\Big{[}G_{n+1}^{(\alpha)}(t;a,b,c;\lambda)-G_{n+1}^{(\alpha)}(s;a,b,c;\lambda)\Big{]}$
###### Proof.
We know
$\displaystyle\sum_{n=0}^{\infty}G_{n}^{(\alpha)}(x+1;a,b,c;\lambda)\frac{t^{n}}{n!}$
$\displaystyle=$ $\displaystyle\Big{(}\frac{2t}{\lambda
b^{t}+a^{t}}\Big{)}^{\alpha}c^{(x+1)t}$ $\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}G_{k}^{(\alpha)}(x;a,b,c;\lambda)(\ln
c)^{n}\frac{t^{n+k}}{n!k!}$ $\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\sum_{k=0}^{n}G_{k}^{(\alpha)}(x;a,b,c;\lambda)(\ln
c)^{n-k}\frac{t^{n}}{(n-k)!k!}$
So, comparing the coefficients of $t^{n}$ on both sides, we arrive at the
result (2) asserted by Theorem 3.3. Similarly, simple manipulations lead us
to the results (3), (4) and (5) of Theorem 3.3, and by successive
differentiation with respect to $x$ and then using the principle of
mathematical induction on $\ell\in\mathbb{N}_{0}$, we obtain the formula (6).
Also, by taking $\ell=1$ in (6) and integrating both sides with respect to
$x$, we get the formula (7). ∎
###### Remark 3.4.
Let $a,b,c\in\mathbb{R}^{+}\ (a\neq b)$ and $x\in\mathbb{R}$. By
differentiating both sides of the generating function
$\sum_{n=0}^{\infty}G_{n}^{(\alpha)}(x;a,b,c;\lambda)\frac{t^{n}}{n!}=\frac{(2t)^{\alpha}}{(\lambda
e^{t\ln(\frac{b}{a})}+1)^{\alpha}}e^{t(x\ln c-\alpha\ln a)}$
with respect to $t$, we get
$\displaystyle\alpha\lambda\ln(\frac{b}{a})\sum_{k=0}^{n}{n\choose k}(\ln
b)^{k}G_{n-k}^{(\alpha+1)}(x;a,b,c;\lambda)$ $\displaystyle=$
$\displaystyle(\alpha-n)G_{n}^{(\alpha)}(x;a,b,c;\lambda)$ $\displaystyle+$
$\displaystyle n(x\ln c-\alpha\ln a)G_{n-1}^{(\alpha)}(x;a,b,c;\lambda)$
###### Remark 3.5.
Gi-Sang Cheon and H. M. Srivastava in [3, 20] investigated the classical
relationship between the Bernoulli and Euler polynomials as follows:
$B_{n}(x)=\sum_{{}^{k=0}_{k\neq 1}}^{n}{n\choose k}B_{k}E_{n-k}(x).$
By applying a method similar to Srivastava's in [20], we obtain the following
results for generalized Bernoulli polynomials and Genocchi numbers:
$\displaystyle B_{n}(x+y,a,b)$ $\displaystyle=$
$\displaystyle\frac{1}{2}\sum_{k=0}^{n}\frac{1}{n-k+1}{n\choose
k}[B_{k}(y,a,b)+B_{k}(y+1,a,b)]G_{n-k}(x)$ $\displaystyle G_{n}(x+y)$
$\displaystyle=$ $\displaystyle\frac{1}{2}\sum_{k=0}^{n}{n\choose
k}[G_{k}(y)+G_{k}(y+1)]E_{n-k}(x)$
so, because we have
$G_{n}(y+1)+G_{n}(y)=2ny^{n-1}$
we obtain
$G_{n}(x+y)=\sum_{k=0}^{n}k{n\choose k}y^{k-1}E_{n-k}(x)\ \ \ \ \ \ (y\neq 0)$
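The last identity lends itself to a quick symbolic check (an illustrative sketch, assuming sympy; `sp.euler(n, x)` gives the classical Euler polynomials):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')

def genocchi(n):
    # G_n(x) from the generating function 2*t*exp(x*t)/(exp(t)+1)
    ser = sp.series(2*t*sp.exp(x*t)/(sp.exp(t) + 1), t, 0, n + 1).removeO()
    return sp.expand(sp.factorial(n)*ser.coeff(t, n))

for n in range(1, 7):
    lhs = genocchi(n).subs(x, x + y)
    rhs = sum(k*sp.binomial(n, k)*y**(k - 1)*sp.euler(n - k, x)
              for k in range(1, n + 1))
    assert sp.expand(lhs - rhs) == 0  # G_n(x+y) = sum_k k C(n,k) y^(k-1) E_{n-k}(x)
```

Note that the $k=0$ term vanishes because of the factor $k$, so as a polynomial identity the formula holds without restriction on $y$.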
## 4\. multiplication theorem for 2-variable Genocchi polynomials
We apply the method of generating functions, which is exploited to derive
further classes of partial sums involving generalized many-index,
many-variable polynomials. In the Introduction we introduced the 2-variable
Genocchi polynomials. An application of 2-variable Genocchi polynomials is
relevant to multiplication theorems. In this section we develop the
multiplication theorem for the 2-variable Genocchi polynomials, which yields
a deeper insight into the effectiveness of this type of generalization.
###### Theorem 4.1.
Let $x,y\in\mathbb{R}^{+}$ and let $m$ be odd. Then
$G_{n}(mx,py,\lambda)=m^{n-1}\sum_{k=0}^{m-1}(-1)^{k}\lambda^{k}\,{}_{H}G_{n}\Big{(}x+\frac{k}{m},\frac{py}{m^{2}},\lambda^{m}\Big{)}$
###### Proof.
We know
$\sum_{n=0}^{\infty}G_{n}(mx,py,\lambda)\frac{t^{n}}{n!}=\frac{2te^{mxt+pyt^{2}}}{\lambda
e^{t}+1}$
and handling the right-hand side of the above equation, we write
$\sum_{n=0}^{\infty}G_{n}(mx,py,\lambda)\frac{t^{n}}{n!}=\frac{2te^{mxt}}{\lambda^{m}e^{mt}+1}\frac{\lambda^{m}e^{mt}+1}{\lambda
e^{t}+1}e^{pyt^{2}}.$
By noting that
$\frac{2te^{mxt}}{\lambda^{m}e^{mt}+1}\frac{\lambda^{m}e^{mt}+1}{\lambda
e^{t}+1}e^{pyt^{2}}=\sum_{k=0}^{m-1}\frac{1}{m}(-1)^{k}\lambda^{k}\sum_{q=0}^{\infty}\frac{t^{q}m^{q}}{q!}G_{q}\Big{(}x+\frac{k}{m},\lambda^{m}\Big{)}\sum_{r=0}^{\infty}\frac{t^{2r}p^{r}}{r!}y^{r}.$
We get
$\sum_{n=0}^{\infty}G_{n}(mx,py,\lambda)\frac{t^{n}}{n!}=\sum_{n=0}^{\infty}t^{n}m^{n-1}\sum_{k=0}^{m-1}(-1)^{k}\lambda^{k}\sum_{r=0}^{[\frac{n}{2}]}\frac{G_{n-2r}(x+\frac{k}{m},\lambda^{m})}{(n-2r)!r!}\Big{(}\frac{py}{m^{2}}\Big{)}^{r}$
Also, by simple computation we see that
${}_{H}G_{n}(x,y,\lambda)=n!\sum_{s=0}^{[\frac{n}{2}]}\frac{y^{s}G_{n-2s}(x,\lambda)}{s!(n-2s)!}.$
So, we obtain
$G_{n}(mx,py,\lambda)=m^{n-1}\sum_{k=0}^{m-1}(-1)^{k}\lambda^{k}\,{}_{H}G_{n}\Big{(}x+\frac{k}{m},\frac{py}{m^{2}},\lambda^{m}\Big{)}.$
Therefore the proof is complete. ∎
Also, by a similar method, we get the following remark.
###### Remark 4.2.
Let $m$ be odd and $x,y\in\mathbb{R}^{+}$. Then
${}_{H}G_{n}(mx,m^{2}y,\lambda)=m^{n-1}\sum_{\ell=0}^{m-1}(-1)^{\ell}\lambda^{\ell}\,{}_{H}G_{n}\Big{(}x+\frac{\ell}{m},y,\lambda^{m}\Big{)}.$
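Remark 4.2 (at $\lambda=1$) can be checked symbolically (an illustrative sketch, assuming sympy), generating ${}_{H}G_{n}(x,y)$ from $2te^{xt+yt^{2}}/(e^{t}+1)$:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')

def HG(n):
    # 2-variable Genocchi polynomial _H G_n(x, y) from 2t*exp(x*t + y*t^2)/(exp(t)+1)
    gf = 2*t*sp.exp(x*t + y*t**2)/(sp.exp(t) + 1)
    ser = sp.series(gf, t, 0, n + 1).removeO()
    return sp.expand(sp.factorial(n)*ser.coeff(t, n))

m = 3  # odd
for n in range(1, 6):
    lhs = HG(n).subs({x: m*x, y: m**2*y}, simultaneous=True)
    rhs = sp.Integer(m)**(n - 1)*sum((-1)**k*HG(n).subs(x, x + sp.Rational(k, m))
                                     for k in range(m))
    assert sp.expand(lhs - rhs) == 0
```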
Acknowledgments: The authors wish to express their sincere gratitude to the
referee for his/her valuable suggestions and comments.
## References
* [1] T. M. Apostol, _On the Lerch Zeta function_ , Pacific. J. Math. No. 1, 1951, 161-167.
* [2] I. N. Cangul, H. Ozden and Y. Simsek, _A new approach to q-Genocchi numbers and their interpolation functions_ , Nonlinear Analysis: Theory, Methods and Applications, Vol. 71, 2009, 793-799.
* [3] G. S. Cheon,_A note on the Bernoulli and Euler polynomials_. Appl. Math. Lett. Vol. 16, No.3, 2003, 365-368.
* [4] D. Dumont and G. Viennot, _A Combinatorial Interpretation of the Seidel Generation of Genocchi Numbers_ , Annals of Discrete Mathematics, Vol. 6, 1980, 77-87.
* [5] B. N. Guo and F. Qi, _Generalization of Bernoulli polynomials_ , J. Math. Ed. Sci. Tech. 33, No. 3, 2002, 428-431.
* [6] S. Herrmann, _Genocchi numbers and f-vectors of simplicial balls_ , European Journal of Combinatorics, Vol. 29, Issue 5, 2008, 1087-1091.
* [7] F. T. Howard,_A sequence of numbers related to the exponential function_ , Duke. Math. J. 34 , 1967, 599-616.
* [8] F. T. Howard,_Explicit formulas for degenerate Bernoulli numbers_ , Disc. Math, Vol. 162, Issue 1-3, 1996, 175-185.
* [9] F. T. Howard, M. Cenkci, _Notes on degenerate numbers_ , Disc. Math, Vol. 307, Issues 19-20, 2007, 2359-2375.
* [10] T. Kim, _On the q-extension of Euler and Genocchi numbers_ , J. Math. Anal. Appl, Vol. 326, Issue 2, 2007, 1458-1465.
* [11] T. Kim and S. H. Rim, _Some q-Bernoulli numbers of higher order associated with the p-adic q-integrals_. Indian J. Pure. Appl. Math. 32, 2001, 1565-1570.
* [12] G. D. Liu, H. M. Srivastava, _Explicit formulas for the Nörlund polynomials $B_{n}^{(x)}$ and $b_{n}^{(x)}$_ , Comp. Math. Appl, Vol. 51, Issue 9-10, 2006, 1377-1384.
* [13] H. Liu and W. Wang, _Some identities on the Bernoulli, Euler and Genocchi polynomials via power sums and alternate power sums_ , Discrete Mathematics, Vol. 309, Issue 10, 2009, 3346-3363.
* [14] S. D. Lin and H. M. Srivastava, _Some families of the Hurwitz-Lerch Zeta function and associated fractional derivative and other integral repre-sentations_. Appl. Math. Comput, 154 , 2004, 725-733.
* [15] Q. M. Luo, _Some results for the q-Bernoulli and q-Euler polynomials_ , J. Math. Anal. Apl, Vol. 363, Issue 1, 2010, 7-18.
* [16] Q. M. Luo,_The multiplication formulas for the Apostol-Bernoulli and Apostol-Euler polynomials of higher order_ , Integral Transforms and Special Functions, Vol. 20, Issue 5, 2009, 377-391.
* [17] Q. M. Luo, B. N. Guo, F. Qi, and L. Debnath, _Generalization of Bernoulli numbers and polynomials_ , IJMMS, Vol. 2003, Issue 59, 2003, 3769-3776.
* [18] Q. M. Luo, F. Qi, and L. Debnath, _Generalizations of Euler numbers and polynomials_ , IJMMS, Vol. 2003, Issue 61, 2003, 3893-3901.
* [19] Q. M. Luo, _q-Extensions for the Apostol-Genocchi Polynomials_ , General Mathematics Vol. 17, No. 2 ,2009, 113-125.
* [20] Q. M. Luo and H. M. Srivastava, _Some relationships between the Apostol-Bernoulli and Apostol-Euler polynomials Computers and Mathematics with Applications_ , Vol. 51, Issues 3-4, 2006, 631-642.
* [21] P. J. McCarthy , _Some irreducibility theorems for Bernoulli polynomials of higher order_ , Duke Math. J. Vol. 27, No. 3 ,1960, 313-318.
* [22] J. Riordan and P. R. Stein, _Proof of a conjecture on Genocchi numbers_ , Discrete Mathematics, Vol. 5, Issue 4, 1973, 381-388.
* [23] S. H. Rim, K. H. Park and E. J. Moon, _On Genocchi Numbers and Polynomials_ , Abstract and Applied Analysis. Vol. 2008.
* [24] B. Y. Rubinstein and L. G. Fel ,_Restricted partition functions as Bernoulli and Eulerian polynomials of higher order_ , Ramanujan Journal, Vol. 11, No. 3, 2006, 331-347.
* [25] C. S. Ryoo, _A numerical computation on the structure of the roots of q-extension of Genocchi polynomials_ , Applied Mathematics Letters, Vol. 21, Issue 4, 2008, 348-354.
* [26] C. S. Ryoo, _A numerical computation on the structure of the roots of (h,q)-extension of Genocchi polynomials_ , Mathematical and Computer Modelling, Vol. 49, Issues 3-4, 2009, 463-474.
* [27] Y. Simsek, _q-Hardy-Berndt type sums associated with q-Genocchi type zeta and q-l-functions_ , Nonlinear Analysis: Theory, Methods and Applications, Vol. 71, Issue 12, 2009, 377-395.
* [28] H. M. Srivastava, _Some formulae for the q-Bernoulli and Euler polynomials of higher order_ , J. Math. Anal. Appl. Vol. 273, Issue 1, 2002, 236-242.
* [29] J. Zeng and J. Zhou, _A q-analog of the Seidel generation of Genocchi numbers_ , European Journal of Combinatorics, Vol. 27, Issue 3, 2006, 364-381.
* [30] L. C. Jang, T. Kim, D. H. Lee, and D. W. Park, _An application of polylogarithms in the analogue of Genocchi numbers_ , NNTDM, Vol. 7, Issue 3, 2000, 66-70.
* [31] Y. Simsek, I. N. Cangul, V. Kurt, and D. Kim, _q-Genocchi numbers and polynomials associated with q-Genocchi-type l-functions_ , Adv. Difference Equ, doi:10.11555.2008/85750
* [32] Jeff Remmel, _Ascent Sequences, $2+2$-free posets, Upper Triangular Matrices, and Genocchi numbers_, Workshop on Combinatorics, Enumeration, and Invariant Theory,George Mason University, Virginia, 2010.
* [33] Anders Claesson, Sergey Kitaev, Kari Ragnarsson, Bridget Eileen Tenner, _Boolean complexes for Ferrers graphs_ , arXiv:0808.2307v3
* [34] Michael Domaratzki, _Combinatorial Interpretations of a Generalization of the Genocchi Numbers_ , Journal of Integer Sequences, Vol. 7, 2004.
* [35] Qiu-Ming Luo, H.M. Srivastava, _Some generalizations of the Apostol-Genocchi polynomials and the Stirling numbers of the second kind_ , Appl. Math. Comput. (2011), doi:10.1016/j.amc.2010.12.048.
* [36] H.M. Srivastava and A. Pinter, _Remarks on some relationships between the Bernoulli and Euler polynomials_ , Applied Math. Letter. Vol. 17, 2004, 375-380.
* [37] B. Kurt,_The multiplication formulas for the Genocchi polynomials of higher order_. Proc. Jangjeon Math. Soc. Vol. 13, No.1, 2010, 89-96.
* [38] G. Dattoli, S. Lorenzutta and C. Cesarano,_Bernoulli numbers and polynomials from a more general point of view_. Rend. Mat. Appl. Vol. 22, No.7, 2002, 193-202.
|
arxiv-papers
| 2011-04-08T07:24:03 |
2024-09-04T02:49:18.162022
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Hassan Jolany, Hesam Sharifi",
"submitter": "Hassan Jolany",
"url": "https://arxiv.org/abs/1104.1501"
}
|
1104.1520
|
# Unification of quantum and classical correlations and quantumness measures
Kavan Modi kavan@quantumlah.com Centre for Quantum Technologies, National
University of Singapore, Singapore Vlatko Vedral Centre for Quantum
Technologies, National University of Singapore, Singapore Department of
Physics, National University of Singapore, Singapore Clarendon Laboratory,
University of Oxford, Oxford UK
###### Abstract
We give a pedagogical introduction to quantum discord. We then discuss the
problem of separation of total correlations in a given quantum state into
entanglement, dissonance, and classical correlations using the concept of
relative entropy as a distance measure of correlations. This allows us to put
all correlations on an equal footing. Entanglement and dissonance, whose
definition is introduced here, jointly belong to what is known as quantum
discord. Our methods are completely applicable for multipartite systems of
arbitrary dimensions. We finally show, using relative entropy, how different
notions of quantum correlations are related to each other. This gives a single
theory that incorporates all correlations, quantum, classical, etc.
## I Introduction
Quantum systems are correlated in ways inaccessible to classical objects. A
distinctive quantum feature of correlations is quantum entanglement (Einstein
_et al._ , 1935; Schrödinger, 1935; Bohr, 1935). Entangled states are
nonclassical as they cannot be prepared with the help of local operations and
classical communication (LOCC) Horodecki _et al._ (2009). However, it is not
the only aspect of nonclassicality of correlations due to the nature of
operations allowed in the framework of LOCC. To illustrate this, one can
compare a classical bit with a quantum bit; in the case of full knowledge
about a classical bit, it is completely described by one of two locally
distinguishable states, and the only allowed operations on the classical bit
are to keep its value or flip it. To the contrary, quantum operations can
prepare quantum states that are indistinguishable for a given measurement.
Such operations and classical communication can lead to separable states
(those which can be prepared via LOCC) which are mixtures of locally
indistinguishable states. These states are nonclassical in the sense that they
cannot be prepared using classical operations on classical bits.
Recent measures of nonclassical correlations are motivated by different
notions of classicality and operational means to quantify nonclassicality
Henderson and Vedral (2001); Ollivier and Zurek (2001); Oppenheim _et al._
(2002); Groisman _et al._ (2005); Luo (2008); Wu _et al._ (2009); Modi _et
al._ (2010). Quantum discord has received much attention in studies involving
thermodynamics and correlations Zurek (2003); Horodecki _et al._ (2005);
Rodriguez-Rosario _et al._ (2008); Devi and Rendell (2008); Datta _et al._
(2008); Piani _et al._ (2008); mazzola; Dakić _et al._ (2010); Chen _et
al._ (2011); Cavalcanti _et al._ (2010); Piani _et al._ (2011). These works
are concerned with understanding the role of quantumness of correlations in a
variety of systems and tasks. In some of the studies, it is also desirable to
compare various notions of quantum correlations. It is well known that the
different measures of quantum correlation are not identical and conceptually
different. For example, the discord does not coincide with entanglement or
measurement induced disturbance and a direct comparison of any two of these
notions can be rather meaningless. Therefore, a unified classification of
correlations is in demand as well as a unification of different notions of
quantumness. In this article, using relative entropy, we resolve some of these
issues by introducing measures for classical and nonclassical correlations for
quantum states under a single theory. Our theory further allows us to connect
different notions of quantumness. This will allow us to generalize all
measures of quantumness for multipartite systems in symmetric and asymmetric
manners. We begin with a pedagogical introduction to quantumness of
correlations.
## II Conditional entropy
Figure 1: _Conditional entropy._ The Venn diagram shows the joint entropy
$H(ab)$, the marginal entropies $H(a)$ and $H(b)$, the conditional entropies
$H(a|b)$ and $H(b|a)$, and the mutual information $I(a:b)$ for a joint
classical probability distribution of (correlated) random variables $a$ and $b$.
The story of quantumness of correlations beyond entanglement begins with the
non-uniqueness of quantum conditional entropy. In classical probability
theory, conditional entropy is defined as
$\displaystyle H(b|a)=H(ab)-H(a).$ (1)
It measures the ignorance about $b$ given some knowledge of the state of $a$;
Fig. 1 depicts this relationship graphically. Another way to express the
conditional entropy is as the ignorance about $b$ when $a$ is known to be in
its $i$th state, weighted by the probability of the $i$th outcome:
$\displaystyle H(b|a)=\sum_{i}p^{a}_{i}H(b|a=i).$ (2)
It is the classical equivalence of Eqs. 1 and 2 that gives rise to quantumness
of correlations, and in particular to quantum discord Datta (2008), because the
two expressions are no longer equal in quantum theory. While the first simply
takes the difference of the joint ignorance and the ignorance of $a$, the
second depends on specific outcomes of $a$, which requires a measurement.
Measurements in quantum theory are basis dependent and change the
state of the system.
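The classical equivalence of the two formulas is easy to check numerically. A minimal sketch in Python (NumPy assumed; the joint distribution is an illustrative choice, not taken from the text):

```python
import numpy as np

def H(p):
    """Shannon entropy (base 2) of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# an illustrative joint distribution p(a, b) for two correlated bits
p_ab = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_a = p_ab.sum(axis=1)  # marginal distribution of a

# Eq. (1): H(b|a) = H(ab) - H(a)
h1 = H(p_ab.ravel()) - H(p_a)
# Eq. (2): sum_i p(a=i) H(b|a=i)
h2 = sum(p_a[i] * H(p_ab[i] / p_a[i]) for i in range(2))

assert np.isclose(h1, h2)  # identical for any classical distribution
```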
In generalizing the classical concepts above to quantum theory, we replace
joint classical probability distributions with density operators and Shannon's
entropy with von Neumann's entropy. How do we then deal with conditional
entropy? Clearly there are two options, Eqs. 1 and 2. Let us deal with Eq. 1
first and define quantum conditional entropy as
$\displaystyle S^{(1)}(B|A)=S(AB)-S(A).$ (3)
This is a well-known quantity in quantum information theory Schumacher and
Nielsen (1996), and its negative is known as the coherent information.
However, it is a troubling quantity, as it can be negative for entangled
states, and for a long time there was no way to interpret this negativity.
This is in stark contrast with the classical conditional entropy, which has a
clear interpretation and is always positive.
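The negativity can be seen in a few lines. A minimal numerical sketch (Python with NumPy; the Bell state is an illustrative choice) evaluating Eq. 3:

```python
import numpy as np

def S(rho):
    """von Neumann entropy (base 2) of a density matrix."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log2(ev))

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_ab = np.outer(phi, phi)

# partial trace over B: reshape to indices (a, b, a', b') and trace out b = b'
rho_a = rho_ab.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

s1 = S(rho_ab) - S(rho_a)  # Eq. (3)
assert s1 < 0  # equals -1 for a Bell state: S(AB) = 0, S(A) = 1
```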
On the other hand, we can define the quantum version of Eq. 2 by making
measurements on party $A$. In detail, the joint state $\rho^{AB}$ is measured
by $A$, giving the $i$th outcome:
$\displaystyle\rho^{AB}\rightarrow\sum_{i}\Pi^{A}_{i}\rho^{AB}\Pi^{A}_{i}=\sum_{i}p_{i}|i\rangle\langle
i|\otimes\rho^{B}_{i},$ (4)
where the $\Pi_{i}$ are rank-one elements of a _positive operator-valued
measure_ (POVM), $|i\rangle$ are classical flags on the measuring apparatus
indicating the measurement outcome,
$p_{i}=\mbox{Tr}[\Pi^{A}_{i}\rho^{AB}]$ is the probability of the $i$th outcome,
and $\rho^{B}_{i}=\mbox{Tr}_{A}[\Pi^{A}_{i}\rho^{AB}]/p_{i}$ is the
corresponding conditional state of $B$. The conditional entropy of
$B$ is then naturally defined as
$\displaystyle S^{(2)}(B|A)=S(B|\\{\Pi_{i}\\})=\sum_{i}p_{i}S(\rho^{B}_{i}).$
(5)
This definition of conditional entropy is always positive. The obvious problem
with this definition is that the state $\rho^{AB}$ changes after the
measurement. Also note that this quantity is not symmetric under party swap.
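A sketch of Eqs. 4 and 5 for a Bell state measured projectively in the computational basis (Python with NumPy; the state and basis are illustrative choices):

```python
import numpy as np

def S(rho):
    """von Neumann entropy (base 2)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log2(ev))

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_ab = np.outer(phi, phi)

s2 = 0.0
for i in range(2):  # projectors |i><i| on A, as in Eq. (4)
    P = np.zeros((2, 2))
    P[i, i] = 1.0
    Pi = np.kron(P, np.eye(2))
    p_i = np.trace(Pi @ rho_ab).real
    if p_i > 1e-12:
        # conditional (normalized) state of B given outcome i
        rho_b_i = (Pi @ rho_ab @ Pi).reshape(2, 2, 2, 2).trace(axis1=0, axis2=2) / p_i
        s2 += p_i * S(rho_b_i)  # Eq. (5)

assert s2 >= 0  # here s2 = 0: each outcome leaves B in a pure state
```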
A different approach to conditional entropy is taken in Cerf and Adami (1997,
1999), where a quantum _conditional amplitude_ (analogous to classical
conditional probability) is defined such that it satisfies Eq. 3. We mention
this only to point out that the two approaches above are not the only options
available. Different approaches give us different distinctions between quantum
theory and classical theory, and in some ways different notions of quantumness.
## III Quantumness of correlations
Clearly the two definitions of conditional entropy above are different in
quantum theory. The first suffers from negativity, while the second relies on
a ‘classicalization’ of the quantum state. Let us now derive quantum discord and
relate it to the preceding section. We start with the concept of mutual
information:
$\displaystyle I(a:b)=H(a)+H(b)-H(ab)$ (6)
and, using Eq. 2, its measurement-based counterpart
$\displaystyle J(b|a)=H(b)-\sum_{i}p^{a}_{i}H(b|a=i).$ (7)
Classically, the two expressions for the mutual information above are
identical, but not in quantum theory. This is precisely what was noted by
Ollivier and Zurek, who called the difference between $I$ and $J$ _quantum
discord_ :
$\displaystyle\delta(B|A)=I(A:B)-J(B|A).$ (8)
Working out the details, one finds that quantum discord is simply
$\displaystyle\delta(B|A)=S^{(2)}(B|A)-S^{(1)}(B|A),$ (9)
the difference of the two definitions of conditional entropy.
Henderson and Vedral Henderson and Vedral (2001) had also studied $J(B|A)$ and
called it the classical correlations. In fact they advocated that
$\max_{\\{\Pi_{i}\\}}J(B|A)$ be taken as the classical correlations, which
means that quantum discord is best defined as
$\displaystyle\delta(B|A)=\min_{\\{\Pi_{i}\\}}[I(A:B)-J(B|A)].$ (10)
Since conditional entropy in Eq. 5 is asymmetric under party swap, quantum
discord is also asymmetric under party swap.
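Putting the pieces together, Eq. 10 can be evaluated by brute force for a two-qubit state. The sketch below (Python with NumPy; our own illustrative code, scanning real measurement angles only, which suffices for a real-valued state like the Bell state) recovers the well-known value of 1 for a maximally entangled state:

```python
import numpy as np

def S(rho):
    """von Neumann entropy (base 2)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log2(ev))

def ptrace(rho, keep):
    """Partial trace of a two-qubit matrix; keep=0 keeps A, keep=1 keeps B."""
    r = rho.reshape(2, 2, 2, 2)
    return r.trace(axis1=1, axis2=3) if keep == 0 else r.trace(axis1=0, axis2=2)

def discord(rho_ab, n_angles=200):
    """delta(B|A): minimize S2(B|A) over projective measurements on A, Eqs. (9)-(10)."""
    s1 = S(rho_ab) - S(ptrace(rho_ab, 0))  # Eq. (3)
    best_s2 = np.inf
    for theta in np.linspace(0, np.pi, n_angles):
        c, s = np.cos(theta), np.sin(theta)
        for v in (np.array([c, s]), np.array([-s, c])):  # orthonormal basis on A
            pass
        s2 = 0.0
        for v in (np.array([c, s]), np.array([-s, c])):
            Pi = np.kron(np.outer(v, v), np.eye(2))
            p = np.trace(Pi @ rho_ab).real
            if p > 1e-12:
                s2 += p * S(ptrace(Pi @ rho_ab @ Pi, 1) / p)  # Eq. (5)
        best_s2 = min(best_s2, s2)
    return best_s2 - s1

phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
delta = discord(np.outer(phi, phi))  # Bell state: discord equals 1
```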
A side note should be made at this point. The interpretation of the negativity
of the quantum conditional entropy in Eq. 3 was given in terms of a task known
as quantum state merging Horodecki _et al._ (2005), and we will see shortly
that a similar task gives quantum discord an operational meaning. The minimum
of Eq. 5 over all POVMs is related to the entanglement of formation between
$B$ and $C$, where $C$ purifies $AB$:
$E_{F}(BC)=\min_{\\{\Pi_{i}\\}}S^{(2)}(B|A)$ Koashi and Winter (2004). Putting
the two together leads to a task-dependent operational interpretation of
quantum discord Cavalcanti _et al._ (2010). Barring the details, we can say
that the quantum discord between $A$ and $B$, as measured by $A$, is equal to
the entanglement consumed in preparing the state of $BC$ plus that consumed in
state merging between those two parties. Additionally, state merging and other
tasks that involve conditional entropies are asymmetric under party swap, and
so a natural interpretation of the asymmetry of quantum discord arises.
The minimization over all POVMs in quantum discord is not an easy problem to
deal with in general. A similar quantity called _measurement induced
disturbance_ (MID) was introduced to deal with this difficulty. MID is defined
as the difference between the mutual information of a joint state $\rho^{AB}$
and that of its dephased version $\chi^{AB}$. The dephasing takes place in the
bases of the marginals, leaving the marginal states unchanged:
$\displaystyle MID=I(\rho^{AB})-I(\chi^{AB})=S(\rho^{AB})-S(\chi^{AB}).$ (11)
We will come back to MID later in the article.
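A sketch of Eq. 11 (Python with NumPy) for a pure, non-maximally entangled state $\cos t\,|00\rangle+\sin t\,|11\rangle$, whose nondegenerate marginals fix the dephasing basis to be the computational basis; the angle $t$ is an illustrative choice:

```python
import numpy as np

def S(rho):
    """von Neumann entropy (base 2)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log2(ev))

# |psi> = cos(t)|00> + sin(t)|11>; t != pi/4, so marginals are nondegenerate
t = np.pi / 8
psi = np.zeros(4)
psi[0], psi[3] = np.cos(t), np.sin(t)
rho = np.outer(psi, psi)

# both marginal eigenbases are the computational basis here, so dephasing
# keeps only the diagonal of rho in that basis
chi = np.diag(np.diag(rho))

# dephasing in the marginal eigenbases preserves the marginals, so
# MID = I(rho) - I(chi) = S(chi) - S(rho), as in Eq. (11)
mid = S(chi) - S(rho)
assert mid > 0  # the pure entangled state has nonclassical correlations
```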
## IV Unification of correlations
Figure 2: _Correlations as a distance._ The large ellipse represents the set
of all states with the set of separable states in the smaller ellipse. The
squares represent the set of classical states, and the dots within the squares
are the sets of product states. $\rho$ is an entangled state and $\sigma$ is
the closest separable state. The correlations are entanglement, $E$, discord,
$D$, and dissonance, $Q$.
Both of the measures above are defined in terms of mutual information and are
therefore very difficult to generalize to the multipartite case Chakrabarty
_et al._ (2010). Below we get over that hurdle by examining classical states,
i.e., states that have no quantum correlations. It is easy to verify that a
state can have zero discord and zero MID simultaneously, which means that such
a state has equal values for the conditional entropies in Eqs. 3 and 5. Such a
state is called a _classical state_ and has the form
$\displaystyle\chi^{AB}=\sum_{i}p_{i}|i\rangle\langle i|\otimes\rho^{B}_{i}$
(12)
when measurements are made by $A$ and $\chi^{AB}=\sum_{j}\rho^{A}_{j}\otimes
p_{j}|j\rangle\langle j|$ when measurements are made by $B$. It is then easy
to see that a symmetric classical state must have the form
$\displaystyle\chi^{AB}=\sum_{ij}p_{ij}|ij\rangle\langle ij|.$ (13)
Further, the conditional amplitude defined in Cerf and Adami (1997, 1999) is
not a density operator and may behave strangely. In Brodutch and Terno (2010)
it is shown that the conditional amplitude reduces to a density operator when
the state is classical.
Based on the definition of classical states we may now introduce a measure of
quantum correlations as a distance from a given state to the closest classical
state. The distance from a state to a state without the desired property (e.g.
entanglement or discord) is a measure of that property. For example, the
distance to the closest separable state is a meaningful measure of
entanglement. If the distance is measured with relative entropy, the resulting
measure of entanglement is the relative entropy of entanglement Vedral _et
al._ (1997); Vedral and Plenio (1998). Using relative entropy, we define
measures of nonclassical correlations as the distance to the closest classical
state Modi _et al._ (2010), though many other distance measures can serve
just as well Dakić _et al._ (2010). We call our measure of quantum
correlations the _relative entropy of discord_.
Since all the distances are measured with relative entropy, this provides a
consistent way to compare different correlations, such as entanglement,
discord, classical correlations, and _quantum dissonance_ , a new quantum
correlation that may be present in separable states. Dissonance is a notion
similar to discord, but it excludes entanglement. Lastly, we need make no
mention of whether we want a symmetric or an asymmetric discord measure, or of
the number of parties involved: we simply choose the appropriate classical
state and no ambiguity is left. A graphical illustration is given in Fig. 2.
Let us briefly define the types of states discussed below. A product state of
an $N$-partite system, a state with no correlations of any kind, has the form
$\pi=\pi_{1}\otimes\dots\otimes\pi_{N}$, where $\pi_{n}$ is the reduced state
of the $n$th subsystem. The set of product states, $\mathcal{P}$, is not a
convex set, in the sense that a mixture of product states may not be another
product state. The set of classical states, $\mathcal{C}$, contains mixtures of
locally distinguishable states $\chi=\sum_{k_{n}}p_{k_{1}\dots
k_{N}}|k_{1}\dots k_{N}\rangle\langle k_{1}\dots
k_{N}|=\sum_{\vec{k}}p_{\vec{k}}|\vec{k}\rangle\langle\vec{k}|$, where
$p_{\vec{k}}$ is a joint probability distribution and local states
$|k_{n}\rangle$ span an orthonormal basis. The correlations of these states
are identified as classical correlations Henderson and Vedral (2001); Ollivier
and Zurek (2001); Oppenheim _et al._ (2002); Modi _et al._ (2010). Note that
$\mathcal{C}$ is not a convex set; mixing two classical states written in
different bases can give rise to a nonclassical state. The set of separable
states, $\mathcal{S}$, is convex and contains mixtures of the form
$\sigma=\sum_{i}p_{i}\pi_{1}^{(i)}\otimes\dots\otimes\pi_{N}^{(i)}$. These
states can be prepared using only local quantum operations and classical
communication Werner (1989) and can possess nonclassical features Henderson
and Vedral (2001); Ollivier and Zurek (2001). The set of product states is a
subset of the set of classical states which in turn is a subset of the set of
separable states. Finally, entangled states are all those which do not belong
to the set of separable states. The set of entangled states, $\mathcal{E}$, is
not a convex set either.
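The nonconvexity of $\mathcal{C}$ can be illustrated numerically: mixing the classical states $|00\rangle\langle 00|$ and $|{+}{+}\rangle\langle{+}{+}|$ gives a state that is diagonal in neither constituent product basis. A Python sketch of this check (a full proof of nonclassicality would require checking all product bases, not just these two):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

chi1 = np.outer(np.kron(ket0, ket0), np.kron(ket0, ket0))  # |00><00|
chi2 = np.outer(np.kron(plus, plus), np.kron(plus, plus))  # |++><++|
rho = 0.5 * (chi1 + chi2)  # an equal mixture of two classical states

def offdiag_norm(m):
    """Frobenius norm of the off-diagonal part of a matrix."""
    return np.linalg.norm(m - np.diag(np.diag(m)))

# change of basis to the |+->  product basis (Hadamard on each qubit)
Hd = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
U = np.kron(Hd, Hd)

# rho is diagonal neither in the computational basis nor in the |+-> basis
assert offdiag_norm(rho) > 0
assert offdiag_norm(U @ rho @ U.T) > 0
```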
The relative entropy between two quantum states $x$ and $y$ is defined as
$S(x||y)\equiv-\mbox{tr}(x\log y)-S(x)$, where $S(x)\equiv-\mbox{tr}(x\log x)$
is the von Neumann entropy of $x$. The relative entropy is a non-negative
quantity, and due to this property it often appears in the context of distance
measures, though technically it is not a distance; e.g., it is not symmetric. In
Fig. 3, we present all possible types of correlations present in a quantum
state $\rho$. $T_{\rho}$ is the _total mutual information_ of $\rho$ given by
the distance to the closest product state. If $\rho$ is entangled, its
entanglement is measured by the relative entropy of entanglement, $E$, which
is the distance to the closest separable state $\sigma$. Having found
$\sigma$, one then finds the closest classical state, $\chi_{\sigma}$, to it.
This distance, denoted by $Q$, contains the rest of nonclassical correlations
(it is similar to discord Henderson and Vedral (2001); Ollivier and Zurek
(2001) but entanglement is excluded). We call this quantity _quantum
dissonance_. Alternatively, if we are interested in relative entropy of
discord, $D$, then we find the distance between $\rho$ and closest classical
state $\chi_{\rho}$. Summing up, we have the following nonclassical
correlations:
$\displaystyle E=$
$\displaystyle\min_{\sigma\in\mathcal{S}}S(\rho||\sigma)\quad\textrm{(entanglement)},$
(14) $\displaystyle D=$
$\displaystyle\min_{\chi\in\mathcal{C}}S(\rho||\chi)\quad\textrm{(quantum
discord)},$ (15) $\displaystyle Q=$
$\displaystyle\min_{\chi\in\mathcal{C}}S(\sigma||\chi)\quad\textrm{(quantum
dissonance)}.$ (16)
Next, we compute classical correlations as the minimal distance between a
classically correlated state, $\chi$, and a product state, $\pi$:
$C=\min_{\pi\in\mathcal{P}}S(\chi||\pi)$. Finally, we compute the quantities
labeled $L_{\rho}$ and $L_{\sigma}$ in Fig. 3, which give us additivity
conditions for correlations.
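The relative entropy itself is straightforward to compute from the definition above. A Python sketch (NumPy assumed; the two full-rank qubit states are illustrative choices), which also exhibits the asymmetry noted earlier:

```python
import numpy as np

def logm2(m):
    """Matrix logarithm (base 2) of a positive-definite Hermitian matrix."""
    ev, V = np.linalg.eigh(m)
    return V @ np.diag(np.log2(ev)) @ V.conj().T

def S_rel(x, y):
    """Quantum relative entropy S(x||y) = tr[x(log x - log y)], base 2."""
    return np.trace(x @ (logm2(x) - logm2(y))).real

# two illustrative full-rank single-qubit states
x = np.array([[0.8, 0.0], [0.0, 0.2]])
y = np.array([[0.6, 0.1], [0.1, 0.4]])

d_xy = S_rel(x, y)
d_yx = S_rel(y, x)
assert d_xy >= 0 and d_yx >= 0  # nonnegativity (Klein's inequality)
# note the asymmetry: S(x||y) != S(y||x) in general
```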
Figure 3: _Correlations in a quantum state._ An arrow from $x$ to $y$, $x\to
y$, indicates that $y$ is the closest state to $x$ as measured by the relative
entropy $S(x||y)$. The state $\rho\in\mathcal{E}$ (the set of entangled
states), $\sigma\in\mathcal{S}$ (the set of separable states),
$\chi\in\mathcal{C}$ (the set of classical states), and $\pi\in\mathcal{P}$
(the set of product states). The distances are entanglement, $E$, quantum
discord, $D$, quantum dissonance, $Q$, total mutual information, $T_{\rho}$
and $T_{\sigma}$, and classical correlations, $C_{\sigma}$ and $C_{\rho}$. All
relative entropies, except for entanglement, reduce to differences in the
entropies of $y$ and $x$: $S(x||y)=S(y)-S(x)$. With the aid of $L_{\rho}$ and
$L_{\sigma}$, the closed paths are additive.
Skipping the details of the calculations (presented in Modi _et al._ (2010)),
we give the final results, which are surprisingly simple and are summarized in
Fig. 3. First, we find that the closest classical state is obtained by making
a rank-one POVM measurement on the quantum state:
$\displaystyle\chi=\sum_{\vec{k}}|\vec{k}\rangle\langle\vec{k}|\rho|\vec{k}\rangle\langle\vec{k}|,$
(17)
where the $|\vec{k}\rangle$ are rank-one projections in a space of dimension
at most $d^{2}$. To find the relative entropy of discord one then has to
optimize over all rank-one POVMs:
$\displaystyle D=S(\rho||\chi)=\min_{\\{\Pi_{i}\\}}S(\chi)-S(\rho).$ (18)
Therefore finding the closest classical state is still a very difficult
problem and faces the same challenges as computing the original discord.
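Eqs. 17 and 18 can nonetheless be evaluated by brute force for small systems. The sketch below (Python with NumPy; our own illustrative code, restricted to real local bases, which suffices for this real state) is consistent with the expectation that for a pure state the relative entropy of discord equals the entropy of entanglement:

```python
import numpy as np

def S(rho):
    """von Neumann entropy (base 2)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log2(ev))

def dephase(rho, ta, tb):
    """Eq. (17): dephase a two-qubit state in the real product basis (ta, tb)."""
    chi = np.zeros_like(rho)
    for sa in (0, 1):
        va = np.array([np.cos(ta), np.sin(ta)]) if sa == 0 else \
             np.array([-np.sin(ta), np.cos(ta)])
        for sb in (0, 1):
            vb = np.array([np.cos(tb), np.sin(tb)]) if sb == 0 else \
                 np.array([-np.sin(tb), np.cos(tb)])
            k = np.kron(va, vb)
            P = np.outer(k, k)
            chi += P @ rho @ P
    return chi

# pure state cos(t)|00> + sin(t)|11>; its relative entropy of discord
# should equal the entanglement entropy H(cos^2 t)
t = np.pi / 8
psi = np.zeros(4)
psi[0], psi[3] = np.cos(t), np.sin(t)
rho = np.outer(psi, psi)

# Eq. (18): minimize S(chi) - S(rho) over a grid of product bases
angles = np.linspace(0, np.pi / 2, 61)
D = min(S(dephase(rho, ta, tb)) - S(rho) for ta in angles for tb in angles)
```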
We find that all correlations (except entanglement) in Fig. 3 are given simply
by subtracting the entropy of the state at the tail of an arrow from the
entropy of the state at its tip, i.e., $S(x||y)=S(y)-S(x)$ for all solid
lines. This means that closed loops of solid lines yield correlations that are
additive, e.g., $D+C_{\rho}=T_{\rho}+L_{\rho}$, or
$T_{\rho}-C_{\rho}=D-L_{\rho}$, which is actually the original discord. We
show below how these two measures are related to each other. See Modi _et
al._ (2010) for the details of this section.
## V Unifying quantumness measures
Next we note that there are four fundamental elements involved in the study of
quantumness of correlations. The first is the quantum state, $\rho$. Given a
quantum state we immediately have its marginals, $\pi_{\rho}$. The third
element is the classical state $\chi_{\rho}$, obtained by dephasing $\rho$ in
some basis. And the final element is the marginals of $\chi_{\rho}$. It then
turns out that the different measures of quantumness differ because they put
different constraints on the relationships among these four elements. We
illustrate this in Fig. 4.
In Fig. 4a the four fundamental elements are shown. Figs. 4b-d show how three
measures of quantumness are constructed from these elements. The original
discord maximizes the distance from the classical state to its marginals,
which means that the classical state is chosen to be least confusable with its
marginals; quantum discord is then the difference between how distinguishable
the quantum state is from its marginals and how distinguishable the resulting
classical state is from its marginals. Similarly, MID minimizes the distance
between the classical state and the marginals of the original quantum state,
with the effect that the marginals of the quantum state coincide with the
marginals of the classical state. Finally, the relative entropy of discord is
defined as the distance between a quantum state and its closest classical
state.
Figure 4: _Measures of quantumness._ a. Fundamental elements needed in
defining a measure of quantumness of correlations: the state $\rho$, its
marginals $\pi_{\rho}$, the dephased classical state $\chi_{\rho}$, and its
marginals $\pi_{\chi}$, together with the distances $T$, $D$, $C$, and $L$
between them. b. Original quantum discord: the distance between $\chi_{\rho}$
and $\pi_{\chi}$ is maximized. c. Measurement induced disturbance:
$\pi_{\rho}=\pi_{\chi}$ is enforced by the minimization. d. Relative entropy
of discord (RED): the distance between $\rho$ and $\chi_{\rho}$ is minimized.
We now show that using relative entropy we can describe (and generalize) other
quantumness measures, such as the original _quantum discord_ Ollivier and
Zurek (2001), _symmetric quantum discord_ Wu _et al._ (2009), and
_measurement induced disturbance_ Luo (2008).
Vedral et al. Vedral _et al._ (1997) showed that quantum relative entropy has
the operational meaning of quantifying how easily two quantum states can be
confused. The argument goes as follows: suppose you are given either $\rho$ or
$\sigma$ and you have to determine which, by making $N$ measurements (POVMs).
The probability of confusing the two states is
$\displaystyle P_{N}=e^{-NS(\rho||\sigma)}.$ (19)
Now suppose $\rho$ is an entangled state. Which separable state $\sigma$ can
be confused with $\rho$ the most? The answer is given by
$\displaystyle P_{N}=e^{-N\min_{\sigma\in\mathcal{S}}S(\rho||\sigma)},$ (20)
where $\mathcal{S}$ is the set of separable states. This is the meaning of
relative entropy of entanglement:
$\displaystyle E(\rho)=\min_{\sigma\in\mathcal{S}}S(\rho||\sigma).$ (21)
In a similar manner we can give meaning to the relative entropy of discord,
$\displaystyle D(\rho)=\min_{\chi\in\mathcal{C}}S(\rho||\chi),$ (22)
as the distance to the classical state $\chi$ that imitates $\rho$ the most.
The great advantage of looking at these measures in this manner is that they
are no longer constrained to be bipartite measures, nor are they constrained
to be symmetric or asymmetric under party exchange. We can now define quantum
discord for $n$-partite systems with measurements on $m$ subsystems, and MID
can be defined in such a manner as well. The other advantage is that we know
how the fundamental elements are related to each other and that there is only
a finite number of relationships among them that make sense; e.g., maximizing
the distance between a quantum state and a classical state does not make
sense, as one may get infinity as the result.
## VI Conclusions
We have given a pedagogical review of ideas behind quantum correlations beyond
entanglement. In doing so we were able to generalize the concept of quantum
discord to the multipartite case, with no ambiguity regarding the asymmetry of
quantum discord under party exchange. We have shown how three measures of
quantumness can be viewed under a single formalism using relative entropy.
###### Acknowledgements.
We acknowledge the financial support by the National Research Foundation and
the Ministry of Education of Singapore. We are grateful to the organizers of
the _75 years of entanglement_ in Kolkata for inviting our participation.
## References
* Einstein _et al._ (1935) A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev., 47, 777 (1935).
* Schrödinger (1935) E. Schrödinger, Naturwissenschaften, 23, 844 (1935).
* Bohr (1935) N. Bohr, Phys. Rev., 48, 696 (1935).
* Horodecki _et al._ (2009) R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Reviews of Modern Physics, 81, 865 (2009).
* Henderson and Vedral (2001) L. Henderson and V. Vedral, J. Phys. A, 34, 6899 (2001).
* Ollivier and Zurek (2001) H. Ollivier and W. H. Zurek, Phys. Rev. Lett., 88, 017901 (2001).
* Oppenheim _et al._ (2002) J. Oppenheim, M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Rev. Lett., 89, 180402 (2002).
* Groisman _et al._ (2005) B. Groisman, S. Popescu, and A. Winter, Phys. Rev. A, 72, 032317 (2005).
* Luo (2008) S. Luo, Phys. Rev. A, 77, 022301 (2008).
* Wu _et al._ (2009) S. Wu, U. V. Poulsen, and K. Mølmer, Phys. Rev. A, 80, 032319 (2009).
* Modi _et al._ (2010) K. Modi, T. Paterek, W. Son, V. Vedral, and M. Williamson, Phys. Rev. Lett., 104, 080501 (2010).
* Zurek (2003) W. H. Zurek, Phys. Rev. A, 67, 012320 (2003).
* Horodecki _et al._ (2005) M. Horodecki, P. Horodecki, R. Horodecki, J. Oppenheim, A. Sen(De), U. Sen, and B. Synak-Radtke, Phys. Rev. A, 71, 062307 (2005a).
* Rodriguez-Rosario _et al._ (2008) C. A. Rodriguez-Rosario, K. Modi, A. Kuah, A. Shaji, and E. C. G. Sudarshan, J. Phys. A, 41, 205301 (2008).
* Devi and Rendell (2008) A. R. U. Devi and R. W. Rendell, Phys. Rev. Lett., 100, 140502 (2008).
* Datta _et al._ (2008) A. Datta, A. Shaji, and C. Caves, Phys. Rev. Lett., 100, 050502 (2008).
* Piani _et al._ (2008) M. Piani, P. Horodecki, and R. Horodecki, Phys. Rev. Lett., 100, 090502 (2008).
* Dakić _et al._ (2010) B. Dakić, V. Vedral, and i. c. v. Brukner, Phys. Rev. Lett., 105, 190502 (2010).
* Chen _et al._ (2011) L. Chen, E. Chitambar, K. Modi, and G. Vacanti, Phys. Rev. A, 83, 020101 (2011).
* Cavalcanti _et al._ (2010) D. Cavalcanti, L. Aolita, S. Boixo, K. Modi, M. Piani, and A. Winter, arXiv:1008.3205 (2010).
* Piani _et al._ (2011) M. Piani, S. Gharibian, G. Adesso, J. Calsamiglia, P. Horodecki, and A. Winter, arXiv:1103.4032 (2011).
* Datta (2008) A. Datta, arXiv:0807.4490v1 (2008).
* Schumacher and Nielsen (1996) B. Schumacher and M. A. Nielsen, Phys. Rev. A, 54, 2629 (1996).
* Cerf and Adami (1997) N. J. Cerf and C. Adami, Phys. Rev. Lett., 79, 5194 (1997).
* Cerf and Adami (1999) N. J. Cerf and C. Adami, Phys. Rev. A, 60, 893 (1999).
* Brodutch and Terno (2010) A. Brodutch and D. R. Terno, Phys. Rev. A, 81, 062103 (2010).
* Horodecki _et al._ (2005) M. Horodecki, J. Oppenheim, and A. Winter, Nature, 436 (2005b).
* Koashi and Winter (2004) M. Koashi and A. Winter, Phys. Rev. A, 69, 022309 (2004).
* Chakrabarty _et al._ (2010) I. Chakrabarty, P. Agrawal, and A. K. Pati, arXiv:1006.5784 (2010).
* Vedral _et al._ (1997) V. Vedral, M. B. Plenio, M. A. Rippin, and P. L. Knight, Phys. Rev. Lett., 78, 2275 (1997a).
* Vedral and Plenio (1998) V. Vedral and M. Plenio, Phys. Rev. A, 57, 1619 (1998).
* Werner (1989) R. F. Werner, Phys. Rev. A, 40, 4277 (1989).
* Note (1) In this Letter, when we speak of discord we mean the relative entropy of discord as defined by us in Eq. 15. When we speak of the original definition discord we will write it as such.
* Vedral _et al._ (1997) V. Vedral, M. B. Plenio, K. Jacobs, and P. L. Knight, Phys. Rev. A, 56, 4452 (1997b).
arXiv:1104.1520, by Kavan Modi and Vlatko Vedral (submitted 2011-04-08).
License: Creative Commons Attribution 3.0.

arXiv:1104.1580
Proceedings of the 2011
New York Workshop on
Computer, Earth and Space Science
February 2011
Goddard Institute for Space Studies
http://www.giss.nasa.gov/meetings/cess2011
Editors
M.J. Way and C. Naud
Sponsored by the Goddard Institute for Space Studies
Contents

Foreword (Michael Way & Catherine Naud)
Introduction (Michael Way)
On a new approach for estimating threshold crossing times with an application to global warming (Victor H. de la Peña)
Cosmology through the large-scale structure of the Universe (Eyal Kazin)
On the Shoulders of Gauss, Bessel, and Poisson: Links, Chunks, Spheres, and Conditional Models (William Heavlin)
Mining Citizen Science Data: Machine Learning Challenges (Kirk Borne)
Tracking Climate Models (Claire Monteleoni)
Spectral Analysis Methods for Complex Source Mixtures (Kevin Knuth)
Beyond Objects: Using Machines to Understand the Diffuse Universe (J.E.G. Peek)
Viewpoints: A high-performance high-dimensional exploratory data analysis tool (Michael J. Way)
Clustering Approach for Partitioning Directional Data in Earth and Space Sciences (Christian Klose)
Planetary Detection: The Kepler Mission (Jon Jenkins)
Understanding the possible influence of the solar activity on the terrestrial climate: a time series analysis approach (Elizabeth Martínez-Gómez)
Optimal Scheduling of Exoplanet Observations Using Bayesian Adaptive Exploration (Thomas J. Loredo)
Beyond Photometric Redshifts using Bayesian Inference (Tamás Budavári)
Long-Range Climate Forecasts Using Data Clustering and Information Theory (Dimitris Giannakis)
Comparison of Information-Theoretic Methods to estimate the information flow in a dynamical system (Deniz Gencaga)
Reconstructing the Galactic halo’s accretion history: A finite mixture model approach (Duane Lee & Will Jessop)
Program
Participants
Talk Video Links
Foreword
Michael Way
NASA/Goddard Institute for Space Studies
2880 Broadway
New York, New York, USA
Catherine Naud
Department of Applied Physics and Applied Mathematics
Columbia University, New York, New York, USA
and
NASA/Goddard Institute for Space Studies
2880 Broadway
New York, New York, USA
The purpose of the New York Workshop on Computer, Earth and Space Sciences is
to bring together the New York area’s finest Astronomers, Statisticians,
Computer Scientists, Space and Earth Scientists to explore potential synergies
between their respective fields. The 2011 edition (CESS2011) was a great
success, and we would like to thank all of the presenters and participants for
attending.
This year was also special as it included authors from the upcoming book
titled “Advances in Machine Learning and Data Mining for Astronomy.” Over two
days, the latest advanced techniques used to analyze the vast amounts of
information now available for the understanding of our universe and our planet
were presented. These proceedings attempt to provide a small window into what
the current state of research is in this vast interdisciplinary field and we’d
like to thank the speakers who spent the time to contribute to this volume.
This year all of the presentations were videotaped and have been uploaded to
YouTube for easy access. In addition, the slides from all of the presentations
are available and can be downloaded from the workshop website
(http://www.giss.nasa.gov/meetings/cess2011).
We would also like to thank the local NASA/GISS staff for their assistance in
organizing the workshop, in particular Carl Codan and Patricia Formosa. Thanks
also go to Drs. Jim Hansen and Larry Travis for supporting the workshop and
allowing us to host it at the Goddard Institute for Space Studies again.
Introduction
Michael Way
NASA/Goddard Institute for Space Studies
2880 Broadway
New York, New York, USA
This is the 2nd time I’ve co-hosted the New York Workshop on Computer, Earth,
and Space Sciences (CESS). My reason for continuing to do so is that, like
many at this workshop, I’m a strong advocate of interdisciplinary research. My
own research institute, GISS (the Goddard Institute for Space Studies), has
traditionally contained people in the fields of Planetary Science, Astronomy,
Earth Science, Mathematics and Physics. We believe this has been a recipe for
success and hence we also continue partnerships with the Applied Mathematics
and Statistics Departments at Columbia University and New York University. Our
goal with these on-going workshops is to find new partnerships between
people/groups in the entire New York area who otherwise would never have the
opportunity to meet and share ideas for solving problems of mutual interest.
My own science has greatly benefitted over the years via collaborations with
people I would have never imagined working with 10 years ago. For example, we
have managed to find new ways of using Gaussian Process Regression (a
non-linear regression technique) (Way et al. 2009) by working with linear
algebra specialists at the San Jose State University Department of Mathematics and
Computer Science. This has led to novel methods for inverting relatively large
($\sim$100,000$\times$100,000) non-sparse matrices for use with Gaussian
Process Regression (Foster et al. 2009).
As we are all aware, many scientific fields are also dealing with a data
deluge, which is often approached by different disciplines in different ways.
A recent issue of Science Magazine
(http://www.sciencemag.org/site/special/data) has discussed this in some
detail (e.g. Baranuik 2011). It has also been discussed in the recent book
“The Fourth Paradigm” Hey et al. (2009). What the Science articles made me
most aware of is my own continued narrow focus. For example, there is a great
deal that could be shared between the people at this workshop and the fields
of Biology, Biochemistry, Genomics and Ecology, to name a few from the Science
article. This is particularly embarrassing for me, since in 2004 I attended a
two-day seminar in Silicon Valley that discussed several chapters in the book
“The Elements of Statistical Learning” (Hastie, Tibshirani, & Friedman 2003).
Over 90% of the audience were biochemists, while I was one of only two
astronomers.
Another area which I think we can all agree most fields can benefit from is
better (and cheaper) methods for displaying, and hence interrogating, our
data. Later today I will discuss a program called viewpoints (Gazis, Levit, &
Way 2010) which can be used to look at modest-sized multivariate data sets on
an individual desktop/laptop. Another of the Science Magazine articles (Fox &
Hendler 2011) discusses a number of ways to look at data in less expensive
ways.
In fact several of the speakers at the CESS workshop this year are also
contributors to a book in progress (Way et al. 2011) that has chapters written
by a number of people in the fields of Astronomy, Machine Learning and Data
Mining who have themselves engaged in interdisciplinary research – this being
one of the rationales for inviting them to contribute to this volume.
Finally, although I’ve restricted myself to the “hard sciences” we should not
forget that interdisciplinary research is taking place in areas that perhaps
only a few of us are familiar with. For example, I can highly recommend a
recent book (Morris 2010) that discusses possible theories for the current
western lead in technological innovation. The author (Ian Morris) uses data
and methodologies from the fields of History, Sociology,
Anthropology/Archaeology, Geology, Geography and Genetics to support the
thesis in the short title of his book: “Why The West Rules – For Now”.
Regardless, I would like to thank all of the speakers for coming to New York
and also for contributing to the workshop proceedings.
## References
* Baranuik (2011) Baranuik, R.G. 2011, _More Is Less: Signal Processing and the Data Deluge_ , Science, 331, 6018, 717
* Foster et al. (2009) Foster, L., Waagen, A., Aijaz, N., Hurley, M., Luis, A., Rinsky, J. Satyavolu, C., Way, M., Gazis, P., Srivastava, A. 2009, _Stable and Efficient Gaussian Process Calculations_ , Journal of Machine Learning Research, 10, 857
* Fox & Hendler (2011) Fox, P. & Hendler, J. 2011, _Changing the Equation on Scientific Data Visualization_ , Science, 331, 6018, 705
* Gazis, Levit, & Way (2010) Gazis, P.R., Levit, C. & Way, M.J. 2010, _Viewpoints: A High-Performance High-Dimensional Exploratory Data Analysis Tool_ , Publications of the Astronomical Society of the Pacific, 122, 1518
* Hastie, Tibshirani, & Friedman (2003) Hastie, T., Tibshirani, R. & Friedman, J.H. 2003, _The Elements of Statistical Learning_ , Springer 2003, ISBN-10: 0387952845
* Hey et al. (2009) Hey, T., Tansley, S. & Tolle, K. 2009, _The Fourth Paradigm: Data-Intensive Scientific Discovery_ , Microsoft Research, ISBN-10: 0982544200
* Morris (2010) Morris, I. 2010, _Why the West Rules–for Now: The Patterns of History, and What They Reveal About the Future_ , Farrar, Straus and Giroux, ISBN-10: 0374290024
* Way et al. (2009) Way, M.J., Foster, L.V., Gazis, P.R. & Srivastava, A.N. 2009, _New Approaches To Photometric Redshift Prediction Via Gaussian Process Regression In The Sloan Digital Sky Survey_ , The Astrophysical Journal, 706, 623
* Way et al. (2011) Way, M.J., Scargle, J., Ali, K & Srivastava, A.N. 2011, _Advances in Machine Learning and Data Mining for Astronomy_ , in preparation, Chapman and Hall
On a new approach for estimating threshold crossing times with an application
to global warming
Victor J. de la Peña444Joint work with Brown, M., Kushnir, Y., Ravindarath, A. and Sit, T.
Columbia University
Department of Statistics
New York, New York, USA
## Abstract
Given a range of future projected climate trajectories taken from a multitude
of models or scenarios, we attempt to find the best way to determine the
threshold crossing time. In particular, we compare the proposed estimators to
the more commonly used method of calculating the crossing time from the
average of all trajectories (the mean path) and show that the former are
superior in different situations. Moreover, using one of the former approaches
also allows us to provide a measure of uncertainty as well as other properties
of the crossing-time distribution. In the cases with infinite first-hitting
time, we also provide a new approach for estimating the crossing time and show
that our methods perform better than the common forecast. As a demonstration
of our method, we look at the projected reduction in rainfall in two
subtropical regions: the US Southwest and the Mediterranean.
KEY WORDS: Climate change; First-hitting time; Threshold-crossing; Probability
bounds; Decoupling.
## Introduction: Data and Methods
The data used to carry out the demonstration of the proposed method are time
series of Southwest (U.S.) and Mediterranean region precipitation, calculated
from IPCC Fourth Assessment (AR4) model simulations of the twentieth and
twenty-first centuries (Randall et al. 2007). To demonstrate the application
of our methods, simulated annual mean precipitation time series, area averaged
over the US West ($125^{\circ}$W to $95^{\circ}$W and $25^{\circ}$N to
$40^{\circ}$N) and the Mediterranean ($30^{\circ}$N to $45^{\circ}$N and
$10^{\circ}$W to $50^{\circ}$E), were assembled from nineteen models. Refer to
Seager et al. (2007) and the references therein for details.
## Optimality of an unbiased estimator
### An unbiased estimator
Before discussing the two possible estimators, we define
$\boldsymbol{X}(t)=\\{X_{1}(t),\ldots,X_{n}(t)\\}$ to be the outcomes of $n$
models (stochastic processes on the same probability space). The first hitting
time of the $i$th simulated path $X_{i}$, with $\tau$ bounded, is defined as
$T_{r,i}:=\inf\left\\{t\in[0,\tau]:X_{i}(t)\geq r\right\\}\text{ where }X_{i}(t)\geq 0,\quad i=1,\ldots,n(=19).$
Unless otherwise known, we assume that the paths are equally likely to be
close to the “true” path. Therefore, we let
$T_{r}=T_{r,j}\text{ with probability }\frac{1}{n},\quad j=1,\ldots,n(=19),$ (1)
where $T_{r}$ denotes the first-hitting time of the true path. There are two
possible ways to estimate it, namely
1. Mean of the first-hitting time:
$T^{(UF)}_{r}:=\frac{1}{n}\sum^{n}_{i=1}T_{r,i},$
2. First-hitting time of the mean path:
$T^{(CF)}_{r}:=\inf\left\\{t\in[0,\tau]:\bar{X}_{n}(t):=\frac{1}{n}\sum^{n}_{i=1}X_{i}(t)\geq
r\right\\}.$
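As an illustration, the two estimators can be computed from simulated trajectories as follows. This is a minimal sketch: the time grid, drift, noise level, and threshold are chosen purely for illustration and are not taken from the paper.

```python
import numpy as np

def crossing_time(path, times, r):
    """First time in `times` at which `path` reaches the threshold r,
    or infinity if the path never crosses."""
    hits = np.nonzero(path >= r)[0]
    return times[hits[0]] if hits.size else np.inf

# Hypothetical simulated trajectories: n paths on a common time grid.
rng = np.random.default_rng(0)
times = np.arange(0, 101)                      # time steps since start
n, r = 19, 3.0
paths = np.cumsum(rng.normal(0.05, 1.0, (n, times.size)), axis=1)

# 1. Mean of the first-hitting times, T^(UF)
t_uf = np.mean([crossing_time(p, times, r) for p in paths])

# 2. First-hitting time of the mean path, T^(CF)
t_cf = crossing_time(paths.mean(axis=0), times, r)
```

Note that `t_uf` becomes infinite as soon as a single path never crosses, which is exactly the situation the $a^{-1}_{(n)}(r)$ estimator below is designed to handle; the individual crossing times also yield an empirical CDF for $T_{r}$.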
###### Proposition 0.1.
The unbiased estimator $T^{(UF)}_{r}$ outperforms the traditional estimator
$T^{(CF)}_{r}$ in terms of (i) mean-squared error and (ii) Brier skill score.
The estimator $a^{-1}_{(n)}(r)$, to be specified in Theorem 0.1, is preferred
in cases where $T^{(UF)}_{r}=\infty$.
Remark: By considering the crossing times of individual paths, we can obtain
an empirical CDF for $T_{r}$, which is useful for modeling various statistical
properties of $T_{r}$.
### Extending boundary crossing of non-random functions to that of stochastic
processes
In situations in which not all the simulated paths cross the boundary before
the end of the experiment, we propose a remedy, summarized in the following
theorem. For details, refer to Brown et al. (2011).
###### Theorem 0.1.
Let $X_{s}\geq 0$ and $a_{(n)}(t)=E\sup_{s\leq
t}X_{s}=n^{-1}\sum^{n}_{i=1}\sup_{s\leq t}X_{s,i}$. Assume $a_{(n)}(t)$ is
increasing (we can also use a generalized inverse), with
$a_{(n)}^{-1}(r)=t_{r}=\inf\\{t>0:a_{(n)}(t)=r\\}$ and
$a_{(n)}(t)\longrightarrow a(t)$. Then, under certain conditions, we can
obtain the bounds
$\frac{1}{2}a_{(n)}^{-1}(r/2)\leq E[T_{r}]\leq 2a_{(n)}^{-1}(r).$
Remark: The lower bound is universal.
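A numerical reading of the theorem, under illustrative assumptions (nonnegative toy paths on a discrete grid; the drift, noise, and threshold values are not from the paper), might look like:

```python
import numpy as np

# Empirical a_(n)(t): average over paths of the running supremum sup_{s<=t} X_s.
rng = np.random.default_rng(1)
times = np.arange(0, 101)
paths = np.abs(np.cumsum(rng.normal(0.05, 1.0, (19, times.size)), axis=1))

running_sup = np.maximum.accumulate(paths, axis=1)
a_n = running_sup.mean(axis=0)           # nondecreasing by construction

def a_inv(a_vals, grid, r):
    """Generalized inverse a_(n)^{-1}(r) = inf{t : a_(n)(t) >= r}."""
    idx = np.searchsorted(a_vals, r)
    return grid[idx] if idx < grid.size else np.inf

r = 3.0
lower = 0.5 * a_inv(a_n, times, r / 2)   # universal lower bound on E[T_r]
upper = 2.0 * a_inv(a_n, times, r)       # upper bound, under the theorem's conditions
```

Unlike $T^{(UF)}_{r}$, the inverse $a_{(n)}^{-1}(r)$ stays finite as long as the averaged running supremum eventually reaches $r$, even when some individual paths never cross.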
Figure 1: Illustration of how to obtain $T^{(UF)}$ and $T^{(CF)}$.
Figure 2: Illustration of $a_{(n)}(t)$ and $a_{(n)}^{-1}(r)$.
## Results
Details of the results are tabulated as follows:
| $T^{(UF)}$ | $T^{(CF)}$ | $\hat{M}$ | $a^{-1}_{(n)}(r)$
---|---|---|---|---
Mediterranean | 2010.21 | 2035 | 2018 | 2008
Southwest US | $\infty$ | 2018 | 2011 | 2004
where $T^{(UF)}$, $T^{(CF)}$, $\hat{M}$ and $a^{-1}_{(n)}(r)$ denote
respectively the mean hitting time of the simulated paths, the hitting time
of the mean simulated path, the median hitting time of the simulated paths
and the hitting-time estimate based on Theorem 0.1. Nineteen paths are
simulated for both the Mediterranean and Southwest US regions. The infinite
value for the Southwest US region is due to three paths that do not cross the
boundary. If we include only the paths that cross the boundary, we obtain
$T^{(UF)}=2004.63$. Clearly, in the case of the Southwest, $a_{(n)}^{-1}(r)$
is better than $T_{r}$, which has infinite expectation.
According to the current estimates, the drought in the Southwest region is
already in progress. This observation shows a case where $T^{(UF)}$ and
$a^{-1}_{(n)}(r)$ provide better forecasts than $T^{(CF)}$ or the median.
## References
* Brown et al. (2011) Brown M, de la Peña, V.H., Kushnir, Y & Sit, T 2011, “On Estimating Threshold Crossing Times,” submitted.
* Seager et al. (2007) Seager, R., Ting, M. F., Held, I., Kushnir, Y., Lu, J., Vecchi, G., Huang, H. P., Harnik, N., Leetmaa, A., Lau, N. C., Li, C. H., Velez, J. & Naik, N. 2007, “Model projections of an imminent transition to a more arid climate in southwestern North America,” Science, May 25, Volume 316, Issue 5828, p.1181-1184.
* Randall et al. (2007) Randall D.A., Wood R.A., Bony S., Colman R., Fichefet T., Fyfe J., Kattsov V., Pitman A., Shukla J., Srinivasan J., Stouffer R.J., Sumi A. & Taylor K.E. 2007, “Climate Models and Their Evaluation.” in: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Solomon, S, Qin D, Manning M, Chen Z, Marquis M, Averyt KB, Tignor M, Miller HL, editors. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Cosmology through the large-scale structure of the Universe
Eyal A. Kazin555eyalkazin@gmail.com
Center for Cosmology and Particle Physics
New York University, 4 Washington Place
New York, NY 10003, USA
## Abstract
The distribution of matter contains a lot of cosmological information.
Applying $N$-point statistics, one can measure the geometry and expansion of the
cosmos as well as test General Relativity at scales of millions to billions of
light years. In particular, I will discuss an exciting recent measurement
dubbed the “baryonic acoustic feature”, which has recently been detected in
the Sloan Digital Sky Survey galaxy sample. It is the largest known “standard
ruler” (half a billion light years across), and is being used to investigate
the nature of the acceleration of the Universe.
## The questions posed by $\Lambda$CDM
The Cosmic Microwave Background (CMB) shows us a picture of the early Universe
which was very uniform (Penzias & Wilson 1965), yet with enough
inhomogeneities (Smoot et al. 1992) to seed the structure we see today in the
form of galaxies and the cosmic-web. Ongoing sky surveys are measuring deeper
into the Universe with cutting-edge technology, transforming cosmology into a
precision science.
The leading “Big Bang” model today is dubbed $\Lambda$CDM. While shown to be
superbly consistent with many independent astronomical probes, it indicates
that the “regular” material (atoms, radiation) comprises only $5\%$ of the
energy budget, hence challenging our current understanding of physics.
The $\Lambda$ is a reintroduction of Einstein’s so-called cosmological
constant. He originally introduced it to stabilize a Universe that, according
to General Relativity, would otherwise expand or contract. At present, it is
considered a mysterious energy with a repulsive force that explains the
acceleration of the observed Universe. This acceleration was first noticed
through supernova distance-redshift relationships (Riess et al. 1998,
Perlmutter et al. 1999). Often called dark energy, it has no clear
explanation, and most cosmologists would happily do away with it once a
better intuitive explanation emerges. One stream of thought is to modify
General Relativity on very large scales, e.g., by generalizing to higher
dimensions.
Cold dark matter (CDM), on the other hand, has gained its “street-cred”
throughout recent decades, as an invisible substance (meaning not interacting
with radiation), but seen time and time again as the dominant gravitational
source. Dark matter is required to explain various measurements, such as the
virial motions of galaxies within clusters (Zwicky 1933), the rotation curves
of galaxies (Rubin & Ford 1970), the gravitational lensing of background
galaxies, and collisions of galaxy clusters (Clowe et al. 2004). We have yet
to detect dark matter on Earth, although there have already been false
positives. Physicists hope to see convincing evidence emerge from the Large
Hadron Collider, which collides protons at near the speed of light.
One of the most convincing pieces of evidence for dark matter is the growth of
the large-scale structure of the Universe, the subject of this essay. The CMB
gives us a picture of the Universe when it was one thousand times smaller than
at present. Early-Universe inhomogeneities seen through temperature
fluctuations in the CMB are of order one part in $10^{5}$. By measuring the
distribution of galaxies, the structure of the recent Universe is probed to
the percent level at scales of hundreds of millions of light years, and to
the unity level and higher at “smaller” cosmic scales of thousands of
light-years. These tantalizing differences in structure cannot be explained
by the gravitational attraction of regular material alone (atoms, molecules,
stars, galaxies etc.), but can be explained with non-relativistic dark
matter. Similar arguments show that dark matter constitutes $\sim 20\%$ of
the energy budget, and dark energy $\sim 75\%$.
The distribution of matter is hence a vital test for any cosmological model.
## Acoustic oscillations as a cosmic ruler
Recently an important feature dubbed the baryonic acoustic feature has been
detected in galaxy clustering (Eisenstein et al. 2005, Percival et al. 2010,
Kazin et al. 2010). The feature has also been detected, with high
significance, in the anisotropies of the CMB by various ground- and
space-based missions (e.g. Torbet et al. 1999, Komatsu et al. 2009). Hence,
cosmologists have made an important
connection between the early and late Universe.
When the Universe was much smaller than today, energetic radiation dominated
and prevented the formation of atoms. Photon pressure on the free electrons
and protons (collectively called baryons) caused them to propagate as a
fluid, in acoustic-wave fashion. A useful analogy to have in mind is a
pebble dropped in water perturbing it and forming a wave.
As the Universe expanded it cooled down and the first atoms formed freeing the
radiation, which we now measure as the CMB. Imagine the pond freezing,
including the wave. As the atoms are no longer being pushed they slow down,
and are now gravitationally bound to dark matter.
This means that around every over density, where the plasma-photon waves (or
pebble) originated, we expect an excess of material at a characteristic radius
of the wave when it froze, dubbed the sound horizon.
In practice, this does not happen in a unique place, but throughout the whole
Universe (think of throwing many pebbles into the pond). This means that we
expect to measure a characteristic correlation length in the anisotropies of
the CMB, as well as in the clustering of matter in a statistical manner.
Figure 1 demonstrates the detection of the feature in the CMB temperature
anisotropies (Larson et al. 2011) and in the clustering of luminous red
galaxies (Eisenstein et al. 2001).
As mentioned before, the $\sim 10^{5}$ increase in the amplitude of the
inhomogeneities between early (CMB) and late Universe (galaxies) is explained
very well with dark matter. The height of the baryonic acoustic feature also
serves as a firm prediction of the CDM paradigm. If there were no dark matter,
the relative amplitude of the feature would be much higher. An interesting
anecdote is that we happen to live in an era when the feature is still
detectable in galaxy clustering. Billions of years from now, it will be washed
away, due to gravitational interplay between dark matter and galaxies.
In a practical sense, as the feature spans a characteristic scale, it can be
used as a cosmic ruler. The signature in the anisotropies of the CMB (Figure
1a), calibrates this ruler by measuring the sound-horizon currently to an
accuracy of $\sim 1.5\%$ (Komatsu et al. 2009).
By measuring the feature in galaxy clustering transverse to the line-of-sight,
you can think of it as the base of a triangle, for which we know the observed
angle, and hence can infer the distance to the galaxy sample. Clustering along
the line-of-sight is an even more powerful measurement, as it is sensitive to
the expansion of the Universe. By measuring expansion rates one can test
effects of dark energy. Current measurements show that the baryonic acoustic
feature in Figure 1b can be used to measure a distance of $\sim 3.5$ billion
light-years to an accuracy of $\sim 4\%$ (Percival et al. 2010, Kazin et al.
2010).
## Clustering: the technical details
As dark matter cannot be seen directly, luminous objects, such as galaxies,
can serve as tracers, like the tips of icebergs. Galaxies are thought to form
in regions of high dark matter density. An effective way to measure galaxy
clustering (and hence to infer the matter distribution) is through two-point
correlations of over-densities.
An over-density at point $\vec{x}$ is defined as the contrast to the mean
density $\overline{\rho}$:
$\delta(\vec{x})\equiv\frac{\rho(\vec{x})}{\overline{\rho}}-1.$ (2)
The auto-correlation function, the joint probability of measuring an excess
of density at a given separation $r$, is defined as:
$\xi(r)\equiv\langle{\delta(\vec{x})\delta(\vec{x}+\vec{r})}\rangle,$ (3)
where the average is over the volume, and statistical isotropy is assumed via
the cosmological principle. This is related to its Fourier counterpart, the
power spectrum $P(k)$.
For $P(k)$, it is common to smooth the galaxies into a density field, Fourier
transform $\delta$, and convolve with a “window function” that describes the
actual geometry of the survey.
The estimated $\xi$, in practice, is calculated by counting galaxy pairs:
$\hat{\xi}(r)=\frac{DD(r)}{RR(r)}-1,$ (4)
where $DD(r)$ is the normalized number of galaxy pairs within a spherical
shell of radius $r\pm\frac{1}{2}\Delta r$. This is compared to random points
distributed according to the survey geometry, where $RR$ is the normalized
random-random pair count. By normalized, I refer to the fact that one uses
many more random points than data points to reduce Poisson shot noise. Landy
& Szalay (1993) show that an estimator that minimizes the variance is:
$\hat{\xi}(r)=\frac{DD(r)+RR(r)-2DR(r)}{RR(r)},$ (5)
where $DR$ are the normalized data-random pairs.
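A brute-force sketch of these two estimators on a toy catalog. The uniform points in a unit box stand in for the survey, and the catalog sizes and bin edges are illustrative assumptions; real survey codes use tree-based pair counting and the actual survey window.

```python
import numpy as np

def pair_counts(a, b, edges):
    """Histogram of pairwise separations between point sets a and b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    if a is b:                            # auto-pairs: each unordered pair once
        d = d[np.triu_indices(len(a), k=1)]
    else:
        d = d.ravel()
    return np.histogram(d, bins=edges)[0].astype(float)

rng = np.random.default_rng(2)
data = rng.random((200, 3))               # toy "galaxy" catalog
rand = rng.random((1000, 3))              # many more randoms to beat shot noise
edges = np.linspace(0.05, 0.5, 10)

# Pair counts, each normalized by its total number of pairs
DD = pair_counts(data, data, edges) / (len(data) * (len(data) - 1) / 2)
RR = pair_counts(rand, rand, edges) / (len(rand) * (len(rand) - 1) / 2)
DR = pair_counts(data, rand, edges) / (len(data) * len(rand))

xi_natural = DD / RR - 1                  # equation (4)
xi_ls = (DD + RR - 2 * DR) / RR           # Landy & Szalay estimator, equation (5)
```

For a purely random catalog, both estimators scatter around zero; a clustered catalog would show $\hat{\xi}(r)>0$ at small separations.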
Figure 1: The baryonic acoustic feature in the large-scale structure of the
Universe. The solid lines are $\Lambda$CDM predictions. (a) Temperature
fluctuations in the CMB (2D projected $k$-space), measured by WMAP, ACBAR and
QUaD. The feature takes the form of peaks and troughs, and is detected to very
high significance. These anisotropies are at the level of $10^{-5}$, and
reflect the Universe as it was 1000-fold smaller than today ($\sim 13.3$
billion years ago). (b) SDSS luminous galaxy 3D clustering $\xi$ at very large
scales ($100$ $h^{-1}$Mpc corresponds to $\sim 0.5$ billion light years). The
feature is detected, consistent with predictions. Notice $\xi$ is of order
$1\%$ at the feature, showing a picture of the Universe $\sim 3$ billion years
ago. The gray regions indicate $68,\ 95\%$ CL regions of simulated mock galaxy
catalogs, reflecting cosmic variance. These will be substantially reduced in
the future with larger volume surveys. (ABB means time “after big bang”)
## The Sloan Digital Sky Survey
Using a dedicated $2.5$ meter telescope, the SDSS has industrialized (in a
positive way!) astronomy. In January 2011, they publicly released an image of
one third of the sky, with $469$ million detected objects, from asteroids to
galaxies666http://www.sdss3.org/dr8/ (SDSS-III collaboration: Hiroaki Aihara
et al. 2011).
These images give a 2D projected image of the Universe. This is followed up by
targeting objects of interest and obtaining their spectra. The spectra
contain information about the composition of the objects. As galaxies and
quasars have signature spectra, these can be used as templates to measure
the Doppler shift. The expanding Universe causes these to be redshifted. The
redshift $z$ can be simply related to the distance $d$ through the Hubble
equation at low $z$:
$cz=Hd,$ (6)
where $c$ is the speed of light and the Hubble parameter $H$ [1/time] is the
expansion rate of the Universe. Hence, by measuring $z$, observers obtain a 3D
picture of the Universe, which can be used to measure clustering. Dark energy
affects Equation 6 through $H(z)$ when it is generalized to larger distances.
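As a quick numerical reading of Equation 6, with an assumed present-day value $H_0\approx 70$ km/s/Mpc (a typical figure, not one quoted above):

```python
# Low-redshift distance from the Hubble law, d = cz / H (Equation 6).
C_KM_S = 299792.458      # speed of light in km/s
H0 = 70.0                # assumed Hubble parameter in km/s/Mpc

def hubble_distance_mpc(z):
    """Distance in megaparsecs implied by redshift z in the low-z regime."""
    return C_KM_S * z / H0

# z = 0.1 maps to roughly 430 Mpc, i.e. about 1.4 billion light years.
d = hubble_distance_mpc(0.1)
```

At larger redshifts this linear relation breaks down and the full $H(z)$, which carries the dark-energy dependence, must be integrated along the line of sight.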
The SDSS team has obtained spectroscopic redshifts of over a million objects
in the largest volume to date. It is now in its third phase, obtaining more
spectra for various missions, including: improving measurements of the
baryonic acoustic feature (and hence measuring dark energy) by surveying a
larger and deeper volume, learning the structure of the Milky Way, and
detecting exoplanets (Eisenstein et al. 2011).
## Summary
Cosmologists are showing that there is much more than meets the eye. It is
just a matter of time until dark matter is understood, and might I be bold
enough to say harnessed? The acceleration of the Universe is still a profound
mystery, but equipped with tools such as the baryonic acoustic feature,
cosmologists will be able to provide rigorous tests.
E.K. was partially supported by a Google Research Award and NASA Award
NNX09AC85G.
## References
* Clowe et al. (2004) Clowe, D., Gonzalez, A., & Markevitch, M. 2004, The Astrophysical Journal, 604, 596
* Eisenstein et al. (2011) Eisenstein, D. J., Weinberg, D. H., Agol, E., Aihara, H., Allende Prieto, C., Anderson, S. F., Arns, J. A., Aubourg, E., Bailey, S., Balbinot, E., et al. 2011, arXiv:1101.1529v1
* Eisenstein et al. (2001) Eisenstein, D. J. et al. 2001, The Astronomical Journal, 122, 2267
* Eisenstein et al. (2005) Eisenstein, D. J. et al. 2005, The Astrophysical Journal, 633, 560
* Kazin et al. (2010) Kazin, E. A., Blanton, M. R., Scoccimarro, R., McBride, C. K., Berlind, A. A., Bahcall, N. A., Brinkmann, J., Czarapata, P., Frieman, J. A., Kent, S. M., Schneider, D. P., & Szalay, A. S. 2010, The Astrophysical Journal, 710, 1444
* Komatsu et al. (2009) Komatsu, E., Dunkley, J., Nolta, M. R., Bennett, C. L., Gold, B., Hinshaw, G., Jarosik, N., Larson, D., Limon, M., Page, L., Spergel, D. N., Halpern, M., Hill, R. S., Kogut, A., Meyer, S. S., Tucker, G. S., Weiland, J. L., Wollack, E., & Wright, E. L. 2009, The Astrophysical Journal Supplement Series, 180, 330
* Landy & Szalay (1993) Landy, S. D. & Szalay, A. S. 1993, The Astrophysical Journal, 412, 64
* Larson et al. (2011) Larson, D., Dunkley, J., Hinshaw, G., Komatsu, E., Nolta, M. R., Bennett, C. L., Gold, B., Halpern, M., Hill, R. S., Jarosik, N., Kogut, A., Limon, M., Meyer, S. S., Odegard, N., Page, L., Smith, K. M., Spergel, D. N., Tucker, G. S., Weiland, J. L., Wollack, E., & Wright, E. L. 2011, The Astrophysical Journal Supplement Series, 192, 16
* Penzias & Wilson (1965) Penzias, A. A. & Wilson, R. W. 1965, The Astrophysical Journal, 142, 419
* Percival et al. (2010) Percival, W. J., Reid, B. A., Eisenstein, D. J., Bahcall, N. A., Budavari, T., Fukugita, M., Gunn, J. E., Ivezic, Z., Knapp, G. R., Kron, R. G., Loveday, J., Lupton, R. H., McKay, T. A., Meiksin, A., Nichol, R. C., Pope, A. C., Schlegel, D. J., Schneider, D. P., Spergel, D. N., Stoughton, C., Strauss, M. A., Szalay, A. S., Tegmark, M., Weinberg, D. H., York, D. G., & Zehavi, I. 2010, Monthly Notices of the Royal Astronomical Society, 401, 2148
* Perlmutter et al. (1999) Perlmutter, S. et al. 1999, The Astrophysical Journal, 517, 565
* Riess et al. (1998) Riess, A. G., Filippenko, A. V., Challis, P., Clocchiatti, A., Diercks, A., Garnavich, P. M., Gilliland, R. L., Hogan, C. J., Jha, S., Kirshner, R. P., Leibundgut, B., Phillips, M. M., Reiss, D., Schmidt, B. P., Schommer, R. A., Smith, R. C., Spyromilio, J., Stubbs, C., Suntzeff, N. B., & Tonry, J. 1998, The Astronomical Journal, 116, 1009
* Rubin & Ford (1970) Rubin, V. C. & Ford, Jr., W. K. 1970, The Astrophysical Journal, 159, 379
* SDSS-III collaboration: Hiroaki Aihara et al. (2011) SDSS-III collaboration: Hiroaki Aihara, Allende Prieto, C., An, D., et al. 2011, arXiv:1101.1559v2
* Smoot et al. (1992) Smoot, G. F., Bennett, C. L., Kogut, A., Wright, E. L., Aymon, J., Boggess, N. W., Cheng, E. S., de Amici, G., Gulkis, S., Hauser, M. G., Hinshaw, G., Jackson, P. D., Janssen, M., Kaita, E., Kelsall, T., Keegstra, P., Lineweaver, C., Loewenstein, K., Lubin, P., Mather, J., Meyer, S. S., Moseley, S. H., Murdock, T., Rokke, L., Silverberg, R. F., Tenorio, L., Weiss, R., & Wilkinson, D. T. 1992, The Astrophysical Journal Letters, 396, L1
* Torbet et al. (1999) Torbet, E., Devlin, M. J., Dorwart, W. B., Herbig, T., Miller, A. D., Nolta, M. R., Page, L., Puchalla, J., & Tran, H. T. 1999, The Astrophysical Journal Letters, 521, L79
* Zwicky (1933) Zwicky, F. 1933, Helvetica Physica Acta, 6, 110
On the Shoulders of Gauss, Bessel, and Poisson: Links, Chunks, Spheres, and
Conditional Models
William D Heavlin
Google, Inc.
Mountain View, California, USA
## Abstract
We consider generalized linear models (GLMs) and the associated exponential
family (“links”). Our data structure partitions the data into mutually
exclusive subsets (“chunks”). The conditional likelihood is defined as
conditional on the within-chunk histogram of the response. These likelihoods
have combinatorial complexity. To compute such likelihoods efficiently, we
replace a sum over permutations with an integration over the orthogonal or
rotation group (“spheres”). The resulting approximate likelihood gives rise to
estimates that are highly linearized, therefore computationally attractive.
Further, this approach refines our understanding of GLMs in several
directions.
## Notation and Model
Our observations are chunked into subsets indexed by $g$:
$(y_{gi},\mathbf{x}_{gi}:\;g=1,2,\ldots,G;\ i=1,\ldots,n_{g})$. The $g$-th
chunk’s responses are denoted by
$\mathbf{y}_{g}=(y_{g1},y_{g2},\ldots,y_{gn_{g}})$ and its feature matrix by
$\mathbf{X}_{g}$; its $i$-th row is $\mathbf{x}_{gi}^{\mathbf{T}}$. Our
framework is that of the generalized linear model (McCullagh & Nelder 1999):
$\Pr\\{y_{gi}|\mathbf{x}_{gi}^{\mathbf{T}}\mathbf{\beta}\\}=\exp\\{y_{gi}\mathbf{x}_{gi}^{\mathbf{T}}\mathbf{\beta}+h_{1}(y_{gi})+h_{2}(\mathbf{x}_{gi}^{\mathbf{T}}\mathbf{\beta})\\}.$
(7)
## The Spherical Approximation
Motivated by the risk of attenuation, we condition ultimately on the variance
of $\mathbf{y}_{g}.$ The resulting likelihood consists of these terms, indexed
by $g:$
$\exp\\{L_{cg}(\mathbf{\beta})\\}\approx\frac{\exp\\{\mathbf{y}_{g}^{\mathbf{T}}\mathbf{X}_{g}\mathbf{\beta}\\}}{\mbox{ave}\\{\exp\\{\mathbf{y}_{g}^{\mathbf{T}}\mathbf{P}_{\tau}^{\mathbf{T}}\mathbf{X}_{g}\mathbf{\beta}\\}|\tau\in\mbox{orthogonal}\\}}$
(8)
Free of intercept terms, this likelihood resists attenuation. The rightmost
term of (8) reduces to the von Mises-Fisher distribution (Mardia & Jupp 2000,
Watson & Williams 1956) and is computationally attractive (Plis et al. 2010).
Figure 1 assesses the spherical approximation. The x-axis is the radius
$\kappa=||\mathbf{y}_{g}||\times||\mathbf{X}_{g}\mathbf{\beta}||$, the y-axis
the differential effect of equation (8)’s two denominators. Panel (c)
illustrates how larger chunk sizes $n_{g}$ improve the spherical
approximation. Panels (a) and (b) illustrate how the approximation for
$n_{g}=2$ can be improved by a continuity correction.
Figure 1: _Numerically calculated values of
$\frac{\partial}{\partial\kappa}\log Q$ as a function of radius $\kappa.$_
## Some Normal Equations
From (8) these maximum likelihood equations follow:
$[\sum_{g}\frac{\rho_{g}}{r_{g}}\mathbf{X}_{g}^{\mathbf{T}}\mathbf{X}_{g}]\mathbf{\hat{\beta}}=\sum_{g}\mathbf{X}_{g}^{\mathbf{T}}\mathbf{y}_{g},$
(9)
which are nearly the same as those of Gauss. Added is the ratio
$\rho_{g}/r_{g},$ which throttles chunks with less information; to first
order, it equals the within-chunk variance.
The dependence of $\rho_{g}/r_{g}$ on $\mathbf{\beta}$ is weak, so the
convergence of (9) is rapid. Equation (9) resembles iteratively reweighted
least squares (Jorgensen 2006), but is more attractive computationally. To
estimate many more features, we investigate marginal regression (Fan & Lv
2008) and boosting (Schapire & Singer 1999).
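A generic sketch of solving equations of the form (9), with the throttle $\rho_{g}/r_{g}$ replaced by its stated first-order value, the within-chunk variance of $\mathbf{y}_{g}$. The data layout, dimensions, and the weighting of both sides are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
G, n_g, p = 50, 8, 3                      # chunks, chunk size, features
X = rng.normal(size=(G, n_g, p))          # stacked feature matrices X_g
beta_true = np.array([1.0, -0.5, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=(G, n_g))

# First-order throttle: within-chunk variance of the responses
w = y.var(axis=1)

# Chunk-weighted normal equations:
#   [sum_g w_g X_g^T X_g] beta_hat = sum_g w_g X_g^T y_g
lhs = np.einsum('g,gip,giq->pq', w, X, X)
rhs = np.einsum('g,gip,gi->p', w, X, y)
beta_hat = np.linalg.solve(lhs, rhs)
```

Because the weights here do not depend on $\mathbf{\beta}$, a single solve suffices in this toy version; the weak dependence of $\rho_{g}/r_{g}$ on $\mathbf{\beta}$ noted above would otherwise call for a few fixed-point iterations.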
Conditional models like those in (8) do not furnish estimates of intercepts.
The theory of conditional models therefore establishes a framework for
multiple-stage modeling.
## References
* Fan & Lv (2008) Fan, J. & Lv, J. 2008, “Sure independence screening for ultra-high dimensional feature space (with discussion).” _Journal of the Royal Statistical Society, series B_ vol. 70, pp. 849-911.
* Jorgensen (2006) Jorgensen, M. (2006). “Iteratively reweighted least squares,” _Encyclopedia of Environmetrics._ John Wiley & Sons.
* McCullagh & Nelder (1999) McCullagh, P. & Nelder, J.A. 1999, _Generalized Linear Models,_ 2nd edition, John Wiley & Sons.
* Mardia & Jupp (2000) Mardia, K.V. & Jupp, P.E. 2000, _Directional Statistics,_ 2nd edition, John Wiley & Sons.
* Plis et al. (2010) Plis, S.M., Lane, T., Calhoun, V.D. 2010, “Permutations as angular data: efficient inference in factorial spaces,” _IEEE International Conference on Data Mining,_ pp.403-410.
* Schapire & Singer (1999) Schapire, R.E. & Singer, Y. 1999, “Improved Boosting Algorithms Using Confidence-Rated Predictors.” _Machine Learning_ , vol. 37, pp. 297-336.
* Watson & Williams (1956) Watson, G.S. & Williams, E.J. 1956, “On the construction of significance tests on the circle and the sphere.” _Biometrika,_ vol. 43. pp. 344-352.
Mining Citizen Science Data: Machine Learning Challenges
Kirk Borne
School of Physics, Astronomy & Computational Science
George Mason University Fairfax, Virginia, USA
Large sky surveys in astronomy, with their open data policies (“data for all”)
and their uniformly calibrated scientific databases, are key
cyberinfrastructure for astronomical research. These sky survey databases are
also a major content provider for educators and the general public. Depending
on the audience, we recognize three broad modes of interaction with sky survey
data (including the image archives and the science database catalogs). These
modes of interaction span the progression from information-gathering to active
engagement to discovery. They are:
* a.)
Data Discovery – What was observed, when, and by whom? Retrieve observation
parameters from a sky survey catalog database. Retrieve parameters for
interesting objects.
* b.)
Data Browse – Retrieve images from a sky survey image archive. View
thumbnails. Select data format (JPEG, Google Sky KML, FITS). Pan the sky and
examine catalog-provided tags (Google Sky, World Wide Telescope).
* c.)
Data Immersion – Perform data analysis, mining, and visualization. Report
discoveries. Comment on observations. Contribute followup observations. Engage
in social networking, annotation, and tagging. Provide classifications of
complex images, data correlations, data clusters, or novel (outlying,
anomalous) detections.
In the latter category are Citizen Science research experiences. The world of
Citizen Science is blossoming in many ways. It includes century-old programs
such as the Audubon Society bird counts and the American Association of
Variable Star Observers’ (at aavso.org) continuous monitoring, measurement,
collation, and dissemination of brightness variations of thousands of
variable stars, but now also numerous projects in modern astronomy, climate
science, biodiversity, watershed monitoring, space science, and more. The most
famous and successful of these is the Galaxy Zoo project (at galaxyzoo.org),
which is “staffed” by approximately 400,000 volunteer contributors. Modern
Citizen Science experiences are naturally online, taking advantage of Web 2.0
technologies for database-image-tagging mash-ups. They take the form of
crowdsourcing the various stages of the scientific process. Citizen Scientists
assist scientists’ research efforts by collecting, organizing, characterizing,
annotating, and/or analyzing data. Citizen Science is one approach to engaging
the public in authentic scientific research experiences with large
astronomical sky survey databases and image archives.
Citizen Science is a term used for scientific research projects in which
individual (non-scientist) volunteers (with little or no scientific training)
perform or manage research-related tasks such as observation, measurement, or
computation. In the Galaxy Zoo project, volunteers are asked to click on
various pre-defined tags that describe the observable features in galaxy
images – nearly one million such images from the SDSS (Sloan Digital Sky
Survey, at sdss.org). Each of these million galaxies has now been classified
by Zoo volunteers approximately 200 times. These tag data are
a rich source of information about the galaxies, about human-computer
interactions, about cognitive science, and about the Universe. The galaxy
classifications are being used by astronomers to understand the dynamics,
structure, and evolution of galaxies through cosmic time, and thereby used to
understand the origin, state, and ultimate fate of our Universe. This
illustrates some of the primary characteristics (and required features) of
Citizen Science: that the experience must be engaging, must work with real
scientific data, must not be busy-work, must address authentic science
research questions that are beyond the capacity of science teams and
computational processing pipelines, and must involve the scientists. The
latter two points are demonstrated (and proven) by: (a) the sheer number of
galaxies to be classified is beyond the scope of the scientist teams, and the
complexity of the classification problem is beyond the capabilities of
computational algorithms, primarily because the classification process is
strongly based upon human recognition of complex patterns in the images,
thereby requiring “eyes on the data”; and (b) approximately 20 peer-
reviewed journal articles have already been produced from the Galaxy Zoo
results – many of these papers contain Zoo volunteers as co-authors, and at
least one of the papers includes no professional scientists as authors. The
next major step in astronomical Citizen Science (but also including other
scientific disciplines) is the Zooniverse project (at zooniverse.org). The
Zooniverse is a framework for new Citizen Science projects, thereby enabling
any science team to make use of the framework for their own projects with
minimal effort and development activity. Currently active Zooniverse projects
include Galaxy Zoo II, Galaxy Merger Zoo, the Milky Way Project, Supernova
Search, Planet Hunters, Solar Storm Watch, Moon Zoo, and Old Weather. All of
these depend on the power of human cognition (i.e., human computation), which
is superb at finding patterns in data, at describing (characterizing) the
data, and at finding anomalies (i.e., unusual features) in data. The most
exciting example of this was the discovery of Hanny’s Voorwerp (Figure 1). A
key component of the Zooniverse research program is the mining of the
volunteer tags. These tag databases themselves represent a major source of
data for knowledge discovery, pattern detection, and trend analysis. We are
developing and applying machine learning algorithms to the scientific
discovery process with these tag databases. Specifically, we are addressing
the question: how do the volunteer-contributed tags, labels, and annotations
correlate with the scientist-measured science parameters (generated by
automated pipelines and stored in project databases)? The ultimate goal will
be to train the automated data pipelines in future sky surveys with improved
classification algorithms, for better identification of anomalies, and with
fewer classification errors. These improvements will be based upon millions of
training examples provided by the Citizen Scientists. These improvements will
be absolutely essential for projects like the future LSST (Large Synoptic
Survey Telescope, at lsst.org), since LSST will measure properties for at
least 100 times more galaxies and 100 times more stars than SDSS. Also, LSST
will do repeated imaging of the sky over its 10-year project duration, so that
each of the roughly 50 billion objects observed by LSST will have
approximately 1000 separate observations. These 50 trillion time series data
points will provide an enormous opportunity for Citizen Scientists to explore
time series (i.e., object light curves) to discover all types of rare
phenomena, rare objects, rare classes, and new objects, classes, and sub-
classes. The contributions of human participants may include: characterization
of countless light curves; human-assisted search for best-fit models of
rotating asteroids (including shapes, spin periods, and varying surface
reflection properties); discovery of sub-patterns of variability in known
variable stars; discovery of interesting objects in the environments around
variable objects; discovery of associations among multiple variable and/or
moving objects in a field; and more.
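The basic step of reducing many volunteers' classifications to a consensus is simple to sketch. In this hypothetical example (the tag names and vote counts are invented for illustration), the roughly 200 classifications a galaxy receives become per-tag vote fractions:

```python
from collections import Counter

def consensus(tags):
    """Reduce one galaxy's volunteer tags to per-tag vote fractions."""
    counts = Counter(tags)
    total = len(tags)
    return {tag: n / total for tag, n in counts.items()}

# Hypothetical galaxy classified ~200 times:
votes = consensus(["spiral"] * 150 + ["elliptical"] * 40 + ["merger"] * 10)
```

Downstream analyses can then treat the vote fractions as soft labels, weighting uncertain classifications accordingly.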
Figure 1: Hanny’s Voorwerp (Hanny’s Object) – The green gas cloud seen below
the spiral galaxy in these images was first recognized as something unusual
and “out of the ordinary” by Galaxy Zoo volunteer Hanny van Arkel, a Dutch
school teacher, who was initially focused on classifying the dominant spiral
galaxy above the green blob. This object is an illuminated gas cloud, glowing
in the emission of ionized oxygen. It is probably the light echo from a dead
quasar that was luminous at the center of the spiral galaxy about 100,000
years ago. These images are approximately true color. The left image was taken
with a ground-based telescope, and the right image was obtained by the Hubble
Space Telescope (courtesy W. Keel, the Galaxy Zoo team, NASA, and ESA).
As an example of machine learning applied to the tag data, a preliminary
study (Baehr 2010) of the galaxy mergers found in the Galaxy Zoo I project was
carried out.
We found specific parameters in the SDSS science database that correlate best
with “mergerness” versus “non-mergerness”. These database parameters are
therefore useful in distinguishing normal (undisturbed) galaxies from abnormal
(merging, colliding, interacting, disturbed) galaxies. Such results may
consequently be applied to future sky surveys (e.g., LSST), to improve the
automatic (machine-based) classification algorithms for colliding and merging
galaxies. All of this was made possible by the fact that the galaxy
classifications provided by Galaxy Zoo I participants led to the creation of
the largest pure set of colliding and merging galaxies yet to be compiled for
use by astronomers.
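The kind of correlation study described above can be sketched as ranking which database parameters best separate mergers from non-mergers. The feature columns below are random stand-ins, not actual SDSS parameters, and the labels are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Columns stand in for SDSS science-database parameters (colors,
# concentration indices, etc.); here they are just simulated numbers.
X = rng.normal(size=(n, 4))
# Simulated "mergerness" labels driven mainly by the first column.
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(float)

# Rank parameters by |correlation| with the merger/non-merger label.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
ranking = np.argsort(corr)[::-1]
```

With real data, the same ranking idea (or a proper classifier with feature importances) identifies which pipeline parameters are most useful for flagging disturbed galaxies automatically.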
## References
* Baehr (2010) Baehr, S., Vedachalam, A., Borne, K., & Sponseller, D., “Data Mining the Galaxy Zoo Mergers,” NASA Conference on Intelligent Data Understanding, https://c3.ndc.nasa.gov/dashlink/resources/220/, pp. 133-144 (2010).
Tracking Climate Models
(This contribution is an excerpt from a journal paper currently under review.
The conference version appeared at the NASA Conference on Intelligent Data
Understanding, 2010 (Monteleoni, Schmidt & Saroha 2010).)
Claire Monteleoni (cmontel@ccls.columbia.edu)
Center for Computational Learning Systems, Columbia University
New York, New York, USA
Gavin A. Schmidt
NASA Goddard Institute for Space Studies, 2880 Broadway
and
Center for Climate Systems Research, Columbia University
New York, New York, USA
Shailesh Saroha
Department of Computer Science
Columbia University
New York, New York, USA
Eva Asplund
Department of Computer Science, Columbia University
and
Barnard College
New York, New York, USA
Climate models are complex mathematical models designed by meteorologists,
geophysicists, and climate scientists, and run as computer simulations, to
predict climate. There is currently high variance among the predictions of 20
global climate models, from various laboratories around the world, that inform
the Intergovernmental Panel on Climate Change (IPCC). Given temperature
predictions from 20 IPCC global climate models, and over 100 years of
historical temperature data, we track the changing sequence of which model
currently predicts best. We use an algorithm due to Monteleoni & Jaakkola
(2003) that models the sequence of observations using a hierarchical learner,
based on a set of generalized Hidden Markov Models, where the identity of the
current best climate model is the hidden variable. The transition
probabilities between climate models are learned online, simultaneous to
tracking the temperature predictions.
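A minimal flavor of this style of algorithm, simplified to a Fixed-Share-type exponential-weights update rather than the full hierarchical learner of Monteleoni & Jaakkola, can be sketched as follows (all parameter values are illustrative):

```python
import numpy as np

def track_models(predictions, targets, eta=1.0, alpha=0.05):
    """Online tracking of the best predictor among n 'experts' (models).

    predictions: (T, n) array, one column per climate model.
    targets: (T,) observed temperatures. Returns per-step predictions.
    """
    T, n = predictions.shape
    w = np.full(n, 1.0 / n)
    out = np.empty(T)
    for t in range(T):
        out[t] = w @ predictions[t]                  # weighted prediction
        loss = (predictions[t] - targets[t]) ** 2    # per-model squared loss
        w *= np.exp(-eta * loss)                     # exponential update
        w /= w.sum()
        w = (1 - alpha) * w + alpha / n              # share step
    return out
```

The share step plays the role of the transition probabilities: it keeps a floor under every model's weight so the learner can switch quickly when a different model becomes best (in the full algorithm these probabilities are learned online rather than fixed).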
Figure 1: Global Future Simulation 1: Tracking the predictions of one model
using the predictions of the remaining 19 as input, with no true temperature
observations. Black vertical line separates past (hindcasts) from future
predictions. Bottom plot zooms in on y-axis.
On historical global mean temperature data, our online learning algorithm’s
average prediction loss nearly matches that of the best performing climate
model in hindsight. Moreover, its performance surpasses that of the average
model prediction, which is the default practice in climate science, the median
prediction, and least squares linear regression. We also experimented on
climate model predictions through the year 2098. Simulating labels with the
predictions of any one climate model, we found significantly improved
performance using our online learning algorithm relative to the other
climate models and techniques (see, e.g., Figure 1). To complement our global
results, we also ran experiments on IPCC global climate model temperature
predictions for the specific geographic regions of Africa, Europe, and North
America. On historical data, at both annual and monthly time-scales, and in
future simulations, our algorithm typically outperformed both the best climate
model per region, and linear regression. Notably, our algorithm consistently
outperformed the average prediction over models, the current benchmark.
## References
* Monteleoni & Jaakkola (2003) Monteleoni, C. & Jaakkola, T. 2003 “Online learning of non-stationary sequences”, In NIPS ’03: Advances in Neural Information Processing Systems 16, 2003
* Monteleoni, Schmidt & Saroha (2010) Monteleoni, C., Schmidt, G & Saroha, S. 2010 “Tracking Climate Models”, in NASA Conference on Intelligent Data Understanding, 2010
Spectral Analysis Methods for Complex Source Mixtures
Kevin H. Knuth
Departments of Physics and Informatics
University at Albany
Albany, New York, USA
## Abstract
Spectral analysis in real problems must contend with the fact that there may
be a large number of interesting sources, some of which have known
characteristics and others unknown. In addition,
one must also contend with the presence of uninteresting or background
sources, again with potentially known and unknown characteristics. In this
talk I will discuss some of these challenges and describe some of the useful
solutions we have developed, such as sampling methods to fit large numbers of
sources and spline methods to fit unknown background signals.
## Introduction
The infrared spectrum of star-forming regions is dominated by emission from a
class of benzene-based molecules known as Polycyclic Aromatic Hydrocarbons
(PAHs). The observed emission appears to arise from the combined emission of
numerous PAH molecular species, both neutral and ionized, each with its unique
spectrum. Unraveling these variations is crucial to a deeper understanding of
star-forming regions in the universe. However, efforts to fit these data have
been defeated by the complexity of the observed PAH spectra and the very large
number of potential PAH emitters. Linear superposition of the various PAH
species accompanied by additional sources identifies this problem as a source
separation problem. It belongs, however, to a formidable class of source
separation problems, given that the number of distinct PAH sources is
potentially in the hundreds, even thousands, and there is only one measured
spectral signal for a given
astrophysical site. In collaboration with Duane Carbon (NASA Advanced
Supercomputing Center, NASA Ames), we have focused on developing informed
Bayesian source separation techniques (Knuth 2005) to identify and
characterize the contribution of a large number of PAH species to infrared
spectra recorded from the Infrared Space Observatory (ISO). To accomplish this
we take advantage of a large database of over 500 atomic and molecular PAH
spectra in various states of ionization that has been constructed by the NASA
Ames PAH team (Allamandola, Bauschlicher, Cami and Peeters). To isolate the
PAH spectra, much effort has gone into developing background estimation
algorithms that model the spectral background so that it can be removed to
reveal PAH, as well as atomic and ionic, emission lines.
## The Spectrum Model
Blind techniques are not always useful in complex situations like these, where
much is known about the physics of the source signal generation and
propagation. Higher-order models relying on physically-motivated parameterized
functions are required, and by adopting such models, one can introduce more
sophisticated likelihood and prior probabilities. We call this approach
Informed Source Separation (Knuth et al. 2007). In this problem, we have
linear mixing of $P$ PAH spectra, $K$ Planck blackbodies, a mixture of $G$
Gaussians to describe unknown sources, and additive noise:
$F(\lambda)=\sum_{p=1}^{P}c_{p}\,PAH_{p}(\lambda)+\sum_{k=1}^{K}A_{k}\,Planck(\lambda;T_{k})+\sum_{g=1}^{G}A_{g}\,N(\lambda;\bar{\lambda}_{g},\sigma_{g})+\phi(\lambda)$ (1)
where $PAH_{p}$ is the $p$-th PAH spectrum from the dictionary and $N$ is a
Gaussian. The Planck function is
$Planck(\lambda;T_{k})=\sqrt{\frac{\lambda_{max}}{\lambda}}\,\frac{\exp(hc/\lambda_{max}kT)-1}{\exp(hc/\lambda kT)-1}$ (2)
where $h$ is Planck’s constant, $c$ is the speed of light, $k$ is Boltzmann’s
constant, $T$ is the temperature of the cloud, and $\lambda_{max}$ is the
wavelength at which the blackbody spectral energy peaks,
$\lambda_{max}=hc/(4.965\,kT)$.
## Source Separation using Sampling Methods
The sum over Planck blackbodies in the modeled spectrum (1) takes into account
the fact that we are recording spectra from potentially several sources
arranged along the line-of-sight. Applying this model in conjunction with a
nested sampling algorithm to data recorded from ISO of the Orion Bar we were
able to obtain reasonable background fits, which often showed the presence of
multiple blackbodies. The results indicate that there is one blackbody
radiator at a temperature of 61.043 $\pm$ 0.004 K, and possibly a second
(36.3% chance), at a temperature around 18.8 K. Despite these successes, this
algorithm did not provide adequate results for background removal since the
estimated background was not constrained to lie below the recorded spectrum.
Upon background subtraction, this led to unphysical negative spectral power.
This result encouraged us to develop an alternative background estimation
algorithm. Estimation of PAHs was demonstrated to be feasible in synthetic
mixtures with low noise using sampling methods, such as Metropolis-Hastings
Markov chain Monte Carlo (MCMC) and Nested Sampling. Estimation using
hill-climbing techniques, such as the Nelder-Mead simplex method, was too
often trapped in local solutions. In real data, PAH estimation was confounded
by
spectral background.
## Background Removal Algorithm
Our most advanced background removal algorithm was developed to avoid the
problem of negative spectral power by employing a spline-based model coupled
with a likelihood function that favors background models that lie below the
recorded spectrum. This is accomplished by using a likelihood function based
on the Gaussian where the standard deviation on the negative side is 10 times
smaller than on the positive side. The algorithm is designed with the option
to include a second derivative smoothing prior. Users choose the number of
spline knots and set their positions along the x-axis. This provides the
option of fitting a spectral feature or estimating a smooth background
underlying it. Our preliminary work shows that the background estimation
algorithm works very well with both synthetic and real data (Nathan 2010). The
use of this algorithm illustrates that PAH estimates are extremely sensitive
to background, and that PAH characterization is extremely difficult in cases
where the background spectra are poorly understood.
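The asymmetric likelihood idea is compact enough to sketch directly. Here is a minimal version (the sigma ratio of 10 is from the description above; the function name and other details are illustrative), which scores a candidate background far more harshly wherever it rises above the data:

```python
import numpy as np

def asym_loglike(data, background, sigma=1.0, ratio=10.0):
    """Asymmetric Gaussian log-likelihood: residuals where the background
    exceeds the data use a standard deviation `ratio` times smaller,
    pushing fitted backgrounds to lie below the recorded spectrum."""
    r = data - background
    sig = np.where(r >= 0.0, sigma, sigma / ratio)
    return -0.5 * np.sum((r / sig) ** 2)
```

Plugging a likelihood of this form into any sampler or optimizer over spline-knot heights yields backgrounds that hug the spectrum from below, avoiding the negative-spectral-power problem described earlier.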
Kevin Knuth would like to acknowledge Duane Carbon, Joshua Choinsky, Deniz
Gencaga, Haley Maunu, Brian Nathan and ManKit Tse for all of their hard work
on this project.
## References
* Knuth (2005) Knuth, K.H. 2005. “Informed source separation: A Bayesian tutorial” (Invited paper) B. Sankur , E. Çetin, M. Tekalp , E. Kuruoğlu (ed.), Proceedings of the 13th European Signal Processing Conference (EUSIPCO 2005), Antalya, Turkey.
* Knuth et al. (2007) Knuth, K.H., Tse M.K., Choinsky J., Maunu H.A, Carbon D.F. 2007, “Bayesian source separation applied to identifying complex organic molecules in space”, Proceedings of the IEEE Statistical Signal Processing Workshop, Madison WI, August 2007.
* Nathan (2010) Nathan, B. 2010. “Spectral analysis methods for characterizing organic molecules in space”. M.S. Thesis, University at Albany, K.H. Knuth, Advisor.
Beyond Objects: Using Machines to Understand the Diffuse Universe
J. E. G. Peek
Department of Astronomy
Columbia University
New York, New York, USA
In this contribution I argue that our understanding of the universe has been
shaped by an intrinsically “object-oriented” perspective, and that to better
understand our diffuse universe we need to develop new ways of thinking and
new algorithms to do this thinking for us.
Envisioning our universe in the context of objects is natural both
observationally and physically. When our ancestors looked up into the
starry sky, they noticed something very different from the daytime sky. The
nighttime sky has specific objects, and we gave them names: Rigel, Procyon,
Fomalhaut, Saturn, Venus, Mars. These objects were not only very distinct
from the blackness of space but also persistent from night to night. The same
could not be said of the daytime sky, with its amorphous, drifting clouds,
never to be seen again, with no particular identity. Clouds could sometimes be
distinguished from the background sky, but often were a complex, interacting
blend. From this point forward astronomy has been a science of objects. And we
have been rewarded for this assumption: stars in space can be thought of very
well as discrete things. They have huge density contrasts compared to the rest
of space, and they are incredibly rare and compact. They rarely contact each
other, and are typically easy to distinguish. The same can be said (to a
lesser extent) of planets and galaxies, as well as all manner of astronomical
objects.
I argue, though, that we have reached a stage in our understanding of the
universe at which we need to better consider its diffuse component. We
now know that the material universe is largely made out of the very diffuse
dark matter, which, while clumpy, is not well approximated as discrete
objects. Even the baryonic matter is largely diffuse: of the 4% of the mass-
energy budget of the universe devoted to baryons, 3.5% is diffuse hot gas
permeating the universe, and collecting around groups of galaxies. Besides the
simple accounting argument, it is important to realize that the interests of
astronomers are now oriented more and more toward origins: origins of planets,
origins of stars, origins of galaxies. This is manifest in the fact that NASA
devotes a plurality of its astrophysics budget to the “cosmic origins”
program. And what do we mean by origins? The entire history of anything in the
universe can be roughly summed up as “it started diffuse and then, under the
force of gravity, it became more dense”. If we are serious about understanding
the origins of things in the universe, we must do better at understanding not
just the objects, but the diffuse material whence they came.
We have, as investigators of the universe, enlisted machines to do a lot of
our understanding for us. And, as machines inherit our intuition through the
codes and algorithms we write, we have given them a keen sense of objects. A
modern and powerful example is the Sloan Digital Sky Survey (SDSS; York et al.
2000). SDSS makes huge maps of the sky with very high fidelity, but these maps
are rarely used for anything beyond wall decor. The real power of the SDSS
experiment depends on the photometric pipeline (Lupton et al. 2001), which
interprets that sky into tens of millions of objects, each with precise
photometric information. With these lists in hand we can better take a census
of the stars and galaxies in our universe. It is sometimes interesting to
understand the limits of these methodologies; the photo pipeline can find
distant galaxies easily, but large, nearby galaxies are a challenge, as the
photo pipeline cannot easily interpret these huge diaphanous shapes (West et
al. 2010; Fig 1). The Virtual Astronomical Observatory (VAO; e.g. Hanisch
2010) is another example of a collection of algorithms that enables our
object-oriented mindset. VAO has developed a huge set of tools that allow
astronomers to collect a vast array of information from different sources, and
combine them elegantly together. These tools, however, almost always use the
“object” as the smallest element of information, and are much less useful in
interpreting the diffuse universe. Finally, astrometry.net is an example of
how cutting edge algorithms combined with excellent data can yield new tools
for interpreting astronomical data (Lang et al. 2010). By accessing giant
catalogs of objects, the software can, in seconds, give precise astrometric
information about any image containing stars. Again, we leverage our object-
oriented understanding, both psychologically and computationally, to
decode our data.
Figure 1: _r_ -band atlas images for HIPEQ1124+03. A single galaxy has been
divided into 7 sub-images by the SDSS photometric pipeline, which considered
them individual objects. The original galaxy is shown in the lower-middle
panel. Reprinted with permission from West et al. (2010).
As a case study, we examine a truly object-less data space: the Galactic
neutral hydrogen (H i) interstellar medium (ISM). Through the 21-cm hyperfine
transition of H i, we can study the neutral ISM of our Galaxy and others both
angularly and in the velocity domain (e.g. Kulkarni & Heiles 1988). H i images
of other galaxies, while sometimes diffuse, do typically have clear edges. In
our own Galaxy we are afforded no such luxury. The Galactic H i ISM is sky-
filling, and can represent gas on a huge range of distances and physical
conditions. As our technology increases, we are able to build larger and
larger, and more and more detailed images of the H i ISM. What we see in these
multi-spectral images is an incredible cacophony of shapes and structures,
overlapping, intermingling, with a variety of size, shape, and intensity that
cannot be easily described. Indeed, it is this lack of language that is at the
crux of the problem. These data are affected by a huge number of processes:
the accretion of material onto the Galaxy (e.g. Begum et al. 2010), the impact
of shockwaves and explosions (e.g. Heiles 1979), the formation of stars (e.g.
Kim et al. 1998), the effect of magnetization (e.g. McClure-Griffiths et al.
2006). And yet, we have very few tools that capture this information.
Figure 2: A typical region of the Galactic H i sky, $40^{\circ}\times
18^{\circ}\,10^{\prime}$ in size. The top panel represents $-41.6$, $-39.4$,
and $-37.2$ km ${\rm s^{-1}}$ in red, green, and blue, respectively. The
middle panel represents $-4.0$, $-1.8$, and $0.4$ km ${\rm s^{-1}}$, while the
bottom panel represents $15.8$, $18.7$, and $21.7$ km ${\rm s^{-1}}$.
Reprinted with permission from Peek et al. (2011).
As yet, there are two “flavors” of mechanisms we as a community have used to
try to interpret this kind of diffuse data. The first is the observer’s
method. In the observer’s method the data cubes are inspected by eye, and
visually interesting shapes have been picked out (e.g. Ford et al. 2010).
These shapes are then cataloged and described, usually qualitatively and
without statistical rigor. The problems with these methods are self-evident:
impossible statistics, unquantifiable biases, and an inability to compare to
physical models. The second method is the theorist’s method. In the theorist’s
method, some equation is applied to the data set wholesale, and a number comes
out (e.g. Chepurnov et al. 2010). This method is powerful in that it can be
compared directly to simulation, but typically cannot interpret any shape
information at all. Given that the ISM is not a homogeneous, isotropic system,
and various physical effects may influence the gas in different directions or
at different velocities, this method seems a poor match for the data. It also
cuts out any intuition as to what data may be carrying the most interesting
information.
Figure 3: An H i image of the Riegel-Crutcher cloud from McClure-Griffiths et
al. (2006) at $v_{\rm LSR}=4.95$ km ${\rm s^{-1}}$. The polarization vectors
from
background starlight indicate that the structure of the ISM reflects the
structure of the intrinsic magnetization. Reprinted with permission from
McClure-Griffiths et al. (2006).
We are in the process of developing a “third way”, which I will explain in two
examples. Of the two projects, our more completed one is a search for compact,
low-velocity clouds in the Galaxy (e.g. Saul et al. 2011). These clouds are
inherently interesting as they likely probe the surface of the Galaxy as it
interacts with the Galactic halo, a very active area of astronomical research.
To do this our group, led by Destry Saul, wrote a wavelet-style code to search
through the data cubes for isolated clouds that matched our search criteria.
These clouds, once found, could then be “objectified”, quantified, and studied as
a population. In some sense, through this objectification, we are trying to
shoehorn an intrinsically diffuse problem into the object-oriented style of
thinking we are trying to escape. This gives us the advantage that we can use
well known tools for analysis (e.g. scatter plots), but we give up a perhaps
deeper understanding of these structures from considering them in their
context. The harder, and far less developed, project is to try to understand
the meaning of very straight and narrow diffuse structures in the H i ISM at
very low velocity. The H i ISM is suffused with “blobby filaments”, but these
particular structures seem to stand out, looking like a handful of dry
fettuccine dropped on the kitchen floor. We know that these kinds of
structures can give us insight into the physics of the ISM: in denser
environments it has been shown that more discrete versions of these features
are qualitatively correlated with dust polarizations and the magnetic
underpinning of the ISM (McClure-Griffiths et al. 2006). We would like to
investigate these features more quantitatively, but we have not developed
mechanisms to answer even the simplest questions. In a given direction how
much of this feature is there? In which way is it pointing? What are its
qualities? Does there exist a continuum of these features, or are they truly
discrete? The “object-oriented” astronomer mindset is not equipped to address
these sophisticated questions.
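As a concrete illustration of the "objectification" step described above (a generic sketch, not the actual Saul et al. pipeline), a compact-cloud search over a position-position-velocity cube can be approximated with a difference-of-Gaussians matched filter plus connected-component labeling:

```python
import numpy as np
from scipy import ndimage

def find_compact_clouds(cube, small=1.0, large=4.0, nsigma=5.0):
    """Filter each velocity channel with a difference of Gaussians tuned
    to compact angular scales, then label contiguous super-threshold
    regions as candidate clouds. Returns (label_cube, n_candidates)."""
    filt = (ndimage.gaussian_filter(cube, (0, small, small))
            - ndimage.gaussian_filter(cube, (0, large, large)))
    labels, n = ndimage.label(filt > nsigma * filt.std())
    return labels, n

rng = np.random.default_rng(0)
cube = rng.normal(0.0, 0.01, size=(5, 64, 64))   # toy noise cube
y, x = np.mgrid[0:64, 0:64]
# Inject one compact "cloud" in the middle velocity channel.
cube[2] += np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 1.5 ** 2))
labels, n = find_compact_clouds(cube)
```

Each labeled region can then be measured as a population (fluxes, sizes, velocities), which is exactly the shoehorning into object-style analysis, with its trade-offs, discussed above.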
We are just beginning to investigate machine vision techniques for
understanding these unexplored data spaces. Machine vision technologies are
being developed to better parse our very confusing visual world using
computers, such as in the context of object identification and the 3D
reconstruction of 2D images (Sonka et al. 2008). Up until now, most
astronomical machine vision problems have been embarrassingly easy; points in
space are relatively simple to parse for machines. Perhaps the diffuse
universe will be a new challenge for computer vision specialists and be a
focal point for communication between the two fields. Machine learning
methods, and human-aided data interpretation on large scales may also prove
crucial to cracking these complex problems. How exactly we employ these new
technologies in parsing our diffuse universe is very much up to us.
## References
* Begum et al. (2010) Begum, A., Stanimirovic, S., Peek, J. E., Ballering, N. P., Heiles, C., Douglas, K. A., Putman, M., Gibson, S. J., Grcevich, J., Korpela, E. J., Lee, M.-Y., Saul, D., & Gallagher, J. S. 2010, eprint arXiv, 1008, 1364
* Chepurnov et al. (2010) Chepurnov, A., Lazarian, A., Stanimirović, S., Heiles, C., & Peek, J. E. G. 2010, The Astrophysical Journal, 714, 1398
* Ford et al. (2010) Ford, H. A., Lockman, F. J., & McClure-Griffiths, N. M. 2010, The Astrophysical Journal, 722, 367
* Hanisch (2010) Hanisch, R. 2010, Astronomical Data Analysis Software and Systems XIX, 434, 65
* Heiles (1979) Heiles, C. 1979, Astrophysical Journal, 229, 533
* Kim et al. (1998) Kim, S., Staveley-Smith, L., Dopita, M. A., Freeman, K. C., Sault, R. J., Kesteven, M. J., & Mcconnell, D. 1998, Astrophysical Journal v.503, 503, 674
* Kulkarni & Heiles (1988) Kulkarni, S. R. & Heiles, C. 1988, Neutral hydrogen and the diffuse interstellar medium, 95–153
* Lang et al. (2010) Lang, D., Hogg, D. W., Mierle, K., Blanton, M., & Roweis, S. 2010, The Astronomical Journal, 139, 1782
* Lupton et al. (2001) Lupton, R., Gunn, J. E., Ivezic, Z., Knapp, G. R., Kent, S., & Yasuda, N. 2001, arXiv, astro-ph
* McClure-Griffiths et al. (2006) McClure-Griffiths, N. M., Dickey, J. M., Gaensler, B. M., Green, A. J., & Haverkorn, M. 2006, The Astrophysical Journal, 652, 1339
* Peek et al. (2011) Peek, J. E. G., Heiles, C., Douglas, K. A., Lee, M.-Y., Grcevich, J., Stanimirovic, S., Putman, M. E., Korpela, E. J., Gibson, S. J., Begum, A., & Saul, D. 2011, The Astrophysical Journal Supplement, 1
* Saul et al. (2011) Saul, D., Peek, J. E. G., Grcevich, J., & Putman, M. E. 2011, in prep, 1
* Sonka et al. (2008) Sonka, M., Hlavac, V., & Boyle, R. 2008, Image Processing, Analysis, and Machine Vision, 3rd ed. (Toronto: Thomson), 829
* West et al. (2010) West, A. A., Garcia-Appadoo, D. A., Dalcanton, J. J., Disney, M. J., Rockosi, C. M., Ivezić, Ž., Bentz, M. C., & Brinkmann, J. 2010, The Astronomical Journal, 139, 315
* York et al. (2000) York, D. G., Adelman, J., Anderson, J., Anderson, S. F., Annis, J., Bahcall, N. A., Bakken, J. A., Barkhouser, R., Bastian, S., Berman, E., Boroski, W. N., Bracker, S., Briegel, C., Briggs, J. W., Brinkmann, J., Brunner, R., Burles, S., Carey, L., Carr, M. A., Castander, F. J., Chen, B., Colestock, P. L., Connolly, A. J., Crocker, J. H., Csabai, I., Czarapata, P. C., Davis, J. E., Doi, M., Dombeck, T., Eisenstein, D., Ellman, N., Elms, B. R., Evans, M. L., Fan, X., Federwitz, G. R., Fiscelli, L., Friedman, S., Frieman, J. A., Fukugita, M., Gillespie, B., Gunn, J. E., Gurbani, V. K., de Haas, E., Haldeman, M., Harris, F. H., Hayes, J., Heckman, T. M., Hennessy, G. S., Hindsley, R. B., Holm, S., Holmgren, D. J., h Huang, C., Hull, C., Husby, D., Ichikawa, S.-I., Ichikawa, T., Ivezić, Ž., Kent, S., Kim, R. S. J., Kinney, E., Klaene, M., Kleinman, A. N., Kleinman, S., Knapp, G. R., Korienek, J., Kron, R. G., Kunszt, P. Z., Lamb, D. Q., Lee, B., Leger, R. F., Limmongkol, S., Lindenmeyer, C., Long, D. C., Loomis, C., Loveday, J., Lucinio, R., Lupton, R. H., MacKinnon, B., Mannery, E. J., Mantsch, P. M., Margon, B., McGehee, P., McKay, T. A., Meiksin, A., Merelli, A., Monet, D. G., Munn, J. A., Narayanan, V. K., Nash, T., Neilsen, E., Neswold, R., Newberg, H. J., Nichol, R. C., Nicinski, T., Nonino, M., Okada, N., Okamura, S., Ostriker, J. P., Owen, R., Pauls, A. G., Peoples, J., Peterson, R. L., Petravick, D., Pier, J. R., Pope, A., Pordes, R., Prosapio, A., Rechenmacher, R., Quinn, T. R., Richards, G. T., Richmond, M. W., Rivetta, C. H., Rockosi, C. M., Ruthmansdorfer, K., Sandford, D., Schlegel, D. J., Schneider, D. P., Sekiguchi, M., Sergey, G., Shimasaku, K., Siegmund, W. A., Smee, S., Smith, J. A., Snedden, S., Stone, R., Stoughton, C., Strauss, M. A., Stubbs, C., SubbaRao, M., Szalay, A. S., Szapudi, I., Szokoly, G. P., Thakar, A. R., Tremonti, C., Tucker, D. L., Uomoto, A., Berk, D. V., Vogeley, M. 
S., Waddell, P., i Wang, S., Watanabe, M., Weinberg, D. H., Yanny, B., & Yasuda, N. 2000, The Astronomical Journal, 120, 1579
Viewpoints: A high-performance high-dimensional exploratory data analysis tool
Michael Way
NASA/Goddard Institute for Space Studies
2880 Broadway
New York, New York, USA
Creon Levit & Paul Gazis
NASA/Ames Research Center
Moffett Field, California, USA
Viewpoints (Gazis et al. 2010) is a high-performance visualization and
analysis tool for large, complex, multidimensional data sets. It allows
interactive exploration of data in 100 or more dimensions with sample counts,
or the number of points, exceeding $10^{6}$ (up to $10^{8}$ depending on
available RAM). Viewpoints was originally created for use with the extremely
large data sets produced by current and future NASA space science missions,
but it has been used for a wide variety of diverse applications ranging from
aeronautical engineering, quantum chemistry, and computational fluid dynamics
to virology, computational finance, and aviation safety. One of its main
features is the ability to look at the correlation of variables in
multivariate data streams (see Figure 1).
Viewpoints can be considered a kind of “mini” version of the NASA Ames
Hyperwall (Sandstrom et al. 2003) which has been used for examining multi-
variate data of much larger sizes (see Figure 2). Viewpoints has been used
extensively as a pre-processor to the Hyperwall: one can examine sub-
selections of the full data set (when the full data set is too large to run)
prior to viewing it with the Hyperwall (which is a highly leveraged resource).
Currently, Viewpoints runs on Mac OS, Windows, and Linux platforms, and
requires only a moderately new (less than six years old) graphics card
supporting OpenGL.
More information can be found here:
http://astrophysics.arc.nasa.gov/viewpoints
You can download the software from here:
http://www.assembla.com/wiki/show/viewpoints/downloads
Figure 1: Viewpoints as a collaboration tool: here a workstation with
multiple screens and a laptop are viewing the same multivariate data. The
screen layout and setup can be saved to an XML file, which allows one to
retrace previous investigations.
Figure 2: Left: The back of the original (7$\times$7 display) Hyperwall at
NASA/Ames. Right: The front of the Hyperwall. One can see the obvious
similarities between the Hyperwall and Viewpoints.
## References
* Gazis et al. (2010) Gazis, P.R., Levit, C. & Way, M.J. 2010, Publications of the Astronomical Society of the Pacific, 122, 1518, “Viewpoints: A High-Performance High-Dimensional Exploratory Data Analysis Tool”
* Sandstrom et al. (2003) Sandstrom, T. A., Henze, C. & Levit, C. 2003, Coordinated & Multiple Views in Exploratory Visualization, (Piscataway: IEEE), 124
Clustering Approach for Partitioning Directional Data in Earth and Space
Sciences
C. D. Klose
Think GeoHazards
New York, New York, USA
K. Obermayer
Electrical Engineering and Computer Science
Technical University of Berlin
Berlin, Germany
## Abstract
A simple clustering approach based on vector quantization (VQ) is presented
for partitioning directional data in Earth and Space Sciences. Directional
data are grouped into a certain number of disjoint isotropic clusters, and at
the same time the average direction is calculated for each group. The
algorithm is fast, and thus can be easily utilized for large data sets. It
shows good clustering results compared to other benchmark counting methods for
directional data. No heuristics are used: the grouping of data points, the
binary assignment of new data points to clusters, and the calculation of the
average cluster values are all based on the same cost function.
Keywords: clustering, directional data, discontinuities, fracture grouping
## Introduction
Clustering problems of directional data are fundamental in the earth and
space sciences. Several methods have been proposed to help find groups
within directional data. Here, we give a short overview of existing clustering
methods for directional data and outline a new clustering method based
on vector quantization (Gray 1984). The new method improves on several issues
in clustering directional data and was published in Klose (2004).
Counting methods for visually partitioning the orientation data in
stereographic plots were introduced by Schmidt (1925). Shanley & Mahtab (1976)
and Wallbrecher (1978) developed counting techniques to identify clusters of
orientation data. The parameters of Shanley & Mahtab’s counting method have to
be optimized by minimizing an objective function. Wallbrecher’s method is
optimized by comparing the clustering result with a given probability
distribution on the sphere in order to obtain good partitioning results.
However, counting methods depend on the density of data points and their
results are prone to sampling bias (e.g., 1-D or 2-D sampling to describe a
3-D space). Counting methods are time-consuming, can lead to incorrect results
for clusters with small dip angles, and can lead to solutions which an expert
would rate sub-optimal. Pecher (1989) developed a supervised method for
grouping of directional data distributions. A contour density plot is
calculated and an observer picks initial values for the average dip directions
and dip angles of one to a maximum of seven clusters. The method has a
conceptual disadvantage. It uses two different distance measures; one measure
for the assignment of data points to clusters and another measure defined by
the orientation matrix to calculate the refined values for dip direction and
dip angle. Thus, average values and cluster assignments are not determined in
a self-consistent way.
Dershowitz et al. (1996) developed a partitioning method that is based on an
iterative, stochastic reassignment of orientation vectors to clusters.
Probability assignments are calculated using selected probability
distributions on the sphere, which are centered on the average orientation
vector that characterizes the cluster. The average orientation vector is then
re-estimated using principal component analysis (PCA) of the orientation
matrices. Probability distributions on the sphere were developed by several
authors and are summarized in Fisher et al. (1987).
Hammah & Curran (1998) described a related approach based on fuzzy sets and on
a similarity measure $d^{2}(\vec{x},\vec{w})=1-(\vec{x}^{T}\vec{w})^{2}$,
where $\vec{x}$ is the orientation vector of a data point and $\vec{w}$ is the
average orientation vector of the cluster. This measure is normally used for
the analysis of orientation data (Anderberg 1973, Fisher et al. 1987).
## Directional Data
Dip direction $\alpha$ and the dip angle $\theta$ of linear or planar
structures are measured in degrees (∘), where $0^{\circ}\leq\alpha\leq
360^{\circ}$ and $0^{\circ}\leq\theta\leq 90^{\circ}$. By convention, linear
structures and normal vectors of planar structures, pole vectors
$\vec{\Theta}=(\alpha,\theta)^{T}$, point towards the lower hemisphere of the
unit sphere (Figure 1). The orientation
$\vec{\Theta}^{A}=(\alpha^{A},\theta^{A})^{T}$ of a pole vector $A$ can be
described by Cartesian coordinates $\vec{x}^{A}=(x_{1},x_{2},x_{3})^{T}$
(Figure 1), where
$\begin{split}&x_{1}\;=\;\cos(\alpha)\,\cos(\theta)\qquad\text{North direction}\\ &x_{2}\;=\;\sin(\alpha)\,\cos(\theta)\qquad\text{East direction}\\ &x_{3}\;=\;\sin(\theta)\qquad\qquad\;\text{downward}.\end{split}$ (12)
The projection $A^{\prime}$ of the endpoint $A$ of all given pole vectors onto
the $x_{1}$-$x_{2}$ plane is called a stereographic plot (Figure 1) and is
commonly used for visualisation purposes.
Figure 1: A) Construction of a stereographic plot.
B) Stereographic plot with kernel density distribution.
## The Clustering Method
We are given a set of $N$ pole vectors $\vec{x}_{k}$, $k=1,\ldots,N$ (eq. 12).
The vectors correspond to $N$ noisy measurements taken from $M$ orientation
discontinuities whose spatial orientations are described by their (yet
unknown) average pole vectors $\vec{w}_{l}$, $l=1,\ldots,M$. For every
partition $l$ of the orientation data, there exists one average pole vector
$\vec{w}_{l}$. The dissimilarity between a data point $\vec{x}_{k}$ and an
average pole vector $\vec{w}_{l}$ is denoted by $d(\vec{x}_{k},\vec{w}_{l})$.
We now describe the assignment of pole vectors $\vec{x}_{k}$ to a partition by
the binary assignment variables
$m_{lk}\;=\;\begin{cases}1,&\text{if data point $k$ belongs to cluster $l$}\\ 0,&\text{otherwise.}\end{cases}$ (13)
One data point $\vec{x}_{k}$ belongs to only one orientation discontinuity
$\vec{w}_{l}$. Here, the arc-length between the pole vectors on the unit
sphere is proposed as the distance measure, i.e.
$d(\vec{x},\vec{w})\;=\;\arccos\,(\,|\,\vec{x}^{T}\vec{w}\,|\,),$ (14)
where $|.|$ denotes the absolute value.
The average dissimilarity between the data points and the pole vectors of the
directional data they belong to is given by
$E\;=\;\frac{1}{N}\sum_{k=1}^{N}\sum_{l=1}^{M}m_{lk}\,d(\vec{x}_{k},\vec{w}_{l}),$
(15)
from which we calculate the optimal partition by minimizing the cost function
$E$, i.e.
$E\;\stackrel{!}{=}\;\min_{\{m_{lk}\},\{\vec{w}_{l}\}}.$ (16)
Minimization is performed iteratively in two steps. In the first step, the
cost function $E$ is minimized with respect to the assignment variables
$\\{m_{lk}\\}$ using
$m_{lk}\;=\;\begin{cases}1,&\text{if }l\,=\,\arg\min_{q}\,d(\vec{x}_{k},\vec{w}_{q})\\ 0,&\text{else.}\end{cases}$ (17)
In the second step, cost $E$ is minimized with respect to the angles
$\vec{\Theta_{l}}=(\alpha_{l},\theta_{l})^{T}$ which describe the average pole
vectors $\vec{w}_{l}$ (see eq. (12)). This is done by evaluating the
expression
$\frac{\partial E}{\partial\vec{\Theta}_{l}}\;=\;\vec{0},$ (18)
where $\vec{0}$ is the zero vector and the derivative is taken with respect to
$\vec{\Theta}_{l}=(\alpha_{l},\theta_{l})^{T}$. This iterative procedure is
called batch learning and converges to a minimum of the cost, because $E$ can
never increase and is bounded from below. In most cases, however, a stochastic
learning procedure called on-line learning is used which is more robust:
BEGIN Loop
Select a data point $\vec{x}_{k}$.
Assign data point $\vec{x}_{k}$ to cluster $l$ by:
$l\;=\;\arg\min_{q}d(\vec{x}_{k},\vec{w}_{q})$ (19)
Change the average pole vector of this cluster by:
$\Delta\vec{\Theta}_{l}\;=\;-\gamma\frac{\partial
d(\vec{x}_{k},\vec{w}_{l}(\vec{\Theta}_{l}))}{\partial\vec{\Theta}_{l}}$ (20)
END Loop
The learning rate $\gamma$ should decrease with iteration number $t$, such
that the conditions (Robbins & Monro 1951, Fukunaga 1990)
$\sum_{t=1}^{\infty}\gamma(t)\;=\,\infty,\quad\textrm{and}\quad\sum_{t=1}^{\infty}\gamma^{2}(t)\;<\;\infty$
(21)
are fulfilled.
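The on-line learning loop above can be sketched in Python. This is a minimal illustration, not the published implementation: for simplicity, the update step nudges the winning pole vector directly toward the sign-aligned data point and re-normalizes it, rather than taking the gradient of the arc-length distance in the angles $(\alpha_{l},\theta_{l})$ as in eq. (20); the function names are ours.

```python
import numpy as np

def pole_vector(alpha_deg, theta_deg):
    """Cartesian pole vector from dip direction alpha and dip angle theta (eq. 12)."""
    a, t = np.radians(alpha_deg), np.radians(theta_deg)
    return np.array([np.cos(a) * np.cos(t),   # North
                     np.sin(a) * np.cos(t),   # East
                     np.sin(t)])              # downward

def arc_dist(x, w):
    """Arc-length distance on the unit sphere for axial data (eq. 14)."""
    return np.arccos(min(1.0, abs(np.dot(x, w))))

def online_vq(X, M, n_iter=2000, gamma0=0.5, seed=0):
    """On-line vector quantization of axial directional data.

    Simplified update: instead of the gradient in (alpha, theta) of eq. (20),
    the winning pole vector is moved toward the sign-aligned data point and
    re-normalized onto the unit sphere.
    """
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), M, replace=False)].copy()   # initialize from data
    for t in range(n_iter):
        gamma = gamma0 / (1 + 0.01 * t)      # schedule satisfying conditions (21)
        x = X[rng.integers(len(X))]
        l = np.argmin([arc_dist(x, w) for w in W])       # assignment step, eq. (19)
        s = 1.0 if np.dot(x, W[l]) >= 0 else -1.0        # axial data: x and -x equivalent
        W[l] = W[l] + gamma * (s * x - W[l])             # move toward the data point
        W[l] /= np.linalg.norm(W[l])
    return W
```

On synthetic data drawn around two distinct poles, the learned centers converge to the neighborhoods of the true average pole vectors.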
## Results
The clustering algorithm using the arc-length as distance measure is derived
and applied in Klose (2004) and is available online as a Java app
(http://www.cdklose.com). First, the new clustering algorithm is applied to an
artificial data set where orientation and distribution of pole vectors are
statistically defined in advance. Second, the algorithm is applied to a real-
world example given by Shanley & Mahtab (1976) (see Figure 1). Results are
compared to existing counting and clustering methods, as described above.
Figure 2: Snapshots of the Java app of the clustering algorithm (input, left;
output, right), available at http://www.cdklose.com
## References
* Anderberg (1973) Anderberg M. R., Cluster analysis for applications. 1973; Academic Press.
* Dershowitz et al. (1996) Dershowitz W. Busse R. Geier J. and Uchida M., A stochastic approach for fracture set definition, In: Aubertin M. Hassani F. and Mitri H., eds., Proc., 2nd NARMS, Rock Mechanics Tools and Techniques, Montreal. 1996; 1809-1813.
* Fisher et al. (1987) Fisher, N. I. Lewis, T. and Embleton B. J., Statistical analysis of spherical data. 1987; Cambridge University Press.
* Fukunaga (1990) Fukunaga K., Introduction to statistical pattern recognition. 1990; Academic Press.
* Gray (1984) Gray R.M., Vector Quantization, IEEE ASSP. 1984; 1 (2): 4-29.
* Hammah & Curran (1998) Hammah R. E. and Curran J. H., Fuzzy cluster algorithm for the automatic delineation of joint sets, Int. J. Rock Mech. Min. Sci. 1998; 35 (7): 889-905.
* Klose (2004) Klose C. D., A New Clustering Approach for Partitioning Directional Data, IJRMMS. 2004; (42): 315-321.
* Pecher (1989) Pecher A., SchmidtMac - A program to display and analyze directional data, Computers & Geosciences. 1989; 15 (8): 1315-1326.
* Robbins & Monro (1951) Robbins H. and Monro S., A stochastic approximation method, Ann. Math. Stat. 1951; 22: 400-407.
* Schmidt (1925) Schmidt W., Gefügestatistik, Tschermaks Mineral. Petrol. Mitt. 1925; 38: 392-423.
* Shanley & Mahtab (1976) Shanley R. J. and Mahtab M. A., Delineation and analysis of clusters in orientation data, Mathematical Geology. 1976; 8 (1): 9-23.
* Wallbrecher (1978) Wallbrecher E., Ein Cluster-Verfahren zur richungsstatistischen Analyse tektonischer Daten, Geologische Rundschau. 1978; 67 (3): 840-857.
Planetary Detection: The Kepler Mission
Jon Jenkins
NASA/Ames Research Center
Moffett Field, California, USA
The Kepler telescope was launched into orbit in March 2009 to determine the
frequency of Earth-sized planets transiting their Sun-like host stars in the
habitable zone – that range of orbital distances for which liquid water would
pool on the surface of a terrestrial planet such as Earth or Mars. This
daunting task demands an instrument capable of measuring the light output from
each of over 150,000 stars over a 115 square degree field of view
simultaneously at an unprecedented photometric precision of 20 parts per
million (ppm) on 6.5-hour intervals. Kepler is opening up a new vista in
astronomy and astrophysics and is operating in a new regime where the
instrumental signatures compete with the miniscule signatures of terrestrial
planets transiting their host stars. The dynamic range of the intrinsic
stellar variability observed in the light curves is breathtaking: RR Lyrae
stars explosively oscillate with periods of approximately 0.5 days, doubling
their brightness over a few hours. Some flare stars double their brightness on
much shorter time scales at unpredictable intervals. At the same time, some
stars exhibit quasi-coherent oscillations with amplitudes of 50 ppm that can
be seen by eye in the raw flux time series. The richness of Kepler’s data lies
in the huge dynamic range of the intensity variations ($>10^{4}$) and the large
dynamic range of time scales probed by the data, from a few minutes to weeks,
months, and ultimately, to years.
Kepler is an audacious mission that places rigorous demands on the science
pipeline used to process the ever-accumulating, large amount of data and to
identify and characterize the minute planetary signatures hiding in the data
haystack. We give an overview of the Science pipeline that reduces the pixel
data to obtain flux time series and detect and characterize planetary transit
signatures. In particular, we detail the adaptive, wavelet-based transit
detector that performs the automated search through each light curve for
transit signatures of Earth-sized planets. We describe a Bayesian Maximum A
Posteriori (MAP) estimation approach, under development, that improves our
ability to identify and remove instrumental signatures from the light curves
while minimizing distortion of astrophysical signals in the data and preventing
the introduction of additional noise that may mask small transit features, as
indicated in Figure 1. This approach leverages the availability of
thousands of stellar targets on each CCD detector in order to construct an
implicit forward model for the systematic error terms identified in the data
as a whole. The Kepler Mission will not be the last spaceborne astrophysics
mission to scan the heavens for planetary abodes. Several transit survey
missions have been proposed to NASA and to ESA and some are under development.
Clearly, these future missions can benefit from the lessons learned by Kepler
and will face many of the same challenges that in some cases will be more
difficult to solve given the significantly larger volume of data to be
collected on a far greater number of stars than Kepler has had to deal with.
Given the intense interest in exoplanets by the public and by the astronomical
community, the future for exoplanet science appears to just be dawning with
the initial success of the Kepler Mission.
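The adaptive, wavelet-based detector itself is beyond the scope of a short sketch, but the core idea of a transit search — slide a box-shaped template across the light curve and ask where the local dip is most significant relative to the noise — can be illustrated with a simple matched filter. This is a toy stand-in, not the Kepler pipeline's detector, and the function name is ours.

```python
import numpy as np

def transit_snr(flux, duration):
    """Single-event statistic for a box-shaped transit template.

    Slides a box of `duration` samples across a flux time series and returns,
    at each offset, the significance of the local dip relative to the
    point-to-point scatter.
    """
    sigma = np.std(flux)                      # crude noise estimate
    box = np.ones(duration) / duration        # box template (moving average)
    local_mean = np.convolve(flux - np.mean(flux), box, mode='valid')
    # A transit depresses the local mean; averaging `duration` samples
    # reduces the noise by sqrt(duration).
    return -local_mean / (sigma / np.sqrt(duration))
```

Injecting a 1% dip into white noise at the 100 ppm level yields a sharp peak in the statistic at the transit location.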
Figure 1: The light curves (a) for two stars on channel 2.1, along with (b) an
LS fit to instrumental components extracted from the light curves and (c) a
Bayesian Maximum A Posteriori (MAP) fit to the same instrumental components.
Curves (b) and (c) have been offset from 0 for clarity. Panel A shows the
results for an RR Lyrae star while panel B shows them for an eclipsing binary.
Both light curves are dominated by intrinsic stellar variability rather than
by instrumental signatures. The RR Lyrae doubles its brightness every 0.5
days, while the eclipsing binary exhibits spot variations that change slowly
over time. The MAP fits do not corrupt the data with short term variations in
the poorly matched instrumental signatures used in the fit, unlike the least
squares fit.
Understanding the possible influence of the solar activity on the terrestrial
climate: a time series analysis approach
Elizabeth Martínez-Gómez
Center for Astrostatistics
326 Thomas Building
The Pennsylvania State University
University Park, PA, USA
Víctor M. Guerrero
Departmento de Estadística
Instituto Tecnológico Autónomo de México
01080, Álvaro Obregón, México
Francisco Estrada
Centro de Ciencias de la Atmósfera
Universidad Nacional Autónoma de México
Ciudad Universitaria, 04510, Coyoacán, México
## Abstract
Until the beginning of the 1980s, the relation between the Sun and climate
change was still viewed with suspicion by the wider climate community and
often remained a “taboo” subject in the solar astrophysics community. The main
reason for this fact was a lack of knowledge about the causal link between the
solar activity and its irradiance, that is, the amount of energy received at
the average distance between the Earth and the Sun. For many years, some
authors doubted the invariability of the solar radiative output because of
the apparent, but poorly explained, correlations between fluctuations of solar
activity and atmospheric phenomena. Research on the mechanisms of solar
effects on climate and their magnitude is currently benefiting from a
tremendous renewal of interest. A large amount of high resolution data is now available;
however, the matter remains controversial because most of these records are
influenced by other factors in addition to solar activity. In many works, the
association between solar and terrestrial environmental parameters is found
through some type of correlation based on multivariate statistics or applying
the wavelet analysis on the corresponding time series data. The talk is
divided into three parts. In the first, I review the solar-terrestrial
climate problem. I then focus on the time series analysis used in our
study, and finally I summarize our preliminary findings and comment on
further ideas to improve our model.
## Review of the solar terrestrial problem
It is well-known that the Sun has an effect on terrestrial climate, since its
electromagnetic radiation is the main energy source for the outer envelopes of Earth.
This is one of the so called solar-terrestrial physics problems. This problem
has been the subject of speculation and research by scientists for many years.
Understanding the behavior of natural fluctuations in the climate is
especially important because of the possibility of man-induced climate changes
(Dingle & Landsberg 1971, Schneider 1979, Tett et al. 1999). Many studies have
been conducted to show correlations between the solar activity and various
meteorological parameters but historical observations of solar activity were
restricted to sunspot numbers and it was not clear how these could be
physically related to meteorological factors. In this section we briefly
describe some of the most remarkable characteristics of the solar activity and
the terrestrial climate, emphasizing their possible connection.
a) Solar Activity
The term solar activity comprises photospheric and chromospheric phenomena
such as sunspots, prominences and coronal disturbances. It has been measured
via satellites during recent decades (for the Total Solar Irradiance, TSI) and
through other “proxy” variables in earlier times (for example, the daily
observed number of sunspots, and the concentration of cosmogenic isotopes
such as 14C and 10Be in ice cores).
The phenomena mentioned above are related to variations of the solar
magnetic field and of the amount of received energy. These variations are
cyclic, such as the 11-year sunspot cycle and the 22-year magnetic field
reversal, among others.
b) Terrestrial climate
The Intergovernmental Panel on Climate Change (IPCC) glossary defines the
climate as the “average weather,” or more rigorously, as the statistical
description in terms of the mean and variability of relevant quantities (for
example, temperature, precipitation, and wind) over a period of time ranging
from months to thousands or millions of years.
c) Is there a connection between solar activity & terrestrial climate?
There are several hypotheses for how solar variations may affect Earth. Some
variations, such as changes in Earth’s orbit (Milankovitch cycles), are only
of interest in astronomy. Correlations between cosmic ray fluxes and clouds
(Laken et al. 2010), as well as between the number of sunspots and changes in
the wind patterns (Willett 1949), have also been reported. Studies of the
solar-climate relationship have not been conclusive, since the actual changes
are not enough to account for the majority of the warming observed in the
atmosphere over the last half of the 20th century.
Table 1: Description of the time series for the variables associated with the solar activity and the terrestrial climate.

Description | Variable | Timescale | Stationarity | Available period | Period of study
---|---|---|---|---|---
Solar activity | Number of sunspots (SP) | daily, monthly, yearly | No | 1700–2008 | 1700–1985
 | Total Solar Irradiance (TSI) | yearly | No | 1750–1978 (reconstructed by Lean et al. 1995), 1978–2008 (satellites) |
 | 10Be concentration in ice cores | geological | No | 1424–1985 |
Terrestrial climate | Global temperature in both hemispheres (TN, TS) | monthly | Breakpoint (1977) | Jan 1850–May 2009 | Jan 1950–May 2008
 | Multivariate ENSO Index (MEI) | monthly | Breakpoint (1977) | Jan 1950–Jun 2009 |
 | North Atlantic Oscillation (NAO) | monthly | Yes | Jul 1821–May 2008 |
 | Pacific Decadal Oscillation (PDO) | monthly | Breakpoints (1977, 1990) | Jan 1900–Jan 2009 |
## Multivariate Time Series Analysis: VAR methodology
A common assumption in many time series techniques (e.g. VAR methodology) is
that the data are stationary. A stationary process has the property that the
mean, variance and autocorrelation structure do not change over time (in other
words, a flat looking series without trend, constant variance over time, a
constant autocorrelation structure over time and no periodic fluctuations). If
the time series, $Y_{t}$, is not stationary, we can often transform it with
one of the following techniques: 1) apply a Box-Cox transformation (the
logarithm is the simplest), and/or 2) difference the data $Y_{t}$ to create a
new series $X_{t}$ ($X_{t}=\nabla Y_{t}=Y_{t}-Y_{t-1}$;
$X_{t}=\nabla^{2}Y_{t}=Y_{t}-2Y_{t-1}+Y_{t-2}$).
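As a small illustration of the differencing transform, here is a sketch in Python (the helper name is ours):

```python
import numpy as np

def difference(y, order=1):
    """Apply the difference operator `order` times: X_t = ∇Y_t = Y_t - Y_{t-1}."""
    x = np.asarray(y, dtype=float)
    for _ in range(order):
        x = x[1:] - x[:-1]   # each pass shortens the series by one sample
    return x

# A deterministic quadratic trend is non-stationary, but two differences
# reduce it to a constant series.
t = np.arange(10.0)
print(difference(0.5 * t**2, order=2))   # → [1. 1. 1. 1. 1. 1. 1. 1.]
```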
### VAR methodology: description
The Vector AutoRegression (VAR) model is one of the most successful, flexible,
and easy to use models for the analysis of multivariate time series. It is a
natural extension of the univariate autoregressive model to dynamic
multivariate time series (Sims 1972; 1980; 1982). It describes the evolution
of a set of $k$ variables (called endogenous variables) over the same sample
period (t = 1, …, T) as a linear function of only their past evolution:
$y_{t}=c+\alpha_{1}y_{t-1}+\alpha_{2}y_{t-2}+...+\alpha_{p}y_{t-p}+\epsilon_{t}$
(22)
where $c$ is a $k\times 1$ vector of constants (intercepts), $\alpha_{i}$ is a
$k\times k$ matrix (for every $i=1,\ldots,p$), $p$ is the number of lags (that
is, the number of periods back), and $\epsilon_{t}$ is a $k\times 1$ vector of
error terms. There are some additional assumptions about the error terms: 1)
their expected value is zero, that is, E[$\epsilon_{it}$]=0 for $t=1,\ldots,T$,
and 2) the errors are not serially correlated,
E[$\epsilon_{it}\,\epsilon_{i\tau}$]=0 for $t\neq\tau$.
The determination of the number of lags $p$ is a trade-off between model
dimensionality and parsimony. To find the optimal lag length we can
apply a log-likelihood ratio (LR) test or an information criterion
(Lütkepohl 1993).
Once the parameters of the VAR(p) model shown above have been estimated
through Ordinary Least Squares (OLS), we interpret the dynamic relationships
between the variables using Granger causality.
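As an illustration of how a VAR(p) model can be estimated by OLS, here is a minimal numpy sketch (the function name is ours; a production analysis would use a dedicated package, e.g. statsmodels, which also reports the LR tests and information criteria used for lag selection):

```python
import numpy as np

def fit_var(Y, p):
    """OLS estimation of a VAR(p): y_t = c + A_1 y_{t-1} + ... + A_p y_{t-p} + e_t.

    Y is a (T, k) array of observations. Returns the (k,) intercept c and the
    (p, k, k) stack of coefficient matrices.
    """
    T, k = Y.shape
    # Regressor matrix: a column of ones followed by the p lagged blocks.
    Z = np.hstack([np.ones((T - p, 1))] +
                  [Y[p - i - 1:T - i - 1] for i in range(p)])
    B, *_ = np.linalg.lstsq(Z, Y[p:], rcond=None)    # shape (1 + p*k, k)
    c = B[0]
    A = np.array([B[1 + l * k:1 + (l + 1) * k].T for l in range(p)])
    return c, A
```

Fitting a long series simulated from a known VAR(1) recovers the intercept and coefficient matrix to within sampling error.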
### Application of the VAR methodology to model the solar activity and
terrestrial climate connection
Our purpose is to investigate the relationship between the solar activity and
the major climate phenomena by means of time series analysis. The data are
taken from the National Geophysical Data Center (NGDC) and the National
Climatic Data Center (NCDC). In Table 1 we summarize the selected variables
for each physical system.
a) VAR model for the solar-terrestrial climate connection
We estimate a VAR(p) model using the variables shown in Table 1, where the
non-stationary time series have been differenced once. The exogenous variables
are: a) number of sunspots, b) TSI, c) d76 (dummy variable for 1976), d) d77
(dummy variable for 1977) and e) d90 (dummy variable for 1990).
The optimal lag length is 4 and the VAR(4) consists of 5 equations (TN, TS,
MEI, NAO, and PDO). The statistical validation is shown in Table 2.
b) VAR model for the solar activity
We estimate a VAR(p) model using the variables shown in Table 1, where the
non-stationary time series have been differenced once.
The optimal lag length is 8 and the VAR(8) consists of 3 equations (SP, TSI,
and 10BE). The statistical validation is shown in Table 2.
Table 2: Statistical validation of the estimated VAR(p) models.

VAR(p) characteristic | Solar-terrestrial climate connection | Solar activity
---|---|---
Lag length (p) | 4 | 8
Significance of the coefficients | All (except equation for NAO) | All
Stability | Yes | Yes
Homoskedasticity of the residuals | No | No
Normality of the residuals | Yes (except equation for TN) | No
Granger causality | TSI does not affect MEI, NAO and PDO | 10BE does not affect SP and TSI
## Summary and ideas for future work
The possible relation between the solar activity and the terrestrial climate
has been addressed in many works. Most of them search for periodicities or
correlations among the set of variables that characterize the solar activity
and the main climate parameters. Wavelet analysis, for example, may not be
the most adequate tool for analyzing multivariate time series. In this work we have
proposed and estimated a VAR model to explain such a connection. For this
model we have analyzed the time series for the most remarkable characteristics
of both the solar activity and climate. Our main results and some ideas for
future work are listed below.
* •
The solar activity is modeled by a VAR(8) in which we find that the 10Be
concentration does not play a fundamental role.
* •
The solar activity and terrestrial climate connection is modeled by a VAR(4)
where the solar variables are taken as exogenous. It seems that the Sun
(described only by the number of sunspots and the TSI) has a weak connection
to Earth, at least for the major climate phenomena.
* •
It would be useful to include a term related to cloudiness to verify the
previous findings.
* •
Analyzing certain cycles in the solar activity could help us to determine the
epochs in which the connection with the terrestrial climate was stronger.
* •
We need to search for other proxy variables that describe the solar activity
and introduce variables related to regional climate (for example:
precipitation, pressure, local temperature).
## Acknowledgements
E. Martínez-Gómez thanks the Faculty For The Future Program and a CONACyT-
Mexico postdoctoral fellowship for their financial support of this research.
V. M. Guerrero acknowledges support from Asociación Mexicana de Cultura, A. C.
## References
* Dingle & Landsberg (1971) Dingle, A. N. & H. E. Landsberg 1971, Science, 173(3995), 461-462.
* Laken et al. (2010) Laken, B. A., D. R. Kniveton and M. R. Frogley 2010, Atmos. Chem. Phys., 10, 10941–10948.
* Lean et al. (1995) Lean, J., J. Beer and R. Bradley (1995), Geophys. Res. Lett., 22(23), 3195-3198.
* Lütkepohl (1993) Lütkepohl, H. 1993, Introduction to multiple time series analysis, Springer Verlag, Germany.
* Schneider (1979) Schneider, S. H. (1979), The effect of man’s activities on climatic changes in Evolution of Planetary and Climatology of the Earth. Proceedings of the International Conference, Nice, France, 16-20 October 1978, pp. 527.
* Sims (1972) Sims, C. A. 1972, American Economic Review 62, 540-552.
* Sims (1980) Sims, C. A. 1980, Econometrica 48, 1-48.
* Sims (1982) Sims, C. A. 1982, Brooking Papers on Economic Activity 1, 107-152.
* Tett et al. (1999) Tett, F. B. S., P. A. Stott, M. R. Allen, W. J. Ingram & J. F. B. Mitchell 1999, Nature, 399, 569-572.
* Willett (1949) Willett, H. C. 1949, J. Atm. Sciences, 6(1), 34-50.
Optimal Scheduling of Exoplanet Observations Using Bayesian Adaptive
Exploration
Thomas J. Loredo
Department of Astronomy
Cornell University
Cornell, New York, USA
This presentation describes ongoing work by a collaboration of astronomers and
statisticians developing a suite of Bayesian tools for analysis and adaptive
scheduling of exoplanet host star reflex motion observations. In this
presentation I focus on the most distinctive aspect of our work: adaptive
scheduling of observations using the principles of Bayesian experimental
design in a sequential data analysis setting. I introduce the core ideas and
highlight some of the computational challenges that arise when implementing
Bayesian design with nonlinear models. Specializing to parameter estimation
cases (e.g., measuring the orbit of a planet known to be present), there is an
important simplification that enables relatively straightforward calculation
of greedy designs via maximum entropy (MaxEnt) sampling. We implement MaxEnt
sampling using population-based MCMC to provide samples used in a nested Monte
Carlo integration algorithm. I demonstrate the approach with a toy problem,
and with a re-analysis of existing exoplanet data supplemented by simulated
optimal data points.
Bayesian adaptive exploration (BAE) proceeds by iterating a three-stage cycle:
Observation–Inference–Design. Figure 1 depicts the flow of information through
one such cycle. In the observation stage, new data are obtained based on an
observing strategy produced by the previous cycle of exploration. The
inference stage synthesizes the information provided by previous and new
observations to produce interim results such as signal detections, parameter
estimates, or object classifications. In the design stage the interim
inferences are used to predict future data for a variety of possible observing
strategies; the strategy that offers the greatest expected improvement in
inferences, quantified with information-theoretic measures, is passed on to
the next Observation–Inference–Design cycle.
Figure 1: Depiction of one cycle of the three-stage Bayesian adaptive
exploration process.
Figures 2 and 3 show highlights of application of BAE to radial velocity (RV)
observations of the single-planet system HD 222582. Vogt et al. (2000)
reported 24 observations obtained over a 683 d time span with instrumentation
at the Keck observatory; Butler et al. (2006) reported 13 subsequent
observations. We consider the early observations as a starting point, and
compare inferences based on simulated subsequent data at optimal times
identified via BAE to inferences using the actual, non-optimal subsequent data
(we generated simulated data using the best-fit orbit for all 37 actual
observations). Figure 2 shows how the optimal observing time for the first new
datum is calculated. Bayesian analysis based on the 24 early data points (red
diamonds) produces a posterior distribution for the orbital parameters. We
explore the posterior via population-based adaptive Markov chain Monte Carlo
sampling, producing an ensemble of possible orbits.
Figure 2: Results based on data from HD 222582: Observed (red diamonds) and
predicted (ensemble of thin blue curves; also magenta dots at selected times)
velocity vs. time (against left axis); entropy of predictive distribution vs.
future observing time (green curve, against right axis).
The blue curves show the velocity curves for 20 posterior samples, roughly
depicting the predictive distribution for future data vs. time; the magenta
point clouds provide a more complete depiction at selected times, showing
$\sim 100$ samples from the predictive distribution at six future times. For
this type of data, the expected information gain from future data is
proportional to the entropy (uncertainty) in the predictive distribution, so
one expects to learn the most by observing where the predictions are most
uncertain. The green curve (against the right axis) quantifies this, showing
the entropy in the predictive distribution vs. time (in bits relative to
repeating the last actual observation), calculated using the predictive
distribution samples; its peak identifies the optimal observing time. We
generated a single new observation at this time, and repeated the procedure
twice more to produce three new optimal observations. The left panel of Figure
3 shows inferences (samples and histograms for marginal distributions) based
on the resulting 27 observations; the right panel shows inferences using the
37 actual observations. Inferences with the fewer but optimized new
observations are much more precise.
Figure 3: Orbital parameter estimates based on 24 early observations and three
simulated new observations at optimal times (left), and based on 24 early and
13 new, non-optimal actual observations (right). Parameters are period,
$\tau$, eccentricity, $e$, and mean anomaly at a fiducial time, $M_{0}$.
## References
* Vogt et al. (2000) Vogt, S.S., Marcy, G.W., Butler, R.P. & Apps, K. 2000, _Six New Planets from the Keck Precision Velocity Survey_ , Astrophysical Journal, 536, 902
* Butler et al. (2006) Butler, R. P., Wright, J. T., Marcy, G. W., Fischer, D. A., Vogt, S. S., Tinney, C. G., Jones, H. R. A., Carter, B. D., Johnson, J. A., McCarthy, C. & Penny, A. J. 2006, _Catalog of Nearby Exoplanets_ , Astrophysical Journal, 646, 505
Beyond Photometric Redshifts using Bayesian Inference
Tamás Budavári
Johns Hopkins University
Baltimore, Maryland, USA
The galaxies in our expanding Universe are seen redder than their actual emission. This redshift in their spectra is typically measured from narrow spectral lines by matching them to known atomic lines measured in the laboratory.
Spectroscopy provides accurate determinations of the redshift as well as great insight into the physical processes, but it is very expensive and time-consuming. Alternative methods have been explored for decades. The field of
photometric redshifts started when Baum (1962) first compared the magnitudes
of red galaxies in distant clusters to local measurements. The first studies
were colorful proofs of concept, demonstrating that galaxy redshifts can be estimated with adequate precision from photometry alone, without spectroscopic follow-up. The new field became increasingly important over
time, and with the upcoming multicolor surveys just around the corner, the
topic is hotter than ever. Traditional methods can be broken down into two
distinct classes: empirical and template fitting. Empirical methods rely on
training sets of objects with known photometry and spectroscopic redshifts.
The usual assumption is that galaxies with the same colors are at identical
redshifts. The redshift of a new object is derived based on its vicinity in
magnitude space to the calibrators of the training set. Polynomial fitting,
locally linear regression and a plethora of other machine learning algorithms
have been tested and, usually, with good results. They are easy to implement,
fast to run, but are only applicable to new objects with the same photometric
measurements in the same regime. Template fitting relies on high-resolution
spectral models, which are parameterized by their type, brightness and
redshift. The best fitting parameters are typically sought in a maximum
likelihood estimation. It is simple to implement and works for new detections
in any photometric system but the results are only as good as the template
spectra.
Figure 1: The presented slide illustrates the minimalist model incorporating the photometric uncertainties of a new galaxy. The relation is estimated using KDE (blue dotted line) and averaged over the uncertainty to obtain the final result (red solid line).
To understand the implications of the previous methods and to point toward
more advanced approaches, we can use Bayesian inference (Budavári 2009).
Constraints on photometric redshifts and other physical parameters in the more
general inversion problem are derived from first principles. We combine two
key ingredients:
(1) Relation to physics — The redshift (z) is not simply a function of the
observables. Considering that they cannot possibly capture all the
information, one expects a spread. Regression is a good model only when this spread is very narrow. Otherwise one should estimate the conditional density of
$p(z|m)$. While not easy in practice, this is conceptually straightforward to
do on a given set of calibrators.
(2) Mapping of the observables — If we would like to estimate the redshift of a new object whose colors were measured in a separate photometric system, its $m^{\prime}$ magnitudes need to be mapped onto the passbands of the training set ($m$). This
might be as simple as an empirical conversion formula or as sophisticated as
spectral synthesis. In general, a model is needed, which can also propagate
the uncertainties, $p(m|m^{\prime})$. The final density is the convolution of
these two functions.
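A minimal numerical sketch of combining the two ingredients, assuming a single magnitude, a Gaussian error model for $p(m|m^{\prime})$, and a toy calibration set (none of which are specified in the talk):

```python
import numpy as np

def photoz_density(z_grid, m_obs, m_err, train_m, train_z, h=0.1, n_mc=500):
    """Hypothetical minimal model combining the two ingredients:
      (1) p(z|m): weighted kernel density estimate from a calibration set;
      (2) p(m|m'): Gaussian photometric uncertainty around the observed m'.
    The final density is the Monte Carlo average of (1) over draws from (2)."""
    rng = np.random.default_rng(1)
    m_draws = rng.normal(m_obs, m_err, size=n_mc)    # samples of p(m|m')
    dens = np.zeros_like(z_grid, dtype=float)
    for m in m_draws:
        w = np.exp(-0.5 * ((train_m - m) / h) ** 2)  # kernel weights in m
        w /= w.sum()
        # p(z|m) ~ sum_i w_i * Normal(z; z_i, h), accumulated over draws
        dens += (w * np.exp(-0.5 * ((z_grid[:, None] - train_z) / h) ** 2)).sum(axis=1)
    return dens / (n_mc * h * np.sqrt(2.0 * np.pi))

# Toy calibration set with a linear magnitude-redshift relation plus scatter.
rng = np.random.default_rng(0)
train_m = rng.uniform(18.0, 22.0, 400)
train_z = 0.1 * (train_m - 18.0) + rng.normal(0.0, 0.01, 400)
z_grid = np.linspace(-0.2, 0.6, 161)
dens = photoz_density(z_grid, m_obs=20.0, m_err=0.2,
                      train_m=train_m, train_z=train_z)
```

When the photometric error is narrow the result reduces to the plain KDE regression estimate, which is the sense in which the traditional methods are special cases.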
After 50 years of pragmatism, photometric redshift estimation is now placed on
a firm statistical foundation. We can put the traditional methods in context;
they are special cases of a unified framework. Their conceptual weaknesses become visible from this perspective. The new approach points us toward more
advanced methods that combine the advantages of the earlier techniques. In
addition we can formally learn about selecting calibration sets for specific
studies. These advancements are going to be vital for analyzing the
observations of the next-generation survey telescopes.
Long-Range Climate Forecasts Using Data Clustering and Information Theory
Dimitris Giannakis
New York University
New York, New York, USA
Even though forecasting the weather beyond about two weeks is not possible,
certain climate processes (involving, e.g., the large-scale circulation in the
Earth’s oceans) are predictable up to a decade in advance. These so-called
climate regimes can influence regions as large as the West Coast of North
America over several years, and therefore developing models to predict them is
a problem of wide practical impact. An additional central issue is to quantify
objectively the errors and biases that are invariably associated with these
models.
In classical studies on decadal prediction (Boer 2000; 2004, Collins 2002),
ensemble experiments are performed using one or more climate models
initialized by perturbed initial conditions relative to a reference state, and
predictive skill is measured by comparing the root mean square difference of
ensemble trajectories to its equilibrium value. However, skill metrics of this
type have the drawback of not being invariant under invertible transformations
of the prediction variables, and not taking into account the important issue
of model error. Further challenges concern the choice of initial conditions of
the ensemble members (Hurrell et al. 2009, Meehl et al. 2009, Solomon et al.
2009).
In this talk, we present recent work (Giannakis and Majda 2011a; b) on methods
based on data clustering and information theory to build and assess
probabilistic models for long-range climate forecasts that address several of
the above issues. The fundamental perspective adopted here is that predictions
in climate models correspond to transfer of information; specifically transfer
of information between the initial conditions (which in general are not known
completely) and the state of the climate system at some future time. This
opens up the possibility of using the mathematical framework of information
theory to characterize both dynamical prediction skill and model error with
metrics that are invariant under invertible nonlinear transformations of
observables (Kleeman 2002, Roulston and Smith 2002, Majda et al. 2002; 2005,
DelSole and Tippet 2007, Majda and Gershgorin 2010).
The key points of our discussion are that (i) the long-range predictive skill
of climate models can be revealed through a suitable coarse-grained partition
of the set of initial data available to a model (which are generally
incomplete); (ii) long-range predictive skill with imperfect models depends
simultaneously on the fidelity of these models at asymptotic times, their
fidelity during dynamical relaxation to equilibrium, and the discrepancy from
equilibrium of forecast probabilities at finite lead times. Here, the coarse-
grained partition of the initial data is constructed by data-clustering
equilibrium realizations of ergodic dynamical systems without having to carry
out ensemble initializations. Moreover, prediction probabilities conditioned
on the clusters can be evaluated empirically without having to invoke
additional assumptions (e.g., Gaussianity), since detailed initial conditions
are not needed to sample these distributions.
In this framework, predictive skill corresponds to the additional information
content beyond equilibrium of the cluster-conditional distributions. The
natural information-theoretic functional to measure this additional
information is relative entropy, which induces a notion of distance between
the cluster-conditional and equilibrium distributions. A related analysis
leads to measures of model error that correspond to the lack of information
(or ignorance) of an imperfect model relative to the true model. The
techniques developed here have potential applications across several
disciplines involving dynamical system predictions.
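As a toy illustration of this skill metric, assuming discrete forecast distributions over regime clusters, relative entropy can be computed directly; the numbers below are invented for illustration, not taken from the ocean model.

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """Discrete relative entropy D(p || q) in bits: the additional
    information in the forecast distribution p beyond the equilibrium
    (climatological) distribution q."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    mask = p > eps
    return float(np.sum(p[mask] * np.log2(p[mask] / np.maximum(q[mask], eps))))

# Hypothetical forecasts over four regimes: a sharp early forecast carries
# much extra information; as it relaxes toward equilibrium the skill decays.
equilibrium = np.array([0.25, 0.25, 0.25, 0.25])
sharp = np.array([0.85, 0.05, 0.05, 0.05])       # short lead time
relaxed = np.array([0.30, 0.24, 0.23, 0.23])     # long lead time
skill_early = relative_entropy(sharp, equilibrium)
skill_late = relative_entropy(relaxed, equilibrium)
```

The same functional, applied with the imperfect model's distribution in place of the true one, yields the lack-of-information measure of model error mentioned above.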
As a concrete application of our approach, we study long-range predictability
in the equivalent barotropic, double-gyre model of McCalpin and Haidvogel
(1996) (frequently called the “1.5-layer model”). This simple model of ocean
circulation has non-trivial low-frequency dynamics, characterized by
infrequent transitions between meandering, moderate-energy, and extensional
configurations of the eastward jet (analogous to the Gulf Stream in the North
Atlantic). The algorithm employed here for phase-space partitioning involves
building a multi-time family of clusters, computed for different temporal
intervals of coarse graining; a recipe similar to kernel density estimation
methods. We demonstrate that knowledge of cluster affiliation in the computed
partitions carries significant information beyond equilibrium about the total
energy and the leading two principal components (PCs) of the streamfunction
(which are natural variables for the low-frequency dynamics of this system)
for five- to seven-year forecast lead times, i.e., for a timescale about a
factor of five longer than the maximum decorrelation time of the PCs.
As an application involving imperfect models, we discuss the error in Markov
models of the switching process between the ocean circulation regimes.
Imposing Markovianity on the transition process is a familiar approximation in
this context (Franzke et al. 2008; 2009), though the validity of this
assumption typically remains moot. Our analysis starkly exposes the false predictive skill that one might attribute to a Markovian description of the regime transitions in the 1.5-layer model by relying on an (internal) assessment based solely on the deviation of the time-dependent prediction probabilities of the Markov model from its biased equilibrium. In particular,
we find that a Markov model associated with a seven-state partition appears to
outperform a three-state model, both in its discriminating power and its
persistence (measured respectively by the deviation from equilibrium and rate
of approach to equilibrium), when actually the skill of the seven-state model
is false because its equilibrium statistics are biased. Here, the main
conclusion is that evaluating simultaneously model errors in both the
climatology and the dynamical relaxation to equilibrium should be an integral
part of assessments of long-range forecasting skill.
_Acknowledgment_ The material presented in this talk is joint work with Andrew
Majda of New York University. This research is partially supported by NSF grant DMS-0456713, ONR DRI grants N25-74200-F6607 and N00014-10-1-0554, and DARPA grants N00014-07-10750 and N00014-08-1-1080.
## References
* Boer (2000) G. J. Boer. A study of atmosphere-ocean predictability on long time scales. _Climate Dyn._ , 16(6):469–477, 2000. doi: 10.1007/s003820050340.
* Boer (2004) G. J. Boer. Long time-scale potential predictability in an ensemble of coupled climate models. _Climate Dyn._ , 23(1):29–44, 2004. doi: 10.1007/s00382-004-0419-8.
* Collins (2002) M. Collins. Climate predictability on interannual to decadal time scales: The initial value problem. _Climate Dyn._ , 19:671–692, 2002. doi: 10.1007/s00382-002-0254-8.
* DelSole and Tippet (2007) T. DelSole and M. K. Tippet. Predictability: Recent insights from information theory. _Rev. Geophys._ , 45:RG4002, 2007. doi: 10.1029/2006rg000202.
* Franzke et al. (2008) C. Franzke, D. Crommelin, A. Fischer, and A. J. Majda. A hidden Markov model perspective on regimes and metastability in atmospheric flows. _J. Climate_ , 21(8):1740–1757, 2008. doi: 10.1175/2007jcli1751.1.
* Franzke et al. (2009) C. Franzke, I. Horenko, A. J. Majda, and R. Klein. Systematic metastable regime identification in an AGCM. _J. Atmos. Sci._ , 66(9):1997–2012, 2009. doi: 10.1175/2009jas2939.1.
* Giannakis and Majda (2011a) D. Giannakis and A. J. Majda. Quantifying the predictive skill in long-range forecasting. Part I: Coarse-grained predictions in a simple ocean model. _J. Climate_ , 2011a. submitted.
* Giannakis and Majda (2011b) D. Giannakis and A. J. Majda. Quantifying the predictive skill in long-range forecasting. Part II: Model error in coarse-grained Markov models with application to ocean-circulation regimes. _J. Climate_ , 2011b. submitted.
* Hurrell et al. (2009) J. W. Hurrell et al. Decadal climate prediction: Opportunities and challenges. In _Proceedings of the OceanObs09 Conference: Sustained Ocean Observations and Information for Society_ , pages 21–25, Venice, Italy, 2009.
* Kleeman (2002) R. Kleeman. Measuring dynamical prediction utility using relative entropy. _J. Atmos. Sci._ , 59(13):2057–2072, 2002.
* Majda and Gershgorin (2010) A. J. Majda and B. Gershgorin. Quantifying uncertainty in climate change science through empirical information theory. _Proc. Natl. Acad. Sci._ , 107(34):14958–14963, 2010. doi: 10.1073/pnas.1007009107.
* Majda et al. (2002) A. J. Majda, R. Kleeman, and D. Cai. A mathematical framework for predictability through relative entropy. _Methods Appl. Anal._ , 9(3):425–444, 2002.
* Majda et al. (2005) A. J. Majda, R. V. Abramov, and M. J. Grote. _Information Theory and Stochastics for Multiscale Nonlinear Systems_ , volume 25 of _CRM Monograph Series_. American Mathematical Society, Providence, 2005.
* McCalpin and Haidvogel (1996) J. D. McCalpin and D. B. Haidvogel. Phenomenology of the low-frequency variability in a reduced-gravity quasigeostrophic double-gyre model. _J. Phys. Oceanogr._ , 26(5):739–752, 1996.
* Meehl et al. (2009) G. A. Meehl et al. Decadal prediction: Can it be skillful? _Bull. Amer. Meteor. Soc._ , 90(10):1467–1485, 2009. doi: 10.1175/2009bams2778.1.
* Roulston and Smith (2002) M. S. Roulston and L. A. Smith. Evaluating probabilistic forecasts using information theory. _Mon. Weather Rev._ , 130(6):1653–1660, 2002.
* Solomon et al. (2009) A. Solomon et al. Distinguishing the roles of natural and anthropogenically forced decadal climate variability: Implications for prediction. _Bull. Amer. Meteor. Soc._ , 2009. Submitted.
Comparison of Information-Theoretic Methods to estimate the information flow
in a dynamical system
Deniz Gencaga
NOAA Crest
New York, New York, USA
## Abstract
In order to quantify the amount of information about a variable, or to
quantify the information shared between two variables, we utilize information-
theoretic quantities like entropy and mutual information (MI), respectively.
If these variables constitute a coupled dynamical system, they share information along a direction, i.e., there is an information flow in time. In this case, the direction of the information flow also needs to be estimated.
However, MI does not provide directionality. Transfer entropy (TE) has been
proposed in the literature to estimate the direction of information flow in
addition to its magnitude. Here, our goal is to estimate the transfer entropy
from observed data accurately. For this purpose, we compare the most frequently used methods in the literature and propose our own technique. Unfortunately,
every method has its own free tuning parameter(s) so that there is not a
consensus on an optimal way of estimating TE from a dataset. In this work, we
compare several methods along with a method that we propose, on a Lorenz
model. Here, our goal is to develop an appropriate and reliable mathematical tool by synthesizing these disjoint methods, which are used in fields ranging from biomedicine to telecommunications, and in the future to apply the resulting technique to tropical data sets to better understand events such as the Madden-Julian Oscillation.
## Introduction and problem statement
Nonlinear coupling between complex dynamical systems is very common in many
fields (Wallace et al. 2006, Gourvitch & Eggermont 2007). In order to study
the relationships between such coupled subsystems, we must utilize higher-order statistics; linear techniques based on correlation analysis are not sufficient. Although mutual information seems a good alternative for modeling higher-order nonlinear relationships, it cannot capture the directionality of the interactions between the variables, as it is a symmetric measure. Thus, Schreiber (2000) proposed an asymmetric measure between two variables $X$ and $Y$, as follows, to study directional interactions:
${TE}_{{Y}\rightarrow{X}}={T}\left({X}_{i+1}\mid{\bf X}_{i}^{(k)},{\bf Y}_{j}^{(l)}\right)=\sum_{i=1}^{N}p\left(x_{i+1},{\bf x}_{i}^{(k)},{\bf y}_{j}^{(l)}\right)\log_{2}\frac{p\left(x_{i+1}\mid{\bf x}_{i}^{(k)},{\bf y}_{j}^{(l)}\right)}{p\left(x_{i+1}\mid{\bf x}_{i}^{(k)}\right)}$ (23)
${TE}_{{X}\rightarrow{Y}}={T}\left({Y}_{i+1}\mid{\bf Y}_{i}^{(k)},{\bf X}_{j}^{(l)}\right)=\sum_{i=1}^{N}p\left(y_{i+1},{\bf y}_{i}^{(k)},{\bf x}_{j}^{(l)}\right)\log_{2}\frac{p\left(y_{i+1}\mid{\bf y}_{i}^{(k)},{\bf x}_{j}^{(l)}\right)}{p\left(y_{i+1}\mid{\bf y}_{i}^{(k)}\right)}$ (24)
where ${\bf x}_{i}^{(k)}=[x_{i},\ldots,x_{i-k+1}]$ and ${\bf y}_{j}^{(l)}=[y_{j},\ldots,y_{j-l+1}]$ are past states, and $X$ and $Y$ are $k^{\rm th}$- and $l^{\rm th}$-order Markov processes, respectively. Above, ${TE}_{{X}\rightarrow{Y}}$ and ${TE}_{{Y}\rightarrow{X}}$ denote transfer entropies in the direction from $X$ to $Y$ and from $Y$ to $X$, respectively. For example, in Equation 23, TE describes the degree to which information about $Y$ allows one to predict future values of $X$; this is a causal influence that the subsystem $Y$ has on the subsystem $X$.
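As a baseline illustration of Equation 23 (not the generalized Bayesian estimator proposed below), a plug-in histogram estimate with $k=l=1$ can be written in a few lines; the bin count and the toy coupled series are our assumptions.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, n_bins=8):
    """Plug-in histogram estimate of TE_{Y->X} for k = l = 1:
    TE = sum over observed triples of
         p(x1,x0,y0) * log2[ p(x1,x0,y0) p(x0) / (p(x0,y0) p(x1,x0)) ],
    which equals Eq. 23 after expanding the two conditionals."""
    xb = np.digitize(x, np.histogram_bin_edges(x, n_bins)[1:-1])
    yb = np.digitize(y, np.histogram_bin_edges(y, n_bins)[1:-1])
    x1, x0, y0 = xb[1:], xb[:-1], yb[:-1]
    n = len(x1)
    c_xyz = Counter(zip(x1, x0, y0))   # counts of (x_{i+1}, x_i, y_i)
    c_xy = Counter(zip(x0, y0))
    c_xx = Counter(zip(x1, x0))
    c_x = Counter(x0.tolist())
    te = 0.0
    for (a, b, c), k in c_xyz.items():
        te += (k / n) * np.log2(k * c_x[b] / (c_xy[(b, c)] * c_xx[(a, b)]))
    return te

# Toy coupled pair: y drives x with a one-step delay, so the estimated
# flow Y -> X should exceed the (ideally zero) flow X -> Y.
rng = np.random.default_rng(3)
y = rng.normal(size=3000)
x = np.zeros(3000)
x[1:] = 0.9 * y[:-1] + 0.1 * rng.normal(size=2999)
te_yx = transfer_entropy(x, y)
te_xy = transfer_entropy(y, x)
```

Note that the plug-in estimate is biased upward for finite samples, which is one motivation for the more careful estimators compared in this talk.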
### Estimation
Several methods exist in the literature to estimate this quantity from data, but each has its own tuning parameters, so there is no consensus on an optimal way of estimating TE from a dataset. One is the kernel density estimation method (Sabesan et al. 2007), where an optimal kernel radius needs to be picked. Another, adaptive partitioning of the observation space (Darbellay & Vajda 1999), estimates mutual information using histograms with unequal bin sizes; however, because the final TE is obtained by subtracting multivariate mutual information quantities, the result can be affected by biases. We therefore propose our own method: a generalized Bayesian piecewise-constant model for probability density estimation, from which TE is calculated using the summation formula of individual Shannon entropies.
Figure 1: Estimated transfer entropies between the pairs of Lorenz equations
using three different methods
## EXPERIMENT AND CONCLUSION
We tested three methods on the nonlinear coupled Lorenz equations given below.
In climatology, they represent a model of an atmospheric convection roll where
x, y, z denote convective velocity, vertical temperature difference and mean
convective heat flow, respectively. Using the three methods, we obtained the
TE estimates illustrated in Figure 1.
$\frac{dx}{dt}=\sigma\left({y-x}\right)$ (25) $\frac{dy}{dt}={-xz+rx-y}$
$\frac{dz}{dt}={xy-bz}$
where $\sigma=10$, $b=\frac{8}{3}$, and $r$ is the Rayleigh number, set to $r=28$ to place the system in a chaotic regime.
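For readers who want to reproduce a comparable test signal, the system above can be integrated with a standard fixed-step scheme; the step size, initial condition, and series length are our own choices, not specified in the talk.

```python
import numpy as np

def lorenz_series(n_steps=10000, dt=0.01, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """Integrate the coupled Lorenz equations with a fixed-step 4th-order
    Runge-Kutta scheme, returning the (x, y, z) series that can serve as
    test data for the TE estimators."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), -x * z + r * x - y, x * y - b * z])
    s = np.array([1.0, 1.0, 1.0])
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = s
    return out

series = lorenz_series(5000)   # 50 time units of the chaotic attractor
```

Pairwise TE estimates between the three columns can then be compared across methods, as in Figure 1.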
In conclusion, computer simulations demonstrate that a parameter regime can be found in which all methods agree, both mathematically and physically, so that the direction of information flow between the variables can be estimated reliably from data. We are currently investigating the magnitude differences between the methods, with the aim of applying TE to identify the information flow between the climate variables relevant to the tropical disturbance known as the Madden-Julian Oscillation (MJO), and ultimately to better understand the highly nonlinear nature of atmospheric dynamics.
## References
* Wallace et al. (2006) J. Wallace and P. Hobbs, _Atmospheric Science: An Introductory Survey_, Second Edition, 2006.
* Gourvitch & Eggermont (2007) B. Gourvitch and J. Eggermont, "Evaluating Information Transfer Between Auditory Cortical Neurons", _J. Neurophysiology_, vol. 97, no. 3, pp. 2533-2543, 2007.
* Schreiber (2000) T. Schreiber, "Measuring information transfer", _Phys. Rev. Lett._, vol. 85, no. 2, pp. 461-464, 2000.
* Sabesan et al. (2007) S. Sabesan, K. Narayanan, A. Prasad, L. Iasemidis, A. Spanias and K. Tsakalis, "Information Flow in Coupled Nonlinear Systems: Application to the Epileptic Human Brain", _Data Mining in Biomedicine_, 2007.
* Darbellay & Vajda (1999) G. Darbellay and I. Vajda, "Estimation of the Information by an adaptive partitioning of the observation space", _IEEE Trans. on Information Theory_, vol. 45, no. 4, 1999.
Reconstructing the Galactic halo’s accretion history: A finite mixture model
approach
Duane Lee
Department of Astronomy
Columbia University
New York, New York, USA
Will Jessop
Department of Statistics
Columbia University
New York, New York, USA
## Abstract
The stellar halo that surrounds our Milky Way Galaxy is thought to have been
built, at least in part, from the agglomeration of stars from many smaller
galaxies. This talk outlines an approach to reconstructing the history of
Galactic accretion events by looking at the distribution of the chemical
abundances of halo stars. The full distribution is assumed to result from the
superposition of stellar populations accreted at different times from
progenitor galaxies of different sizes. Our approach uses the Expectation-
Maximization (EM) algorithm to find the maximum-likelihood estimators that
assess the contribution of each of these progenitors in forming the Galaxy.
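The EM step can be sketched for the simplest version of this setting, assuming the progenitor (component) densities are fixed templates and only their mixing fractions are estimated; the Gaussian templates and toy data below are our assumptions, not the actual abundance distributions.

```python
import numpy as np

def em_mixture_weights(data, component_pdfs, n_iter=200):
    """EM sketch for the weights of a finite mixture whose component
    densities are fixed templates (standing in for the chemical-abundance
    distributions of hypothetical progenitor galaxies). Returns
    maximum-likelihood mixing fractions."""
    # L[i, j] = density of component j evaluated at data point i
    L = np.column_stack([pdf(data) for pdf in component_pdfs])
    w = np.full(L.shape[1], 1.0 / L.shape[1])
    for _ in range(n_iter):
        resp = w * L                           # E-step: responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        w = resp.mean(axis=0)                  # M-step: update fractions
    return w

# Toy usage: two Gaussian "progenitor" abundance templates, 70/30 mixture.
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-1.0, 0.5, 700), rng.normal(1.5, 0.5, 300)])
norm = lambda mu, s: (lambda x: np.exp(-0.5 * ((x - mu) / s) ** 2)
                      / (s * np.sqrt(2.0 * np.pi)))
weights = em_mixture_weights(data, [norm(-1.0, 0.5), norm(1.5, 0.5)])
```

In the full problem the component densities themselves would also carry parameters updated in the M-step; the weight-only case isolates the "contribution of each progenitor" aspect of the inference.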
Scientific Program
| Thursday, February 24, 2011 |
---|---|---
10:00 | Welcome | Michael Way (GISS)
| | Catherine Naud (GISS)
10:20 | How long will it take. A historical approach to boundary crossing. | Victor de la Peña (Columbia)
10:45 | Cosmology through the Large-Scale Structure of the Universe | Eyal Kazin (NYU)
11:30 | On the shoulders of Gauss, Bessel, and Poisson: links, chunks, spheres, and conditional models | William Heavlin (Google)
12:15 | Mining Citizen Science Data: Machine Learning Challenges | Kirk Borne (GMU)
13:00 | Lunch Break |
14:30 | Tracking Climate Models: Advances in Climate Informatics | Claire Monteleoni (Columbia)
15:15 | Spectral Analysis Methods for Complex Source Mixtures | Kevin Knuth (SUNY/Albany)
16:00 | Beyond Objects: Using machines to understand the diffuse universe | Joshua Peek (Columbia)
16:45 | Viewpoints: A high-performance visualization and analysis tool | Michael Way (GISS)
| Friday, February 25, 2011 |
10:00 | Clustering Approach for Partitioning Directional Data in Earth and Space Sciences | Christian Klose (Think Geohazards)
10:45 | Planetary Detection: The Kepler Mission | Jon Jenkins (NASA/Ames)
11:30 | Understanding the possible influence of the solar activity on the terrestrial climate: A time series analysis approach | Elizabeth Martínez-Gómez (Penn State)
12:15 | Bayesian adaptive exploration applied to optimal scheduling of exoplanet Radial Velocity observations | Tom Loredo (Cornell)
13:00 | Lunch Break |
14:30 | Bayesian Inference from Photometric Surveys | Tamás Budavári (JHU)
15:15 | Long-Range Forecasts Using Data Clustering and Information Theory | Dimitris Giannakis (NYU)
16:00 | Comparison of Information-Theoretic Methods to estimate the information flow in a dynamical system | Deniz Gencaga (CCNY)
16:45 | Reconstructing the Galactic halo’s accretion history: A finite mixture model approach | Duane Lee and Will Jessop (Columbia)
Participants
Ryan Abrahams | CUNY Graduate Center |
---|---|---
Hannah Aizenman | CCNY/Computer Science |
Arif Albayrak | NASA GES DISC |
Kehinde-Paul Alabi | CCNY/Computer Science |
Vivienne Baldassare | CUNY/Hunter College/Physics |
Imre Bartos | Columbia/Physics |
Mike Bauer | Columbia-NASA/GISS |
Rohit Bhardwaj | Columbia/Computer Science |
Amitai Bin-Nun | University of Pennsylvania/Astronomy |
James Booth | NASA/GISS |
Kirk Borne | George Mason University | speaker
Andreas Breiler | IdeLaboratoriet |
Douglas Brenner | American Museum of Natural History |
Tamas Budavari | Johns Hopkins University | speaker
Samantha Chan | Lamont-Doherty Earth Observatory, Columbia University |
Milena C. Cuellar | BCC. CUNY |
Victor de la Peña | Columbia/Statistics | speaker
Danielle Dowling | CUNY Graduate Center |
Tarek El-Gaaly | Rutgers/Computer Science |
Daniel Feldman | CUNY/College of Staten Island |
Stephanie Fiorenza | CUNY/College of Staten Island/Physics |
Carl Friedberg | NYC/Comets.com |
Deniz Gencaga | CCNY/EE, NOAA-CREST | speaker
Dimitrios Giannakis | Center for Atmosphere Ocean Science, Courant Institute of Mathematical Sciences | speaker
Irina Gladkova | CCNY/Computer Science |
Elizabeth Gomez | Penn State University/Statistics | speaker
Joshua Gordon | Columbia/Computer Science |
Sam Gordon | Columbia |
Michael Grossberg | CCNY/Computer Science |
Michael Haken | NASA/GSFC, Science Systems & Applications, Inc. |
---|---|---
Zachary Haberman | SUNY at Albany/Physics |
Naveed Hasan | Columbia/Computer Science |
Nicholas Hatzopoulos | CUNY Graduate Center |
William D Heavlin | Google, Inc | speaker
Kay Hiranaka | CUNY Graduate Center |
Hao Huang | Stony Brook University/Computer Science |
Darryl Hutchinson | CUNY Graduate Center |
Ilknur Icke | CUNY Graduate Center/Computer Science |
Geetha Jagannathan | Columbia/Computer Science |
Jon Jenkins | SETI Institute/NASA Ames Research Center | speaker
Will Jessop | Columbia/Statistics | speaker
Ed Kaita | SSAI@GSFC/NASA |
Eyal Kazin | New York University | speaker
Lee Case Klippel | Columbia University |
Christian D. Klose | Think GeoHazards | speaker
Kirk Knobelspiesse | ORAU-NASA/GISS |
Kevin Knuth | SUNY at Albany | speaker
Ioannis Kostopoulos | CUNY Graduate Center |
Duane Lee | Columbia/Astronomy | speaker
Erbo Li | Columbia/Computer Science |
Charles Liu | CUNY at CSI |
Thomas Loredo | Cornell/Astronomy | speaker
Chris Malone | SUNY at Stony Brook/Physics and Astronomy |
Szabolcs Marka | Columbia/Physics |
Zsuzsa Marka | Columbia Astrophysics Laboratory |
Haley Maunu | SUNY at Albany/Physics |
Vinod Mohan | Columbia |
Claire Monteleoni | Columbia/Computer Science | speaker
Catherine Naud | NASA/GISS |
---|---|---
Alejandro Nunez | CUNY Graduate Center |
Indrani Pal | IRI, The Earth Institute |
Vladimir Pavlovic | Rutgers/Computer Science |
Joshua Peek | Columbia/Astronomy | speaker
Fergie Ramos | Student/College of New Rochelle |
Khary Richardson | CUNY Graduate Center |
William Rossow | CCNY/NOAA-CREST |
Syamantak Saha | NYU Stern |
Destry Saul | Columbia/Astronomy |
Joshua Schlieder | SUNY at Stony Brook |
Frank Scalzo | NASA/GISS |
Bodhisattva Sen | Columbia/Statistics |
Monika Sikand | Stevens Institute of Tech/Physics |
Michal Simon | SUNY at Stony Brook/Physics and Astronomy |
Ishan Singh | Columbia |
Damain Sowinski | CUNY Graduate Center |
Ilya Tsinis | NYU/Courant |
Maria Tzortziou | GSFC/NASA - University of Maryland |
Michael Way | NASA/GISS |
Adrian Weller | Columbia/Computer Science |
Laird Whitehill | Cornell/Astronomy |
Alexander Wong | JP Morgan Chase |
Nettie Wong | American Museum of Natural History |
Emmi Yonekura | Columbia University |
Shinjae Yoo | Brookhaven National Lab/Computer Science Center |
Sejong Yoon | Rutgers/Computer Science |
Dantong Yu | Brookhaven National Lab |
Video Links
Introduction | Michael Way
---|---
http://www.youtube.com/watch?v=CHEoya_e2Do | (NASA/GISS)
How long will it take. A historical approach to boundary crossing | Victor de la Peña
http://www.youtube.com/watch?v=3gfHeerVqHs | (Columbia U)
Cosmology through the Large-Scale Structure of the Universe | Eyal Kazin
http://www.youtube.com/watch?v=es4dH0jBJYw | (NYU)
On the shoulders of Gauss, Bessel, and Poisson: links, chunks, spheres, and conditional models | William Heavlin
http://www.youtube.com/watch?v=FzFAxaXTkUQ | (Google Inc.)
Mining Citizen Science Data: Machine Learning Challenges | Kirk Borne
http://www.youtube.com/watch?v=XoS_4axsb5A | (GMU)
Tracking Climate Models: Advances in Climate Informatics | Claire Monteleoni
http://www.youtube.com/watch?v=78IeffwV6bU | (Columbia U)
Spectral Analysis Methods for Complex Source Mixtures | Kevin Knuth
http://www.youtube.com/watch?v=t9mCoff5YZo | (SUNY/Albany)
Beyond Objects: Using machines to understand the diffuse universe | Joshua Peek
http://www.youtube.com/watch?v=hgp8CeR43So | (Columbia U)
Viewpoints: A high-performance visualization and analysis tool | Michael Way
http://www.youtube.com/watch?v=FsW64idYw6c | (NASA/GISS)
Clustering Approach for Partitioning Directional Data in Earth and Space Sciences | Christian Klose
http://www.youtube.com/watch?v=5Ak6_rvayqg | (Think Geohazards)
Planetary Detection: The Kepler Mission | Jon Jenkins
http://www.youtube.com/watch?v=2dnA95smRE0 | (NASA/Ames)
Understanding the possible influence of the solar activity on the terrestrial climate: A time series analysis approach | Elizabeth Martínez-Gómez
http://www.youtube.com/watch?v=CXmC1dh9Wdg | (Penn State)
Bayesian adaptive exploration applied to optimal scheduling of exoplanet Radial Velocity observations | Tom Loredo
http://www.youtube.com/watch?v=B1pVtHmu9E0 | (Cornell U)
Bayesian Inference from Photometric Surveys | Tamas Budavari
http://www.youtube.com/watch?v=xCzUl0HtWGM | (JHU)
Long-Range Forecasts Using Data Clustering and Information Theory | Dimitris Giannakis
http://www.youtube.com/watch?v=pFDqP94btCg | (NYU)
Comparison of Information-Theoretic Methods to estimate the information flow in a dynamical system | Deniz Gencaga
http://www.youtube.com/watch?v=ejX_MWImP6A | (CCNY)
Reconstructing the Galactic halo’s accretion history: | Duane Lee &
A finite mixture model approach | Will Jessop
http://www.youtube.com/watch?v=moehYYsIOFw | (Columbia U)
# Stellar electron-capture rates calculated with the finite-temperature
relativistic random-phase approximation
Y. F. Niu1 N. Paar2 D. Vretenar2 J. Meng3,1,4 mengj@pku.edu.cn 1State Key
Laboratory of Nuclear Physics and Technology, School of Physics, Peking
University, Beijing 100871, China 2Physics Department, Faculty of Science,
University of Zagreb, Croatia 3School of Physics and Nuclear Energy
Engineering, Beihang University, Beijing 100191, China 4Department of
Physics, University of Stellenbosch, Stellenbosch 7602, South Africa
###### Abstract
We introduce a self-consistent microscopic theoretical framework for modelling
the process of electron capture on nuclei in a stellar environment, based on
relativistic energy density functionals. The finite-temperature relativistic
mean-field model is used to calculate the single-nucleon basis and the
occupation factors in a target nucleus, and $J^{\pi}=0^{\pm}$, $1^{\pm}$,
$2^{\pm}$ charge-exchange transitions are described by the self-consistent
finite-temperature relativistic random-phase approximation. Cross sections and
rates are calculated for electron capture on 54,56Fe and 76,78Ge in a stellar
environment, and the results are compared with predictions of similar and
complementary model calculations.
###### pacs:
21.60.Jz, 23.40.Bw, 23.40.Hc, 26.50.+x
## I Introduction
Weak interaction processes play a crucial role in the late evolution stages of
massive stars by determining the core entropy and electron-to-baryon ratio
$Y_{e}$, two important quantities associated with the dynamics of core-
collapse supernovae Bethe et al. (1979). At the end of its life, a massive
star exhausts its nuclear fuel and the core can then only be stabilized
by electron degeneracy pressure, as long as its mass does not exceed the
corresponding Chandrasekhar mass $M_{\rm Ch}$, proportional to $Y_{e}^{2}$.
When this mass limit is exceeded, the core cannot attain a stable
configuration and it collapses. During the pre-collapse phase, electron
capture reduces the number of electrons available for pressure support,
whereas beta-decay acts in the opposite direction. At the same time, the
neutrinos produced by electron capture freely escape from the star for values
of the matter density $\lesssim 10^{11}$ g cm-3, removing energy and entropy
from the core Bethe (1990); Langanke and Martínez-Pinedo (2003); Janka et al.
(2007). For initial values of $Y_{e}$ around $0.5$, $\beta^{-}$ decay
processes can be effectively hindered by electron degeneracy, but get to be
competitive when nuclei become more neutron-rich.
For central stellar densities less than a few $10^{10}$ g/cm3 and temperatures
between $300$ keV and $800$ keV, electron capture mainly occurs on nuclei in
the mass region $A\sim 60$. Under such conditions electron-capture rates are
sensitive to the detailed Gamow-Teller (GT) strength distribution, because the
electron chemical potential is of the same order of magnitude as the nuclear
$Q$-value (defined as the difference between neutron and proton chemical
potentials). For even higher densities and temperatures, nuclei with mass
numbers $A>65$ become quite abundant. The electron chemical potential is
noticeably larger than the $Q$-value, thus electron-capture rates are
principally determined by the total GT strength and its centroid energy. At
core densities $\rho>10^{11}$ g/cm3, the electron chemical potential reaches
values larger than about $20$ MeV, and forbidden transitions can no longer be
neglected Langanke and Martínez-Pinedo (2003); Janka et al. (2007).
Because of its relevance in modelling supernova evolution, the process of
electron capture has been studied employing various approaches, often based on
available data. The first standard tabulation of nuclear weak-interaction
rates for astrophysical applications was that of Fuller, Fowler and Newman
(FFN) Fuller et al. (1980, 1982a, 1982b, 1985). It was based on the
independent particle model, but used experimental information whenever
available. The tables included rates for electron capture, positron capture,
$\beta$-decay, and positron emission for relevant nuclei in the mass range
$21\leq A\leq 60$. The shell model Monte Carlo method (SMMC) was used to
determine for the first time in a fully microscopic way the GT contributions
to presupernova electron-capture rates for fp-shell nuclei, taking into
account thermal effects. The electroweak interaction matrix elements were
calculated in the zero-momentum transfer limit, with the GT operators as the
main ingredient. The GT strength distributions were obtained from the response
function in the canonical ensemble, solved in the $0\hbar\omega$ fp-shell
space Dean et al. (1998). The diagonalization of the corresponding Hamiltonian
matrix in the complete pf-shell model space reproduces the experimental GT+
distributions Langanke and Martínez-Pinedo (1998); Caurier et al. (1999);
Langanke and Martínez-Pinedo (2000). An updated tabulation of weak interaction
rates for more than $100$ nuclei in the mass range $45\leq A\leq 65$, with the
same temperature and density grid as the one reported by FFN, was carried out
based on the large-scale shell-model diagonalization (LSSM) approach Langanke
and Martínez-Pinedo (2001).
An alternative approach to the calculation of weak-interaction rates is based
on the random-phase approximation (RPA). This framework is generally more
suitable for the inclusion of forbidden transitions, and for global
calculations involving a large number of nuclei included in nuclear networks.
To overcome the limitations of the shell model, in a study of nuclei beyond
the fp-shell a hybrid model was introduced. In this approach the SMMC is used
to obtain the finite-temperature occupation numbers in the parent nucleus, and
the allowed and forbidden transitions for the electron-capture process are
calculated in the random-phase approximation using mean-field wave functions
with the SMMC occupation numbers Langanke et al. (2001). More recently the
hybrid model plus the RPA, with a global parametrization of single-particle
occupation numbers, has been employed in estimates of electron-capture rates
of a large number of nuclei involved in stellar core collapse Juodagalvis et
al. (2010).
Electron-capture rates were also calculated for sd-shell and fpg-shell nuclei
using the proton-neutron quasiparticle RPA (QRPA) approach, based on the
Nilsson model and separable GT forces Nabi and Klapdor-Kleingrothaus (1999,
2004). However, the use of experimental masses for the calculation of $Q$-values
limits the application of this model to nuclei with known masses. More
recently a thermal QRPA approach (TQRPA) has been introduced, based on the
Woods-Saxon potential and separable multipole and spin-multipole particle-hole
interactions, with temperature taken into account using the thermofield
dynamics (TFD) formalism Dzhioev et al. (2010). A fully self-consistent
microscopic framework for evaluation of nuclear weak-interaction rates at
finite temperature has recently been introduced, based on Skyrme density
functionals. The single-nucleon basis and the corresponding thermal occupation
factors of the initial nuclear state are determined in the finite-temperature
Skyrme Hartree-Fock model, and charge-exchange transitions to excited states
are computed using the finite-temperature RPA Paar et al. (2009).
An important class of nuclear structure models belongs to the framework of
relativistic energy density functionals (EDF). In particular, a number of
relativistic mean-field (RMF) models have been very successfully
employed in analyses of a variety of nuclear structure phenomena, not only in
nuclei along the valley of $\beta$-stability, but also in exotic nuclei with
extreme isospin values and close to the particle drip lines Ring (1996); Meng
et al. (2006); Vretenar et al. (2005). Based on this framework, the
relativistic (Q)RPA has been developed and applied in studies of collective
excitations in nuclei, including giant resonances, spin-isospin resonances,
and exotic modes of excitation in unstable nuclei Ma et al. (2001); Ring et
al. (2001); Nikšić et al. (2002); Paar et al. (2003, 2004a); Liang et al.
(2008); Paar et al. (2007). By employing a small set of universal parameters
adjusted to data, both ground-state properties and collective excitations over
the whole chart of nuclides, from relatively light systems to superheavy
nuclei, can be accurately described. For studies of astrophysical processes,
temperature effects have recently been included in the self-consistent
relativistic RPA. The low-energy monopole and dipole response of nuclei at
finite temperature was investigated in Niu et al. (2009). An extension of the
finite-temperature relativistic RPA (FTRRPA) to include charge-exchange
transitions will provide a very useful theoretical tool for studies
of the electron-capture process in presupernova collapse.
In this work we introduce the theoretical framework, based on the charge-
exchange FTRRPA, for the calculation of electron-capture cross sections and
stellar electron-capture rates on selected medium-mass nuclei. The single
nucleon basis and the thermal occupation factors of the initial nuclear state
are determined in a finite-temperature RMF model, and charge-exchange
transitions to the excited states are computed using the FTRRPA. The same
relativistic energy density functional is consistently used both in the RMF
and RPA equations. The advantage of this approach is that the calculation is
completely determined by a given energy density functional and, therefore, can
be extended over arbitrary mass regions of the nuclide chart, without
additional assumptions or adjustment of parameters (for instance,
single-particle energies) to transitions within specific shells. In a simple
RPA, of course, correlations are described only at the one-particle, one-hole
level, and therefore one cannot expect the model to reproduce the details of
the fragmentation of GT strength distributions.
The paper is organized as follows. In Sec. II the framework of the charge-
exchange FTRRPA and the formalism for the electron-capture cross sections and
rates are introduced. The Gamow-Teller strength distributions at finite
temperature are discussed in Sec. III. The calculated electron-capture cross
sections and rates in a stellar environment are presented in Sec. IV and V,
respectively. Sec. VI summarizes the present work and ends with an outlook for
future studies.
## II Formalism
Since electron capture on nuclei involves charge-exchange transitions, for the
purpose of the present study we extend the self-consistent finite-temperature
relativistic random-phase approximation (FTRRPA) Niu et al. (2009) and
implement the model in the charge-exchange channel. The characteristic
properties of the nuclear initial state, that is, the single-nucleon basis and
the corresponding thermal occupation probabilities, are obtained using an RMF
model at finite temperature. This framework was introduced in Ref. Gambhir et
al. (2000), based on the nonlinear effective Lagrangian with the NL3
parameterization Lalazissis et al. (1997). In this work the RMF at finite
temperature is implemented using an effective Lagrangian with medium-dependent
meson-nucleon couplings Typel and Wolter (1999); Nikšić et al. (2002). The
corresponding FTRRPA equations are derived using the single-nucleon basis of
the RMF model at finite temperature Niu et al. (2009). In a self-consistent
approach the residual interaction terms in the FTRRPA matrix are obtained from
the same Lagrangian. The proton-neutron FTRRPA equation reads
$\left(\begin{array}{cc}A^{J}_{pnp^{\prime}n^{\prime}}&B^{J}_{pnp^{\prime}n^{\prime}}\\ -B^{J}_{pnp^{\prime}n^{\prime}}&-A^{J}_{pnp^{\prime}n^{\prime}}\end{array}\right)\left(\begin{array}{c}X^{J}_{p^{\prime}n^{\prime}}\\ Y^{J}_{p^{\prime}n^{\prime}}\end{array}\right)=\omega_{\nu}\left(\begin{array}{c}X^{J}_{pn}\\ Y^{J}_{pn}\end{array}\right),$ (1)
where $A$ and $B$ are the matrix elements of the particle-hole residual
interaction,
$A^{J}_{pnp^{\prime}n^{\prime}}=(\epsilon_{P}-\epsilon_{H})\delta_{pp^{\prime}}\delta_{nn^{\prime}}+V^{J}_{pn^{\prime}np^{\prime}}(\tilde{u}_{p}\tilde{v}_{n}\tilde{u}_{p^{\prime}}\tilde{v}_{n^{\prime}}+\tilde{v}_{p}\tilde{u}_{n}\tilde{v}_{p^{\prime}}\tilde{u}_{n^{\prime}})\,|f_{n^{\prime}}-f_{p^{\prime}}|,$ (2)
$B^{J}_{pnp^{\prime}n^{\prime}}=V^{J}_{pn^{\prime}np^{\prime}}(\tilde{u}_{p}\tilde{v}_{n}\tilde{v}_{p^{\prime}}\tilde{u}_{n^{\prime}}+\tilde{v}_{p}\tilde{u}_{n}\tilde{u}_{p^{\prime}}\tilde{v}_{n^{\prime}})\,|f_{p^{\prime}}-f_{n^{\prime}}|.$ (3)
The diagonal matrix elements contain differences of single-particle energies
between particles and holes $\epsilon_{P}-\epsilon_{H}$, and these could be
either $\epsilon_{p}-\epsilon_{\bar{n}}$ or $\epsilon_{n}-\epsilon_{\bar{p}}$,
where $p$, $n$ denote proton and neutron states, respectively. For a given
proton-neutron pair configuration, the state with larger occupation
probability is defined as a hole state, whereas the other one is a particle
state. In the relativistic RPA, the configuration space includes not only
proton-neutron pairs in the Fermi sea, but also pairs formed from the fully or
partially occupied states in the Fermi sea and the empty negative-energy
states from the Dirac sea. The residual interaction term
$V^{J}_{pn^{\prime}np^{\prime}}$ is coupled to the angular momentum $J$ of the
final state. The spin-isospin-dependent interaction terms are generated by the
exchange of $\pi$ and $\rho$ mesons. Although the direct one-pion contribution
to the nuclear ground state vanishes at the mean-field level because of parity
conservation, the pion nevertheless must be included in the calculation of
spin-isospin excitations that contribute to the electron-capture cross
section. For the $\rho$-meson density-dependent coupling strength we choose
the same functional form used in the RMF effective interaction Nikšić et al.
(2002). More details about the corresponding particle-hole residual
interaction are given in Ref. Paar et al. (2004b). The factors $f_{p(n)}$ in
the matrix elements $A$ Eq. (2) and $B$ Eq. (3), denote the thermal occupation
probabilities for protons and neutrons, respectively. These factors are given
by the corresponding Fermi-Dirac distribution
$f_{p(n)}=\frac{1}{1+{\rm exp}(\frac{\epsilon_{p(n)}-\mu_{p(n)}}{kT})},$ (4)
where $\mu_{p(n)}$ is the chemical potential determined by the conservation of
the number of nucleons $\sum_{p(n)}f_{p(n)}=Z(N)$. The factors
$\tilde{u},\tilde{v}$ are introduced in order to distinguish the GT- and GT+
channels:
$\tilde{u}_{p}=0,\quad\tilde{v}_{p}=1,\quad\tilde{u}_{n}=1,\quad\tilde{v}_{n}=0,\quad\text{when }f_{p}>f_{n}\quad(\bar{p}n),$ (5)
$\tilde{u}_{p}=1,\quad\tilde{v}_{p}=0,\quad\tilde{u}_{n}=0,\quad\tilde{v}_{n}=1,\quad\text{when }f_{p}<f_{n}\quad(p\bar{n}).$ (6)
With this definition the FTRRPA matrix is decoupled into two subspaces for the
GT- and GT+ channels.
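As a numerical illustration of Eq. (4), the chemical potential can be found by bisection so that the thermal occupations sum to the nucleon number. The following sketch uses invented single-particle energies and is not part of the paper's calculation:

```python
import numpy as np

def occupations(eps, mu, kT):
    """Fermi-Dirac occupation factors f_i of Eq. (4)."""
    return 1.0 / (1.0 + np.exp((eps - mu) / kT))

def chemical_potential(eps, N, kT, lo=-100.0, hi=100.0):
    """Bisect on mu until sum_i f_i = N (nucleon-number conservation)."""
    for _ in range(200):
        mu = 0.5 * (lo + hi)
        if occupations(eps, mu, kT).sum() < N:
            lo = mu          # too few particles: raise the chemical potential
        else:
            hi = mu          # too many particles: lower it
    return mu

eps = np.array([-12.0, -9.5, -7.0, -4.2, -1.0, 2.5])  # toy levels (MeV)
mu = chemical_potential(eps, N=4, kT=1.0)             # T = 1 MeV
f = occupations(eps, mu, kT=1.0)
print(mu, f.sum())  # the occupations sum to N = 4
```

At higher temperature the same procedure yields a more diffuse Fermi surface, which is the thermal-unblocking mechanism discussed in Sec. III.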
The FTRRPA equations are solved by diagonalization, and the results are the
excitation energies $E_{\nu}$ and the corresponding forward- and backward-
going amplitudes $X^{J\nu}$ and $Y^{J\nu}$, respectively. The normalization
reads
$\sum_{pn}[(X^{J\nu}_{pn})^{2}-(Y^{J\nu}_{pn})^{2}](|f_{p}-f_{n}|)=1.$ (7)
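The block structure of Eq. (1) and the normalization of Eq. (7) can be illustrated with a small toy diagonalization. The matrices below are invented (two proton-neutron pairs) and the occupation factors $|f_{p}-f_{n}|$ are set to one; this is not the actual FTRRPA matrix:

```python
import numpy as np

# A holds ph energies plus the residual interaction (cf. Eq. (2)),
# B the residual interaction alone (cf. Eq. (3)); toy numbers only.
A = np.array([[2.0, 0.3],
              [0.3, 2.5]])
B = np.array([[0.10, 0.05],
              [0.05, 0.20]])

# Block structure of Eq. (1): [[A, B], [-B, -A]].
M = np.block([[A, B], [-B, -A]])
w, v = np.linalg.eig(M)

# Eigenvalues come in +/- pairs; keep the positive branch and normalize
# the amplitudes so that X^2 - Y^2 = 1 (Eq. (7) with |f_p - f_n| = 1).
for i in np.where(w.real > 0)[0]:
    X, Y = v[:2, i].real, v[2:, i].real
    norm = X @ X - Y @ Y
    X, Y = X / np.sqrt(norm), Y / np.sqrt(norm)
    print(f"omega = {w[i].real:.4f},  X.X - Y.Y = {X @ X - Y @ Y:.4f}")
```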
The transition strengths for GT± operators are calculated using the relations
$B_{J\nu}^{T_{-}}=\Big|\sum_{pn}(X^{J\nu}_{pn}\tilde{u}_{p}\tilde{v}_{n}+Y^{J\nu}_{pn}\tilde{v}_{p}\tilde{u}_{n})\langle p||T^{-}||n\rangle\,|f_{n}-f_{p}|\Big|^{2},$
$B_{J\nu}^{T_{+}}=\Big|\sum_{pn}(X^{J\nu}_{pn}\tilde{v}_{p}\tilde{u}_{n}+Y^{J\nu}_{pn}\tilde{u}_{p}\tilde{v}_{n})\langle p||T^{+}||n\rangle\,|f_{n}-f_{p}|\Big|^{2},$ (8)
where the spin-isospin operators read:
$T^{\pm}=\sum_{i=1}^{A}\bm{\sigma}\tau_{\pm}$.
For the process of electron capture on a nucleus
$e^{-}+{}_{Z}^{A}X_{N}\rightarrow{}_{Z-1}^{A}X_{N+1}^{*}+\nu_{e},$ (9)
the cross section is derived from Fermi’s golden rule:
$\frac{d\sigma}{d\Omega}=\frac{1}{(2\pi)^{2}}V^{2}E^{2}_{\nu}\,\frac{1}{2}\sum_{\text{lepton spins}}\frac{1}{2J_{i}+1}\sum_{M_{i}M_{f}}|\langle f|\hat{H}_{W}|i\rangle|^{2},$ (10)
where $V$ is the quantization volume, and $E_{\nu}$ is the energy of the
outgoing electron neutrino. The weak-interaction Hamiltonian $\hat{H}_{W}$ of
semileptonic processes is written in the current-current form Walecka (1975)
$\hat{H}_{W}=-\frac{G}{\sqrt{2}}\int d\bm{x}{\cal
J}_{\mu}(\bm{x})j_{\mu}(\bm{x}),$ (11)
where $j_{\mu}(\bm{x})$ and ${\cal J}_{\mu}(\bm{x})$ are the weak leptonic and
hadronic current density operators, respectively. The matrix elements of the
leptonic part are evaluated using the standard electroweak model, and contain
both vector and axial-vector components Langanke and Martínez-Pinedo (2003).
The hadronic current is obtained by using arguments of Lorentz covariance and
isospin invariance of the strong interaction. The expression for the electron
capture cross sections (see Refs. O’Connell et al. (1972); Walecka (1975) for
more details) reads
$\frac{d\sigma_{\rm ec}}{d\Omega}=\frac{G_{F}^{2}\cos^{2}\theta_{c}}{2\pi}\frac{F(Z,E_{e})}{2J_{i}+1}\Bigg\{\sum_{J\geq 1}\mathcal{W}(E_{e},E_{\nu})\Big\{\left(1-(\hat{\bm{\nu}}\cdot\hat{\bm{q}})(\bm{\beta}\cdot\hat{\bm{q}})\right)\left[|\langle J_{f}||\hat{\mathcal{T}}_{J}^{MAG}||J_{i}\rangle|^{2}+|\langle J_{f}||\hat{\mathcal{T}}_{J}^{EL}||J_{i}\rangle|^{2}\right]-2\hat{\bm{q}}\cdot(\hat{\bm{\nu}}-\bm{\beta})\,{\rm Re}\,\langle J_{f}||\hat{\mathcal{T}}_{J}^{MAG}||J_{i}\rangle\langle J_{f}||\hat{\mathcal{T}}_{J}^{EL}||J_{i}\rangle^{*}\Big\}+\sum_{J\geq 0}\mathcal{W}(E_{e},E_{\nu})\Big\{\left(1-\hat{\bm{\nu}}\cdot\bm{\beta}+2(\hat{\bm{\nu}}\cdot\hat{\bm{q}})(\bm{\beta}\cdot\hat{\bm{q}})\right)|\langle J_{f}||\hat{\mathcal{L}}_{J}||J_{i}\rangle|^{2}+(1+\hat{\bm{\nu}}\cdot\bm{\beta})|\langle J_{f}||\hat{\mathcal{M}}_{J}||J_{i}\rangle|^{2}-2\hat{\bm{q}}\cdot(\hat{\bm{\nu}}+\bm{\beta})\,{\rm Re}\,\langle J_{f}||\hat{\mathcal{L}}_{J}||J_{i}\rangle\langle J_{f}||\hat{\mathcal{M}}_{J}||J_{i}\rangle^{*}\Big\}\Bigg\}\;,$ (12)
where the momentum transfer $\bm{q}=\bm{\nu}-\bm{k}$ is defined as the
difference between neutrino and electron momenta, $\hat{\bm{q}}$ and
$\hat{\bm{\nu}}$ are the corresponding unit vectors, and
$\bm{\beta}=\bm{k}/E_{e}$. The energies of the incoming electron and outgoing
neutrino are denoted by $E_{e}$ and $E_{\nu}$, respectively. The Fermi
function $F(Z,E_{e})$ corrects the cross section for the distortion of the
electron wave function by the Coulomb field of the nucleus E. et al. (2003).
The explicit energy dependence of the cross section is given by the term
$\mathcal{W}(E_{e},E_{\nu})=\frac{E_{\nu}^{2}}{\left(1+E_{e}/M_{T}(1-\hat{\bm{\nu}}\cdot\bm{\beta})\right)}\;,$ (13)
where the phase-space factor
$(1+E_{e}/M_{T}(1-\hat{\bm{\nu}}\cdot\bm{\beta}))^{-1}$ accounts for the
nuclear recoil, and $M_{T}$ is the mass of the target nucleus. The nuclear
transition matrix elements between the initial state $|J_{i}\rangle$ and final
state $|J_{f}\rangle$, correspond to the charge $\hat{\mathcal{M}}_{J}$,
longitudinal $\hat{\mathcal{L}}_{J}$, transverse electric
$\hat{\mathcal{T}}_{J}^{EL}$, and transverse magnetic
$\hat{\mathcal{T}}_{J}^{MAG}$ multipole operators O’Connell et al. (1972);
Walecka (1975). The initial and final nuclear states in the hadronic matrix
elements are characterized by angular momentum and parity $J^{\pi}$. In the
present calculation a number of multipoles contributing to the cross section
Eq. (12) will be taken into account.
In the electron capture process, the excitation energy of the daughter nucleus
${}_{Z-1}^{A}X_{N+1}$ is obtained as the sum of the RPA energy $E_{\rm RPA}$,
given with respect to the ground state of the parent nucleus, and the binding
energy difference between the daughter and parent nucleus Colò et al. (1994). Thus
the energy of the outgoing neutrino is determined by the conservation
relation:
$E_{\nu}=E_{e}-E_{\rm RPA}-\Delta_{np},$ (14)
where $E_{e}$ is the energy of the incoming electron, and $\Delta_{np}=1.294$ MeV
is the mass difference between the neutron and the proton. The axial-vector
coupling constant is quenched to $g_{A}=-1.0$ from its free-nucleon value
$g_{A}=-1.26$ for all multipole excitations. The reason to quench the
strength in all multipole channels, rather than just for the GT, is, of
course, that the axial form factor appears in all four transition operators
in Eq. (12) that induce transitions between the initial and final states,
irrespective of their multipolarity. A study based on the continuum
random-phase approximation Kolbe et al. (1994, 2000) showed no indication
that any quenching must be applied to the operators responsible for muon
capture on nuclei. However, recent calculations of muon capture rates based
on the RQRPA Marketin et al. (2009), applied to a large set of nuclei, showed
that reducing $g_{A}$ by $10\%$ for all multipole transitions reproduces the
experimental muon capture rates to better than $10\%$ accuracy.
The electron capture rate is expressed in terms of the cross section Eq. (12)
and the distribution of electrons $f(E_{e},\mu_{e},T)$ at a given temperature:
$\lambda_{\rm
ec}=\frac{1}{\pi^{2}\hbar^{3}}\int_{E^{0}_{e}}^{\infty}p_{e}E_{e}\sigma_{ec}(E_{e})f(E_{e},\mu_{e},T)dE_{e}.$
(15)
$E_{e}^{0}=\max(|Q_{if}|,m_{e}c^{2})$ is the minimum electron energy that
allows for the capture process, that is, the threshold energy for electrons,
where $Q_{if}=-E_{\rm RPA}-\Delta_{np}$.
$p_{e}=(E_{e}^{2}-m_{e}^{2}c^{4})^{1/2}$ is the electron momentum. Under
stellar conditions that correspond to the core collapse of a supernova, the
electron distribution is described by the Fermi-Dirac expression Juodagalvis
et al. (2010)
$f(E_{e},\mu_{e},T)=\frac{1}{{\rm exp}(\frac{E_{e}-\mu_{e}}{kT})+1}.$ (16)
$T$ is the temperature, and the chemical potential $\mu_{e}$ is determined
from the baryon density $\rho$ by inverting the relation
$\rho Y_{e}=\frac{1}{\pi^{2}N_{A}}\left(\frac{m_{e}c}{\hbar}\right)^{3}\int_{0}^{\infty}(f_{e}-f_{e^{+}})p^{2}dp,$ (17)
where $Y_{e}$ is the ratio of the number of electrons to the number of
baryons, $N_{A}$ is Avogadro’s number, and $f_{e^{+}}$ denotes the positron
distribution function similar to Eq. (16), but with $\mu_{e^{+}}=-\mu_{e}$. We
assume that the phase space is not blocked by neutrinos.
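The rate integral of Eqs. (15)-(16) can be sketched numerically. In the snippet below the cross section is purely schematic, all overall constants ($1/\pi^{2}\hbar^{3}$ and the weak-interaction prefactors) are dropped, and the energies (in MeV) are invented; only the structure of the integral, including the threshold $E_{e}^{0}=\max(|Q_{if}|,m_{e}c^{2})$, follows the text:

```python
import numpy as np

ME = 0.511  # electron rest mass m_e c^2 (MeV)

def fermi_dirac(E, mu, kT):
    """Electron distribution of Eq. (16)."""
    return 1.0 / (np.exp((E - mu) / kT) + 1.0)

def sigma_toy(E, Q):
    """Schematic capture cross section: zero below threshold, (E - |Q|)^2 above."""
    return np.where(E > abs(Q), (E - abs(Q)) ** 2, 0.0)

def rate(Q, mu_e, kT, Emax=60.0, n=20001):
    """Riemann-sum estimate of the integral in Eq. (15), constants dropped."""
    E0 = max(abs(Q), ME)               # threshold energy E_e^0
    E = np.linspace(E0, Emax, n)
    p = np.sqrt(E ** 2 - ME ** 2)      # electron momentum p_e
    integrand = p * E * sigma_toy(E, Q) * fermi_dirac(E, mu_e, kT)
    return float(integrand.sum() * (E[1] - E[0]))

# The rate grows steeply with the electron chemical potential, i.e. with density:
for mu_e in (5.0, 10.0, 20.0):
    print(mu_e, rate(Q=-3.0, mu_e=mu_e, kT=1.0))
```

In a full calculation $\mu_{e}$ would itself be obtained by inverting Eq. (17) for a given baryon density $\rho Y_{e}$.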
## III Gamow-Teller transition strength at finite temperature
In this section we present an analysis of Gamow-Teller transition strength
distributions at finite temperature for iron isotopes and neutron-rich
germanium isotopes. The GT+ transition is the dominant process not only in
electron capture on nuclei near the stability line, but also on neutron-rich
nuclei because of the thermal unblocking effect at finite temperature. Here we
employ the finite-temperature relativistic RPA to calculate the GT+ strength
distribution. At zero temperature, however, pairing correlations have to be
taken into account for open shell nuclei, and thus the Relativistic Hartree
Bogoliubov model and the quasiparticle RPA with the finite range Gogny pairing
force are used in the corresponding calculations (more details are given in
Ref. Paar et al. (2004a)). In atomic nuclei the phase transition from a
superfluid to normal state occurs at temperatures $T\approx 0.5-1$ MeV Khan et
al. (2007); Goodman (1981a, b); Cooperstein and Wambach (1984) and, therefore,
for the temperature range considered in the present analysis, the FTRRPA
should provide a reasonable description of the Gamow-Teller transitions and
electron-capture rates.
Figure 1: (Color online) The GT+ strength distributions for 54,56Fe as
functions of the excitation energy with respect to the ground state of the
parent nucleus, calculated with the proton-neutron RQRPA at zero temperature,
and the FTRRPA at $T=0,1,2$ MeV, for the DD-ME2 relativistic density
functional. For comparison, the GT+ strength calculated with the non-
relativistic RPA based on the SLy5 Skyrme functional (green dashed lines), and
the centroid energies (blue arrows) and distributions of the LSSM calculation
Caurier et al. (1999) at $T=0$ MeV are shown. The experimental centroid
energies from Ref. Vetterli et al. (1989); Rönnqvist et al. (1993); El-Kateb
et al. (1994); Frekers (2005) are indicated by black arrows, and the
experimental distributions from Ref. Rönnqvist et al. (1993) for 54Fe and Ref.
El-Kateb et al. (1994) for 56Fe are shown by solid circles.
In Fig. 1 we display the GT+ strength distributions for 54,56Fe at $T=0,1,2$
MeV, as functions of excitation energy with respect to the ground state of the
parent nucleus. At zero temperature both the RQRPA and RRPA results are shown,
whereas the finite temperature transition spectra are calculated using only
the FTRRPA, that is, pairing is not included in calculations at finite
temperatures. The self-consistent results correspond to the DD-ME2
relativistic density functional Lalazissis et al. (2005). For comparison, the
GT+ strength at zero temperature calculated with the RPA based on the SLy5
Skyrme functional is also shown. The transition energy is
higher and the strength somewhat larger compared to the results of the
relativistic model. Of course, the simple (Q)RPA approach cannot reproduce the
empirical fragmentation of the strength, that is, the spreading width. This
could only be accomplished by including additional correlations going beyond
the RPA as, for instance, in the second RPA involving $2p-2h$ configurations
Drożdż et al. (1990), or in the particle-vibration coupling model Colò et al.
(1994); Litvinova et al. (2007). The present analysis is primarily focused on
the centroid energy of GT+ transitions, and model calculations are only
carried out at the (Q)RPA level. Fig. 1 also includes the centroid energies and
strength distributions of a large-scale shell model (LSSM) diagonalization
Caurier et al. (1999). The experimental centroid energies Vetterli et al.
(1989); Rönnqvist et al. (1993); El-Kateb et al. (1994); Frekers (2005),
defined as the energy-weighted integrated strength over the total strength,
$m_{1}/m_{0}$, are indicated by arrows in the figure. The experimental
strength distributions from Ref. Rönnqvist et al. (1993) for 54Fe and Ref. El-
Kateb et al. (1994) for 56Fe are also shown. The centroid energies and
distributions obtained in the LSSM calculation and the experimental values are
displayed with respect to the ground states of the parent nuclei, for
convenience of comparison with the RPA results.
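The centroid energy $m_{1}/m_{0}$ quoted above can be computed directly from a discrete strength distribution, with moments $m_{k}=\sum_{\nu}E_{\nu}^{k}B_{\nu}$. The energies and strengths below are invented for illustration:

```python
import numpy as np

def centroid(E, B):
    """Energy-weighted strength over total strength, m1/m0."""
    E, B = np.asarray(E, dtype=float), np.asarray(B, dtype=float)
    return (E * B).sum() / B.sum()

E = [2.8, 4.3, 6.1]   # excitation energies (MeV), toy values
B = [0.4, 1.1, 0.3]   # GT+ strengths, toy values
print(centroid(E, B))  # 7.68 / 1.8 = 4.266... MeV
```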
The RQRPA calculation is in fair agreement with the
experimental centroid energies. Compared to the LSSM, the RQRPA excitation
energies are $\approx 1$ MeV lower for both nuclei. Comparing the RRPA and
RQRPA results, we notice that pairing correlations shift the GT+ transition to
higher energies by about $1-1.5$ MeV, because additional energy is needed to break
a proton pair. When the temperature is increased to $1$ MeV, the transition
energy is lowered by about $1.1$ MeV for 54Fe, and $1.6$ MeV for 56Fe. This
decrease in energy is mainly caused by the pairing collapse. With a further
increase in temperature to $2$ MeV, the GT+ transition energy decreases by
about $0.5$ MeV in both nuclei. This continuous decrease has its origin in the
softening of the repulsive residual interaction because of the occupation
factors that appear in the FTRRPA matrix elements. To demonstrate this in a
quantitative way, we consider the example of 56Fe, and analyze the unperturbed
energies $E_{\rm unper}$, that is, the transition energy without residual
interaction, and the energy shift caused by the residual interaction. For 56Fe
the principal contribution to the GT+ comes from the transition from the
proton orbital $\pi 1f_{7/2}$ to the neutron orbital $\nu 1f_{5/2}$. In the
QRPA the unperturbed energy approximately equals the sum of two quasiparticle
energies, and the chemical potential difference of neutrons and protons,
resulting in $E_{\rm unper}\simeq 3.6$ MeV. The energy shift induced by the
repulsive residual interaction is $0.9$ MeV. If pairing correlations are not
included, that is in RPA, the unperturbed energy corresponds to the difference
between the single-particle energies of the two orbitals, and this is $1.8$
MeV at zero temperature, and $1.7$ MeV at $T=2$ MeV. Therefore the residual
interaction shifts the energy by $1.1$ MeV at zero temperature, and by $0.7$
MeV at $T=2$ MeV. Obviously the partial occupation factors (the smearing of
the Fermi surface), induced either by pairing correlations or by temperature
effects, will lead to the weakening of the residual interaction. The
temperature effect appears to be more pronounced because the Fermi surface is
more diffuse at $T=2$ MeV. In addition to the excitation energy, the
transition strength can also be reduced by the smearing of the Fermi surface
through the occupation factors in the transition matrix elements of Sec. II.
Therefore, the transition strength
becomes weaker with increasing temperature or with the inclusion of pairing
correlations. We have verified that the Ikeda sum rule Ikeda et al. (1963) is
satisfied at finite temperature.
Figure 2: (Color online) The GT+ strength distributions of 76,78Ge, calculated
with the proton-neutron RQRPA at $T=0$ MeV, and with the FTRRPA at $T=0,1,2$
MeV, using the DD-ME2 relativistic density functional.
In Fig. 2 we plot the GT+ strength distributions of the neutron-rich nuclei
76,78Ge at $T=0,1,2$ MeV. At zero temperature results obtained with both the
RQRPA and the FTRRPA are shown. It is found that almost no transition strength
appears at zero temperature without the inclusion of pairing correlations,
because the GT+ transition channels are Pauli-blocked for these neutron-rich
nuclei. As shown in the figure, the transition channels can be unblocked by
two mechanisms, that is, by pairing correlations or thermal excitations. Two
unblocked single-particle transitions principally contribute to the total GT+
strength: the $\pi 1g_{9/2}\rightarrow\nu 1g_{7/2}$ particle-particle, and the
$\pi 1f_{7/2}\rightarrow\nu 1f_{5/2}$ hole-hole transitions, where particle
(hole) denotes a state above (below) the chemical potential.
Let us consider 76Ge as an example, and analyze its evolution with
temperature. With the inclusion of pairing correlations at $T=0$ MeV, two
major peaks are calculated at $E=15.8$ MeV and $E=16.9$ MeV. The first state
mainly corresponds to the transition $\pi 1f_{7/2}\rightarrow\nu 1f_{5/2}$,
whereas the higher state results from a superposition of the transitions $\pi
1g_{9/2}\rightarrow\nu 1g_{7/2}$ and $\pi 1f_{5/2}\rightarrow\nu 2f_{7/2}$. At
$T=1$ MeV the GT+ excitations shift to $E=2.8$ MeV and $E=4.3$ MeV, and
correspond to the transitions $\pi 1f_{7/2}\rightarrow\nu 1f_{5/2}$ and $\pi
1g_{9/2}\rightarrow\nu 1g_{7/2}$, respectively, with very weak transition
strength. When the temperature is further increased to $T=2$ MeV, the
excitation energies are only slightly lowered (by $0.1$ MeV), but the
transition strengths are considerably enhanced.
The shift in energy from $T=0$ MeV with pairing correlations to $T=1$ MeV is
about $13$ MeV. This cannot be explained solely by the removal of the extra
energy needed to break a proton pair. To explain this result, we analyze the
unperturbed transition energies. It is found that the unperturbed energies are
much higher when pairing correlations are included than at finite
temperature, resulting in a considerable difference between the
corresponding GT+ energies. However, it is not only the pairing gaps that
raise the unperturbed energy because, for instance, the pairing gaps for $\pi
1g_{9/2}$ and $\nu 1g_{7/2}$ are both about $1.8$ MeV. As these unblocked
channels are particle-particle or hole-hole transitions, the sum of the
quasiparticle energies $E_{\rm
qp}=\sqrt{(\epsilon_{p}-\lambda_{p})^{2}+\Delta_{p}^{2}}+\sqrt{(\epsilon_{n}-\lambda_{n})^{2}+\Delta_{n}^{2}}$
is much larger than the difference of the single-particle energies
$\epsilon_{n}-\epsilon_{p}$, which corresponds to the unperturbed energies at
finite temperature. This decrease of GT+ excitation energies is in accordance
with the results of Ref. Dzhioev et al. (2010).
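This comparison of unperturbed energies can be made concrete with a small numerical sketch. All single-particle energies and chemical potentials below are invented for illustration only; the $\approx 1.8$ MeV pairing gap is the one quoted in the text.

```python
import math

def quasiparticle_energy(eps, lam, delta):
    """BCS quasiparticle energy E_qp = sqrt((eps - lambda)^2 + Delta^2)."""
    return math.sqrt((eps - lam) ** 2 + delta ** 2)

# Illustrative values in MeV; only the ~1.8 MeV pairing gap comes from the
# text, the level positions are made up to mimic a particle-particle channel.
eps_p, lam_p = -12.0, -9.0   # proton single-particle energy, chemical potential
eps_n, lam_n = -7.0, -10.0   # neutron single-particle energy, chemical potential
delta = 1.8

# With pairing, the unperturbed transition energy is the sum of the two
# quasiparticle energies:
e_pairing = (quasiparticle_energy(eps_p, lam_p, delta)
             + quasiparticle_energy(eps_n, lam_n, delta))  # ~7.0 MeV

# At finite temperature without pairing, it is just the bare single-particle
# energy difference:
e_thermal = eps_n - eps_p  # 5.0 MeV
```

Even with such moderate gaps, the sum of quasiparticle energies exceeds $\epsilon_{n}-\epsilon_{p}$, which illustrates why switching pairing off lowers the unperturbed energies, and with them the GT+ peaks, by several MeV.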
The large difference between the RQRPA GT+ strength at $T=0$ MeV and the
FTRRPA strength at $T=1$ MeV is mainly caused by the diffuseness of the Fermi
surface induced by pairing correlations at zero temperature. With a further
increase of temperature to $T=2$ MeV, the Fermi surface becomes more diffuse,
and this leads to enhancement of the GT+ strength. A similar trend with
increasing temperature is found when the nucleus becomes even more neutron-rich
(cf. the case of 78Ge in Fig. 2), but the transition channels are more
difficult to unblock by thermal excitations, and this results in a weaker
transition strength. In the present calculation for 78Ge only the particle-
particle channel $\pi 1g_{9/2}\rightarrow\nu 1g_{7/2}$ is unblocked at finite
temperature.
To test the sensitivity of the results to the choice of the effective
interaction, we have also carried out the same calculations for 54,56Fe and
76,78Ge using the relativistic density-dependent effective interaction PKDD
Long et al. (2004). The same general behavior is found with both interactions,
but with PKDD the excitation energies are systematically larger by about $0.5$
MeV for Fe, and by $0.3$ MeV for the Ge isotopes, whereas the transition
strengths are slightly enhanced compared to the DD-ME2 results.
## IV Electron-capture cross sections
In this section we calculate electron-capture cross sections for selected
medium-mass target nuclei using RQRPA at zero temperature, and the FTRRPA at
temperatures $T=0,1,$ and 2 MeV.
Figure 3: (Color online) Electron-capture cross sections for the 56Fe and 76Ge
target nuclei at $T=1$ MeV, calculated with the FTRRPA using the DD-ME2
effective interaction. In addition to the total cross section which includes
multipole transitions $J^{\pi}=0^{\pm}$, $1^{\pm}$, and $2^{\pm}$,
contributions from the individual channels are shown in the plot as functions
of the incident electron energy $E_{e}$.
In Fig. 3 the cross sections for electron capture on 56Fe and 76Ge at $T=1$
MeV are plotted as functions of the incident electron energy $E_{e}$. The
cross sections are calculated using the expression of Eq. (12), and the FTRRPA
with the DD-ME2 relativistic density functional Lalazissis et al. (2005) is
used to evaluate the transition matrix elements. In addition to the total
cross sections which include multipole transitions $J^{\pi}=0^{\pm}$,
$1^{\pm}$, and $2^{\pm}$, contributions from the individual channels are shown
in the plot, as functions of the incident electron energy $E_{e}$. For 56Fe
the total cross section is completely dominated by the $1^{+}$ channel (GT+)
all the way up to $E_{e}=30$ MeV, with contributions from other channels being
orders of magnitude smaller. In the case of the neutron-rich nucleus 76Ge, on
the other hand, forbidden transitions play a more prominent role, already
starting from $E_{e}>12$ MeV. Their contribution to the total cross section
further increases with the electron energy $E_{e}$. Obviously in systematic
calculations of electron capture rates on heavier, more neutron-rich nuclei,
contributions from forbidden transitions should also be included in addition
to the GT+ channel.
Figure 4: (Color online) Electron-capture cross sections for the target nuclei
54,56Fe at $T=0,1,$ and $2$ MeV, as functions of the incident electron energy
$E_{e}$. The results obtained with the proton-neutron RQRPA at $T=0$ MeV, and
with the FTRRPA at $T=0,1,$ and 2 MeV, using the DD-ME2 effective interaction,
are shown in comparison with cross sections calculated from the SMMC GT+
strength distributions Dean et al. (1998).
Next we illustrate how the capture cross sections evolve with temperature.
Fig. 4 displays the electron-capture cross sections for the target nuclei
54,56Fe at $T=0,1$, and 2 MeV, as functions of the incident electron energy
$E_{e}$. Since for 54,56Fe forbidden transitions in the range of electron
energy up to 30 MeV give negligible contributions to the total cross section
(cf. Fig. 3), here only the 1+ transitions are included in the calculation.
Results obtained with the proton-neutron RQRPA at $T=0$ MeV, and with the
FTRRPA at $T=0,1,$ and 2 MeV, using the DD-ME2 effective interaction, are
shown in comparison with cross sections calculated from the SMMC GT+ strength
distributions Dean et al. (1998). Note, however, that in the SMMC calculation
only the $0\hbar\omega$ Gamow-Teller transition strength is considered, rather
than the total strength in the $1^{+}$ channel. We notice that the principal
effect of increasing the temperature in this interval is the lowering of the
electron-capture threshold energy. From $T=0$ MeV (RQRPA) to $T=1$ MeV
(FTRRPA) this decrease is more pronounced than the one from $T=1$ to $2$ MeV,
in accordance with the behavior of GT+ distributions discussed in the previous
section. At low electron energies below 10 MeV, one notices a pronounced
difference between the RQRPA and FTRRPA results, reflecting the treatment of
pairing correlations at zero temperature. Of course, the calculated cross
sections become almost independent of temperature at high electron energies.
The results of the present calculation are in qualitative agreement with those
of the SMMC model Dean et al. (1998), calculated at temperature T=0.5 MeV.
Cross sections calculated at very low electron energies are sensitive to the
discrete level structure of the Gamow-Teller transitions and, therefore, one
expects that the SMMC approach will produce more accurate results. These cross
sections, however, are orders of magnitude smaller than those for $E_{e}\geq
10$ MeV and, when folded with the electron flux to calculate capture rates,
the differences between values predicted by various models in the low-energy
interval will not have a pronounced effect on the electron-capture rates.
Figure 5: (Color online) Electron-capture cross sections for the target nuclei
76,78Ge at $T=0,1$, and 2 MeV, as functions of the incident electron energy
$E_{e}$. The results are obtained by employing the DD-ME2 effective
interaction in the proton-neutron RQRPA at $T=0$ MeV, and in the FTRRPA at
$T=0,1,$ and 2 MeV. For 76Ge the results are also compared with the cross
section obtained in the hybrid model (SMMC/RPA) at $T=0.5$ MeV Langanke et al.
(2001).
In Fig. 5 we also illustrate the temperature dependence of the electron-
capture cross sections for the neutron-rich nuclei 76,78Ge. The calculation
includes the multipole transitions $J^{\pi}=0^{\pm},1^{\pm},2^{\pm}$. For 76Ge
the results are also compared with the cross section obtained in the hybrid
model (SMMC/RPA) at $T=0.5$ MeV Langanke et al. (2001). One might notice that
the cross sections are reduced by about an order of magnitude when compared to
the Fe isotopes, but overall a similar evolution with temperature is found. By
increasing the temperature the threshold energy for electron capture is
reduced. The cross sections exhibit a rather strong temperature dependence at
electron energies $E_{e}\leq 12$ MeV. At $E_{e}=12$ MeV, by increasing the
temperature by $1$ MeV, the cross sections are enhanced by about half an order of
magnitude. Since at $E_{e}\leq 12$ MeV the electron capture predominantly
corresponds to GT+ transitions (see Fig. 3), the enhancement of the cross
sections is caused by the thermal unblocking of the GT+ channel, as also
predicted by the hybrid SMMC/RPA model Langanke et al. (2001). For higher
electron energies, forbidden transitions become more important. The results of
the present analysis are in qualitative agreement with those of the TQRPA
model calculation Dzhioev et al. (2010), and the finite-temperature RPA
approach based on Skyrme functionals Paar et al. (2009). It is also found that
the hybrid model Langanke et al. (2001) predicts slightly larger cross
sections at lower energies, as anticipated due to the strong configuration
mixing in SMMC calculations. In general, by increasing the number of neutrons
in the target nucleus, electron capture occurs with a higher threshold and
smaller cross sections.
## V Stellar electron-capture rates
In modelling electron-capture rates in a stellar environment one assumes that
the atoms are completely ionized, and the electron gas is described by the
Fermi-Dirac distribution (16). By folding the FTRRPA cross sections at finite
temperature with the distribution of electrons in Eq. (15), we calculate the
rates for electron capture on Fe and Ge isotopes, under different conditions
associated with the initial phase of the core-collapse supernova.
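The folding procedure can be sketched numerically. The toy cross section and the dropped physical constants below are assumptions for illustration only, not the actual FTRRPA input or Eqs. (15)-(16); the sketch only reproduces the qualitative trends of the rates with temperature and electron chemical potential (density).

```python
import math

M_E = 0.511  # electron rest mass (MeV)

def fermi_dirac(E, mu, kT):
    """Fermi-Dirac occupation for an electron of total energy E (MeV),
    at chemical potential mu and temperature kT (both in MeV)."""
    return 1.0 / (math.exp((E - mu) / kT) + 1.0)

def toy_cross_section(E, e_th):
    """Toy stand-in for the capture cross section: zero below the
    threshold e_th, growing quadratically above it (arbitrary units)."""
    return (E - e_th) ** 2 if E > e_th else 0.0

def capture_rate(mu, kT, e_th, e_max=60.0, n=4000):
    """Schematic capture rate: fold sigma(E) with the electron
    phase-space density p^2 f(E) by the trapezoidal rule. All overall
    physical constants are dropped, so only relative rates are meaningful."""
    h = (e_max - M_E) / n
    total = 0.0
    for i in range(n + 1):
        E = M_E + i * h
        p = math.sqrt(max(E * E - M_E * M_E, 0.0))
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal end-point weights
        total += w * p * p * toy_cross_section(E, e_th) * fermi_dirac(E, mu, kT)
    return total * h

# Rates grow with temperature and with electron chemical potential (density):
r_low = capture_rate(mu=5.0, kT=0.9, e_th=10.0)
r_hot = capture_rate(mu=5.0, kT=1.7, e_th=10.0)
r_dense = capture_rate(mu=11.0, kT=0.9, e_th=10.0)
```

Once the chemical potential lies well above the transition energies, $f(E)\approx 1$ over the whole relevant range and the folded rate becomes nearly independent of temperature, mirroring the saturation discussed below for $\rho Y_{e}=10^{10}$ g/cm3.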
Figure 6: (Color online) Rates for electron capture on 54,56Fe as functions of
the temperature $T_{9}$ ($T_{9}=10^{9}$ K), at selected densities $\rho Y_{e}$ (g cm-3). The results calculated using the FTRRPA with the DD-ME2 effective interaction are shown in comparison with the rates obtained with LSSM
calculations Langanke and Martínez-Pinedo (2001) and the TQRPA model Dzhioev
et al. (2010).
Figure 6 shows the calculated rates for electron capture on 54,56Fe as
functions of the temperature $T_{9}$ ($T_{9}=10^{9}$ K), at selected densities
$\rho Y_{e}$ (g cm-3). For comparison with the FTRRPA results, the rates
obtained with LSSM calculations Langanke and Martínez-Pinedo (2001) and the
TQRPA model Dzhioev et al. (2010) are also included in the figure. Here only
the 1+ transitions are included in the calculation of the cross sections. Although
the three models compared here are based on rather different assumptions, the
resulting capture rates nevertheless show similar trends. In general the
electron-capture rates increase with temperature and electron density. For
high electron densities the rates increase more slowly, and at density $\rho
Y_{e}=10^{10}$ g/cm3 the temperature dependence almost vanishes. At high
densities characterized by large values of the electron chemical potential,
high-energy electrons excite most or even all the GT+ transitions independent
of temperature. Under such conditions the increase in temperature will not
have a pronounced effect on the capture rates. By increasing the number of
neutrons from 54Fe to 56Fe, one notices that the capture rates are slightly
reduced in 56Fe, reflecting the behavior of the cross sections.
The FTRRPA results generally reproduce the temperature dependence of the rates
predicted by the LSSM, but on the average the values calculated with the
FTRRPA are somewhat larger, especially for 56Fe. For 54Fe and at lower
densities $\rho Y_{e}=10^{7}$ or $10^{8}$ g/cm3, the FTRRPA results
essentially coincide with the shell model calculation. At higher density, e.g.
$\rho Y_{e}=10^{9}$ g/cm3, and with the electron chemical potential $\approx
5$ MeV close to the threshold energy, the FTRRPA yields higher rates at lower
temperature. One can understand this difference from the fragmentation of the
shell-model GT+ strength over the energy range $0$–$10$ MeV Caurier et al.
(1999). While electrons at lower temperature excite all the GT+ strength in
FTRRPA (see Fig. 1), only a fraction of the shell-model strength is excited.
Because part of the shell-model GT+ strength is located at higher energies
than in the FTRRPA calculation, the resulting LSSM rates are smaller. At even
higher densities, e.g. at $\rho Y_{e}=10^{10}$ g/cm3 with the chemical
potential $\approx 11$ MeV, already at lower temperatures the high-energy
electrons excite all the shell-model transition strength, and the resulting
rates are essentially the same as those calculated with the FTRRPA. For
electron capture on 56Fe, at lower densities $\rho Y_{e}=10^{7}$ and $10^{8}$
g/cm3 the FTRRPA results are in better agreement with the TQRPA calculation,
whereas the LSSM predicts lower rates. At higher densities the trend predicted
by the FTRRPA is closer to the LSSM, but the calculated values are still above
the shell-model results. In general, the differences between the FTRRPA and the shell-model rates are larger in 56Fe than in 54Fe. The principal reason lies
in the difference between the GT+ centroid energies calculated in the two
models (cf. Fig. 1). As in the case of 54Fe, the largest difference
between the FTRRPA and LSSM is at $\rho Y_{e}=10^{9}$ g/cm3, because the
electron chemical potential at this density is close to the threshold energy,
hence the capture rates are sensitive to the detailed GT+ distribution.
Figure 7: (Color online) Rates for electron capture on 76,78Ge as functions of
the temperature, at selected densities $\rho Y_{e}$ (g cm-3). The results calculated using the FTRRPA with the DD-ME2 effective interaction are shown
in comparison with the rates obtained with the hybrid model (SMMC/RPA) and the
TQRPA model Dzhioev et al. (2010).
Fig. 7 compares the rates for electron capture on 76,78Ge, calculated using
the FTRRPA with the DD-ME2 effective interaction, to the values obtained with
the hybrid model (SMMC/RPA) and the TQRPA model Dzhioev et al. (2010). In
order to allow a direct comparison with the hybrid model, the same quenching
of the axial-vector coupling constant with respect to its free-nucleon value
is employed, i.e. $g_{A}^{*}$=0.7$g_{A}$. Because for 76,78Ge the contribution
of forbidden transitions is not negligible, the calculation of the rates, Eq. (12), includes the multipole transitions $J^{\pi}=0^{\pm},1^{\pm},2^{\pm}$. Similar
to the case of Fe nuclei, the calculated capture rates increase with
temperature and density, and are reduced by adding neutrons from 76Ge to 78Ge.
For 76,78Ge the rates predicted by the FTRRPA display a temperature and
density dependence very similar to that of the TQRPA model, whereas the hybrid
model predicts a very weak temperature dependence at all densities considered
in Fig. 7. In general, both the FTRRPA and the TQRPA predict smaller values of
capture rates compared to the hybrid model. The reason is that the probability
of unblocking transition channels is larger in the hybrid model because it
includes many-body correlations beyond the RPA level. At the density $\rho
Y_{e}=10^{10}$ g/cm3 the FTRRPA capture rates exhibit a relatively strong
temperature dependence. The electron chemical potential is $\approx 11$ MeV,
and the cross sections are dominated by GT+ transitions. By increasing
temperature the GT+ transitions are unblocked, resulting in a large
enhancement of the cross sections as shown in Fig. 5. A similar trend is also
predicted by the TQRPA calculation Dzhioev et al. (2010), whereas the
temperature dependence of the capture rates is much weaker in the hybrid
model. With a further increase in density to $\rho Y_{e}=10^{11}$ g/cm3, the
chemical potential reaches $\approx 23$ MeV. At these energies forbidden
transitions dominate the calculated cross sections, the FTRRPA yields cross
sections similar to the TQRPA, and likewise for the capture rates. At even
higher densities the temperature dependence of the FTRRPA and TQRPA results
becomes weaker, because the cross sections are less sensitive to temperature.
At the density $\rho Y_{e}=5\times 10^{11}$ g/cm3 the capture rates predicted by
the FTRRPA are larger than the TQRPA results, and reach values similar to
those of the hybrid model.
## VI Conclusion
In this work we have introduced a self-consistent theoretical framework for
modelling the process of electron capture in the initial phase of supernova
core collapse, based on relativistic energy density functionals. The finite-
temperature RMF model is employed to determine the single-particle energies,
wave functions and thermal occupation probabilities for the initial nuclear
states. The relevant charge-exchange transitions $J^{\pi}=0^{\pm}$, $1^{\pm}$,
$2^{\pm}$ are described by the finite-temperature relativistic random-phase
approximation (FTRRPA). The FTRMF+FTRRPA framework is self-consistent in the
sense that the same relativistic energy density functional is employed both in
the finite-temperature RMF model and in the RRPA matrix equations.
In the calculation of the electron capture cross sections, the GT+ transitions
provide the major contribution in the case of 54,56Fe, whereas for more
neutron-rich nuclei such as 76,78Ge forbidden transitions play a more
prominent role already starting at incident electron energy above $\approx$10
MeV. The principal effect of increasing temperature is the lowering of the
electron-capture threshold energy. For 76,78Ge the cross sections in the low-
energy region are sensitive to temperature because of the dominant role of GT+
transition channel, but this sensitivity becomes weaker at higher energies
dominated by major contributions from forbidden transitions.
Electron capture rates for different stellar environments, densities and
temperatures characteristic of core-collapse supernovae, have been calculated
and compared with previous results of shell-model, hybrid shell-model plus
RPA, and thermal QRPA (TQRPA) calculations. For 54,56Fe, the FTRRPA results
generally reproduce the temperature dependence of the rates predicted by
shell-model calculations, but on the average the values calculated with the
FTRRPA are somewhat larger, especially for 56Fe. For 76,78Ge the FTRRPA
capture rates display a trend very similar to that of the TQRPA calculation,
especially for the temperature dependence, whereas this dependence of the
capture rates is much weaker in the hybrid model.
The results obtained in the present study demonstrate that the framework of
finite-temperature RMF and FTRRPA provides a universal theoretical tool for
the analysis of stellar weak-interaction processes in a fully consistent
microscopic approach. This is especially important for regions of neutron-rich
nuclei where the shell-model diagonalization approach is not feasible. A
microscopic approach has a significant advantage over empirical models that require data as input, because in many mass regions data are not available. Of course, the present framework is
limited to the level of RPA and does not include important many-body
correlations that are taken into account in a shell-model approach. However,
as discussed previously, at higher densities and temperatures in the stellar
environment, the detailed fragmentation of transition spectra does not play
such a significant role, and the FTRRPA represents a very good approximate
framework that can be used in systematic calculations of electron-capture
rates. Further improvements of the current version of the model are under
development. For open-shell nuclei at very low temperatures, pairing
correlations need to be taken into account. To reproduce the empirical fragmentation of the transition spectra, the inclusion of higher-order correlations beyond the RPA level, that is, the coupling to $2p$-$2h$ states, will be necessary.
## ACKNOWLEDGMENTS
Y.F. Niu would like to acknowledge discussions with Z. M. Niu and H. Z. Liang.
This work is supported by the State 973 Program 2007CB815000, the NSF of China
under Grants No. 10975007 and No. 10975008, the Unity through Knowledge Fund
(UKF Grant No. 17/08), and by MZOS - project 1191005-1010, and the Chinese-
Croatian project “Nuclear structure and astrophysical applications”.
## References
* Bethe et al. (1979) H. A. Bethe, G. E. Brown, J. Applegate, and J. M. Lattimer, Nucl. Phys. A 324, 487 (1979).
* Bethe (1990) H. A. Bethe, Rev. Mod. Phys. 62, 801 (1990).
* Langanke and Martínez-Pinedo (2003) K. Langanke and G. Martínez-Pinedo, Rev. Mod. Phys. 75, 819 (2003).
* Janka et al. (2007) H.-T. Janka, K. Langanke, A. Marek, G. Martínez-Pinedo, and B. Müller, Phys. Rep. 442, 38 (2007).
* Fuller et al. (1980) G. M. Fuller, W. A. Fowler, and M. J. Newman, Astrophys. J. Suppl. Ser. 42, 447 (1980).
* Fuller et al. (1982a) G. M. Fuller, W. A. Fowler, and M. J. Newman, Astrophys. J. Suppl. Ser. 48, 279 (1982a).
* Fuller et al. (1982b) G. M. Fuller, W. A. Fowler, and M. J. Newman, Astrophys. J. 252, 715 (1982b).
* Fuller et al. (1985) G. M. Fuller, W. A. Fowler, and M. J. Newman, Astrophys. J. 293, 1 (1985).
* Dean et al. (1998) D. J. Dean, K. Langanke, L. Chatterjee, P. B. Radha, and M. R. Strayer, Phys. Rev. C 58, 536 (1998).
* Langanke and Martínez-Pinedo (1998) K. Langanke and G. Martínez-Pinedo, Phys. Lett. B 436, 19 (1998).
* Caurier et al. (1999) E. Caurier, K. Langanke, G. Martínez-Pinedo, and F. Nowacki, Nucl. Phys. A 653, 439 (1999).
* Langanke and Martínez-Pinedo (2000) K. Langanke and G. Martínez-Pinedo, Nucl. Phys. A 673, 481 (2000).
* Langanke and Martínez-Pinedo (2001) K. Langanke and G. Martínez-Pinedo, At. Data Nucl. Data Tables 79, 1 (2001).
* Langanke et al. (2001) K. Langanke, E. Kolbe, and D. J. Dean, Phys. Rev. C 63, 032801 (2001).
* Juodagalvis et al. (2010) A. Juodagalvis, K. Langanke, W. R. Hix, G. Martínez-Pinedo, and J. M. Sampaio, Nucl. Phys. A 848, 454 (2010).
* Nabi and Klapdor-Kleingrothaus (1999) J.-U. Nabi and H. V. Klapdor-Kleingrothaus, At. Data Nucl. Data Tables 71, 149 (1999).
* Nabi and Klapdor-Kleingrothaus (2004) J.-U. Nabi and H. V. Klapdor-Kleingrothaus, At. Data Nucl.Data Tables 88, 237 (2004).
* Dzhioev et al. (2010) A. A. Dzhioev, A. I. Vdovin, V. Y. Ponomarev, J. Wambach, K. Langanke, and G. Martínez-Pinedo, Phys. Rev. C 81, 015804 (2010).
* Paar et al. (2009) N. Paar, G. Colò, E. Khan, and D. Vretenar, Phys. Rev. C 80, 055801 (2009).
* Ring (1996) P. Ring, Prog. Part. Nucl. Phys. 37, 193 (1996).
* Meng et al. (2006) J. Meng, H. Toki, S. Zhou, S. Zhang, W. Long, and L. Geng, Prog. Part. Nucl. Phys. 57, 470 (2006).
* Vretenar et al. (2005) D. Vretenar, A. V. Afanasjev, G. A. Lalazissis, and P. Ring, Phys. Rep. 409, 101 (2005).
* Ma et al. (2001) Z. Ma, N. Van Giai, A. Wandelt, D. Vretenar, and P. Ring, Nucl. Phys. A 686, 173 (2001).
* Ring et al. (2001) P. Ring, Z. Ma, N. Van Giai, D. Vretenar, A. Wandelt, and L. Cao, Nucl. Phys. A 694, 249 (2001).
* Nikšić et al. (2002) T. Nikšić, D. Vretenar, and P. Ring, Phys. Rev. C 66, 064302 (2002).
* Paar et al. (2003) N. Paar, P. Ring, T. Nikšić, and D. Vretenar, Phys. Rev. C 67, 034312 (2003).
* Paar et al. (2004a) N. Paar, T. Nikšić, D. Vretenar, and P. Ring, Phys. Rev. C 69, 054303 (2004a).
* Liang et al. (2008) H. Liang, N. Van Giai, and J. Meng, Phys. Rev. Lett. 101, 122502 (2008).
* Paar et al. (2007) N. Paar, D. Vretenar, E. Khan, and G. Colò, Rep. Prog. Phys. 70, 691 (2007).
* Niu et al. (2009) Y. F. Niu, N. Paar, D. Vretenar, and J. Meng, Phys. Lett. B 681, 315 (2009).
* Gambhir et al. (2000) Y. K. Gambhir, J. P. Maharana, G. A. Lalazissis, C. P. Panos, and P. Ring, Phys. Rev. C 62, 054610 (2000).
* Lalazissis et al. (1997) G. A. Lalazissis, J. König, and P. Ring, Phys. Rev. C 55, 540 (1997).
* Typel and Wolter (1999) S. Typel and H. H. Wolter, Nucl. Phys. A 656, 331 (1999).
* Nikšić et al. (2002) T. Nikšić, D. Vretenar, P. Finelli, and P. Ring, Phys. Rev. C 66, 024306 (2002).
* Paar et al. (2004b) N. Paar, T. Nikšić, D. Vretenar, and P. Ring, Phys. Rev. C 69, 054303 (2004b).
* Walecka (1975) J. D. Walecka, _Muon Physics_ , vol. II (Academic, New York, 1975).
* O’Connell et al. (1972) J. S. O’Connell, T. W. Donnelly, and J. D. Walecka, Phys. Rev. C 6, 719 (1972).
* Kolbe et al. (2003) E. Kolbe, K. Langanke, G. Martínez-Pinedo, and P. Vogel, J. Phys. G 29, 2569 (2003).
* Colò et al. (1994) G. Colò, N. Van Giai, P. F. Bortignon, and R. A. Broglia, Phys. Rev. C 50, 1496 (1994).
* Kolbe et al. (1994) E. Kolbe, K. Langanke, and P. Vogel, Phys. Rev. C 50, 2576 (1994).
* Kolbe et al. (2000) E. Kolbe, K. Langanke, and P. Vogel, Phys. Rev. C 62, 055502 (2000).
* Marketin et al. (2009) T. Marketin, N. Paar, T. Nikšić, and D. Vretenar, Phys. Rev. C 79, 054323 (2009).
* Khan et al. (2007) E. Khan, N. Van Giai, and N. Sandulescu, Nucl. Phys. A 789, 94 (2007).
* Goodman (1981a) A. L. Goodman, Nucl. Phys. A 352, 30 (1981a).
* Goodman (1981b) A. L. Goodman, Nucl. Phys. A 352, 45 (1981b).
* Cooperstein and Wambach (1984) J. Cooperstein and J. Wambach, Nucl. Phys. A 420, 591 (1984).
* Vetterli et al. (1989) M. C. Vetterli, O. Häusser, R. Abegg, W. P. Alford, A. Celler, D. Frekers, R. Helmer, R. Henderson, K. H. Hicks, K. P. Jackson, et al., Phys. Rev. C 40, 559 (1989).
* Rönnqvist et al. (1993) T. Rönnqvist, H. Condé, N. Olsson, E. Ramström, R. Zorro, J. Blomgren, A. Håkansson, A. Ringbom, G. Tibell, O. Jonsson, et al., Nucl. Phys. A 563, 225 (1993).
* El-Kateb et al. (1994) S. El-Kateb, K. P. Jackson, W. P. Alford, R. Abegg, R. E. Azuma, B. A. Brown, A. Celler, D. Frekers, O. Häusser, R. Helmer, et al., Phys. Rev. C 49, 3128 (1994).
* Frekers (2005) D. Frekers, Nucl. Phys. A 752, 580 (2005).
* Lalazissis et al. (2005) G. A. Lalazissis, T. Nikšić, D. Vretenar, and P. Ring, Phys. Rev. C 71, 024312 (2005).
* Drożdż et al. (1990) S. Drożdż, S. Nishizaki, J. Speth, and J. Wambach, Physics Reports 197, 1 (1990).
* Litvinova et al. (2007) E. Litvinova, P. Ring, and D. Vretenar, Phys. Lett. B 647, 111 (2007).
* Ikeda et al. (1963) K. Ikeda, S. Fujii, and J. I. Fujita, Phys. Lett. 3, 271 (1963).
* Long et al. (2004) W. Long, J. Meng, N. Van Giai, and S.-G. Zhou, Phys. Rev. C 69, 034319 (2004).
|
arxiv-papers
| 2011-04-09T08:00:59 |
2024-09-04T02:49:18.189485
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "YiFei Niu, Nils Paar, Dario Vretenar, Jie Meng",
"submitter": "Yifei Niu",
"url": "https://arxiv.org/abs/1104.1683"
}
|
1104.1702
|
# Smoothing metrics on closed Riemannian manifolds through the Ricci flow
Yunyan Yang yunyanyang@ruc.edu.cn Department of Mathematics, Renmin
University of China, Beijing 100872, P. R. China
###### Abstract
Under the assumption of the uniform local Sobolev inequality, it is proved
that Riemannian metrics with an absolute Ricci curvature bound and a small
Riemannian curvature integral bound can be smoothed to having a sectional
curvature bound. This partly extends previous a priori estimates of Ye Li (J.
Geom. Anal. 17 (2007) 495-511; Advances in Mathematics 223 (2010) 1924-1957).
###### keywords:
Ricci flow; Smoothing; Moser iteration
###### MSC:
53C20; 53C21; 58J35
††journal: Ann Glob Anal Geom
## 1 Introduction
If a Riemannian manifold has bounded sectional curvature, then its geometric
structure is better understood than that with weaker curvature bounds, say
Ricci curvature bounds. Thus it is of significance to deform or smooth a
Riemannian metric with a Ricci curvature bound to a metric with a sectional
curvature bound. One way to do this is using the Ricci flow. In this regard we
refer the reader to the pioneering works [3, 6, 17, 18]. If the initial metric
has bounded curvatures, one can show the short time existence of the Ricci
flow and obtain the covariant derivatives bounds for the curvature tensors
along the Ricci flow [2, 14]. If the initial metric has bounded Ricci
curvature, under an additional assumption on the conjugate radius, Dai et al. studied how to deform the metric on closed manifolds [6]. One can also deform
a metric locally by using the local Ricci flow [11, 12, 13, 16, 18].
Throughout this paper, we use ${\rm Rm}(g)$ and ${\rm Ric}(g)$ to denote the
Riemannian curvature tensor and Ricci tensor with respect to the metric $g$
respectively. Our main result is the following:
Theorem 1.1. Assume $(M,g_{0})$ is a closed Riemannian manifold of dimension
$n$ ($n\geq 3$) and $|{\rm Ric}(g_{0})|\leq K$ for some constant $K$. Let
$B_{r}(x)$ be a geodesic ball centered at $x\in M$ with radius $r$. Suppose
there exists a constant $A_{0}>0$ such that for all $x\in M$ and some
$r\leq\min(\frac{1}{2}{\rm diam}(g_{0}),1)$
$\left(\int_{B_{r}(x)}|u|^{\frac{2n}{n-2}}dv_{g_{0}}\right)^{(n-2)/n}\leq
A_{0}\int_{B_{r}(x)}|\nabla_{g_{0}}u|^{2}dv_{g_{0}},\quad\forall u\in
C_{0}^{\infty}(B_{r}(x)).$ (1.1)
Then there exist constants $\epsilon$, $c_{1}$, $c_{2}$ depending only on $n$
and $K$ such that if
$\left(\int_{B_{r}(x)}|{\rm
Rm}(g_{0})|^{\frac{n}{2}}dv_{g_{0}}\right)^{{2}/{n}}\leq{\epsilon}{A_{0}^{-1}}\quad{\rm
for\,\,all}\,\,x\in M,$ (1.2)
then the Ricci flow
$\left\\{\begin{array}[]{lll}\displaystyle\frac{\partial g}{\partial
t}&=-2Ric(g),\\\\[5.16663pt] g(0)&=g_{0}\end{array}\right.$ (1.3)
has a unique smooth solution satisfying the following estimates
$\displaystyle|g(t)-g_{0}|_{g_{0}}$ $\displaystyle\leq$ $\displaystyle
c_{2}t^{\frac{2}{n+2}},$ (1.4) $\displaystyle|{\rm Rm}(g(t))|_{\infty}$
$\displaystyle\leq$ $\displaystyle c_{2}t^{-1},$ (1.5) $\displaystyle|{\rm
Ric(g(t))}|_{\infty}$ $\displaystyle\leq$ $\displaystyle
c_{2}t^{-\frac{n}{n+2}}$ (1.6)
for $0\leq t\leq T$ with $T\geq c_{1}\min(r^{2},K^{-1})$.
When $(M,g_{0})$ is a complete noncompact Riemannian manifold, similar results
were obtained by Ye Li [13] and G. Xu [16]. The assumptions of [13] are much weaker than (1.1) and (1.2) in the case $n=4$; they come from Cheeger and Tian's
work [5] concerning the collapsing Einstein 4-manifolds. Here Theorem 1.1 is
just the beginning of extending the results [5, 12, 13], which may depend on
the Gauss-Bonnet-Chern formula, to the general dimensional case.
For the proof of Theorem 1.1, we follow the lines of [6, 7, 13, 18]. Let’s
roughly describe the idea. First it is well known [10, 8] that the Ricci flow
(1.3) has a unique smooth solution $g(t)$ for a very short time interval.
Using Moser’s iteration and Gromov’s covering argument, we derive a priori
estimates on ${\rm Rm}(g(t))$ and ${\rm Ric}(g(t))$. Let $[0,T_{\rm max}]$ be
the maximal time interval on which $g(t)$ exists. Then based on those a priori
estimates, $T_{\rm max}$ has the desired lower bound.
Such results are very useful when considering the relation between
curvature and topology [1, 6, 12]. Using Theorem 1.1, we can easily generalize
Gromov’s almost flat manifold theorem [9]. Particularly one has the following:
Theorem 1.2. There exist constants $\epsilon$ and $\delta$ depending only on
$n$ and $K$ such that if a closed Riemannian manifold $(M,g_{0})$ satisfies
$|{\rm Ric(g_{0})}|\leq K$, ${\rm diam}(g_{0})\leq\delta$, (1.1) and (1.2)
hold for all $x\in M$, then the universal covering space of $(M,g_{0})$ is
$\mathbb{R}^{n}$. If all the above hypotheses on $(M,g_{0})$ are satisfied and
moreover the fundamental group $\pi(g_{0})$ is commutative, then $(M,g_{0})$
is diffeomorphic to a torus.
Before ending this introduction, we would like to mention [15] for local
regularity estimates for Riemannian curvatures. The remaining part of the
paper is organized as follows. In Sect. 2, we derive two weak maximum
principles by using the Moser’s iteration. In Sect. 3, we estimate the time
interval on which the solution of Ricci flow exists, and prove Theorem 1.1.
Finally Theorem 1.2 is proved in Sect. 4.
## 2 Weak maximum principles
In this section, following the lines of [13, 18], we give two maximum
principles via the Moser’s iteration. Throughout this section the manifolds
need not to be compact. Suppose $(M,g(t))$ are complete Riemannian manifolds
for $0\leq t\leq T$. Let $\nabla_{g(t)}$ denote the covariant differentiation
with respect to $g(t)$ and $-\Delta_{g(t)}$ be the corresponding Laplace-
Beltrami operator; these will also be denoted by $\nabla$ and $-\Delta$ for simplicity, as the reader can easily recognize them from the context. Let $A$ be a
constant such that for all $t\in[0,T]$,
$\left(\int_{B_{r}(x)}|u|^{\frac{2n}{n-2}}dv_{t}\right)^{(n-2)/n}\leq
A\int_{B_{r}(x)}|\nabla u|^{2}dv_{t},\quad\forall u\in
C_{0}^{\infty}(B_{r}(x)),$ (2.1)
where $dv_{t}=dv_{g(t)}$. Assume that for all $0\leq t\leq T$,
$\frac{1}{2}g_{0}\leq g(t)\leq 2g_{0}\quad{\rm on}\quad M.$ (2.2)
Here and in the sequel, all geodesic balls are defined with respect to
$g_{0}$.
Firstly we have the following maximum principle:
Theorem 2.1. Let $(M,g(t))$ be complete Riemannian manifolds, and suppose (2.1) and (2.2) are satisfied for $0\leq t\leq T$. Let $f(x,t)$ be such that
$\frac{\partial f}{\partial t}\leq\Delta f+uf\quad{\rm on}\quad
B_{r}(x)\times[0,T]$ (2.3)
with $f\geq 0$, $u\geq 0$,
$\frac{\partial}{\partial t}dv_{t}\leq cudv_{t},$ (2.4)
for some constant $c$ depending only on $n$, and for some $q>n$
$\left(\int_{B_{r}(x)}u^{\frac{q}{2}}dv_{t}\right)^{\frac{2}{q}}\leq\mu
t^{-\frac{q-n}{q}},$ (2.5)
where $\mu>0$ is a constant. Then for any $p>1$, $t\in[0,T]$, we have
$f(x,t)\leq
CA^{\frac{n}{2p}}\left(\frac{1+A^{\frac{n}{q-n}}\mu^{\frac{q}{q-n}}}{t}+\frac{1}{r^{2}}\right)^{\frac{n+2}{2p}}\left(\int_{0}^{T}\int_{B_{r}(x)}f^{p}dv_{t}\right)^{\frac{1}{p}},$
(2.6)
where $C$ is a constant depending only on $n$, $q$ and $p$.
Proof. Let $\eta$ be a nonnegative Lipschitz function supported in $B_{r}(x)$.
We first consider the case $p\geq 2$. By the partial differential inequality
(2.3) and (2.4), we have
$\displaystyle\frac{1}{p}\frac{\partial}{\partial t}\int\eta^{2}f^{p}dv_{t}$
$\displaystyle\leq$ $\displaystyle\int\eta^{2}f^{p-1}\Delta fdv_{t}+C_{1}\int
uf^{p}\eta^{2}dv_{t},$
where $C_{1}$ is a constant depending only on $n$. Integration by parts
implies
$\displaystyle\int\eta^{2}f^{p-1}\Delta fdv_{t}$ $\displaystyle=$
$\displaystyle-2\int\eta f^{p-1}\nabla\eta\nabla
fdv_{t}-(p-1)\int\eta^{2}f^{p-2}|\nabla f|^{2}dv_{t}$ $\displaystyle=$
$\displaystyle-\frac{4}{p}\int\left(f^{\frac{p}{2}}\nabla\eta\nabla(\eta
f^{\frac{p}{2}})-|\nabla\eta|^{2}f^{p}\right)dv_{t}-\frac{4(p-1)}{p^{2}}$
$\displaystyle\times\int\left(|\nabla(\eta
f^{\frac{p}{2}})|^{2}+|\nabla\eta|^{2}f^{p}-2f^{\frac{p}{2}}\nabla\eta\nabla(\eta
f^{\frac{p}{2}})\right)dv_{t}$ $\displaystyle=$
$\displaystyle-\frac{4(p-1)}{p^{2}}\int|\nabla(\eta
f^{\frac{p}{2}})|^{2}dv_{t}+\frac{4}{p^{2}}\int|\nabla\eta|^{2}f^{p}dv_{t}$
$\displaystyle+\frac{4p-8}{p^{2}}\int f^{\frac{p}{2}}\nabla\eta\nabla(\eta
f^{\frac{p}{2}})dv_{t}$ $\displaystyle\leq$
$\displaystyle-\frac{2}{p}\int|\nabla(\eta
f^{\frac{p}{2}})|^{2}dv_{t}+\frac{2}{p}\int|\nabla\eta|^{2}f^{p}dv_{t}.$
Here we have used the elementary inequality $2ab\leq a^{2}+b^{2}$. By the
Hölder inequality, we have
$\displaystyle\int uf^{p}\eta^{2}dv_{t}$ $\displaystyle\leq$
$\displaystyle\left(\int
u^{\frac{q}{2}}dv_{t}\right)^{\frac{2}{q}}\left(\int(\eta^{2}f^{p})^{\alpha
q_{1}}dv_{t}\right)^{\frac{1}{q_{1}}}\left(\int(\eta^{2}f^{p})^{(1-\alpha)q_{2}}dv_{t}\right)^{\frac{1}{q_{2}}},$
where $\frac{1}{q_{1}}+\frac{1}{q_{2}}+\frac{2}{q}=1$ and $0<\alpha<1$. Let
$\alpha q_{1}=\frac{n}{n-2}$ and $(1-\alpha)q_{2}=1$. This implies
$q_{1}=\frac{q}{n-2}$, $q_{2}=\frac{q}{q-n}$ and $\alpha=\frac{n}{q}$. Using
the Sobolev inequality (2.1) and the Young inequality, we obtain
$\displaystyle\int uf^{p}\eta^{2}dv_{t}$ $\displaystyle\leq$ $\displaystyle\mu
t^{-\frac{q-n}{q}}\left(\int(\eta^{2}f^{p})^{\frac{n}{n-2}}dv_{t}\right)^{\frac{n-2}{q}}\left(\int\eta^{2}f^{p}dv_{t}\right)^{\frac{q-n}{q}}$
$\displaystyle\leq$ $\displaystyle\mu
t^{-\frac{q-n}{q}}\left(A\int|\nabla(\eta
f^{\frac{p}{2}})|^{2}dv_{t}\right)^{\frac{n}{q}}\left(\int\eta^{2}f^{p}dv_{t}\right)^{\frac{q-n}{q}}$
$\displaystyle\leq$ $\displaystyle\frac{1}{pC_{1}}\int|\nabla(\eta
f^{\frac{p}{2}})|^{2}dv_{t}+C_{2}p^{\frac{n}{q-n}}\mu^{\frac{q}{q-n}}A^{\frac{n}{q-n}}t^{-1}\int\eta^{2}f^{p}dv_{t}$
for some constant $C_{2}$ depending only on $n$ and $q$. Combining all the
above estimates one has
$\displaystyle\frac{\partial}{\partial
t}\int\eta^{2}f^{p}dv_{t}+\int|\nabla(\eta f^{\frac{p}{2}})|^{2}dv_{t}\leq
2\int|\nabla\eta|^{2}f^{p}dv_{t}$ (2.7)
$\displaystyle\quad\quad\quad\quad\quad\quad+C_{1}C_{2}p^{\frac{q}{q-n}}\mu^{\frac{q}{q-n}}A^{\frac{n}{q-n}}t^{-1}\int\eta^{2}f^{p}dv_{t}.{}$
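As a sanity check on the exponent bookkeeping in the Hölder step above, the choices $q_{1}=\frac{q}{n-2}$, $q_{2}=\frac{q}{q-n}$, $\alpha=\frac{n}{q}$ can be verified symbolically. The following sketch (an illustration with sympy, not part of the proof) confirms the three constraints:

```python
import sympy as sp

n, q = sp.symbols('n q', positive=True)

# The choices made in the Hölder step of the proof of Theorem 2.1
q1 = q / (n - 2)
q2 = q / (q - n)
alpha = n / q

# The three exponents must be conjugate: 1/q1 + 1/q2 + 2/q = 1
assert sp.simplify(1/q1 + 1/q2 + 2/q - 1) == 0

# alpha*q1 must equal the Sobolev exponent n/(n-2), and (1-alpha)*q2 = 1
assert sp.simplify(alpha*q1 - n/(n - 2)) == 0
assert sp.simplify((1 - alpha)*q2 - 1) == 0

print("exponent choices consistent")
```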
For $0<\tau<\tau^{\prime}<T$, let
$\psi(t)=\begin{cases}0,&0\leq t\leq\tau,\\[5.16663pt]
\frac{t-\tau}{\tau^{\prime}-\tau},&\tau\leq t\leq\tau^{\prime},\\[5.16663pt]
1,&\tau^{\prime}\leq t\leq T.\end{cases}$
Multiplying (2.7) by $\psi$, we have
$\displaystyle\frac{\partial}{\partial
t}\left(\psi\int\eta^{2}f^{p}dv_{t}\right)+\psi\int|\nabla(\eta
f^{\frac{p}{2}})|^{2}dv_{t}\leq 2\psi\int|\nabla\eta|^{2}f^{p}dv_{t}$
$\displaystyle\quad\quad\quad\quad\quad\quad+\left(C_{1}C_{2}p^{\frac{q}{q-n}}\mu^{\frac{q}{q-n}}A^{\frac{n}{q-n}}t^{-1}\psi+\psi^{\prime}\right)\int\eta^{2}f^{p}dv_{t}.$
(2.8)
Assume $\tau<\tau^{\prime}<t\leq T$. Since on the time interval
$[\tau,\tau^{\prime}]$
$0\leq\frac{\psi(t)}{t}=\frac{1}{\tau^{\prime}-\tau}-\frac{\tau}{\tau^{\prime}-\tau}\frac{1}{t}\leq\frac{1}{\tau^{\prime}-\tau}\left(1-\frac{\tau}{\tau^{\prime}}\right)=\frac{1}{\tau^{\prime}},$
and on the time interval $[\tau^{\prime},T]$
$\frac{1}{T}\leq\frac{\psi(t)}{t}\leq\frac{1}{\tau^{\prime}},$
we have
$\int_{\tau}^{t}\frac{\psi(t)}{t}\left(\int\eta^{2}f^{p}dv_{t}\right)dt\leq\frac{1}{\tau^{\prime}}\int_{\tau}^{t}\int\eta^{2}f^{p}dv_{t}dt.$
(2.9)
Notice that $0\leq\psi\leq 1$ and
$0\leq\psi^{\prime}\leq\frac{1}{\tau^{\prime}-\tau}$. Integrating the
differential inequality (2.8) from $\tau$ to $t$, we obtain by using (2.9)
$\displaystyle\int\eta^{2}f^{p}dv_{t}+\int_{\tau^{\prime}}^{t}\int|\nabla(\eta
f^{\frac{p}{2}})|^{2}dv_{t}dt\leq
2\int_{\tau}^{t}\int|\nabla\eta|^{2}f^{p}dv_{t}dt$
$\displaystyle\quad\quad\quad\quad+\left(\frac{C_{1}C_{2}p^{\frac{q}{q-n}}\mu^{\frac{q}{q-n}}A^{\frac{n}{q-n}}}{\tau^{\prime}}+\frac{1}{\tau^{\prime}-\tau}\right)\int_{\tau}^{T}\int\eta^{2}f^{p}dv_{t}dt.$
Applying this estimate and the Sobolev inequality we derive
$\displaystyle\quad\quad\int_{\tau^{\prime}}^{T}\int
f^{p(1+\frac{2}{n})}\eta^{2+\frac{1}{n}}dv_{t}dt$ $\displaystyle\leq$
$\displaystyle\int_{\tau^{\prime}}^{T}\left(\int\eta^{2}f^{p}dv_{t}\right)^{\frac{2}{n}}\left(\int
f^{\frac{pn}{n-2}}\eta^{\frac{2n}{n-2}}dv_{t}\right)^{\frac{n-2}{n}}dt$
$\displaystyle\leq$ $\displaystyle A\left(\sup_{\tau^{\prime}\leq t\leq
T}\int\eta^{2}f^{p}\right)^{\frac{2}{n}}\int_{\tau^{\prime}}^{T}\int|\nabla(\eta
f^{\frac{p}{2}})|^{2}dv_{t}dt$ $\displaystyle\leq$ $\displaystyle
A\left[2\int_{\tau}^{t}\int|\nabla\eta|^{2}f^{p}dv_{t}dt+\left(\frac{C_{1}C_{2}p^{\frac{q}{q-n}}\mu^{\frac{q}{q-n}}A^{\frac{n}{q-n}}}{\tau^{\prime}}\right.\right.$
$\displaystyle\quad\quad\quad\left.\left.+\frac{1}{\tau^{\prime}-\tau}\right)\int_{\tau}^{T}\int\eta^{2}f^{p}dv_{t}dt\right]^{1+\frac{2}{n}}.$
For $p\geq p_{0}\geq 2$ and $0\leq\tau\leq T$, we set
$H(p,\tau,r)=\int_{\tau}^{T}\int_{B_{r}(x)}f^{p}dv_{t}dt,$
where $B_{r}(x)$ is the geodesic ball centered at $x$ with radius $r$ measured
in $g_{0}$. Choosing a suitable cut-off function $\eta$ and noting that
$|\nabla\eta|_{t}\leq 2|\nabla\eta|_{0}$, we obtain from the preceding estimate
$\displaystyle H\left(p\left(1+\frac{2}{n}\right),\tau^{\prime},r\right)\leq
AC_{3}\left(\frac{p^{\frac{q}{q-n}}\mu^{\frac{q}{q-n}}A^{\frac{n}{q-n}}}{\tau^{\prime}}+\frac{1}{\tau^{\prime}-\tau}+\frac{1}{(r^{\prime}-r)^{2}}\right)^{1+\frac{2}{n}}H(p,\tau,r^{\prime})^{1+\frac{2}{n}},$ (2.11)
where $0<r<r^{\prime}$, $C_{3}$ is a constant depending only on $n$ and $q$.
Set
$\nu=1+\frac{2}{n},\quad
p_{k}=p_{0}\nu^{k},\quad\tau_{k}=(1-\nu^{-\frac{qk}{q-n}})t,\quad\quad
r_{k}=(1+\nu^{-\frac{qk}{q-n}})r/2.$
Then the inequality (2.11) gives
$H(p_{k+1},\tau_{k+1},r_{k+1})\leq
AC_{3}\left(\frac{1+p_{0}^{\frac{q}{q-n}}\mu^{\frac{q}{q-n}}A^{\frac{n}{q-n}}}{t}+\frac{1}{r^{2}}\right)^{\nu}\beta^{k\nu}H(p_{k},\tau_{k},r_{k})^{\nu},$
where $\beta=\nu^{\frac{2q}{q-n}}$ (we write $\beta$ to avoid confusion with the cut-off function $\eta$). It follows that
$\displaystyle H(p_{k+1},\tau_{k+1},r_{k+1})^{\frac{1}{p_{k+1}}}\leq(AC_{3})^{\frac{1}{p_{k+1}}}\left(\frac{1+p_{0}^{\frac{q}{q-n}}\mu^{\frac{q}{q-n}}A^{\frac{n}{q-n}}}{t}+\frac{1}{r^{2}}\right)^{\frac{1}{p_{k}}}\beta^{\frac{k}{p_{k}}}H(p_{k},\tau_{k},r_{k})^{\frac{1}{p_{k}}}.$
Hence we obtain for any fixed $k$
$\displaystyle H(p_{k+1},\tau_{k+1},r_{k+1})^{\frac{1}{p_{k+1}}}\leq(AC_{3})^{\sum_{j=0}^{k}\frac{1}{p_{j+1}}}\left(\frac{1+p_{0}^{\frac{q}{q-n}}\mu^{\frac{q}{q-n}}A^{\frac{n}{q-n}}}{t}+\frac{1}{r^{2}}\right)^{\sum_{j=0}^{k}\frac{1}{p_{j}}}\beta^{\sum_{j=0}^{k}\frac{j}{p_{j}}}H(p_{0},\tau_{0},r_{0})^{\frac{1}{p_{0}}}.$
Passing to the limit $k\rightarrow\infty$, one concludes
$\displaystyle
f(x,t)\leq(CA)^{\frac{n}{2p_{0}}}\left(\frac{1+(p_{0}\mu)^{\frac{q}{q-n}}A^{\frac{n}{q-n}}}{t}+\frac{1}{r^{2}}\right)^{\frac{n+2}{2p_{0}}}\left(\int_{0}^{T}\int
f^{p_{0}}dv_{t}dt\right)^{\frac{1}{p_{0}}}.$
This proves (2.6) in the case $p\geq 2$.
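The limit $k\rightarrow\infty$ above rests on two geometric sums: with $p_{j}=p_{0}\nu^{j}$ and $\nu=1+\frac{2}{n}$, one has $\sum_{j\geq 0}p_{j}^{-1}=\frac{n+2}{2p_{0}}$ and $\sum_{j\geq 0}p_{j+1}^{-1}=\frac{n}{2p_{0}}$, which are exactly the exponents appearing in (2.6). A quick symbolic verification (an illustration, not part of the proof):

```python
import sympy as sp

n, p0 = sp.symbols('n p0', positive=True)
nu = 1 + 2/n  # the Moser iteration ratio

# Geometric series: sum_{j>=0} nu^(-j) = nu/(nu - 1)
S = nu / (nu - 1)

# sum_{j>=0} 1/p_j = S/p0 gives the exponent (n+2)/(2 p0) of the L^{p0} norm
assert sp.simplify(S/p0 - (n + 2)/(2*p0)) == 0

# sum_{j>=0} 1/p_{j+1} = (S - 1)/p0 gives the exponent n/(2 p0) of (C A)
assert sp.simplify((S - 1)/p0 - n/(2*p0)) == 0

print("Moser iteration exponents match (2.6)")
```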
Assume now that $f$ satisfies (2.3) and $f\geq 0$. We define a sequence of functions
$f_{j}=f+1/j,\quad j\in\mathbb{N}.$
Then $f_{j}$ also satisfies (2.3), and $f_{j}^{p/2}$ is Lipschitz continuous
for $1<p<2$. The same argument as in the case $p\geq 2$ yields
$\displaystyle
f_{j}(x,t)\leq(CA)^{\frac{n}{2p_{0}}}\left(\frac{1+(p_{0}\mu)^{\frac{q}{q-n}}A^{\frac{n}{q-n}}}{t}+\frac{1}{r^{2}}\right)^{\frac{n+2}{2p_{0}}}\left(\int_{0}^{T}\int
f_{j}^{p_{0}}dv_{t}dt\right)^{\frac{1}{p_{0}}}$
for some constant $C$ depending only on $n$ and $q$, where $1<p_{0}<2$.
Passing to the limit $j\rightarrow\infty$, we can see that (2.6) holds when
$1<p<2$. $\hfill\Box$
To proceed we need the following covering lemma due to M. Gromov.
Lemma 2.2 ([4], Proposition 3.11). Let $(M,g)$ be a complete Riemannian
manifold whose Ricci curvature satisfies ${\rm Ric}(g)\geq(n-1)H$. Then
given $r,\epsilon>0$ and $p\in M$, there exists a covering
$B_{r}(p)\subset\cup_{i=1}^{N}B_{\epsilon}(p_{i})$ ($p_{i}\in B_{r}(p)$)
with $N\leq N_{1}(n,Hr^{2},r/\epsilon)$. Moreover, the multiplicity of this
covering is at most $N_{2}(n,Hr^{2})$.
For any complete Riemannian manifold $(M,g_{0})$ of dimension $n$ with $|{\rm
Ric}(g_{0})|\leq K$, it follows from Lemma 2.2 that there exists a
constant $N$ depending only on $K$ and $n$ such that
$B_{2r}(x)\subset\cup_{i=1}^{N}B_{r}(y_{i}),\quad y_{i}\in
B_{\frac{3}{2}r}(x).$ (2.12)
Suppose (2.1) and (2.2) hold for all $x\in M$ and $0\leq t\leq T$,
$g(0)=g_{0}$. Let $f(x,t)$ and $u(x,t)$ be two nonnegative functions
satisfying
$\frac{\partial f}{\partial t}\leq\Delta f+C_{0}f^{2},\quad\frac{\partial
u}{\partial t}\leq\Delta u+C_{0}fu$
on $M\times[0,T]$. Assume that the following hold on $M\times[0,T]$:
$u\leq c(n)f,\quad\frac{\partial}{\partial t}dv_{t}\leq c(n)fdv_{t}.$
Define
$e_{0}(t)=\sup_{x\in M,\,0\leq\tau\leq
t}\left(\int_{B_{r/2}(x)}f^{\frac{n}{2}}dv_{\tau}\right)^{{2}/{n}}.$ (2.13)
Then we have the following estimates for $f$ and $u$.
Proposition 2.3. Let $f$ and $u$ be as above, $A$ be given by (2.1) and
$e_{0}(t)$ be defined by (2.13). Suppose there holds for all $x\in M$
$\left(\int_{B_{r/2}(x)}f_{0}^{\frac{n}{2}}dv_{0}\right)^{\frac{2}{n}}\leq(2N^{1+\frac{2}{n}}n(C_{0}+c(n))A)^{-1},$
where $N=N(n,K)$ is given by (2.12), $f_{0}(x)=f(x,0)$ and
$dv_{0}=dv_{g_{0}}$. Then there exist two constants $C_{1}$ and $C_{2}$
depending only on $n$ and $C_{0}$ such that if $0<t<\min(T,C_{2}N^{-1}r^{2})$,
then $f(x,t)\leq C_{1}t^{-1}$ and
$\displaystyle u(x,t)\leq
C_{1}A^{\frac{n}{n+2}}t^{-\frac{n}{n+2}}\left[\left(\int_{B_{r}(x)}u_{0}^{\frac{n+2}{2}}dv_{0}\right)^{\frac{2}{n+2}}+{r^{-\frac{4}{n+2}}}e_{0}(t)\right].$
Proof. Let $[0,T^{\prime}]\subset[0,T]$ be the maximal interval such that
$e_{0}(T^{\prime})=\sup_{x\in M,\,0\leq t\leq
T^{\prime}}\left(\int_{B_{r/2}(x)}f^{\frac{n}{2}}dv_{t}\right)^{\frac{2}{n}}\leq((C_{0}+c(n))nNA)^{-1}.$
(2.14)
For any cut-off function $\phi$ supported in $B_{r}(x)$, using the same method
as in the derivation of (2.7), we calculate, for $p\leq n$ and $m\leq n$,
$\displaystyle\frac{1}{p}\frac{\partial}{\partial t}\int\phi^{m+2}f^{p}dv_{t}$
$\displaystyle\leq$ $\displaystyle\int\phi^{m+2}f^{p-1}(\Delta
f+C_{0}f^{2})dv_{t}+\frac{c(n)}{p}\int\phi^{m+2}f^{p+1}dv_{t}$
$\displaystyle\leq$ $\displaystyle-\int\nabla(\phi^{m+2}f^{p-1})\nabla
fdv_{t}+\left(C_{0}+\frac{c(n)}{p}\right)$
$\displaystyle\quad\times\left(\int_{B_{2r}(x)}f^{\frac{n}{2}}dv_{t}\right)^{\frac{2}{n}}\left(\int(\phi^{m+2}f^{p})^{\frac{n}{n-2}}dv_{t}\right)^{\frac{n-2}{n}}$
$\displaystyle\leq$
$\displaystyle-\frac{2}{p}\int|\nabla(\phi^{\frac{m}{2}+1}f^{\frac{p}{2}})|^{2}dv_{t}+\frac{2}{p}\int|\nabla\phi^{\frac{m}{2}+1}|^{2}f^{p}dv_{t}$
$\displaystyle+\left(C_{0}+\frac{c(n)}{p}\right)Ne_{0}A\int|\nabla(\phi^{\frac{m}{2}+1}f^{\frac{p}{2}})|^{2}dv_{t}$
$\displaystyle\leq$
$\displaystyle-\frac{1}{p}\int|\nabla(\phi^{\frac{m}{2}+1}f^{\frac{p}{2}})|^{2}dv_{t}+\frac{(m+2)^{2}}{2p}|\nabla\phi|_{\infty}^{2}\int\phi^{m}f^{p}dv_{t}.$
Here in the second and third inequalities we used (2.12) and the Sobolev
inequality. Hence
$\frac{\partial}{\partial
t}\int\phi^{m+2}f^{p}dv_{t}+\int|\nabla(\phi^{\frac{m}{2}+1}f^{\frac{p}{2}})|^{2}dv_{t}\leq\frac{(m+2)^{2}}{2}|\nabla\phi|_{\infty}^{2}\int\phi^{m}f^{p}dv_{t}.$
(2.15)
Take $\phi$ supported in $B_{r}(x)$, which is 1 on $B_{r/2}(x)$ and
$|\nabla_{g_{0}}\phi|_{\infty}^{2}\leq 5/r^{2}$. Since
$\frac{1}{2}g_{ij}(0)\leq g_{ij}(t)\leq 2g_{ij}(0)$, we have
$|\nabla_{g(t)}\phi|_{\infty}^{2}\leq 10/r^{2}$. Taking $p=\frac{n}{2}$ in
(2.15) and integrating it from $0$ to $t$, we obtain by using (2.12) again
$\displaystyle\int_{B_{r/2}(x)}f^{\frac{n}{2}}dv_{t}$ $\displaystyle\leq$
$\displaystyle\int_{B_{r}(x)}f_{0}^{\frac{n}{2}}dv_{0}+\frac{2(m+2)^{2}}{r^{2}}\int_{0}^{t}\int\phi^{m}f^{\frac{n}{2}}dv_{t}dt$
$\displaystyle\leq$ $\displaystyle
N\left(2N^{1+\frac{2}{n}}n(C_{0}+c(n))A\right)^{-\frac{n}{2}}+{2(m+2)^{2}}{r^{-2}}N(e_{0}(t))^{\frac{n}{2}}t.$ (2.16)
Noting that $x$ is arbitrary, one concludes
$\left(1-{2(m+2)^{2}}{r^{-2}}Nt\right)(e_{0}(t))^{\frac{n}{2}}\leq
N\left(2N^{1+\frac{2}{n}}n(C_{0}+c(n))A\right)^{-\frac{n}{2}}.$
If $T^{\prime}<\frac{r^{2}}{8(m+2)^{2}N}$, then for all $t\in[0,T^{\prime}]$
$e_{0}(t)<\left(\frac{4}{3}\right)^{{2}/{n}}\left(2Nn(C_{0}+c(n))A\right)^{-1}.$
This contradicts the maximality of $[0,T^{\prime}]$. We can therefore assume
that $T^{\prime}\geq\min(C_{2}N^{-1}r^{2},T)$.
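The contradiction step above is essentially exponent arithmetic: raising $e_{0}(t)^{n/2}\leq\frac{4}{3}N\left(2N^{1+\frac{2}{n}}n(C_{0}+c(n))A\right)^{-\frac{n}{2}}$ to the power $\frac{2}{n}$ collapses the net exponent of $N$ to $-1$, giving the strict bound $(4/3)^{2/n}(2Nn(C_{0}+c(n))A)^{-1}$, which is below the threshold $((C_{0}+c(n))nNA)^{-1}$ in (2.14) since $(4/3)^{2/n}<2$. A symbolic check of the $N$-exponent (an illustration, not part of the proof):

```python
import sympy as sp

n = sp.Symbol('n', positive=True)

# Exponent of N before raising to the power 2/n:
#   N^1 from the covering bound, N^{-(1+2/n)*(n/2)} from the threshold
before = 1 - (1 + 2/n) * (n/2)
# After raising to the power 2/n it must collapse to a single factor 1/N
assert sp.simplify(before * (2/n)) == -1

# The remaining factors 2, n, C0+c(n), A each carry exponent (-n/2)*(2/n) = -1
assert sp.simplify((-n/2) * (2/n)) == -1

print("contradiction-step exponents check out")
```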
It follows from (2.15) that
$\displaystyle\frac{\partial}{\partial
t}\left(t\int\phi^{m+2}f^{p}dv_{t}\right)$ $\displaystyle=$ $\displaystyle
t\frac{\partial}{\partial
t}\int\phi^{m+2}f^{p}dv_{t}+\int\phi^{m+2}f^{p}dv_{t}$ $\displaystyle\leq$
$\displaystyle\left(\frac{(m+2)^{2}}{2}|\nabla\phi|_{\infty}^{2}t+1\right)\int\phi^{m}f^{p}dv_{t}.$
When $0\leq t\leq\min(C_{2}N^{-1}r^{2},T)$, integrating the above inequality
from $0$ to $t$, we have
$\displaystyle\int\phi^{m+2}f^{p}dv_{t}\leq\left(\frac{2(m+2)^{2}}{r^{2}}+\frac{1}{t}\right)\int_{0}^{t}\int\phi^{m}f^{p}dv_{t}dt\leq c\,t^{-1}\int_{0}^{t}\int\phi^{m}f^{p}dv_{t}dt$ (2.17)
for some constant $c$ depending only on $n$. Moreover, integrating (2.15) from
$0$ to $t$, we derive
$\int_{0}^{t}\int|\nabla(\phi^{\frac{m}{2}+1}f^{\frac{p}{2}})|^{2}dv_{t}dt\leq\int\phi^{m+2}f_{0}^{p}dv_{0}+\frac{2(m+2)^{2}}{r^{2}}\int_{0}^{t}\int\phi^{m}f^{p}dv_{t}dt.$
(2.18)
Noting that $\frac{1}{r^{2}}\leq\frac{C_{2}}{Nt}$ and $m\leq n$, we calculate
by using (2.17) and (2.18)
$\displaystyle\int_{B_{r/2}(x)}f^{\frac{n}{2}+1}dv_{t}$ $\displaystyle\leq$
$\displaystyle\int_{B_{r}(x)}\phi^{m+4}f^{\frac{n}{2}+1}dv_{t}$
$\displaystyle\leq$ $\displaystyle
Ct^{-1}\int_{0}^{t}\int\phi^{m+2}f^{\frac{n}{2}+1}dv_{t}dt$
$\displaystyle\leq$ $\displaystyle
Ct^{-1}\int_{0}^{t}\left(\int_{B_{r}(x)}f^{\frac{n}{2}}dv_{t}\right)^{\frac{2}{n}}\left(\int(\phi^{m+2}f^{\frac{n}{2}})^{\frac{n}{n-2}}dv_{t}\right)^{\frac{n-2}{n}}dt$
$\displaystyle\leq$ $\displaystyle
Ct^{-1}N^{\frac{2}{n}}e_{0}(t)A\int_{0}^{t}\int|\nabla(\phi^{\frac{m}{2}+1}f^{\frac{n}{4}})|^{2}dv_{t}dt$
$\displaystyle\leq$ $\displaystyle
Ct^{-1}N^{\frac{2}{n}}e_{0}(t)A(N(e_{0}(t))^{\frac{n}{2}}+N(e_{0}(t))^{\frac{n}{2}}t)$
$\displaystyle\leq$ $\displaystyle
CN^{1+\frac{2}{n}}A(e_{0}(t))^{1+\frac{n}{2}}t^{-1},$
and hence
$\left(\int_{B_{r/2}(x)}f^{\frac{n+2}{2}}dv_{t}\right)^{\frac{2}{n+2}}\leq
CNA^{\frac{2}{n+2}}e_{0}(t)t^{-\frac{2}{n+2}},$ (2.19)
where $C$ is a constant depending only on $n$; here and in the sequel we
denote various constants by the same letter $C$. Setting $q=n+2$,
$p=\frac{n}{2}$ and $\mu=CNA^{\frac{2}{n+2}}e_{0}(T^{\prime})$, we obtain by
employing Theorem 2.1
$\displaystyle f(x,t)$ $\displaystyle\leq$ $\displaystyle
CA\left(\frac{1+A^{\frac{n}{2}}\mu^{\frac{n+2}{2}}}{t}+\frac{1}{r^{2}}\right)^{\frac{n+2}{n}}\left(\int_{0}^{t}\int_{B_{r}(x)}f^{\frac{n}{2}}dv_{t}dt\right)^{\frac{2}{n}}$
$\displaystyle\leq$ $\displaystyle
CAe_{0}(T^{\prime}){t}^{\frac{2}{n}}\left(\frac{1+A^{\frac{n}{2}}\mu^{\frac{n+2}{2}}}{t}+\frac{1}{r^{2}}\right)^{\frac{n+2}{n}}$
for $t\in[0,T^{\prime}]$. Recalling the definition of $e_{0}(T^{\prime})$ (see
(2.14) above), we can see that $Ae_{0}(T^{\prime})$ is bounded and
$A^{\frac{n}{2}}\mu^{\frac{n+2}{2}}=(CNAe_{0}(T^{\prime}))^{\frac{n+2}{2}}$
(2.20)
is also bounded. Therefore when $0<t<\min(T,C_{2}N^{-1}r^{2})$, $f(x,t)\leq
C_{1}t^{-1}$ for some constants $C_{1}$ and $C_{2}$ depending only on $n$,
$C_{0}$.
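The exponent arithmetic behind (2.19), namely raising the bound $CN^{1+\frac{2}{n}}A(e_{0}(t))^{1+\frac{n}{2}}t^{-1}$ to the power $\frac{2}{n+2}$, can also be checked symbolically (an illustration, not part of the proof; the factor $N^{2/n}\leq N$ is absorbed into $CN$):

```python
import sympy as sp

n = sp.Symbol('n', positive=True)
p = 2 / (n + 2)  # the power applied to both sides of the bound before (2.19)

# exponents of N, A, e_0(t), t in the bound on the integral of f^{(n+2)/2}
before = {'N': 1 + 2/n, 'A': sp.Integer(1), 'e0': 1 + n/2, 't': sp.Integer(-1)}
after = {k: sp.simplify(v * p) for k, v in before.items()}

assert sp.simplify(after['N'] - 2/n) == 0        # N^{2/n}, bounded by N
assert sp.simplify(after['A'] - 2/(n + 2)) == 0  # A^{2/(n+2)} as in (2.19)
assert sp.simplify(after['e0'] - 1) == 0         # e_0(t) appears linearly
assert sp.simplify(after['t'] + 2/(n + 2)) == 0  # t^{-2/(n+2)} as in (2.19)

print("exponents of (2.19) check out")
```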
Using $u\leq c(n)f$ and $\partial_{t}dv_{t}\leq c(n)fdv_{t}$ and mimicking the
method of proving (2.15), we obtain
$\frac{\partial}{\partial
t}\int\phi^{m+2}u^{p}dv_{t}+\int|\nabla(\phi^{\frac{m}{2}+1}u^{\frac{p}{2}})|^{2}dv_{t}\leq\frac{C}{r^{2}}\int\phi^{m}u^{p}dv_{t}.$
(2.21)
Taking $m=0$, $p=n/2$ and integrating this inequality, we have by using
(2.12)
$\int_{0}^{t}\int|\nabla(\phi
u^{\frac{n}{4}})|^{2}dv_{t}dt\leq\int_{B_{r}(x)}u_{0}^{\frac{n}{2}}dv_{0}+\frac{C}{r^{2}}N(e_{0}(t))^{\frac{n}{2}}t.$
(2.22)
Integrating (2.21) with $m=2$, $p={(n+2)}/{2}$, and using the Sobolev
inequality (2.1), we obtain
$\displaystyle\int_{B_{r/2}(x)}u^{\frac{n+2}{2}}dv_{t}$ $\displaystyle\leq$
$\displaystyle\int_{B_{r}(x)}u_{0}^{\frac{n+2}{2}}dv_{0}+\frac{C}{r^{2}}\int_{0}^{t}\int\phi^{2}u^{\frac{n+2}{2}}dv_{t}dt$
$\displaystyle\leq$
$\displaystyle\int_{B_{r}(x)}u_{0}^{\frac{n+2}{2}}dv_{0}+\frac{C}{r^{2}}e_{0}(t)A\int_{0}^{t}\int|\nabla(\phi
u^{\frac{n}{4}})|^{2}dv_{t}dt,$
which together with (2.22) and (2.12) gives
$\displaystyle\int_{B_{r/2}(x)}u^{\frac{n+2}{2}}dv_{t}$ $\displaystyle\leq$
$\displaystyle\int_{B_{r}(x)}u_{0}^{\frac{n+2}{2}}dv_{0}+\frac{C}{r^{2}}e_{0}(t)A\left(\int_{B_{r}(x)}u_{0}^{\frac{n}{2}}dv_{0}+\frac{C}{r^{2}}Ne_{0}(t)^{\frac{n}{2}}t\right)$
(2.23) $\displaystyle\leq$
$\displaystyle\int_{B_{r}(x)}u_{0}^{\frac{n+2}{2}}dv_{0}+\frac{C}{r^{2}}NA(e_{0}(t))^{1+\frac{n}{2}}\left(1+\frac{1}{r^{2}}t\right).$
Notice that when $0\leq t\leq\min(C_{2}r^{2}/N,T)$, (2.19) implies
$\int_{B_{r/2}(x)}f^{\frac{n+2}{2}}dv_{t}\leq\mu t^{-1}.$
Without loss of generality we may assume $A>1$ (otherwise we can replace
$A$ by $A+1$). In view of (2.20) and (2.23), we obtain by using Theorem 2.1
in the case $q=n+2$ and $p=(n+2)/2$
$\displaystyle u(x,t)$ $\displaystyle\leq$ $\displaystyle
CA^{\frac{n}{n+2}}\left(\frac{1}{t}+\frac{1}{r^{2}}\right)\left(\int_{0}^{t}\int_{B_{r/2}(x)}u^{\frac{n+2}{2}}dv_{t}dt\right)^{\frac{2}{n+2}}$
$\displaystyle\leq$ $\displaystyle
CA^{\frac{n}{n+2}}t^{-\frac{n}{n+2}}\left[\left(\int_{B_{r}(x)}u_{0}^{\frac{n+2}{2}}dv_{0}\right)^{\frac{2}{n+2}}+{r^{-\frac{4}{n+2}}}e_{0}(t)\right],$
provided that $0\leq t\leq\min(C_{2}r^{2}/N,T)$. $\hfill\Box$
Remark 2.4. We remark that Theorem 2.1 and Proposition 2.3 are very similar to
Theorem A.1 and Corollary A.10 of Dean Yang's paper [17], respectively. The
difference is that we work with heat-flow-type inequalities, while Dean Yang
works with heat-flow-type inequalities involving a cut-off function. It seems
that Dean Yang's Corollary A.10 is stronger than our Proposition 2.3, which is
nevertheless sufficient for our purposes here. One should also compare Theorem
2.1 with ([6, 7], Theorem 2.1), where Dai-Wei-Ye obtained a similar result by
a similar method. Here the constant $C$ in (2.6) depends only on $n$, $q$ and
$p$, but not on the Sobolev constant $A$. In [6, 7], by contrast, the Sobolev
constants $C_{S}(t)$ along the flow are bounded, so the precise dependence of
$C$ on $C_{S}$ is not an issue there.
## 3 Short time existence of the Ricci flow
In this section we focus on closed Riemannian manifolds. Precisely, following
the lines of [13, 18], we study the short time existence of the Ricci flow and
give the proof of Theorem 1.1. Assume $(M,g_{0})$ is a closed Riemannian
manifold of dimension $n(\geq 3)$ with $|{\rm Ric}(g_{0})|\leq K$. Consider
the Ricci flow
$\left\{\begin{aligned}\frac{\partial g}{\partial t}&=-2\,{\rm Ric}(g),\\
g(0)&=g_{0}.\end{aligned}\right.$ (3.1)
It is well known [10] that the Riemannian curvature tensor and the Ricci
curvature tensor satisfy the following evolution equations
$\displaystyle\frac{\partial{\rm Rm}}{\partial t}$ $\displaystyle=$
$\displaystyle\Delta{\rm Rm}+{\rm Rm}*{\rm Rm},$ (3.2)
$\displaystyle\frac{\partial{\rm Ric}}{\partial t}$ $\displaystyle=$
$\displaystyle\Delta{\rm Ric}+{\rm Rm}*{\rm Ric},$ (3.3)
where ${\rm Rm}*{\rm Rm}$ denotes a tensor quadratic in ${\rm Rm}$, and ${\rm
Rm}*{\rm Ric}$ is understood in a similar way. It follows that
$\displaystyle\frac{\partial|{\rm Rm}|}{\partial t}$ $\displaystyle\leq$
$\displaystyle\Delta|{\rm Rm}|+c(n)|{\rm Rm}|^{2},$ (3.4)
$\displaystyle{}\frac{\partial|{\rm Ric}|}{\partial t}$ $\displaystyle\leq$
$\displaystyle\Delta|{\rm Ric}|+c(n)|{\rm Rm}||{\rm Ric}|.$ (3.5)
To prove Theorem 1.1, it suffices to prove the following:
Proposition 3.1. Let $(M,g_{0})$ be a closed Riemannian manifold of dimension
$n(\geq 3)$ with $|{\rm Ric}(g_{0})|\leq K$. Suppose there exists a constant
$A_{0}>0$ such that the following local Sobolev inequalities hold for all
$x\in M$
$\|u\|_{{2n}/{(n-2)}}^{2}\leq A_{0}\|\nabla u\|_{2}^{2},\quad\forall u\in
C_{0}^{\infty}(B_{r}(x)).$
Then there exist constants $C_{1}$, $C_{3}$ depending only on $n$ and $K$, and
$C_{2}$ depending only on $n$ such that for $r\leq 1$, if
$\left(\int_{B_{r/2}(x)}|{\rm
Rm}(g_{0})|^{\frac{n}{2}}dv_{g_{0}}\right)^{2/n}\leq(C_{1}A_{0})^{-1}$
for all $x\in M$, then the Ricci flow (3.1) has a smooth solution for $0\leq
t\leq T$, where $T\geq C_{2}\min(r^{2}/N,K^{-1})$, such that for all $x\in M$
$\displaystyle\frac{1}{2}g_{0}\,\leq\,g(t)\,\leq 2g_{0},$ (3.6)
$\displaystyle\|u\|_{{2n}/{(n-2)}}^{2}\leq 4A_{0}\|\nabla
u\|_{2}^{2},\quad\forall u\in C_{0}^{\infty}(B_{r}(x)),$ (3.7)
$\displaystyle\left(\int_{B_{r/2}(x)}|{\rm
Rm}(g(t))|^{\frac{n}{2}}dv_{t}\right)^{2/n}\leq 2N(C_{1}A_{0})^{-1}.$ (3.8)
Proof. It is well known (see for example [8, 10]) that a smooth solution
$g(t)$ of the Ricci flow (3.1) exists for a short time interval and is unique.
Let $[0,T_{\rm max})$ be the maximal time interval on which $g(t)$ exists and
(3.6)-(3.8) hold. Clearly $T_{\rm max}>0$ since the strict inequalities in
(3.6)-(3.8) hold at $t=0$. Suppose $T_{\rm
max}<T_{0}=C_{2}\min(r^{2}/N,K^{-1})$ for some constant $C_{2}$ to be
determined later. Since the Ricci curvature satisfies (3.5), it follows from
Proposition 2.3 that for $0\leq t\leq T^{\prime}$,
$\displaystyle{}|{\rm Ric}(g(t))|$ $\displaystyle\leq$ $\displaystyle
CA_{0}^{\frac{n}{n+2}}t^{-\frac{n}{n+2}}\left[\left(\int_{B_{r}(x)}|{\rm
Ric}(g_{0})|^{\frac{n+2}{2}}dv_{0}\right)^{\frac{2}{n+2}}+{r^{-\frac{4}{n+2}}}e_{0}(T^{\prime})\right]$
$\displaystyle\leq$ $\displaystyle
CA_{0}^{\frac{n}{n+2}}t^{-\frac{n}{n+2}}\left(K^{\frac{2}{n+2}}(e_{0}(T^{\prime}))^{\frac{n}{n+2}}+{r^{-\frac{4}{n+2}}}e_{0}(T^{\prime})\right)$
$\displaystyle\leq$ $\displaystyle
C(K^{\frac{2}{n+2}}+r^{-\frac{4}{n+2}})t^{-\frac{n}{n+2}},$ (3.9)
where $T^{\prime}$ and $e_{0}(T^{\prime})$ are defined by (2.14) in the case
$f$ is replaced by $|{\rm Rm}|$. It follows that for all $x\in M$, $u\in
C_{0}^{\infty}(B_{r}(x))$ and $0\leq t\leq T^{\prime}$,
$\displaystyle\left|\frac{d}{dt}\int_{B_{r}(x)}|u|^{\frac{2n}{n-2}}dv_{t}\right|$
$\displaystyle\leq$ $\displaystyle 2|{\rm
Ric}(g(t))|_{\infty}\int_{B_{r}(x)}|u|^{\frac{2n}{n-2}}dv_{t}$
$\displaystyle\leq$ $\displaystyle
Ct^{-\frac{n}{n+2}}\int_{B_{r}(x)}|u|^{\frac{2n}{n-2}}dv_{t}.$
This implies
$e^{-C\,t^{\frac{2}{n+2}}}\int_{B_{r}(x)}|u|^{\frac{2n}{n-2}}dv_{0}\leq\int_{B_{r}(x)}|u|^{\frac{2n}{n-2}}dv_{t}\leq
e^{C\,t^{\frac{2}{n+2}}}\int_{B_{r}(x)}|u|^{\frac{2n}{n-2}}dv_{0}.$
Similarly we have
$\left|\frac{d}{dt}\int_{B_{r}(x)}|\nabla u|^{2}dv_{t}\right|\leq
Ct^{-\frac{n}{n+2}}\int_{B_{r}(x)}|\nabla u|^{2}dv_{t},$
and
$e^{-C\,t^{\frac{2}{n+2}}}\int_{B_{r}(x)}|\nabla
u|^{2}dv_{0}\leq\int_{B_{r}(x)}|\nabla u|^{2}dv_{t}\leq
e^{C\,t^{\frac{2}{n+2}}}\int_{B_{r}(x)}|\nabla u|^{2}dv_{0}.$
Hence if $T_{\rm max}<T_{0}=C_{2}\min(r^{2}/N,K^{-1})$ for sufficiently small
$C_{2}$ depending only on $n$ and $K$, then (3.7) holds with strict
inequality.
To show that (3.6) holds with strict inequality, we fix a tangent vector $v$
and calculate
$\displaystyle\frac{d}{dt}|v|_{g(t)}^{2}=\frac{d}{dt}(g_{ij}(t)v^{i}v^{j})=-2{\rm
Ric}_{ij}v^{i}v^{j},$
which together with (3.9) gives
$\left|\frac{d}{dt}\log|v|_{g(t)}^{2}\right|\leq
C(K^{\frac{2}{n+2}}+r^{-\frac{4}{n+2}})t^{-\frac{n}{n+2}}.$
Therefore we obtain for $0\leq t<C_{2}\min(r^{2},K^{-1})$,
$\frac{1}{2}|v|_{g(0)}^{2}<|v|_{g(t)}^{2}<2|v|_{g(0)}^{2}.$
Using the same method as in the derivation of (2.16), one can see that the
strict inequality in (3.8) holds when $0\leq t<C_{2}\min(r^{2},K^{-1})$ for
sufficiently small $C_{2}$. By Proposition 2.3, $|{\rm Rm}(g(t))|_{\infty}\leq
Ct^{-1}$ for all $t\in[0,T_{\rm max}]$. Hence one can extend $g(t)$ smoothly
beyond $T_{\rm max}$ with (3.6)-(3.8) still holding. This contradicts the
assumed maximality of $T_{\rm max}$. Therefore $T_{\rm max}\geq T_{0}$.
$\hfill\Box$
Proof of Theorem 1.1. By Proposition 3.1, there exists a unique solution
$g(t)$ of the Ricci flow (3.1) such that (3.6)-(3.8) hold. Then by Proposition
2.3, one concludes
$|{\rm Rm}(g(t))|\leq Ct^{-1},\quad|{\rm Ric}(g(t))|\leq Ct^{-\frac{n}{n+2}}$
for $t\in[0,T_{0}]$. This completes the proof of Theorem 1.1. $\hfill\Box$
## 4 Applications
In this section, we will prove Theorem 1.2 by applying Theorem 1.1. It follows
from (1.4)-(1.6) that the deformed metric $g(t)$ has uniform sectional
curvature bounds away from $t=0$ and $g(t)$ is close to $g(0)$ when $t$ is
close to $0$. We first show that diameters of the flow are under control,
namely
Lemma 4.1. Let $g(t)$ be the Ricci flow in Theorem 1.1. Then for $0\leq t\leq
c_{1}\min(r^{2},K^{-1})$, there exists a constant $c$ depending only on $n$
and $K$ such that
$e^{-ct^{\frac{2}{n+2}}}{\rm diam}(g_{0})\leq{\rm diam}(g(t))\leq
e^{ct^{\frac{2}{n+2}}}{\rm diam}(g_{0}),$ (4.1)
where ${\rm diam}(g(t))$ denotes the diameter of the manifold $(M,g(t))$.
Proof. Let $\gamma:[0,1]\rightarrow M$ be any smooth regular curve. Denote the
length of $\gamma$ by
$l_{\gamma}(t)=\int_{0}^{1}|\dot{\gamma}(s)|_{g(t)}ds.$
Using $\frac{\partial}{\partial t}g(t)=-2{\rm Ric}(g(t))$ and the Ricci bound
in Theorem 1.1, we calculate
$\left|\frac{d}{dt}l_{\gamma}(t)\right|=\left|\int_{0}^{1}\frac{-{\rm
Ric}_{g(t)}(\dot{\gamma}(s),\dot{\gamma}(s))}{|\dot{\gamma}(s)|_{g(t)}}ds\right|\leq
ct^{-\frac{n}{n+2}}l_{\gamma}(t).$
This implies
$l_{\gamma}(0)e^{-ct^{\frac{2}{n+2}}}\leq l_{\gamma}(t)\leq
l_{\gamma}(0)e^{ct^{\frac{2}{n+2}}}.$
It follows that
$e^{-ct^{\frac{2}{n+2}}}{\rm dist}_{g_{0}}(p,q)\leq{\rm dist}_{g(t)}(p,q)\leq
e^{ct^{\frac{2}{n+2}}}{\rm dist}_{g_{0}}(p,q),$
where ${\rm dist}_{g(t)}(p,q)$ denotes the distance between $p$ and $q$ in the
metric $g(t)$. This gives the desired result. $\hfill\Box$
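The passage from $\left|\frac{d}{dt}\log l_{\gamma}(t)\right|\leq ct^{-\frac{n}{n+2}}$ to the two-sided bound above integrates a singular but integrable factor: $\int_{0}^{t}s^{-\frac{n}{n+2}}ds=\frac{n+2}{2}t^{\frac{2}{n+2}}$, with the constant $\frac{n+2}{2}$ absorbed into $c$. This can be confirmed symbolically for low dimensions (an illustration, not part of the proof):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# For several dimensions n, verify
#   int_0^t s^(-n/(n+2)) ds = (n+2)/2 * t^(2/(n+2));
# the singularity at s = 0 is integrable because n/(n+2) < 1.
for n in [3, 4, 5]:
    integral = sp.integrate(s**sp.Rational(-n, n + 2), (s, 0, t))
    expected = sp.Rational(n + 2, 2) * t**sp.Rational(2, n + 2)
    assert sp.simplify(integral - expected) == 0

print("Gronwall exponent t^(2/(n+2)) confirmed")
```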
The following proposition is a corollary of Gromov’s almost flat manifold
theorem [9]:
Proposition 4.2 (Gromov). Let $(M,g)$ be a compact Riemannian manifold of
dimension $n$. Assume the sectional curvature is bounded, i.e., $|{\rm
Sec}(g)|\leq\Lambda$. Then there exists a constant $\epsilon_{0}$ depending
only on $n$ such that if
$\Lambda({\rm diam}(g))^{2}\leq\epsilon_{0},$ (4.2)
then the universal covering of $(M,g)$ is diffeomorphic to $\mathbb{R}^{n}$.
If in addition the fundamental group $\pi(M)$ is commutative, then $(M,g)$ is
diffeomorphic to a torus.
Proof of Theorem 1.2. Let $g(t)$ be the unique solution to the Ricci flow (1.3).
By (1.5), for $0\leq t\leq c_{1}\min(r^{2},K^{-1})$,
$|{\rm Sec}(g(t))|\leq ct^{-1},$
where ${\rm Sec}(g(t))$ denotes the sectional curvature of $(M,g(t))$. Let
$\epsilon_{0}$ be given by Proposition 4.2. Take
$t_{0}=c_{1}\min(r^{2},K^{-1})$ and
$\delta=\left(\epsilon_{0}t_{0}c^{-1}e^{-2ct_{0}^{\frac{2}{n+2}}}\right)^{1/2}.$
If ${\rm diam}(g_{0})\leq\delta$, then we obtain by Lemma 4.1
$|{\rm Sec}(g(t_{0}))|\,({\rm diam}(g(t_{0})))^{2}\leq
ct_{0}^{-1}e^{2ct_{0}^{\frac{2}{n+2}}}({\rm
diam}(g_{0}))^{2}\leq\epsilon_{0}.$
Applying Proposition 4.2 to $g(t_{0})$, we conclude Theorem 1.2. $\hfill\Box$
Acknowledgements. The author is partly supported by the program for NCET. He
thanks Ye Li for introducing this interesting topic to him, and the referee
for valuable comments and suggestions, which improved this paper.
## References
* [1] M. Anderson: The $L^{2}$ structure of moduli spaces of Einstein metrics on 4-manifolds. Geom. Funct. Anal., 2: 29-89, 1992.
* [2] S. Bando: Real analyticity of solutions of Hamilton’s equation. Math. Z., 195: 93-97, 1987.
* [3] J. Bemelmans, Min-Oo and E. Ruh: Smoothing Riemannian metrics. Math. Z., 188: 69-74, 1984.
* [4] J. Cheeger: Critical points of distance functions and applications to geometry. Lecture Notes in Math., Springer Verlag, 1504 (1991) 1-38.
* [5] J. Cheeger and G. Tian: Curvature and injectivity radius estimates for Einstein 4-manifolds. J. Amer. Math. Soc., 19: 487-525, 2006.
* [6] X. Dai, G. Wei and R. Ye: Smoothing Riemannian metrics with Ricci curvature bounds. Manuscripta Math., 90: 49-61, 1996.
* [7] X. Dai, G. Wei and R. Ye: Smoothing Riemannian metrics with Ricci curvature bounds. arXiv: dg-ga/9411014, 1994.
* [8] D. DeTurck: Deforming metrics in the direction of their Ricci tensors. J. Diff. Geom., 18: 157-162, 1983.
* [9] M. Gromov: Almost flat manifolds. J. Diff. Geom., 13: 231-241, 1978.
* [10] R. Hamilton: Three-manifolds with positive Ricci curvature. J. Diff. Geom., 17: 255-306, 1982.
* [11] Y. Li: Local volume estimate for manifolds with $L^{2}$-bounded curvature. J. Geom. Anal., 17: 495-511, 2007.
* [12] Y. Li: Smoothing Riemannian metrics with bounded Ricci curvatures in dimension four. Advances in Math., 223: 1924-1957, 2010.
* [13] Y. Li: Smoothing Riemannian metrics with bounded Ricci curvatures in dimension four, II. arXiv: 0911.3104v1, 2009.
* [14] W. Shi: Deforming the metric on complete Riemannian manifolds. J. Diff. Geom., 30: 223-301, 1989.
* [15] G. Tian and J. Viaclovsky: Bach-flat asymptotically locally Euclidean metrics. Invent. Math., 160: 357-415, 2005.
* [16] G. Xu: Short-time existence of the Ricci flow on noncompact Riemannian manifolds. arXiv: 0907.5604v1, 2009.
* [17] D. Yang: $L^{p}$ pinching and compactness theorems for compact Riemannian manifolds. Séminaire de théorie spectrale et géométrie, Chambéry-Grenoble, 1987-1988: 81-89.
* [18] D. Yang: Convergence of Riemannian manifolds with integral bounds on curvature I. Ann. Sci. Ecole Norm. Sup., 25: 77-105, 1992.
| arxiv-papers | 2011-04-09T12:30:50 | 2024-09-04T02:49:18 | { "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/", "authors": "Yunyan Yang", "submitter": "Yunyan Yang", "url": "https://arxiv.org/abs/1104.1702" } |
1104.2010
|
# Notes on Inhomogeneous Quantum Walks
Yutaka Shikano (shikano@th.phys.titech.ac.jp), Department of Physics, Tokyo
Institute of Technology, Meguro, Tokyo 152-8551, Japan, and Department of
Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA
02139, USA.
Hosho Katsura (hosho.katsura@gakushuin.ac.jp), Department of Physics,
Gakushuin University, Toshima, Tokyo 171-8588, Japan, and Kavli Institute for
Theoretical Physics, University of California, Santa Barbara, CA 93106, USA.
###### Abstract
We study a class of discrete-time quantum walks with inhomogeneous coins
defined in [Y. Shikano and H. Katsura, Phys. Rev. E 82, 031122 (2010)]. We
establish symmetry properties of the spectrum of the evolution operator, which
resembles the Hofstadter butterfly.
###### pacs:
03.65.-w, 71.23.An, 02.90.+p
Throughout this paper, we focus on a one-dimensional discrete-time quantum
walk (DTQW) with two-dimensional coins. The DTQW is defined as a quantum-
mechanical analogue of the classical random walk. The Hilbert space of the
system is a tensor product ${\cal H}_{p}\otimes{\cal H}_{c}$, where ${\cal
H}_{p}$ is the position space of a quantum walker spanned by the complete
orthonormal basis $|{n}\rangle$ ($n\in\mathbb{Z}$) and ${\cal H}_{c}$ is the
coin Hilbert space spanned by the two orthonormal states
$|{L}\rangle=(1,0)^{{\bf T}}$ and $|{R}\rangle=(0,1)^{{\bf T}}$. Here, the
superscript ${\bf T}$ denotes matrix transpose. The one-step dynamics is
described by a unitary operator $U=WC$ with
$\displaystyle C$
$\displaystyle=\sum_{n}\left[(a_{n}|{n,L}\rangle+c_{n}|{n,R}\rangle)\langle{n,L}|+(d_{n}|{n,R}\rangle+b_{n}|{n,L}\rangle)\langle{n,R}|\right],$
(1) $\displaystyle W$ $\displaystyle=\sum_{n}\left(|n-1,L\rangle\langle
n,L|+|n+1,R\rangle\langle n,R|\right),$ (2)
where $|{n,\xi}\rangle=:|{n}\rangle\otimes|{\xi}\rangle\in{\cal
H}_{p}\otimes{\cal H}_{c}\ (\xi=L,R)$ and the coefficients at each position
satisfy the following relations: $|a_{n}|^{2}+|c_{n}|^{2}=1$,
$a_{n}\overline{b}_{n}+c_{n}\overline{d}_{n}=0$,
$c_{n}=-\Delta_{n}\overline{b}_{n}$, $d_{n}=\Delta_{n}\overline{a}_{n}$, where
$\Delta_{n}=a_{n}d_{n}-b_{n}c_{n}$ with $|\Delta_{n}|=1$. Two operators $C$
and $W$ are called coin and shift operators, respectively. The probability
distribution at the position $n$ at the $t$th step is then defined by
$\Pr(n;t)=\sum_{\xi\in\\{L,R\\}}\left|\langle{n,\xi}|U^{t}|{0,\phi}\rangle\right|^{2}.$
(3)
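As a concrete illustration (our sketch, not code from the paper), the dynamics behind Eqs. (1)-(3) can be simulated directly by storing the two coin amplitudes per site and alternating the coin and shift maps. The site-dependent rotation coin used below anticipates the inhomogeneous model of Eq. (4); the function names are ours.

```python
import numpy as np

def rotation_coin(alpha, theta=0.0):
    """Site-dependent rotation coin C_n with angle 2*pi*(alpha*n + theta)."""
    def C(n):
        a = 2.0 * np.pi * (alpha * n + theta)
        return np.array([[np.cos(a), -np.sin(a)],
                         [np.sin(a),  np.cos(a)]])
    return C

def walk_distribution(T, coin_fn, phi=(1.0, 0.0)):
    """Apply U = WC for T steps to |0, phi> and return Pr(n; T).

    The lattice n = -(T+1)..(T+1) is wide enough that the walker,
    which moves one site per step, never touches the boundary.
    """
    half = T + 1
    psi = np.zeros((2 * half + 1, 2), dtype=complex)  # rows: sites, cols: (L, R)
    psi[half] = phi                                   # walker starts at n = 0
    psi[half] /= np.linalg.norm(psi[half])
    for _ in range(T):
        # coin step: apply C_n to the (L, R) pair at each site
        coined = np.array([coin_fn(i - half) @ psi[i] for i in range(len(psi))])
        psi = np.zeros_like(coined)
        psi[:-1, 0] = coined[1:, 0]   # shift: |n-1, L><n, L|
        psi[1:, 1] = coined[:-1, 1]   # shift: |n+1, R><n, R|
    return np.abs(psi[:, 0])**2 + np.abs(psi[:, 1])**2
```

For example, `walk_distribution(40, rotation_coin(1/12))` conserves total probability, and for $\alpha=1/12$, $\theta=0$ the walker stays confined to $|n|\leq 3$, where the coin becomes purely off-diagonal, in line with the reflection points discussed below.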
A homogeneous version of this DTQW was first introduced in Ref. Ambainis .
Suppose that the coin operator is given by
$\displaystyle C(\alpha,\theta)$ $\displaystyle=\sum_{n}\left[(\cos(2\pi\alpha
n+2\pi\theta)|n,L\rangle+\sin(2\pi\alpha n+2\pi\theta)|n,R\rangle)\langle
n,L|\right.$ $\displaystyle\ \ \ \ \ \left.+(\cos(2\pi\alpha
n+2\pi\theta)|n,R\rangle-\sin(2\pi\alpha n+2\pi\theta)|n,L\rangle)\langle
n,R|\right]$
$\displaystyle:=\sum_{n}|{n}\rangle\\!\langle{n}|\otimes\hat{C}_{n}(\alpha,\theta),$
(4)
where $\alpha$ and $\theta$ are constant real numbers. Then this class of DTQW
is called an inhomogeneous quantum walk (QW). This model is based on the idea
of the Aubry-André model AA , which provides a solvable example of a metal-
insulator transition in a one-dimensional incommensurate system. For this
class of DTQW, we have obtained the following weak limit theorem.
###### Theorem 1 (Shikano and Katsura SK ).
Fix $\theta=0$. For any irrational
$\alpha\in{\mathbb{R}}\setminus{\mathbb{Q}}$ and any special rational
$\alpha=\frac{P}{4Q}\in\mathbb{Q}$ with relatively prime $P$ (odd integer) and
$Q$, the limit distribution of the inhomogeneous QW is given by
$\frac{X_{t}}{t^{\eta}}\Rightarrow I~{}~{}~{}~{}(t\to\infty),$ (5)
where $X_{t}$ is the random variable for the position at the $t$th step,
“$\Rightarrow$” denotes weak convergence, and $\eta\ (>0)$ is an arbitrary
positive parameter. Here, the limit distribution $I$ has the probability
density function $f(x)=\delta(x)~{}~{}(x\in{\mathbb{R}})$, where
$\delta(\cdot)$ is the Dirac delta function. This is called a localization for
the inhomogeneous QW.
However, for the other rational values of $\alpha$, whether the inhomogeneous
QW is localized remains an open question. The situation becomes more
complicated when we consider a nonzero $\theta$. As seen in Figure 1, the
reflection points for the quantum
walker (see more details in Ref. (SK, , Lemma 1 and Figure 2)) are changed by
the parameter $\theta$. In the rest of the paper, we will establish symmetry
properties of the eigenvalue distribution of the one-step evolution operator
($U=WC$) at $\theta=0$.
Figure 1: Probability distribution of the inhomogeneous QW at the $300$th
step with $\alpha=1/3$. From simple algebra, it can easily be shown that the
inhomogeneous QW is finitely confined when $\theta=(2m-1)/12\
(m\in\mathbb{Z})$.
###### Theorem 2.
For the eigenvalues of the one-step evolution operator $WC$, the following
properties hold:
* (P1)
All the eigenvalues at $\alpha$ are identical to those at $1-\alpha$.
* (P2)
For every eigenvalue $\lambda$, there is an eigenvalue $\lambda^{*}$.
* (P3)
For every eigenvalue $\lambda$, there is an eigenvalue $-\lambda$.
* (P4)
All the eigenvalues are simple, i.e., nondegenerate.
* (P5)
There are four eigenvalues $\lambda=\pm 1,\pm i$ for any
$\alpha=\frac{P}{4Q}\in\mathbb{Q}$.
* (P6)
Every eigenvalue $\lambda$ at $\alpha=\frac{P}{4Q}\in\mathbb{Q}$ corresponds
to an eigenvalue $i\lambda$ at $\alpha+1/2$.
###### Proof.
The proofs of properties (P1)–(P5) can be found in Ref. SK . Here, we give a
proof of (P6). According to Ref. (SK, , Theorem 3), the eigenvalues of $WC$
and $CW$ are identical. Therefore, we only study the eigenvalues of $CW$.
First, we can express the wavefunction at the $t$th step evolving from the
state $|{0,{\tilde{\phi}}}\rangle$ by $CW$:
$(CW)^{t}|{0,{\tilde{\phi}}}\rangle:=\sum_{n\in\mathbb{Z},\
\xi\in\\{L,R\\}}{\varphi}_{t}(n,\xi)|{n,\xi}\rangle.$ (6)
The one-step time evolution of the coefficients ${\varphi}_{t}(n,\xi)$ is
given by
$\left(\begin{array}[]{c}{\varphi}_{t+1}(n;L)\\\
{\varphi}_{t+1}(n;R)\end{array}\right)={\hat{C}}_{n}(\alpha,0)\left(\begin{array}[]{c}{\varphi}_{t}(n+1;L)\\\
{\varphi}_{t}(n-1;R)\end{array}\right).$ (7)
Here, we define the vector ${\vec{{\varphi}}}_{t}$ of the coefficients
${\varphi}_{t}(n,\xi)$ and a square matrix of order $4Q$, denoted ${\sf CW}$,
via
${\vec{{\varphi}}}_{t+1}={\sf CW}{\vec{{\varphi}}}_{t},$ (8)
see more details in Ref. SK . Let
${\vec{{\varphi}}}=(\varphi(-Q;R),\varphi(-Q+1;L),\varphi(-Q+1;R),...,\varphi(Q;L))^{{\bf
T}}$ be the eigenvector of ${\sf CW}$ at $\alpha$ with the eigenvalue
$\lambda$ and
${\vec{{\tilde{\varphi}}}}=({\tilde{\varphi}}(-Q;R),{\tilde{\varphi}}(-Q+1;L),{\tilde{\varphi}}(-Q+1;R),...,{\tilde{\varphi}}(Q;L))^{{\bf
T}}$ be at $\alpha+1/2$ with the eigenvalue $\tilde{\lambda}$. Then, according
to Eq. (7), we obtain
$\displaystyle\lambda{\varphi}(-Q;R)$
$\displaystyle=(-1)^{\frac{P+1}{2}}{\varphi}(-Q+1;L),$
$\displaystyle\lambda\left(\begin{array}[]{c}{\varphi}(n;L)\\\
{\varphi}(n;R)\end{array}\right)$
$\displaystyle={\hat{C}}_{n}(\alpha,0)\left(\begin{array}[]{c}{\varphi}(n+1;L)\\\
{\varphi}(n-1;R)\end{array}\right),(n\in(-Q,Q))$ (13)
$\displaystyle\lambda{\varphi}(Q;L)$
$\displaystyle=(-1)^{\frac{P+1}{2}}{\varphi}(Q-1;R)$ (14)
and
$\displaystyle\tilde{\lambda}{\tilde{\varphi}}(-Q;R)$
$\displaystyle=(-1)^{-Q}(-1)^{\frac{P+1}{2}}{\tilde{\varphi}}(-Q+1;L),$
$\displaystyle\tilde{\lambda}\left(\begin{array}[]{c}{\tilde{\varphi}}(n;L)\\\
{\tilde{\varphi}}(n;R)\end{array}\right)$
$\displaystyle=(-1)^{n}{\hat{C}}_{n}(\alpha,0)\left(\begin{array}[]{c}{\tilde{\varphi}}(n+1;L)\\\
{\tilde{\varphi}}(n-1;R)\end{array}\right),(n\in(-Q,Q))$ (19)
$\displaystyle\tilde{\lambda}{\tilde{\varphi}}(Q;L)$
$\displaystyle=(-1)^{Q}(-1)^{\frac{P+1}{2}}{\tilde{\varphi}}(Q-1;R),$ (20)
where we have used the fact that
${\hat{C}}_{n}(\alpha+1/2,0)=(-1)^{n}{\hat{C}}_{n}(\alpha,0)$. Now we apply
the following local unitary transformation to Eq. (20):
${\tilde{\varphi}}(n;\xi)=\begin{cases}{\varphi}^{\prime}(n;\xi)&{\rm when}\
n\ {\rm is\ even,}\\\ i{\varphi}^{\prime}(n;\xi)&{\rm when}\ n\ {\rm is\
odd.}\end{cases}$ (21)
According to Eq. (14), $\vec{{\varphi}^{\prime}}$ defined by Eq. (21) can be
taken as an eigenvector of ${\sf CW}$ at $\alpha$, which yields the eigenvalue
relation $\tilde{\lambda}=i\lambda$. ∎
Figure 2 shows the numerically obtained spectrum of ${\sf CW}$ as a function
of $\alpha$, which is quite similar to the Hofstadter butterfly Hofstadter .
By combining properties (P1)–(P6), the smallest fundamental domain of this
diagram is identified as the triangular region shown in Figure 2. Therefore,
we have rigorously established all the symmetries visible in Figure 2.
Figure 2: Eigenvalue distribution of the one-step operator for the
inhomogeneous QW ($U$). Arguments of the eigenvalues of $WC$ (vertical axis)
are plotted as a function of the parameter $\alpha=\frac{P}{4Q}$ (horizontal
axis) with $Q\leq 60$. Here, $P$ (odd number) and $Q$ are relatively prime.
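Properties (P2) and (P3) can also be checked numerically on a closely related finite model. The sketch below is our own construction, not the matrix ${\sf CW}$ of Ref. SK : it places $U=WC$ on a periodic ring of $N=4Q$ sites for $\alpha=P/(4Q)$. On that ring $U$ is real orthogonal, so its eigenvalues come in conjugate pairs, and it anticommutes with the site-parity operator $(-1)^{n}$ (for even $N$), pairing each $\lambda$ with $-\lambda$.

```python
import numpy as np

def ring_evolution_operator(P, Q, theta=0.0):
    """One-step operator U = WC on a periodic ring of N = 4Q sites,
    alpha = P/(4Q), with the rotation coin of Eq. (4).
    Basis ordering: index 2n -> |n, L>, index 2n+1 -> |n, R>."""
    N = 4 * Q
    alpha = P / N
    dim = 2 * N
    C = np.zeros((dim, dim))
    for n in range(N):
        a = 2.0 * np.pi * (alpha * n + theta)
        C[2*n:2*n+2, 2*n:2*n+2] = [[np.cos(a), -np.sin(a)],
                                   [np.sin(a),  np.cos(a)]]
    W = np.zeros((dim, dim))
    for n in range(N):
        W[2 * ((n - 1) % N), 2 * n] = 1.0          # |n-1, L><n, L|
        W[2 * ((n + 1) % N) + 1, 2 * n + 1] = 1.0  # |n+1, R><n, R|
    return W @ C

def spectrum_closed_under(eigs, f, tol=1e-8):
    """True if the multiset of eigenvalues is invariant under the map f."""
    return all(np.min(np.abs(eigs - f(lam))) < tol for lam in eigs)
```

With `eigs = np.linalg.eigvals(ring_evolution_operator(1, 3))`, the spectrum lies on the unit circle and is closed under both complex conjugation and sign flip, mirroring (P2) and (P3).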
One of the authors (Y.S.) thanks Shu Tanaka, Reinhard F. Werner and Volkher
Scholz for useful discussions. Y.S. is supported by JSPS Research Fellowships
for Young Scientists (Grant No. 21008624). H.K. is supported by the JSPS
Postdoctoral Fellowships for Research Abroad and NSF Grant No. PHY05-51164.
## References
* (1) A. Ambainis, E. Bach, A. Nayak, A. Vishwanath, and J. Watrous, in Proceedings of the 33rd Annual ACM Symposium on Theory of Computing (STOC’01) (ACM Press, New York, 2001), pp. 37 - 49.
* (2) S. Aubry and G. André, Ann. Israel Phys. Soc. 3, 133 (1980).
* (3) Y. Shikano and H. Katsura, Phys. Rev. E 82, 031122 (2010).
* (4) D. R. Hofstadter, Phys. Rev. B 14, 2239 (1976).
|
arxiv-papers
| 2011-04-11T17:27:11 |
2024-09-04T02:49:18.212093
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Yutaka Shikano, Hosho Katsura",
"submitter": "Yutaka Shikano",
"url": "https://arxiv.org/abs/1104.2010"
}
|
1104.2246
|
# Magnetic Connectivity between Active Regions 10987, 10988, and 10989 by
Means of Nonlinear Force-Free Field Extrapolation
Tilaye Tadesse1,2, T. Wiegelmann1, B. Inhester1, and A. Pevtsov3
1 Max Planck Institut für Sonnensystemforschung, Max-Planck Str. 2, D–37191
Katlenburg-Lindau, Germany; email: tilaye.tadesse@gmail.com,
wiegelmann@mps.mpg.de, inhester@mps.mpg.de
2 Addis Ababa University, College of Education, Department of Physics
Education, P.O. Box 1176, Addis Ababa, Ethiopia
3 National Solar Observatory, Sunspot, NM 88349, U.S.A.; email:
apevtsov@nso.edu
###### Abstract
Extrapolation codes for modelling the magnetic field in the corona in
Cartesian geometry do not take the curvature of the Sun’s surface into
account and can only be applied to relatively small areas, e.g., a single
active region. We apply a method for nonlinear force-free coronal magnetic
field modelling of photospheric vector magnetograms in spherical geometry
which allows us to study the connectivity between multiple active regions. We
use vector magnetograph data from the Synoptic Optical Long-term
Investigations of the Sun (SOLIS) survey/Vector Spectromagnetograph (VSM) to
model the coronal magnetic field, and study three neighbouring, magnetically
connected active regions (ARs 10987, 10988, and 10989) observed on 28, 29,
and 30 March 2008. We compare the magnetic field topologies and magnetic
energy densities and study the connectivity between the active regions. We
have studied the time evolution of the magnetic field over this three-day
period and found no major topological changes, consistent with the absence of
any major eruptive event. From this study we conclude that the active regions
are far more strongly connected by magnetic flux than by electric currents.
###### keywords:
Magnetic fields Photosphere Corona
## 1 Introduction
In order to model and understand the physical mechanisms underlying the
various activity phenomena observed in the solar atmosphere, for instance the
onset of flares and coronal mass ejections and the stability of active
regions, and to monitor the magnetic helicity and free magnetic energy, the
magnetic field vector throughout the atmosphere must be known. However,
routine measurements of the solar magnetic
field are mainly carried out in the photosphere. The magnetic field in the
photosphere is measured using the Zeeman effect of magnetically-sensitive
solar spectral lines. The problem of measuring the coronal field and its
embedded electrical currents thus leads us to use numerical modelling to infer
the field strength in the higher layers of the solar atmosphere from the
measured photospheric field. Except in eruptions, the magnetic field in the
solar corona evolves slowly as it responds to changes in the surface field,
implying that the electromagnetic Lorentz forces in this low-$\beta$
environment are relatively weak and that any electrical currents that exist
must be essentially parallel or antiparallel to the magnetic field wherever
the field is not negligible.
Due to the low value of the plasma $\beta$ (the ratio of gas pressure to
magnetic pressure), the solar corona is magnetically dominated [Gary (2001)].
To describe the equilibrium structure of the static coronal magnetic field
when non-magnetic forces are negligible, the force-free assumption is
appropriate:
$(\nabla\times\textbf{B})\times\textbf{B}=0$ (1)
$\nabla\cdot\textbf{B}=0$ (2)
$\textbf{B}=\textbf{B}_{\textrm{obs}}\quad\mbox{on the photosphere}$ (3)
where B is the magnetic field and $\textbf{B}_{\textrm{obs}}$ is the measured
vector field on the photosphere. Equation (1) states that the Lorentz force
vanishes (as a consequence of $\textbf{J}\parallel\textbf{B}$, where J is the
electric current density) and Equation (2) describes the absence of magnetic
monopoles.
The extrapolation methods based on this assumption are termed nonlinear force-
free field extrapolation [Sakurai (1981), Amari et al. (1997), Amari,
Boulmezaoud, and Mikic (1999), Amari, Boulmezaoud, and Aly (2006), Wu et al.
(1990), Cuperman, Demoulin, and Semel (1991), Demoulin, Cuperman, and Semel
(1992), Inhester and Wiegelmann (2006), Mikic and McClymont (1994),
Roumeliotis (1996), Yan and Sakurai (2000), Valori, Kliem, and Keppens (2005),
Wiegelmann (2004), Wheatland (2004), Wheatland and Régnier (2009), Wheatland
and Leka (2010), Amari and Aly (2010)]. For a more complete review of existing
methods for computing nonlinear force-free coronal magnetic fields, we refer
to the review papers by Amari et al. (1997), Schrijver et al. (2006), Metcalf
et al., and Wiegelmann (2008). Wiegelmann (2006) developed a code for the
self-consistent computation of the coronal magnetic field and the coronal
plasma that uses non-force-free MHD equilibria.
The magnetic field is not force-free in the photosphere, but becomes force-
free roughly 400 km above the photosphere [Metcalf et al. (1995)].
Furthermore, measurement errors, in particular for the transverse field
components (i.e., perpendicular to the line of sight of the observer), would
destroy the compatibility of a magnetogram with the condition of being
force-free. One way to ease these problems is to preprocess the magnetogram
data as suggested by Wiegelmann, Inhester, and Sakurai (2006). The
preprocessing modifies the boundary values of B within the error margins of
the measurement in such a way that the moduli of the force-free integral
constraints of Molodensky [Molodensky (1974)] are
minimized. The resulting boundary values are expected to be more suitable for
an extrapolation into a force-free field than the original values.
In the present work, we use a larger computational domain which accommodates
most of the connectivity within the coronal region. We also take the
uncertainties of measurements in vector magnetograms into account as suggested
in DeRosa et al. We apply a preprocessing procedure to SOLIS data in spherical
geometry [Tadesse, Wiegelmann, and Inhester (2009)], taking account of the
curvature of the Sun’s surface. For our observations, performed on 28, 29, and
30 March 2008, respectively, the large field of view contains three active
regions (ARs: 10987, 10988, 10989).
The full inversion of SOLIS/VSM magnetograms yields the magnetic filling
factor for each pixel, and it also corrects for magneto-optical effects in the
spectral line formation. The full inversion is performed in the framework of
the Milne-Eddington (ME) model [Unno (1956)], only for pixels whose
polarization is above a selected threshold. Pixels with polarization below
the threshold are left
undetermined. These data gaps represent a major difficulty for existing
magnetic field extrapolation schemes. Due to the large area of missing data in
the example treated here in this work, the reconstructed field model obtained
must be treated with some caution. It is very likely that the field strength
in the area of missing data was small because the inversion procedure, which
calculates the surface field from the Stokes line spectra, abandons the
calculation if the signal is below a certain threshold. The magnetic field in
the corona, however, is dominated by the strongest flux elements on the
surface, even if they occupy only a small portion of the surface. We are
therefore confident that these dominant flux elements are accounted for in the
surface magnetogram, so that the resulting field model is fairly realistic. At
any rate, it is the closest to the real field that can be constructed from the
available sparse data. Therefore, we use a procedure which allows us to
incorporate measurement errors and to treat regions with missing observational
data, as in Tadesse et al. (2010). The technique has been tested in Cartesian
geometry with synthetic boundary data by Wiegelmann and Inhester (2010).
## 2 Optimization Principle in Spherical Geometry
Wheatland, Sturrock, and Roumeliotis (2000) proposed a variational principle,
to be solved iteratively, which minimizes the Lorentz force (1) and the
divergence of the magnetic field (2) throughout the volume of interest, $V$.
Later on, the procedure was improved by Wiegelmann (2004) for Cartesian
geometry in such a way that it uses only the bottom boundary on the
photosphere as input. Here we use the optimization approach for the functional
$\mathcal{L}_{\mathrm{\omega}}$ in spherical geometry [Wiegelmann (2007),
Tadesse, Wiegelmann, and Inhester (2009)] and iterate B to minimize
$\mathcal{L}_{\mathrm{\omega}}$. The modification concerns the input bottom
boundary field $\textbf{B}_{\textrm{obs}}$, which the model field B is not
forced to match exactly; instead, we allow deviations of the order of the
observational errors. The modified variational problem is [Wiegelmann and
Inhester (2010), Tadesse et al. (2011)]:
$\textbf{B}=\textrm{argmin}(\mathcal{L}_{\omega})$
$\mathcal{L}_{\mathrm{\omega}}=\mathcal{L}_{\textrm{f}}+\mathcal{L}_{\textrm{d}}+\nu\mathcal{L}_{\textrm{photo}}$
(4)
$\mathcal{L}_{\textrm{f}}=\int_{V}\omega_{\textrm{f}}(r,\theta,\phi)B^{-2}\big{|}(\nabla\times\textbf{B})\times\textbf{B}\big{|}^{2}r^{2}\sin\theta
drd\theta d\phi$
$\mathcal{L}_{\textrm{d}}=\int_{V}\omega_{\textrm{d}}(r,\theta,\phi)\big{|}\nabla\cdot\textbf{B}\big{|}^{2}r^{2}\sin\theta
drd\theta d\phi$
$\mathcal{L}_{\textrm{photo}}=\int_{S}\big{(}\textbf{B}-\textbf{B}_{\textrm{obs}}\big{)}\cdot\textbf{W}(\theta,\phi)\cdot\big{(}\textbf{B}-\textbf{B}_{\textrm{obs}}\big{)}r^{2}\sin\theta
d\theta d\phi$
where $\mathcal{L}_{\mathrm{\textrm{f}}}$ and
$\mathcal{L}_{\mathrm{\textrm{d}}}$ measure how well the force-free condition
(1) and the divergence-free condition (2) are fulfilled, respectively.
$\omega_{\textrm{f}}(r,\theta,\phi)$ and $\omega_{\textrm{d}}(r,\theta,\phi)$
are weighting functions for the force-free and divergence-free terms,
respectively; they are identical in this study. The third integral,
$\mathcal{L}_{\textrm{photo}}$, is a surface integral over the photosphere
which serves to relax the field on the photosphere towards a force-free
solution without too much deviation from the original surface field data,
$\textbf{B}_{\textrm{obs}}$. In this integral,
$\textbf{W}(\theta,\phi)=\textrm{diag}(w_{\textrm{radial}},w_{\textrm{trans}},w_{\textrm{trans}})$
is a diagonal matrix which gives different weights to the observed surface
field components depending on their relative measurement accuracy. In this
sense, missing data are considered most inaccurate and are taken into account
by setting all elements of $\textbf{W}(\theta,\phi)$ to zero.
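For orientation, the volume terms of Equation (4) can be discretized straightforwardly. The following sketch is our simplified illustration, not the production code: centered differences, unit weighting functions, and a uniform grid. It evaluates $\mathcal{L}_{\textrm{f}}$ and $\mathcal{L}_{\textrm{d}}$ for a field sampled on a spherical grid.

```python
import numpy as np

def functional_terms(B, r, th, ph):
    """Evaluate the volume integrals L_f and L_d of Eq. (4) for a field
    B of shape (3, nr, nth, nph) holding (B_r, B_theta, B_phi) on a
    uniform spherical (r, theta, phi) grid with 1-D coordinate arrays
    r, th, ph.  Sketch only: centered differences, weights w_f = w_d = 1."""
    Br, Bt, Bp = B
    R = r[:, None, None]
    sin = np.sin(th)[None, :, None]
    d = lambda f, x, ax: np.gradient(f, x, axis=ax)
    # divergence and curl in spherical coordinates
    div = (d(R**2 * Br, r, 0) / R**2
           + d(sin * Bt, th, 1) / (R * sin)
           + d(Bp, ph, 2) / (R * sin))
    J = np.stack([
        (d(sin * Bp, th, 1) - d(Bt, ph, 2)) / (R * sin),
        (d(Br, ph, 2) / sin - d(R * Bp, r, 0)) / R,
        (d(R * Bt, r, 0) - d(Br, th, 1)) / R,
    ])
    JxB = np.cross(J, B, axis=0)                 # Lorentz force density (up to units)
    B2 = np.maximum((B**2).sum(axis=0), 1e-30)   # guard against B = 0
    dV = R**2 * sin * (r[1] - r[0]) * (th[1] - th[0]) * (ph[1] - ph[0])
    Lf = np.sum((JxB**2).sum(axis=0) / B2 * dV)
    Ld = np.sum(div**2 * dV)
    return Lf, Ld
```

A monopole-like test field $B_r\propto 1/r^{2}$, $B_\theta=B_\phi=0$ is both current-free and divergence-free away from the origin, so both terms evaluate to zero on such input.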
Figure 1.: Surface contour plot of the radial magnetic field component and
vector field plot of the transverse field (white arrows). The color coding
shows $B_{r}$ on the photosphere. The vertical and horizontal axes show
latitude $\theta$ (in degrees) and longitude $\phi$ (in degrees) on the
photosphere. In the area coloured in olive, field values are lacking. The
region inside the black box corresponds to the physical domain where the
weighting function is unity; the outside region is the buffer zone where it
declines to zero. The blue boxes indicate the domains of the three active
regions.
We use a spherical grid $r$, $\theta$, $\phi$ with $n_{r}$, $n_{\theta}$,
$n_{\phi}$ grid points in the direction of radius, latitude, and longitude,
respectively. In the code, we normalize the magnetic field with the average
radial magnetic field on the photosphere and the length scale with a solar
radius, for numerical reasons. Figure 1 shows a map of the radial component of
the field, color-coded, and the transverse magnetic field depicted as white
arrows. For this particular dataset, about $86\%$ of the data pixels are
undetermined. The method works as follows:
* •
We compute an initial source surface potential field in the computational
domain from $\textbf{B}_{\textrm{obs}}\cdot\hat{r}$, the normal component of
the surface field at the photosphere at $r=1R_{\mathrm{\odot}}$. The
computation is performed by assuming that a currentless ($\textbf{J}=0$ or
$\nabla\times\textbf{B}=0$) approximation holds between the photosphere and
some spherical surface $S_{s}$ (source surface where the magnetic field vector
is assumed radial). We compute the solution of this boundary-value problem in
a standard form of spherical harmonics expansion.
* •
We minimize $\mathcal{L}_{\omega}$ (Equation (4)) iteratively. The model
magnetic field B at the surface is gradually driven towards the observed field
$\textbf{B}_{\textrm{obs}}$ while the field in the volume $V$ relaxes to a
force-free state. If the observed field, $\textbf{B}_{\textrm{obs}}$, is
inconsistent, the difference $\textbf{B}-\textbf{B}_{\textrm{obs}}$ remains
finite, depending on the control parameter $\nu$. At data gaps in
$\textbf{B}_{\textrm{obs}}$, the respective field value is automatically
ignored.
* •
The iteration stops when $\mathcal{L}_{\omega}$ becomes stationary, i.e. when
$\Delta\mathcal{L}_{\omega}/\mathcal{L}_{\omega}<10^{-4}$, where
$\Delta\mathcal{L}_{\omega}$ is the decrease of $\mathcal{L}_{\omega}$ during
an iteration step.
* •
A convergence to $\mathcal{L}_{\omega}=0$ yields a perfect force-free and
divergence-free state and exact agreement of the boundary values B with
observations $\textbf{B}_{\textrm{obs}}$ in regions where the elements of W
are greater than zero. For inconsistent boundary data the force-free and
solenoidal conditions can still be fulfilled, but the surface term
$\mathcal{L}_{\textrm{photo}}$ will remain finite. This results in some
deviation of the bottom boundary data from the observations, especially in
regions where $w_{\textrm{radial}}$ and $w_{\textrm{trans}}$ are small. The
parameter $\nu$ is tuned so that these deviations do not exceed the local
estimated measurement error.
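The iterative structure above, descending on the functional until the relative decrease falls below a threshold, can be illustrated on a toy problem. The sketch below is a Cartesian periodic-grid stand-in of our own, not the spherical production scheme: it performs exact gradient descent on the divergence term alone, $\mathcal{L}_{\textrm{d}}=\sum|\nabla\cdot\textbf{B}|^{2}$, with the same stationarity test.

```python
import numpy as np

def relax_divergence(B, mu=0.05, tol=1e-4, max_iter=5000):
    """Toy illustration of the iteration: gradient descent on
    L_d = sum |div B|^2 for a field B of shape (3, nx, ny, nz) on a
    periodic Cartesian grid (unit spacing), stopping when the relative
    decrease Delta L_d / L_d drops below tol, as in the text.
    Stand-in for the spherical scheme, not the production code."""
    d = lambda f, ax: (np.roll(f, -1, axis=ax) - np.roll(f, 1, axis=ax)) / 2.0
    def Ld(B):
        div = sum(d(B[i], i) for i in range(3))
        return np.sum(div**2), div
    L, div = Ld(B)
    for _ in range(max_iter):
        # exact gradient of L_d w.r.t. B on this periodic grid is -2*grad(div B)
        B = B + 2.0 * mu * np.stack([d(div, i) for i in range(3)])
        L_new, div = Ld(B)
        if L_new == 0.0 or (L - L_new) / L < tol:
            return B, L_new
        L = L_new
    return B, L
```

Because the update is exact gradient descent on a quadratic functional with a sufficiently small step, $\mathcal{L}_{\textrm{d}}$ decreases monotonically toward zero on this toy grid.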
## 3 Results
In this work, we apply our extrapolation scheme to Milne-Eddington inverted
vector magnetograph data from the Synoptic Optical Long-term Investigations of
the Sun survey (SOLIS). As a first step, we remove non-magnetic forces from
the observed surface magnetic field using our spherical preprocessing
procedure. The extrapolation code then takes the preprocessed
$\textbf{B}_{\textrm{obs}}$ as an improved boundary condition.
SOLIS/VSM provides full-disk vector magnetograms, but for some individual
pixels the inversion from line profiles to field values may not have been
successful, and field data will be missing for these pixels (see Figure 1).
The different errors for the radial and transverse components of
$\textbf{B}_{\textrm{obs}}$ are taken into account by different values for
$w_{\textrm{radial}}$ and $w_{\textrm{trans}}$. In this work we used
$w_{\textrm{radial}}=20w_{\textrm{trans}}$ for the preprocessed surface
fields, as the radial component of $\textbf{B}_{\textrm{obs}}$ is measured
with higher accuracy.
We compute the 3D magnetic field above the observed surface region inside a
wedge-shaped computational box of volume $V$, which includes an inner
physical domain $V^{\prime}$ and a buffer zone (the region outside the
physical domain), as shown in Figure 2. The physical domain $V^{\prime}$ is a
wedge-shaped volume, with two latitudinal boundaries at
$\theta_{\textrm{min}}=-26^{\circ}$ and $\theta_{\textrm{max}}=16^{\circ}$,
two longitudinal boundaries at $\phi_{\textrm{min}}=129^{\circ}$ and
$\phi_{\textrm{max}}=226^{\circ}$, and two radial boundaries at the
photosphere ($r=1R_{\odot}$) and $r=1.75R_{\odot}$. Note that the longitude
$\phi$ is measured from the central meridian of the back side of the disk.
Figure 2.: Wedge-shaped computational box of volume $V$ with the inner
physical domain $V^{\prime}$ and a buffer zone. O is the center of the Sun.
Figure 3.: Top row: SOLIS/VSM magnetograms of 28 March 2008 15:45UT, 29 March
2008 15:48UT, and 30 March 2008 15:47UT. Middle row: magnetic field lines
reconstructed from the magnetograms in the top panels. Bottom row: EIT images
of the Sun at 195Å on the indicated dates (28 March 2008 16:00UT, 29 March
2008 15:48UT, and 30 March 2008 15:48UT).
Figure 4.: Selected magnetic field line plots reconstructed from the SOLIS
magnetograms of 28 March 2008 15:45UT, 29 March 2008 15:48UT, and 30 March
2008 15:47UT using nonlinear force-free modelling. The color coding shows
$B_{r}$ on the photosphere.
We define $V^{\prime}$ to be the inner region of $V$ (including the
photospheric boundary) with $\omega_{\textrm{f}}=\omega_{\textrm{d}}=1$
everywhere including its six inner boundaries $\delta V^{\prime}$. We use a
position-dependent weighting function to introduce a buffer boundary of
$nd=10$ grid points towards the side and top boundaries of the computational
box, $V$. The weighting functions, $\omega_{\textrm{f}}$ and
$\omega_{\textrm{d}}$ are chosen to be unity within the inner physical domain
$V^{\prime}$ and decline to 0 with a cosine profile in the buffer boundary
region [Wiegelmann (2004), Tadesse, Wiegelmann, and Inhester (2009)]. The
framed region in Figure fig1 corresponds to the lower boundary of the physical
domain $V^{\prime}$ with a resolution of $114\times 251$ pixels in the
photosphere.
The middle panels of Figure 3 show magnetic field line plots for the three
consecutive dates of observation. The top and bottom panels of Figure 3 show
the position of the three active regions on the solar disk, both in the SOLIS
full-disk magnetograms (http://solis.nso.edu/solis_data.html) and in the
SOHO/EIT (http://sohowww.nascom.nasa.gov/data/archive) images of the Sun
observed at 195Å on the indicated dates and times. Figure 4 shows some
selected magnetic field lines reconstructed from the SOLIS magnetograms,
zoomed in from the middle panels of Figure 3. In each column of Figure 4 the
field lines are plotted from the same foot points to compare the change in
the topology of the magnetic field over the three days of observation. In
order to compare the fields on the three consecutive days quantitatively, we
computed the vector correlations between the three field configurations. The
vector correlation ($C_{\mathrm{\textrm{vec}}}$) [Schrijver et al. (2006)]
metric generalizes the standard correlation coefficient for scalar functions
and is given by
$C_{\mathrm{\textrm{vec}}}=\frac{\sum_{i}\textbf{v}_{i}\cdot\textbf{u}_{i}}{\sqrt{\sum_{i}|\textbf{v}_{i}|^{2}}\sqrt{\sum_{i}|\textbf{u}_{i}|^{2}}}$
(5)
where $\textbf{v}_{i}$ and $\textbf{u}_{i}$ are 3D vectors at grid point $i$.
If the vector fields are identical, then $C_{\textrm{vec}}=1$; if
$\textbf{v}_{i}\perp\textbf{u}_{i}$ , then $C_{\textrm{vec}}=0$. The
correlations ($C_{\mathrm{\textrm{vec}}}$) of the 3D magnetic field vectors of
28 and 30 March with respect to the field on 29 March are $0.96$ and $0.93$
respectively. From these values we can see that there has been no major change
in the magnetic field configuration during this period.
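Equation (5) is straightforward to evaluate numerically. The sketch below (our code, not the authors') treats the two fields as arrays of shape (3, ...) sampled on a common grid:

```python
import numpy as np

def vector_correlation(v, u):
    """Vector correlation C_vec of Eq. (5) for two real vector fields of
    identical shape (3, ...) sampled on the same grid."""
    num = np.sum(v * u)                                # sum_i v_i . u_i
    den = np.sqrt(np.sum(v**2)) * np.sqrt(np.sum(u**2))
    return num / den
```

By construction, identical (or globally rescaled) fields give $C_{\textrm{vec}}=1$ and pointwise-orthogonal fields give $C_{\textrm{vec}}=0$.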
Figure 5.: Iso-surfaces of the absolute NLFF magnetic energy density
($7.5\times 10^{16}$erg) for the three consecutive dates (28 March 2008
15:45UT, 29 March 2008 15:48UT, and 30 March 2008 15:47UT), computed within
the entire computational domain.
We also compute the values of the free magnetic energy estimated from the
excess energy of the extrapolated field beyond the potential field satisfying
the same $\textbf{B}_{\textrm{obs}}\cdot\hat{r}$ boundary condition. Similar
estimates have been made by Régnier and by Thalmann for single active regions
observed at other times. From the corresponding potential and force-free
magnetic fields, $\textbf{B}_{\textrm{pot}}$ and B, respectively, we can
estimate an upper limit to the free magnetic energy associated with coronal
currents:
$E_{\mathrm{free}}=E_{\mathrm{\textrm{nlff}}}-E_{\mathrm{\textrm{pot}}}=\frac{1}{8\pi}\int_{V^{\prime}}\Big{(}B_{\textrm{nlff}}^{2}-B_{\textrm{pot}}^{2}\Big{)}r^{2}\sin\theta
\,dr\,d\theta d\phi.$ (6)
Date | $E_{\textrm{nlff}}(10^{32}\textrm{erg})$ | $E_{\mathrm{\textrm{pot}}}(10^{32}\textrm{erg})$ | $E_{\mathrm{free}}(10^{32}\textrm{erg})$
---|---|---|---
28 March 2008 | $57.34$ | $53.89$ | $3.45$
29 March 2008 | $57.48$ | $54.07$ | $3.41$
30 March 2008 | $57.37$ | $53.93$ | $3.44$
Table 1.: The magnetic energy associated with the extrapolated NLFF field
configurations for the three dates.
The computed energy values are listed in Table 1. The free energy on all
three days is about $3.5\times 10^{32}\textrm{erg}$. The magnetic energy
associated with the potential field configuration is about $54\times
10^{32}\textrm{erg}$; hence $E_{\textrm{nlff}}$ exceeds $E_{\textrm{pot}}$ by
only 6$\%$. Figure 5 shows iso-surface plots of the magnetic energy density in
the volume above the active regions. There are strong energy concentrations
above each active region. There were no major changes in the magnetic energy
density over the observation period, and there was no major eruptive
phenomenon during those three days in the region observed.
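The free-energy estimate of Equation (6) reduces to a weighted sum over the wedge grid. A minimal sketch (ours, using simple Riemann summation on a uniform grid, with Gaussian units so that $B^{2}/8\pi$ is an energy density):

```python
import numpy as np

def free_energy(B_nlff, B_pot, r, th, ph):
    """Free magnetic energy of Eq. (6): (1/8pi) * integral over the wedge of
    (B_nlff^2 - B_pot^2) r^2 sin(theta) dr dtheta dphi.
    Fields have shape (3, nr, nth, nph); r, th, ph are uniform 1-D grids."""
    R = r[:, None, None]
    sin = np.sin(th)[None, :, None]
    dV = R**2 * sin * (r[1] - r[0]) * (th[1] - th[0]) * (ph[1] - ph[0])
    diff = (B_nlff**2).sum(axis=0) - (B_pot**2).sum(axis=0)
    return np.sum(diff * dV) / (8.0 * np.pi)
```

For a uniform field of magnitude $B_{0}$ and a vanishing reference field, the result approaches $B_{0}^{2}V^{\prime}/8\pi$ as the grid is refined.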
In our previous work[Tadesse et al. (2011)], we have studied the connectivity
between two neighbouring active regions. In this work with an even larger
field of view, the three ARs share a decent amount of magnetic flux compared
to their internal flux from one polarity to the other (see Figure fig3). In
terms of the electric current they are much more isolated. In order to
quantify these connectivities, we have calculated the magnetic flux and the
electric currents shared between active regions. For the magnetic flux, e.g.,
we use
$\Phi_{\alpha\beta}=\sum_{i}|\textbf{B}_{i}\cdot\hat{r}|R^{2}_{\odot}\sin(\theta_{i})\,\Delta\theta_{i}\,\Delta\phi_{i}$ (7)
where the summation is over all pixels $i$ of $\mbox{AR}_{\alpha}$ from which the field line ends in $\mbox{AR}_{\beta}$, i.e., over $\{i\in\mbox{AR}_{\alpha}\mid\mbox{conjugate footpoint}(i)\in\mbox{AR}_{\beta}\}$.
The indices $\alpha$ and $\beta$ denote the active regions; index $1$ corresponds to AR $10989$, $2$ to AR $10988$ and $3$ to AR $10987$ of Figure 1. For the electric current we replace the magnetic field, $\textbf{B}_{i}\cdot\hat{r}$, by the vertical current density $\textbf{J}_{i}\cdot\hat{r}$ in Equation (7). Whenever the end point of a field line falls outside the three ARs (blue rectangles in Figure 1), we categorize it as ending elsewhere. Tables 2 and 3 show the percentage of the total magnetic flux and electric current shared between the three ARs. For example, the first column of Table 2 shows that $56.37\%$ of the positive polarity of $\mbox{AR}_{1}$ is connected to the negative polarity of $\mbox{AR}_{1}$; line 2 shows that $13.66\%$ of the positive/negative polarity of $\mbox{AR}_{1}$ is connected to the positive/negative polarity of $\mbox{AR}_{2}$; and line 3 shows that no field lines $(0\%)$ connect the positive/negative polarity of $\mbox{AR}_{1}$ with the positive/negative polarity of $\mbox{AR}_{3}$. The same reading applies to Table 3. The three active regions are thus magnetically connected, but far less so by electric currents.
| | 28 March | | | 29 March | | | 30 March | |
---|---|---|---|---|---|---|---|---|---
$\Phi_{\alpha\beta}$ | $\alpha=1$ | $2$ | $3$ | $\alpha=1$ | $2$ | $3$ | $\alpha=1$ | $2$ | $3$
$\beta=1$ | $56.37$ | $5.59$ | $0.00$ | $56.50$ | $5.48$ | $0.00$ | $56.50$ | $5.48$ | $0.00$
$2$ | $13.66$ | $81.12$ | $1.43$ | $13.66$ | $81.22$ | $1.43$ | $13.66$ | $81.22$ | $2.22$
$3$ | $0.00$ | $0.48$ | $71.47$ | $0.00$ | $0.48$ | $71.80$ | $0.00$ | $0.48$ | $71.80$
Elsewhere | $29.97$ | $12.82$ | $27.10$ | $29.84$ | $12.82$ | $26.77$ | $29.84$ | $12.82$ | $25.98$
Table 2: The percentage of the total magnetic flux shared between the three ARs. $\Phi_{11}$, $\Phi_{22}$ and $\Phi_{33}$ denote the internal magnetic flux of AR 10989 (left), AR 10988 (middle) and AR 10987 (right) of Figure 1, respectively.
| | 28 March | | | 29 March | | | 30 March | |
---|---|---|---|---|---|---|---|---|---
$I_{\alpha\beta}$ | $\alpha=1$ | $2$ | $3$ | $\alpha=1$ | $2$ | $3$ | $\alpha=1$ | $2$ | $3$
$\beta=1$ | $82.47$ | $0.19$ | $0.00$ | $86.36$ | $0.19$ | $0.00$ | $94.16$ | $0.19$ | $0.00$
$2$ | $0.65$ | $85.25$ | $1.42$ | $0.65$ | $85.25$ | $1.42$ | $0.65$ | $85.25$ | $3.55$
$3$ | $0.00$ | $0.38$ | $82.27$ | $0.00$ | $0.38$ | $82.27$ | $0.00$ | $0.38$ | $82.27$
Elsewhere | $16.88$ | $14.18$ | $16.31$ | $12.99$ | $14.18$ | $16.31$ | $5.19$ | $14.18$ | $14.18$
Table 3: The percentage of the total electric current shared between the three ARs. $I_{11}$, $I_{22}$ and $I_{33}$ denote the internal electric current of AR 10989 (left), AR 10988 (middle) and AR 10987 (right) of Figure 1, respectively.
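The bookkeeping behind Equation (7) and the tables above can be sketched as follows. Everything here is a toy reconstruction: the function name, the footpoint tuples, and the two-region example data are our own illustrative assumptions, not the authors' code.

```python
import math

def connectivity_percentages(footpoints, n_regions):
    """Sketch of the Eq. (7) bookkeeping.

    footpoints: list of (alpha, beta, br, theta, dth, dph) tuples, one per
    pixel: the pixel lies in region alpha, its field line ends in region beta
    (None if it ends elsewhere), br is B_i . r_hat, and (theta, dth, dph)
    describe the pixel geometry.  Returns pct[beta][alpha] in percent, with an
    extra last row collecting field lines that end elsewhere."""
    r_sun = 1.0  # work in units of the solar radius
    flux = [[0.0] * n_regions for _ in range(n_regions + 1)]
    for alpha, beta, br, theta, dth, dph in footpoints:
        row = beta if beta is not None else n_regions
        flux[row][alpha] += abs(br) * r_sun**2 * math.sin(theta) * dth * dph
    totals = [sum(flux[row][a] for row in range(n_regions + 1))
              for a in range(n_regions)]
    return [[100.0 * flux[row][a] / totals[a] if totals[a] else 0.0
             for a in range(n_regions)] for row in range(n_regions + 1)]

# Toy example with two regions and hand-picked fluxes: region 0 sends 25% of
# its flux back to itself and 75% to region 1; region 1 closes nowhere.
fp = [
    (0, 0, 1.0, math.pi / 2, 0.1, 0.1),
    (0, 1, 3.0, math.pi / 2, 0.1, 0.1),
    (1, None, 2.0, math.pi / 2, 0.1, 0.1),
]
pct = connectivity_percentages(fp, 2)
```

Each column of the returned matrix sums to $100\%$, mirroring the normalization used in Tables 2 and 3.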
## 4 Conclusions
We have investigated the coronal magnetic field associated with the three ARs 10987, 10988 and 10989 on 28, 29 and 30 March 2008 by analysing SOLIS/VSM data. We have used an optimization method for the reconstruction of nonlinear force-free coronal magnetic fields in spherical geometry, restricting the code to limited parts of the Sun [Wiegelmann (2007), Tadesse, Wiegelmann, and Inhester (2009), Tadesse et al. (2011)]. The code was modified so that it can deal with missing data and regions with poor signal-to-noise ratio in a systematic manner [Wiegelmann and Inhester (2010), Tadesse et al. (2011)].
We have studied the time evolution of the magnetic field over the three-day period and found no major changes in topology, as there was no major eruption event. The magnetic energies calculated in the large wedge-shaped computational box above the three active regions varied little in value. This is the first study in which our model contains three well-separated ARs. It was made possible by the use of spherical coordinates, which allows us to analyse the linkage between the ARs. The active regions share a considerable amount of magnetic flux from one polarity to the other, compared to their internal flux. In terms of the electric current, they are much more isolated.
## Acknowledgements
SOLIS/VSM vector magnetograms are produced cooperatively by NSF/NSO and
NASA/LWS. The National Solar Observatory (NSO) is operated by the Association
of Universities for Research in Astronomy, Inc., under cooperative agreement
with the National Science Foundation. Tilaye Tadesse Asfaw acknowledges a
fellowship of the International Max-Planck Research School at the Max-Planck
Institute for Solar System Research and the work of T. Wiegelmann was
supported by DLR-grant $50$ OC $0501$.
## References
* Amari and Aly (2010) Amari, T., Aly, J.: 2010, Astron. Astrophys. 522, A52.
* Amari, Boulmezaoud, and Aly (2006) Amari, T., Boulmezaoud, T.Z., Aly, J.J.: 2006, Astron. Astrophys. 446, 691.
* Amari, Boulmezaoud, and Mikic (1999) Amari, T., Boulmezaoud, T.Z., Mikic, Z.: 1999, Astron. Astrophys. 350, 1051.
* Amari et al. (1997) Amari, T., Aly, J.J., Luciani, J.F., Boulmezaoud, T.Z., Mikic, Z.: 1997, Solar Phys. 174, 129.
* Cuperman, Demoulin, and Semel (1991) Cuperman, S., Demoulin, P., Semel, M.: 1991, Astron. Astrophys. 245, 285.
* Demoulin, Cuperman, and Semel (1992) Demoulin, P., Cuperman, S., Semel, M.: 1992, Astron. Astrophys. 263, 351.
* DeRosa et al. (2009) DeRosa, M.L., Schrijver, C.J., Barnes, G., Leka, K.D., Lites, B.W., Aschwanden, M.J., et al.: 2009, Astrophys. J. 696, 1780.
* Gary (2001) Gary, G.A.: 2001, Solar Phys. 203, 71.
* Inhester and Wiegelmann (2006) Inhester, B., Wiegelmann, T.: 2006, Solar Phys. 235, 201.
* Metcalf et al. (1995) Metcalf, T.R., Jiao, L., McClymont, A.N., Canfield, R.C., Uitenbroek, H.: 1995, Astrophys. J. 439, 474.
* Metcalf et al. (2008) Metcalf, T.R., DeRosa, M.L., Schrijver, C.J., Barnes, G., van Ballegooijen, A.A., Wiegelmann, T., Wheatland, M.S., Valori, G., McTiernan, J.M.: 2008, Solar Phys. 247, 269.
* Mikic and McClymont (1994) Mikic, Z., McClymont, A.N.: 1994, Astronomical Society of the Pacific Conference Series 68, 225.
* Molodensky (1974) Molodensky, M.M.: 1974, Solar Phys. 39, 393.
* Régnier and Priest (2007) Régnier, S., Priest, E.R.: 2007, Astrophys. J. Lett. 669, L53.
* Roumeliotis (1996) Roumeliotis, G.: 1996, Astrophys. J. 473, 1095.
* Sakurai (1981) Sakurai, T.: 1981, Solar Phys. 69, 343.
* Schrijver et al. (2006) Schrijver, C.J., DeRosa, M.L., Metcalf, T.R., Liu, Y., McTiernan, J., Régnier, S., Valori, G., Wheatland, M.S., Wiegelmann, T.: 2006, Solar Phys. 235, 161.
* Tadesse, Wiegelmann, and Inhester (2009) Tadesse, T., Wiegelmann, T., Inhester, B.: 2009, Astron. Astrophys. 508, 421.
* Tadesse et al. (2011) Tadesse, T., Wiegelmann, T., Inhester, B., Pevtsov, A.: 2011, Astron. Astrophys. 527, A30.
* Thalmann, Wiegelmann, and Raouafi (2008) Thalmann, J.K., Wiegelmann, T., Raouafi, N.E.: 2008, Astron. Astrophys. 488, L71.
* Unno (1956) Unno, W.: 1956, Publ. Astron. Soc. Japan 8, 108.
* Valori, Kliem, and Keppens (2005) Valori, G., Kliem, B., Keppens, R.: 2005, Astron. Astrophys. 433, 335.
* Wheatland (2004) Wheatland, M.S.: 2004, Solar Phys. 222, 247.
* Wheatland and Leka (2010) Wheatland, M.S., Leka, K.D.: 2010, ArXiv e-prints.
* Wheatland and Régnier (2009) Wheatland, M.S., Régnier, S.: 2009, Astrophys. J. Lett. 700, L88.
* Wheatland, Sturrock, and Roumeliotis (2000) Wheatland, M.S., Sturrock, P.A., Roumeliotis, G.: 2000, Astrophys. J. 540, 1150.
* Wiegelmann (2004) Wiegelmann, T.: 2004, Solar Phys. 219, 87.
* Wiegelmann (2007) Wiegelmann, T.: 2007, Solar Phys. 240, 227.
* Wiegelmann (2008) Wiegelmann, T.: 2008, J. Geophys. Res. 113, 3.
* Wiegelmann and Inhester (2010) Wiegelmann, T., Inhester, B.: 2010, Astron. Astrophys. 516, A107.
* Wiegelmann and Neukirch (2006) Wiegelmann, T., Neukirch, T.: 2006, Astron. Astrophys. 457, 1053.
* Wiegelmann, Inhester, and Sakurai (2006) Wiegelmann, T., Inhester, B., Sakurai, T.: 2006, Solar Phys. 233, 215.
* Wu et al. (1990) Wu, S.T., Sun, M.T., Chang, H.M., Hagyard, M.J., Gary, G.A.: 1990, Astrophys. J. 362, 698.
* Yan and Sakurai (2000) Yan, Y., Sakurai, T.: 2000, Solar Phys. 195, 89.
# Convexity of quotients of theta functions
Atul Dixit, Arindam Roy and Alexandru Zaharescu Department of Mathematics,
University of Illinois, 1409 West Green Street, Urbana, IL 61801, USA
aadixit2@illinois.edu, roy22@illinois.edu, zaharesc@math.uiuc.edu
###### Abstract.
For fixed $u$ and $v$ such that $0\leq u<v<1/2$, the monotonicity of the
quotients of Jacobi theta functions, namely, $\theta_{j}(u|i\pi
t)/\theta_{j}(v|i\pi t)$, $j=1,2,3,4$, on $0<t<\infty$ has been established in
the previous works of A.Yu. Solynin, K. Schiefermayr, and Solynin and the
first author. In the present paper, we show that the quotients
$\theta_{2}(u|i\pi t)/\theta_{2}(v|i\pi t)$ and $\theta_{3}(u|i\pi
t)/\theta_{3}(v|i\pi t)$ are convex on $0<t<\infty$.
2010 Mathematics Subject Classification. Primary 11F27, 33E05.
Keywords and phrases. Jacobi theta function, Weierstrass elliptic function,
Monotonicity, Heat equation.
The third author is supported by NSF grant number DMS - 0901621.
## 1\. Introduction
Let $q=e^{\pi i\tau}$ with Im $\tau>0$. The Jacobi theta functions are defined
by [8, p. 355, Section 13.19]
$\displaystyle\theta_{1}(z|\tau)$
$\displaystyle=2\sum_{n=0}^{\infty}(-1)^{n}q^{(n+\frac{1}{2})^{2}}\sin(2n+1)\pi
z,$ $\displaystyle\theta_{2}(z|\tau)$
$\displaystyle=2\sum_{n=0}^{\infty}q^{(n+\frac{1}{2})^{2}}\cos(2n+1)\pi z,$
$\displaystyle\theta_{3}(z|\tau)$
$\displaystyle=1+2\sum_{n=1}^{\infty}q^{n^{2}}\cos 2n\pi z,$
$\displaystyle\theta_{4}(z|\tau)$
$\displaystyle=1+2\sum_{n=1}^{\infty}(-1)^{n}q^{n^{2}}\cos 2n\pi z.$
We denote $\theta_{i}(z|\tau)$ by $\theta_{i}(z)$, $i=1,2,3$ and $4$, when the
dependence on $z$ is to be emphasized and that on $\tau$ is to be suppressed.
Moreover when $z=0$, we denote the above theta functions by $\theta_{i}$,
i.e., $\theta_{i}:=\theta_{i}(0|\tau),i=1,2,3$ and $4$, where it is easy to
see that $\theta_{1}=0$.
For $u,v\in\mathbb{C}$ and $\tau=i\pi t$ with Re $t>0$, define $S_{j}(u,v;t)$,
$j=1,2,3$ and $4$, to be the following quotient of theta functions:
$S_{j}:=S_{j}(u,v;t):=\frac{\theta_{j}(u/2|i\pi t)}{\theta_{j}(v/2|i\pi t)}.$
(1.1)
Monotonicity of these quotients has attracted a lot of attention in recent
years. Monotonicity of $S_{2}(u,v;t)$ on $0<t<\infty$ arose naturally in the
work of A.Yu. Solynin [14] where it is related to the steady-state
distribution of heat. In particular, Solynin used it to prove a special case
of a generalization of a conjecture due to A.A. Gonchar [4, Problem 7.45]
posed by A. Baernstein II [1]. (For complete history and progress on Gonchar’s
conjecture, the reader should consult [3, 7]). However, the proof for
$S_{2}(u,v;t)$ in [14] contained a small error. This was rectified by A.Yu.
Solynin and the first author in [7], where they also proved monotonicity of
$S_{1}(u,v;t),S_{3}(u,v;t)$ and $S_{4}(u,v;t)$. However, it turns out that K.
Schiefermayr [13, Theorem 1] obtained the same results as those in [7] on
monotonicity of $S_{3}(u,v;t)$ and $S_{4}(u,v;t)$ two years before the
appearance of [7], though the proofs in [7] and [13] use entirely different
ideas. These results on monotonicity of $S_{j}(u,v;t),j=1,2,3,4$, are stated
in [7] as follows.
For fixed $u$ and $v$ such that $0\leq u<v<1$, the functions $S_{1}(u,v;t)$
and $S_{4}(u,v;t)$ are positive and strictly increasing on $0<t<\infty$, while
the functions $S_{2}(u,v;t)$ and $S_{3}(u,v;t)$ are positive and strictly
decreasing on $0<t<\infty$.
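These monotonicity statements are easy to illustrate numerically from the series definitions; the quick check below is our own addition, not part of the cited papers. With $\tau=i\pi t$ the nome is $q=e^{i\pi\tau}=e^{-\pi^{2}t}$, so the theta series converge extremely fast; the truncation order `nmax` and the sample values of $u$, $v$ and $t$ are arbitrary choices.

```python
import math

def theta2(z, t, nmax=40):
    # theta_2(z | i*pi*t) = 2 * sum_{n>=0} q^{(n+1/2)^2} cos((2n+1)*pi*z),
    # with nome q = exp(-pi^2 * t)
    q = math.exp(-math.pi**2 * t)
    return 2.0 * sum(q ** ((n + 0.5) ** 2) * math.cos((2 * n + 1) * math.pi * z)
                     for n in range(nmax))

def theta3(z, t, nmax=40):
    # theta_3(z | i*pi*t) = 1 + 2 * sum_{n>=1} q^{n^2} cos(2*n*pi*z)
    q = math.exp(-math.pi**2 * t)
    return 1.0 + 2.0 * sum(q ** (n * n) * math.cos(2 * n * math.pi * z)
                           for n in range(1, nmax))

def S2(u, v, t):
    return theta2(u / 2, t) / theta2(v / 2, t)

def S3(u, v, t):
    return theta3(u / 2, t) / theta3(v / 2, t)

u, v = 0.2, 0.6               # fixed 0 <= u < v < 1
ts = [0.2, 0.4, 0.8, 1.6]     # increasing sample points in t
vals2 = [S2(u, v, t) for t in ts]
vals3 = [S3(u, v, t) for t in ts]
```

Both sampled sequences come out positive and strictly decreasing, as the theorem asserts for $S_{2}$ and $S_{3}$.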
At the end of the paper [7], based on numerical calculations, it was
conjectured that $S_{j}(u,v;t)$, $j=1,2,3,4$, are completely monotonic on
$0<t<\infty$. A function $f$ is said to be completely monotonic on $[0,\infty)$ if $f\in C[0,\infty)$, $f\in C^{\infty}(0,\infty)$ and $(-1)^{k}f^{(k)}(t)\geq 0$ for every non-negative integer $k$ and every $t>0$. Several functions related to the gamma, digamma, polygamma and modified Bessel functions, among others, have been shown to be completely monotonic; see [5, 9, 11]. For a survey of the properties of completely monotonic functions, see [12]. The above-mentioned conjecture can be precisely formulated (and corrected) as follows.
###### Conjecture 1.1.
Let $S_{j}(u,v;t)$ be defined in (1.1). For fixed $u$ and $v$ such that $0\leq
u<v<1$, the functions $\frac{\partial}{\partial
t}S_{1}(u,v;t),S_{2}(u,v;t),S_{3}(u,v;t)$ and $\frac{\partial}{\partial
t}S_{4}(u,v;t)$ are completely monotonic on $0<t<\infty$.
If this conjecture is indeed true, by a theorem of S.N. Bernstein and D.
Widder [6, p. 95, Theorem 1] there exist non-decreasing bounded functions
$\gamma_{j}$ such that $S_{j}(u,v;t)=\int_{0}^{\infty}e^{-st}d\gamma_{j}(s)$
for $j=2,3,$ and $\frac{\partial}{\partial
t}S_{j}(u,v;t)=\int_{0}^{\infty}e^{-st}d\gamma_{j}(s)$ for $j=1,4$.
In the present paper, we study convexity of $S_{2}(u,v;t)$ and $S_{3}(u,v;t)$
as functions of $t$. Figures 1 and 2 seem to indicate that these quotients are
convex on $0<t<\infty$, which is consistent with the above conjecture.
Our main result given below shows that this is indeed true.
###### Theorem 1.2.
For fixed $u$ and $v$ such that $0\leq u<v<1$, the functions $S_{2}$ and
$S_{3}$ are strictly convex on $0<t<\infty$. In other words, $\frac{\partial
S_{2}}{\partial t}$ and $\frac{\partial S_{3}}{\partial t}$ are negative and
strictly increasing on $0<t<\infty$.
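A finite-difference sanity check of the claimed convexity (our illustration, independent of the proof): the second-order central difference $S(t-h)-2S(t)+S(t+h)$ of a convex function is positive. The sample point, step size, and truncation order below are arbitrary.

```python
import math

def theta2(z, t, nmax=40):
    # theta_2(z | i*pi*t) with nome q = exp(-pi^2 * t)
    q = math.exp(-math.pi**2 * t)
    return 2.0 * sum(q ** ((n + 0.5) ** 2) * math.cos((2 * n + 1) * math.pi * z)
                     for n in range(nmax))

def theta3(z, t, nmax=40):
    q = math.exp(-math.pi**2 * t)
    return 1.0 + 2.0 * sum(q ** (n * n) * math.cos(2 * n * math.pi * z)
                           for n in range(1, nmax))

def S2(u, v, t):
    return theta2(u / 2, t) / theta2(v / 2, t)

def S3(u, v, t):
    return theta3(u / 2, t) / theta3(v / 2, t)

u, v = 0.2, 0.6
t0, h = 0.4, 0.2
# second central differences: positive for a strictly convex function
d2_2 = S2(u, v, t0 - h) - 2 * S2(u, v, t0) + S2(u, v, t0 + h)
d2_3 = S3(u, v, t0 - h) - 2 * S3(u, v, t0) + S3(u, v, t0 + h)
```

Both differences are positive, and the first differences are negative, consistent with $S_{2}$, $S_{3}$ being decreasing and strictly convex.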
## 2\. Preliminary results
In this section, we collect the main ingredients, all of which are subsequently required in the proofs of our results. We then prove certain lemmas that will also be used in the later sections. In Section $3$ we prove Theorem 1.2 for $\frac{\partial S_{2}}{\partial t}$; Section $4$ is devoted to the proof of Theorem 1.2 for $\frac{\partial S_{3}}{\partial t}$.
We begin with some important properties of the Weierstrass elliptic function. For $z\in\mathbb{C}$, let $\wp(z)$ denote the Weierstrass elliptic
function with periods $1$ and $\tau$. It is known [8, p. 376] that $\wp(z)$
maps the period parallelogram $R$ (rectangle in our case) with vertices $0$,
$\omega=1/2$, $\omega+\omega^{\prime}=1/2+\tau/2$ and $\omega^{\prime}=\tau/2$
conformally and one-to-one onto the lower half-plane $\{\omega:\operatorname{Im}\omega<0\}$. Moreover, $\wp(z)$ is real
and decreases from $\infty$ to $-\infty$ as $z$ describes the boundary of $R$
in the counterclockwise direction starting from $0$. It is known that $\wp(z)$
and $\wp^{\prime}(z)$ are respectively even and odd functions of $z$.
Let $g_{2}$ and $g_{3}$ denote the invariants of $\wp(z)$. The following
differential equations for $\wp$ are well-known and can be found in [8, p.
332]:
$\displaystyle{\wp^{\prime}}^{2}(z)$
$\displaystyle=4\wp^{3}(z)-g_{2}\wp(z)-g_{3},$
$\displaystyle\wp^{\prime\prime}(z)$
$\displaystyle=6\wp^{2}(z)-\frac{g_{2}}{2},$
$\displaystyle\wp^{\prime\prime\prime}(z)$
$\displaystyle=12\wp(z)\wp^{\prime}(z).$ (2.1)
The first equation in (2.1) can also be represented in the form [8, p. 331]
${\wp^{\prime}}^{2}(z)=4\left(\wp(z)-e_{1}\right)\left(\wp(z)-e_{2}\right)\left(\wp(z)-e_{3}\right),$
(2.2)
where $e_{1},e_{2}$ and $e_{3}$ are values of the $\wp(z)$ at
$z=1/2,(\tau+1)/2$ and $\tau/2$ respectively [8, p. 330]. As can be easily
seen from (2.2), $\wp^{\prime}(z)$ vanishes at these values of $z$. It is
known that $e_{3}<e_{2}<e_{1}$, that $e_{3}<0$ and that $e_{1}>0$. Again, from
[8, p. 332], we find that
$\displaystyle e_{1}$ $\displaystyle=-e_{2}-e_{3}$ $\displaystyle g_{2}$
$\displaystyle=-4(e_{1}e_{2}+e_{2}e_{3}+e_{3}e_{1})$ $\displaystyle g_{3}$
$\displaystyle=4e_{1}e_{2}e_{3}.$ (2.3)
Further, the quantities $e_{1},e_{2}$ and $e_{3}$ are related to theta
functions by [8, p. 361]
$\displaystyle(e_{1}-e_{3})^{1/2}$ $\displaystyle=\pi\theta_{3}^{2},$
$\displaystyle(e_{1}-e_{2})^{1/2}$ $\displaystyle=\pi\theta_{4}^{2}.$ (2.4)
An important quantity which arises while expressing $\wp(z)$ in terms of theta
functions is the following multiple of weight $2$ Eisenstein series [2, p. 87,
Equation 4.1.7] given by
$c_{0}=c_{0}(q)=-\frac{\pi^{2}}{3}\left(1-24\sum_{n=1}^{\infty}\frac{nq^{n}}{1-q^{n}}\right).$
(2.5)
See [7]. Using [7, Equation 4.4], we have
$e_{3}<c_{0}<e_{2}<e_{1}.$ (2.6)
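The chain (2.6) can be checked numerically. The sketch below is our own illustration: it computes $e_{1},e_{2},e_{3}$ from the theta null values via (2.4), together with the standard companion identity $(e_{2}-e_{3})^{1/2}=\pi\theta_{2}^{2}$ (not quoted in the text) and $e_{1}+e_{2}+e_{3}=0$ from (2.3), obtains $c_{0}$ from the series for $e_{1}-c_{0}$ quoted in (2.17), and verifies that the $e_{i}$ are roots of $4x^{3}-g_{2}x-g_{3}$.

```python
import math

t = 0.5                          # sample point on 0 < t < infinity
q = math.exp(-math.pi**2 * t)    # nome q = e^{i*pi*tau}, tau = i*pi*t
N = 60

# theta null values theta_2, theta_3, theta_4 at z = 0
th2 = 2 * sum(q ** ((n + 0.5) ** 2) for n in range(N))
th3 = 1 + 2 * sum(q ** (n * n) for n in range(1, N))
th4 = 1 + 2 * sum((-1) ** n * q ** (n * n) for n in range(1, N))

pi2 = math.pi ** 2
# e_1, e_2, e_3 from (2.4), (e_2 - e_3)^{1/2} = pi*theta_2^2, and e_1+e_2+e_3 = 0
e1 = pi2 * (th3 ** 4 + th4 ** 4) / 3
e2 = pi2 * (th2 ** 4 - th4 ** 4) / 3
e3 = -pi2 * (th2 ** 4 + th3 ** 4) / 3

# invariants from (2.3)
g2 = -4 * (e1 * e2 + e2 * e3 + e3 * e1)
g3 = 4 * e1 * e2 * e3

# c_0 via e_1 - c_0 = pi^2 + 8*pi^2 * sum q^{2n}/(1+q^{2n})^2, cf. (2.17)
c0 = e1 - (pi2 + 8 * pi2 * sum(q ** (2 * n) / (1 + q ** (2 * n)) ** 2
                               for n in range(1, N)))
```

At this sample point one finds $e_{3}<c_{0}<e_{2}<e_{1}$ with $e_{3}<0<e_{1}$, and each $e_{i}$ annihilates $4x^{3}-g_{2}x-g_{3}$ to rounding accuracy.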
We note that $\theta_{2}(x|i\pi t)$ and $\theta_{3}(x|i\pi t)$ are related to $\theta_{1}(x|i\pi t)$ by the following simple relations:
$\displaystyle\theta_{2}(x|i\pi t)$ $\displaystyle=\theta_{1}(1/2-x|i\pi t),$ $\displaystyle\theta_{3}(x|i\pi t)$ $\displaystyle=q^{1/4}e^{i\pi x}\theta_{1}\left(x+\tfrac{1}{2}(1+i\pi t)\Big{|}i\pi t\right).$ (2.7)
Observe that from [7, Equation (2.9)], we have on $0<x<1/2$,
$2\frac{\theta_{1}^{\prime}(x)}{\theta_{1}(x)}+\frac{\wp^{\prime}(x)}{\wp(x)-c_{0}}>0,$
which, when combined with (2.7), implies that on $0<x<1/2$,
$2\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}+\frac{\wp^{\prime}(x-1/2)}{\wp(x-1/2)-c_{0}}<0.$
(2.8)
Finally, we use the fact that each of the theta functions $\theta_{j}(x/2|i\pi
t)$, $j=1,2,3$ and $4$, satisfies the heat equation [8, Section 13.19]
$\frac{\partial\theta}{\partial t}=\frac{\partial^{2}\theta}{\partial x^{2}}.$
(2.9)
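The heat equation (2.9) is easy to verify by finite differences, since $\theta_{3}(x/2|i\pi t)=1+2\sum_{n\geq 1}e^{-\pi^{2}tn^{2}}\cos(n\pi x)$. The check below is our own illustration; the step sizes and sample point are arbitrary.

```python
import math

def theta3_half(x, t, nmax=40):
    # theta_3(x/2 | i*pi*t) = 1 + 2 * sum_{n>=1} exp(-pi^2 t n^2) cos(n*pi*x)
    return 1.0 + 2.0 * sum(math.exp(-math.pi**2 * t * n * n) * math.cos(n * math.pi * x)
                           for n in range(1, nmax))

def dt(f, x, t, h=1e-4):
    # central difference in t
    return (f(x, t + h) - f(x, t - h)) / (2 * h)

def dxx(f, x, t, h=1e-3):
    # second central difference in x
    return (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / (h * h)

x0, t0 = 0.3, 0.5
lhs = dt(theta3_half, x0, t0)    # d theta / d t
rhs = dxx(theta3_half, x0, t0)   # d^2 theta / d x^2
```

The two sides agree to the accuracy of the finite-difference stencils, and each is of order $10^{-2}$ rather than zero, so the check is non-trivial.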
We now prove an inequality which will be instrumental in our proof of
monotonicity of $S_{2}$ on $0<t<\infty$.
###### Lemma 2.1.
Let $0<q<1$. Let $e_{1},g_{2},g_{3}\mbox{ and }c_{0}$ be defined as above.
Then the following inequality holds:
$e_{1}^{2}(g_{2}-12c_{0}^{2})+e_{1}(6g_{3}+4g_{2}c_{0})+\left(\frac{g_{2}^{2}}{4}+g_{2}c_{0}^{2}+6g_{3}c_{0}\right)<0.$
(2.10)
* Proof.
Let $T(q)$ denote the left-hand side of (2.10). We view $T(q)$ as a quadratic
function in $c_{0}$ rather than that in $e_{1}$, i.e.,
$T(q)=(g_{2}-12e_{1}^{2})c_{0}^{2}+(6g_{3}+4g_{2}e_{1})c_{0}+\left(\frac{g_{2}^{2}}{4}+g_{2}e_{1}^{2}+6g_{3}e_{1}\right).$
(2.11)
Employing (2.3) in (2.11), we see that
$\displaystyle T(q)$
$\displaystyle=-4(2e_{2}^{2}+5e_{2}e_{3}+2e_{3}^{2})c_{0}^{2}-8(2e_{2}^{3}+7e_{2}^{2}e_{3}+7e_{2}e_{3}^{2}+2e_{3}^{3})c_{0}$
$\displaystyle\quad+(8e_{2}^{4}+44e_{2}^{3}e_{3}+76e_{2}^{2}e_{3}^{2}+44e_{2}e_{3}^{3}+8e_{3}^{4})$
$\displaystyle=-4(2e_{2}+e_{3})(e_{2}+2e_{3})(c_{0}^{2}+2(e_{2}+e_{3})c_{0}-(e_{2}^{2}+3e_{2}e_{3}+e_{3}^{2})).$
(2.12)
The quadratic in $c_{0}$ in the last expression of (2.12) has discriminant
$4(e_{2}+e_{3})^{2}+4(e_{2}^{2}+3e_{2}e_{3}+e_{3}^{2})=4(2e_{2}+e_{3})(e_{2}+2e_{3})=4(e_{1}-e_{2})(e_{1}-e_{3}),$
where we utilized (2.3) in the last equality. Hence,
$\displaystyle T(q)=$
$\displaystyle-4(e_{1}-e_{2})(e_{1}-e_{3})\left(c_{0}-\left(-(e_{2}+e_{3})+\pi^{2}\theta_{3}^{2}\theta_{4}^{2}\right)\right)\left(c_{0}-\left(-(e_{2}+e_{3})-\pi^{2}\theta_{3}^{2}\theta_{4}^{2}\right)\right)$
$\displaystyle\quad=-4(e_{1}-e_{2})(e_{1}-e_{3})(c_{0}-e_{1}-\pi^{2}\theta_{3}^{2}\theta_{4}^{2})(c_{0}-e_{1}+\pi^{2}\theta_{3}^{2}\theta_{4}^{2}),$
(2.13)
where we invoked (2.3) in the first equality and (2.4) in the second. Using (2.6) and (2.13), it suffices to show that $e_{1}-c_{0}>\pi^{2}\theta_{3}^{2}\theta_{4}^{2}$. To that end, observe that
using [2, p. 15, Equation (1.3.32)], we have
$\theta_{3}\theta_{4}=\theta_{4}^{2}(0|2\tau).$ (2.14)
Also, from [10, Equation 4],
$\theta_{4}^{4}=1+8\sum_{n=1}^{\infty}\frac{(-1)^{n}q^{n}}{(1+q^{n})^{2}}.$
(2.15)
Using (2.14) and (2.15), we deduce that
$\pi^{2}\theta_{3}^{2}\theta_{4}^{2}=\pi^{2}+8\pi^{2}\sum_{n=1}^{\infty}\frac{(-1)^{n}q^{2n}}{(1+q^{2n})^{2}}.$
(2.16)
But from [7, Equation 4.1],
$e_{1}-c_{0}=\pi^{2}+8\pi^{2}\sum_{n=1}^{\infty}\frac{q^{2n}}{(1+q^{2n})^{2}}.$
(2.17)
Thus (2.16) and (2.17), along with the fact that $0<q<1$, imply the inequality $e_{1}-c_{0}>\pi^{2}\theta_{3}^{2}\theta_{4}^{2}$. This proves (2.10). ∎
###### Lemma 2.2.
Let $0<q<1$. Let $e_{2},g_{2},g_{3}\mbox{ and }c_{0}$ be defined as above.
Then the following inequality holds:
$e_{2}^{2}(g_{2}-12c_{0}^{2})+e_{2}(6g_{3}+4g_{2}c_{0})+\left(\frac{g_{2}^{2}}{4}+g_{2}c_{0}^{2}+6g_{3}c_{0}\right)>0.$
(2.18)
* Proof.
Let $U(q)$ denote the left-hand side of (2.18). From (2.3) and (2.6),
$\displaystyle U(q)$
$\displaystyle=(g_{2}-12e_{2}^{2})c_{0}^{2}+(6g_{3}+4g_{2}e_{2})c_{0}+\left(\frac{g_{2}^{2}}{4}+g_{2}e_{2}^{2}+6g_{3}e_{2}\right)$
$\displaystyle=-4(e_{2}-e_{3})(2e_{2}+e_{3})(c_{0}^{2}-2e_{2}c_{0}-(e_{2}^{2}-e_{2}e_{3}-e_{3}^{2}))$
$\displaystyle=4(e_{1}-e_{2})(e_{2}-e_{3})((c_{0}-e_{2})^{2}+(e_{1}-e_{2})(e_{2}-e_{3}))$
$\displaystyle>0.$
∎
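Lemmas 2.1 and 2.2 can be spot-checked numerically (our illustration only). As before, $e_{1},e_{2},e_{3}$ come from the theta null values via (2.4) together with the standard companion identity $(e_{2}-e_{3})^{1/2}=\pi\theta_{2}^{2}$, and $c_{0}$ from the series for $e_{1}-c_{0}$ in (2.17).

```python
import math

def lemma_values(t, N=60):
    """Evaluate the left-hand sides of (2.10) and (2.18) at q = exp(-pi^2 t)."""
    q = math.exp(-math.pi ** 2 * t)
    # theta null values
    th2 = 2 * sum(q ** ((n + 0.5) ** 2) for n in range(N))
    th3 = 1 + 2 * sum(q ** (n * n) for n in range(1, N))
    th4 = 1 + 2 * sum((-1) ** n * q ** (n * n) for n in range(1, N))
    pi2 = math.pi ** 2
    # e_1, e_2, e_3 and the invariants g_2, g_3 via (2.3)
    e1 = pi2 * (th3 ** 4 + th4 ** 4) / 3
    e2 = pi2 * (th2 ** 4 - th4 ** 4) / 3
    e3 = -pi2 * (th2 ** 4 + th3 ** 4) / 3
    g2 = -4 * (e1 * e2 + e2 * e3 + e3 * e1)
    g3 = 4 * e1 * e2 * e3
    # c_0 from e_1 - c_0 = pi^2 + 8*pi^2 sum q^{2n}/(1+q^{2n})^2, cf. (2.17)
    c0 = e1 - (pi2 + 8 * pi2 * sum(q ** (2 * n) / (1 + q ** (2 * n)) ** 2
                                   for n in range(1, N)))

    def quad(e):
        # common quadratic of Lemmas 2.1 and 2.2, evaluated at e
        return (e * e * (g2 - 12 * c0 * c0) + e * (6 * g3 + 4 * g2 * c0)
                + (g2 * g2 / 4 + g2 * c0 * c0 + 6 * g3 * c0))

    return quad(e1), quad(e2)   # T(q), U(q)

T_q, U_q = lemma_values(0.5)
```

At the sampled points $T(q)<0$ and $U(q)>0$, as the two lemmas require.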
## 3\. Proof of monotonicity of $\displaystyle\frac{\partial S_{2}}{\partial
t}$
From [7, Theorem 1], since $S_{2}(u,v;t)$ is decreasing on $0<t<\infty$, we
see at once that $\frac{\partial S_{2}}{\partial t}<0$. Let $L_{2}:=\log
S_{2}(u,v;t)$. Observe that
$\frac{\partial S_{2}}{\partial t}=S_{2}\frac{\partial L_{2}}{\partial t}.$
(3.1)
In order to show that $\frac{\partial S_{2}}{\partial t}$ is increasing on
$0<t<\infty$, it suffices to show that $\frac{\partial^{2}S_{2}}{\partial
t^{2}}>0$. Now from (3.1),
$\frac{\partial^{2}S_{2}}{\partial t^{2}}=\frac{\partial}{\partial t}\left(S_{2}\frac{\partial L_{2}}{\partial t}\right)=S_{2}\left(\frac{\partial^{2}L_{2}}{\partial t^{2}}+\left(\frac{\partial L_{2}}{\partial t}\right)^{2}\right).$
We claim that $\frac{\partial^{2}L_{2}}{\partial t^{2}}>0$ whence we will be
done. Using (2.9) twice, we see that
$\frac{\partial^{2}}{\partial t^{2}}\theta_{2}(x/2|i\pi
t)=\frac{\partial}{\partial t}\left(\frac{\partial^{2}}{\partial
x^{2}}\theta_{2}(x/2|i\pi t)\right)=\frac{\partial^{2}}{\partial
x^{2}}\left(\frac{\partial}{\partial t}\theta_{2}(x/2|i\pi
t)\right)=\frac{\partial^{4}}{\partial x^{4}}\theta_{2}(x/2|i\pi t).$
Hence,
$\displaystyle\frac{\partial^{2}L_{2}}{\partial t^{2}}$
$\displaystyle=\frac{\partial}{\partial t}\left(\frac{\frac{\partial}{\partial
t}\theta_{2}(u/2|i\pi t)}{{\theta_{2}(u/2|i\pi
t)}}-\frac{\frac{\partial}{\partial t}\theta_{2}(v/2|i\pi
t)}{{\theta_{2}(v/2|i\pi t)}}\right)$
$\displaystyle=\frac{\theta_{2}^{(4)}(u/2|i\pi t)}{\theta_{2}(u/2|i\pi
t)}-\frac{\theta_{2}^{(4)}(v/2|i\pi t)}{\theta_{2}(v/2|i\pi t)}$
$\displaystyle\quad-\bigg{(}\bigg{(}\frac{\theta_{2}^{\prime\prime}(u/2|i\pi
t)}{\theta_{2}(u/2|i\pi
t)}\bigg{)}^{2}-\bigg{(}\frac{\theta_{2}^{\prime\prime}(v/2|i\pi
t)}{\theta_{2}(v/2|i\pi t)}\bigg{)}^{2}\bigg{)}.$
Thus it suffices to show that the function $\theta_{2}^{(4)}(x|i\pi
t)/\theta_{2}(x|i\pi t)-\left(\theta_{2}^{\prime\prime}(x|i\pi
t)/\theta_{2}(x|i\pi t)\right)^{2}$ decreases on $0<x<1/2$. From now on, we
fix $t$ where $0<t<\infty$ and henceforth suppress the dependence of
$\theta_{2}(x/2|i\pi t)$ on $t$. From (2.7) and the relation [7, Equation (2.6)]
$\left(\frac{\theta_{1}^{\prime}(x)}{\theta_{1}(x)}\right)^{\prime}=-\left(\wp(x)-c_{0}\right),$
(3.2)
we find that
$\left(\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\right)^{\prime}=-\left(\wp\left(x-1/2\right)-c_{0}\right),$
(3.3)
since $\wp(x)$ is an even function of $x$. Then by a repeated application of
quotient rule for derivatives and (3.3), it is easy to see that the following
are true:
$\displaystyle\frac{\theta_{2}^{\prime\prime}(x)}{\theta_{2}(x)}$
$\displaystyle=\left(\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\right)^{2}-\left(\wp\left(x-1/2\right)-c_{0}\right),$
$\displaystyle\frac{\theta_{2}^{\prime\prime\prime}(x)}{\theta_{2}(x)}$
$\displaystyle=\left(\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\right)^{3}-3\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\left(\wp\left(x-1/2\right)-c_{0}\right)-\wp^{\prime}\left(x-1/2\right),$
$\displaystyle\frac{\theta_{2}^{(4)}(x)}{\theta_{2}(x)}$
$\displaystyle=\left(\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\right)^{4}-6\left(\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\right)^{2}\left(\wp\left(x-1/2\right)-c_{0}\right)-4\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\wp^{\prime}\left(x-1/2\right)$
$\displaystyle\quad+3\left(\wp\left(x-1/2\right)-c_{0}\right)^{2}-\wp^{\prime\prime}\left(x-1/2\right),$
from which it easily follows that
$\displaystyle\frac{\theta_{2}^{(4)}(x)}{\theta_{2}(x)}-\left(\frac{\theta_{2}^{\prime\prime}(x)}{\theta_{2}(x)}\right)^{2}$
$\displaystyle=-4\left(\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\right)^{2}\left(\wp\left(x-1/2\right)-c_{0}\right)+2\left(\wp\left(x-1/2\right)-c_{0}\right)^{2}$
$\displaystyle\quad-4\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\wp^{\prime}\left(x-1/2\right)-\wp^{\prime\prime}\left(x-1/2\right).$
Again using (3.3), we find that
$\displaystyle\frac{d}{dx}\left(\frac{\theta_{2}^{(4)}(x)}{\theta_{2}(x)}-\left(\frac{\theta_{2}^{\prime\prime}(x)}{\theta_{2}(x)}\right)^{2}\right)$
$\displaystyle=8\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\left(\wp\left(x-1/2\right)-c_{0}\right)^{2}-4\left(\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\right)^{2}\wp^{\prime}\left(x-1/2\right)$
$\displaystyle\quad+8\left(\wp\left(x-1/2\right)-c_{0}\right)\wp^{\prime}\left(x-1/2\right)-4\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\wp^{\prime\prime}\left(x-1/2\right)$
$\displaystyle\quad-\wp^{\prime\prime\prime}\left(x-1/2\right).$
From the monotonicity of $\wp$ along the boundary of the rectangular lattice
as mentioned in Section 2, in the case at hand, we have in particular that
$\wp(x)$ is strictly decreasing on $0<x<1/2$. Hence $\wp(1/2-x)$ is strictly
increasing on $0<x<1/2$. Since $\wp(1/2-x)=\wp(x-1/2)$, this implies that
$\wp^{\prime}(x-1/2)>0$ on $0<x<1/2$. Define the function $F_{2}(x)$ as
$\displaystyle F_{2}(x)$
$\displaystyle:=\frac{1}{\wp^{\prime}(x-1/2)}\frac{d}{dx}\left(\frac{\theta_{2}^{(4)}(x)}{\theta_{2}(x)}-\left(\frac{\theta_{2}^{\prime\prime}(x)}{\theta_{2}(x)}\right)^{2}\right)$
$\displaystyle=8\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\frac{\left(\wp\left(x-1/2\right)-c_{0}\right)^{2}}{\wp^{\prime}\left(x-1/2\right)}-4\left(\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\right)^{2}+8\left(\wp\left(x-1/2\right)-c_{0}\right)$
$\displaystyle\quad-4\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\frac{\wp^{\prime\prime}\left(x-1/2\right)}{\wp^{\prime}\left(x-1/2\right)}-\frac{\wp^{\prime\prime\prime}\left(x-1/2\right)}{\wp^{\prime}\left(x-1/2\right)}.$
(3.4)
It suffices to prove that $F_{2}(x)<0$. We prove this by showing that $F_{2}(1/2)=0$ and $F_{2}^{\prime}(x)>0$: the mean value theorem then implies that for any $x\in(0,1/2)$, $F_{2}(x)-F_{2}(1/2)=F_{2}^{\prime}(c)(x-1/2)$ for some $c\in(x,1/2)$, and hence $F_{2}(x)=F_{2}^{\prime}(c)(x-1/2)<0$. We begin by showing that $F_{2}(1/2)=0$. We require the following series expansions in
order to establish this. First, from [8, p. 358, Section 13.19],
$\displaystyle\frac{\theta_{2}^{\prime}(z)}{\theta_{2}(z)}$
$\displaystyle=-\pi\tan\pi
z+4\pi\sum_{n=1}^{\infty}(-1)^{n}\frac{q^{2n}}{1-q^{2n}}\sin 2n\pi z$
$\displaystyle=\left(\frac{1}{z-1/2}-\frac{\pi^{2}}{3}(z-1/2)-\cdots\right)+4\pi\sum_{n=1}^{\infty}(-1)^{n}\frac{q^{2n}}{1-q^{2n}}\sin
2n\pi z.$ (3.5)
Further, the Laurent series expansions of $\wp(z-1/2)$ and
$\wp^{\prime}(z-1/2)$ around $z=1/2$ are as follows [8, p. 330, Section
13.12].
$\displaystyle\wp(z-1/2)$
$\displaystyle=\frac{1}{(z-1/2)^{2}}+\frac{g_{2}(z-1/2)^{2}}{2^{2}\cdot 5}+\frac{g_{3}(z-1/2)^{4}}{2^{2}\cdot 7}+\frac{g_{2}^{2}(z-1/2)^{6}}{2^{4}\cdot 3\cdot 5^{2}}+\cdots,$
$\displaystyle\wp^{\prime}(z-1/2)$
$\displaystyle=\frac{-2}{(z-1/2)^{3}}+\frac{g_{2}(z-1/2)}{10}+\frac{g_{3}(z-1/2)^{3}}{7}+\frac{g_{2}^{2}(z-1/2)^{5}}{2^{3}\cdot 5^{2}}+\cdots.$
(3.6)
Using (3.5), (3.6), the third differential equation in (2.1) and simplifying, we find that $F_{2}(1/2)=0$. Differentiating both sides of (3.4) with respect to $x$, using (2.1), (3.3) and simplifying, we get
$\displaystyle\frac{F_{2}^{{}^{\prime}}(x)}{4}$
$\displaystyle=\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}\cdot\frac{\wp^{2}(x-1/2)\left(g_{2}-12c_{0}^{2}\right)+\wp\left(x-1/2\right)\left(6g_{3}+4g_{2}c_{0}\right)+\left(6g_{3}c_{0}+g_{2}c_{0}^{2}+\frac{g_{2}^{2}}{4}\right)}{{\wp^{\prime}}^{2}(x-1/2)}$
$\displaystyle\quad+\frac{\wp\left(x-1/2\right)\left(g_{2}/2-6c_{0}^{2}\right)+g_{3}+2c_{0}^{3}+g_{2}c_{0}/2}{\wp^{\prime}(x-1/2)}.$
(3.7)
Now we show that $F_{2}^{\prime}(x)>0$. Let
$\displaystyle A_{1}(x)$
$\displaystyle:=\wp(x-1/2)\left(g_{2}/2-6c_{0}^{2}\right)+g_{3}+2c_{0}^{3}+g_{2}c_{0}/2,$
$\displaystyle A_{2}(x)$
$\displaystyle:=\wp^{2}(x-1/2)\left(g_{2}-12c_{0}^{2}\right)+\wp\left(x-1/2\right)\left(6g_{3}+4g_{2}c_{0}\right)+\left(6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4\right).$
(3.8)
By Remark 1 in [7], we have
$e_{1}<\frac{-(2g_{3}+4c_{0}^{3}+g_{2}c_{0})}{g_{2}-12c_{0}^{2}}.$ (3.9)
This along with the fact that $\wp(x-1/2)$ is strictly increasing on $0<x<1/2$
from $e_{1}$ to $\infty$ implies that $A_{1}$ has a unique zero, say $a_{1}$
in $(0,1/2)$. Now Lemma 2 from [7] implies that $g_{2}-12c_{0}^{2}>0$. This
along with the fact that $\wp\left(x-1/2\right)\rightarrow\infty$ as
$x\rightarrow{\frac{1}{2}}^{-}$ implies that $A_{2}(x)\to\infty$ as
$x\rightarrow{\frac{1}{2}}^{-}$. Using the fact that
$\wp(1/2)=\wp(-1/2)=e_{1}$ and Lemma 2.1, we have $A_{2}(0)<0$. Since $A_{2}$
is quadratic in $\wp(x-1/2)$ and $\wp(x-1/2)$ is strictly increasing on
$0<x<1/2$, there exists a unique value $a_{2}$ of $x$ in $(0,1/2)$ such that
$A_{2}(a_{2})=0$. Let $P:=\wp(a_{2}-1/2)$. Note that $a_{2}$ is not a double
root of $A_{2}$. Next, $P$ has two possibilities, say,
$P=P_{1}:=\frac{-6g_{3}-4g_{2}c_{0}-\sqrt{\Delta}}{2(g_{2}-12c_{0}^{2})}\mbox{
or }P=P_{2}:=\frac{-6g_{3}-4g_{2}c_{0}+\sqrt{\Delta}}{2(g_{2}-12c_{0}^{2})},$
where
$\Delta:=(6g_{3}+4g_{2}c_{0})^{2}-4(g_{2}-12c_{0}^{2})(6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4)>0,$
(3.10)
the last inequality coming from the above discussion. We now claim that
$P=P_{2}$. Now
$P_{2}>\frac{-6g_{3}-4g_{2}c_{0}}{2(g_{2}-12c_{0}^{2})}$ (3.11)
and
$\frac{-6g_{3}-4g_{2}c_{0}}{2(g_{2}-12c_{0}^{2})}+\frac{2g_{3}+4c_{0}^{3}+g_{2}c_{0}}{g_{2}-12c_{0}^{2}}=\frac{-g_{3}-g_{2}c_{0}/2-2c_{0}^{3}}{(g_{2}-12c_{0}^{2})}+\frac{6c_{0}^{3}-g_{2}c_{0}/2}{g_{2}-12c_{0}^{2}}>\frac{e_{1}-c_{0}}{2}>0,$
(3.12)
where we utilized (3.9) in the penultimate step and (2.6) in the ultimate
step. Therefore, by (3.9), (3.11) and (3.12),
$e_{1}<\frac{-(2g_{3}+4c_{0}^{3}+g_{2}c_{0})}{g_{2}-12c_{0}^{2}}<P_{2}.$
(3.13)
This shows that $\wp(x-1/2)$ attains the value $P_{2}$ for a unique $x$ in the
interval $(0,1/2)$. This combined with the facts that $P_{1}<P_{2}$ and
$A_{2}$ has a unique root in $0<x<1/2$ implies that $P=P_{2}$.
Remark 1. The above discussion implies that $P_{1}<e_{1}<P_{2}$. As the real
period of $\wp$ is $1$, this tells us that there is no real number $x$ such
that $\wp(x-1/2)=P_{1}$.
Using $P=P_{2}$ and (3.13), it is clear that $0<a_{1}<a_{2}<1/2$. Figure 3 shows the graphs of $10A_{1}(x)$ and $A_{2}(x)$. (The graph of $A_{1}(x)$ is scaled by a factor of $10$ for a better view; this does not affect the fact that $0<a_{1}<a_{2}<1/2$.)
Define
$\displaystyle G_{2}(x)$
$\displaystyle:=\frac{F_{2}^{\prime}(x){\wp^{\prime}}^{2}(x-1/2)}{4A_{2}(x)}$
$\displaystyle=\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}+\frac{\wp^{\prime}(x-1/2)\left(\wp\left(x-1/2\right)+\frac{2g_{3}+4c_{0}^{3}+g_{2}c_{0}}{g_{2}-12c_{0}^{2}}\right)}{2\left(\wp^{2}(x-1/2)+\wp(x-1/2)\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}\right)}.$
(3.14)
Next, we differentiate the extreme sides of (3.14) with respect to $x$ and use (3.3), so that $\theta_{2}^{\prime}(x)/\theta_{2}(x)$ is eliminated from the right-hand side of (3.14) and everything is expressed in terms of $\wp$ and $\wp^{\prime}$.
This along with the second differential equation in (2) gives
$\displaystyle G_{2}^{\prime}(x)$
$\displaystyle=-(\wp(x-1/2)-c_{0})+\frac{(6\wp^{2}(x-1/2)-\frac{g_{2}}{2})\left(\wp(x-1/2)+\frac{2g_{3}+4c_{0}^{3}+g_{2}c_{0}}{g_{2}-12c_{0}^{2}}\right)}{2\left(\wp^{2}(x-1/2)+\wp(x-1/2)\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}\right)}$
$\displaystyle\quad+\frac{{\wp^{\prime}}^{2}(x-1/2)}{2\left(\wp^{2}(x-1/2)+\wp(x-1/2)\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}\right)}$
$\displaystyle\quad-\frac{{\wp^{\prime}}^{2}(x-1/2)\left(\wp(x-1/2)+\frac{2g_{3}+4c_{0}^{3}+g_{2}c_{0}}{g_{2}-12c_{0}^{2}}\right)\left(2\wp(x-1/2)+\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}\right)}{2\left(\wp^{2}(x-1/2)+\wp(x-1/2)\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}\right)^{2}}.$
(3.15)
Simplifying the first three terms of (3.15), we obtain
$\displaystyle G_{2}^{\prime}(x)$
$\displaystyle=\frac{{\wp^{\prime}}^{2}(x-1/2)}{\left(\wp^{2}(x-1/2)+\wp(x-1/2)\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}\right)}$
$\displaystyle\quad-\frac{{\wp^{\prime}}^{2}(x-1/2)\left(\wp(x-1/2)+\frac{2g_{3}+4c_{0}^{3}+g_{2}c_{0}}{g_{2}-12c_{0}^{2}}\right)\left(2\wp(x-1/2)+\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}\right)}{2\left(\wp^{2}(x-1/2)+\wp(x-1/2)\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}\right)^{2}}.$
(3.16)
Consider three cases: $0<x<a_{1}$, $a_{1}\leq x\leq a_{2}$ and $a_{2}<x<1/2$.
Case 1: $0<x<a_{1}$.
Then, $A_{1}(x)<0$ and $A_{2}(x)<0$. We show that $G_{2}(x)<0$. Note that from
(2.2), (3.14), (3.9) and Lemma 2.1, it readily follows that $G_{2}(0)=0$. Since
$A_{1}(x)<0$, $A_{2}(x)<0$ and $g_{2}-12c_{0}^{2}>0$, we have
$\displaystyle\wp\left(x-1/2\right)+\frac{2g_{3}+4c_{0}^{3}+g_{2}c_{0}}{g_{2}-12c_{0}^{2}}<0,$
(3.17)
$\displaystyle\wp^{2}(x-1/2)+\wp(x-1/2)\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}<0.$
(3.18)
From (3.17) and (3.12), we see that
$\displaystyle 2\wp(x-1/2)+\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}<0.$
(3.19)
Therefore, (3.17), (3.18) and (3.19) imply that $G_{2}^{\prime}(x)<0$. By the
mean value theorem, for any $x\in(0,a_{1})$, $G_{2}(x)=xG_{2}^{\prime}(d)$ for
some $d\in(0,x)$. Hence $G_{2}(x)<0$. Thus $F_{2}^{\prime}(x)>0$ in
$0<x<a_{1}$.
Case 2: $a_{1}\leq x\leq a_{2}$.
Note that $A_{1}(a_{1})=0$, $A_{2}(a_{1})<0$, $A_{1}(a_{2})>0$ and
$A_{2}(a_{2})=0$. Also, $A_{1}(x)>0$ and $A_{2}(x)<0$ when $a_{1}<x<a_{2}$.
Since $\wp(x-1/2)$ is strictly increasing on $0<x<1/2$, we have
$\wp^{\prime}(x-1/2)>0$ and $\wp(x-1/2)-c_{0}>e_{1}-c_{0}>0$, where we invoked
(2.6) in the last step. This along with (2.8) shows that
$\theta_{2}^{\prime}(x)/\theta_{2}(x)<0$ on $0<x<1/2$. Using all of the above
facts and (3), we observe that $F_{2}^{\prime}(x)>0$ on $a_{1}\leq x\leq
a_{2}$.
Case 3: $a_{2}<x<1/2$. Since $A_{1}(x)>0$, $A_{2}(x)>0$ and
$g_{2}-12c_{0}^{2}>0$, we have
$\displaystyle\wp\left(x-1/2\right)+\frac{2g_{3}+4c_{0}^{3}+g_{2}c_{0}}{g_{2}-12c_{0}^{2}}>0,$
(3.20)
$\displaystyle\wp^{2}(x-1/2)+\wp(x-1/2)\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}>0.$
(3.21)
From (3.14), as $x\to{\frac{1}{2}}^{-}$,
$\displaystyle G_{2}(x)=\frac{\theta_{2}^{\prime}(x)}{\theta_{2}(x)}+\frac{\wp^{\prime}(x-1/2)}{2\wp(x-1/2)}\left(1+O\left(\frac{1}{\wp(x-1/2)}\right)\right).$
Using the above asymptotic expansion, it is easy to check that
$G_{2}(1/2)=0$. Next we show that $G_{2}^{\prime}(x)<0$. From (3.16),
$\displaystyle
G_{2}^{\prime}(x)=\frac{{\wp^{\prime}}^{2}(x-1/2)(1-Q(x))}{\left(\wp^{2}(x-1/2)+\wp(x-1/2)\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}\right)},$
where
$Q(x):=\frac{\left(\wp(x-1/2)+\frac{2g_{3}+4c_{0}^{3}+g_{2}c_{0}}{g_{2}-12c_{0}^{2}}\right)\left(2\wp(x-1/2)+\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}\right)}{2\left(\wp^{2}(x-1/2)+\wp(x-1/2)\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}\right)}.$
(3.22)
We claim that $Q(x)>1$. Note that the denominator of $Q(x)$ can be simplified
as follows:
$\displaystyle 2\left(\wp^{2}(x-1/2)+\wp(x-1/2)\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}\right)$
$\displaystyle=\left(2\wp(x-1/2)+\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}\right)\left(\wp(x-1/2)+\frac{6g_{3}+4g_{2}c_{0}}{2(g_{2}-12c_{0}^{2})}\right)$
$\displaystyle\quad+\left(2\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}-\frac{(6g_{3}+4g_{2}c_{0})^{2}}{2(g_{2}-12c_{0}^{2})^{2}}\right).$
(3.23)
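The algebra behind (3.23) is a short completing-the-product identity. Writing $p=\wp(x-1/2)$, $\beta=\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}$ and $\gamma=\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}$ for brevity, it reads

```latex
(2p+\beta)\left(p+\tfrac{\beta}{2}\right)+\left(2\gamma-\tfrac{\beta^{2}}{2}\right)
  = 2p^{2}+2\beta p+\tfrac{\beta^{2}}{2}+2\gamma-\tfrac{\beta^{2}}{2}
  = 2\left(p^{2}+\beta p+\gamma\right),
```

which is (3.23) read from right to left.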
Now
$\displaystyle 2\wp(x-1/2)+\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}$
$\displaystyle>2\wp(a_{2}-1/2)+\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}$
$\displaystyle=2P+\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}$
$\displaystyle=\frac{\sqrt{\Delta}}{(g_{2}-12c_{0}^{2})}$ $\displaystyle>0.$
(3.24)
From (3.12), we have
$\wp(x-1/2)+\frac{2g_{3}+4c_{0}^{3}+g_{2}c_{0}}{g_{2}-12c_{0}^{2}}>\wp(x-1/2)+\frac{6g_{3}+4g_{2}c_{0}}{2(g_{2}-12c_{0}^{2})}.$
(3.25)
By (3.10), the last term on the right-hand side of (3.23) is negative. Hence,
(3.23), (3.24), (3.25) and (3.21) imply that $Q(x)>1$. Therefore
$G_{2}^{\prime}(x)<0$. By the mean value theorem, for any $x\in(a_{2},1/2)$,
$G_{2}(x)-G_{2}(1/2)=G_{2}^{\prime}(b)(x-1/2)$ for some $b\in(x,1/2)$. Hence
$G_{2}(x)>0$. Since $A_{2}(x)>0$, this implies that
$F_{2}^{{}^{\prime}}(x)>0$.
From the above three cases, we conclude that $F_{2}^{{}^{\prime}}(x)>0$ in
$0<x<1/2$. Since $F_{2}(1/2)=0$, by another application of the mean value
theorem, we conclude that $F_{2}(x)<0$ in $0<x<1/2$. This completes the proof.
Figure 4 shows the graph of $G_{2}(x)$ on $0<x<1/2$.
## 4\. Proof of monotonicity of $\displaystyle\frac{\partial S_{3}}{\partial t}$
The method for proving monotonicity of $\frac{\partial S_{3}}{\partial t}$ is
similar to that of $\frac{\partial S_{2}}{\partial t}$ and so we will be
brief. From [7, Theorem 1], since $S_{3}(u,v;t)$ is decreasing on
$0<t<\infty$, we see at once that $\frac{\partial S_{3}}{\partial t}<0$. Let
$L_{3}:=\log S_{3}(u,v;t)$. Observe that
$\frac{\partial S_{3}}{\partial t}=S_{3}\frac{\partial L_{3}}{\partial t}.$
It suffices to show that $\frac{\partial^{2}S_{3}}{\partial t^{2}}>0$. Now,
$\frac{\partial^{2}S_{3}}{\partial t^{2}}=\frac{\partial}{\partial t}\left(S_{3}\frac{\partial L_{3}}{\partial t}\right)=S_{3}\left(\frac{\partial^{2}L_{3}}{\partial t^{2}}+\left(\frac{\partial L_{3}}{\partial t}\right)^{2}\right).$
We show that $\frac{\partial^{2}L_{3}}{\partial t^{2}}>0$. Observe that using
(2.9) twice, we have $\frac{\partial^{2}}{\partial t^{2}}\theta_{3}(x/2|i\pi
t)=\frac{\partial^{4}}{\partial x^{4}}\theta_{3}(x/2|i\pi t)$. It suffices to
show that the function $\theta_{3}^{(4)}(x|i\pi t)/\theta_{3}(x|i\pi
t)-\left(\theta_{3}^{\prime\prime}(x|i\pi t)/\theta_{3}(x|i\pi t)\right)^{2}$
decreases on $0<x<1/2$. Fix $t$ where $0<t<\infty$. Using (2) and (3.2), we
find that
$\left(\frac{\theta_{3}^{\prime}(x)}{\theta_{3}(x)}\right)^{\prime}=-\left(\wp\left(x+\frac{\tau-1}{2}\right)-c_{0}\right).$
(4.1)
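As a numerical aside (not in the original), the heat-equation identity from (2.9) used above, $\frac{\partial^{2}}{\partial t^{2}}\theta_{3}(x/2|i\pi t)=\frac{\partial^{4}}{\partial x^{4}}\theta_{3}(x/2|i\pi t)$, can be sanity-checked with mpmath. The sketch below assumes the standard nome convention $q=e^{i\pi\tau}$, so $\tau=i\pi t$ gives $q=e^{-\pi^{2}t}$; under this convention the one-derivative identity carries a factor $\pi^{2}$, which the paper's normalization in (2.9) (not shown in this excerpt) may absorb:

```python
import mpmath as mp

mp.mp.dps = 30

def theta3(x, t):
    # theta_3(x/2 | tau = i*pi*t); with the standard nome q = exp(i*pi*tau)
    # this means q = exp(-pi^2 * t)
    return mp.jtheta(3, x / 2, mp.exp(-mp.pi**2 * t))

x0, t0 = mp.mpf('0.3'), mp.mpf('0.2')
lhs = mp.diff(lambda s: theta3(x0, s), t0)                # d/dt
rhs = mp.pi**2 * mp.diff(lambda y: theta3(y, t0), x0, 2)  # pi^2 * d^2/dx^2
# lhs and rhs agree to high precision; iterating the identity gives
# d^2/dt^2 theta_3 = pi^4 d^4/dx^4 theta_3 in this convention
```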
Observe that
$\displaystyle\frac{\theta_{3}^{(4)}(x)}{\theta_{3}(x)}-\left(\frac{\theta_{3}^{\prime\prime}(x)}{\theta_{3}(x)}\right)^{2}$
$\displaystyle=-4\left(\frac{\theta_{3}^{\prime}(x)}{\theta_{3}(x)}\right)^{2}\left(\wp\left(x+\frac{\tau-1}{2}\right)-c_{0}\right)+2\left(\wp\left(x+\frac{\tau-1}{2}\right)-c_{0}\right)^{2}$
$\displaystyle\quad-4\frac{\theta_{3}^{\prime}(x)}{\theta_{3}(x)}\wp^{\prime}\left(x+\frac{\tau-1}{2}\right)-\wp^{\prime\prime}\left(x+\frac{\tau-1}{2}\right).$
Using (4.1), we find that
$\displaystyle\frac{d}{dx}\left(\frac{\theta_{3}^{(4)}(x)}{\theta_{3}(x)}-\left(\frac{\theta_{3}^{\prime\prime}(x)}{\theta_{3}(x)}\right)^{2}\right)$
$\displaystyle=8\frac{\theta_{3}^{\prime}(x)}{\theta_{3}(x)}\left(\wp\left(x+\frac{\tau-1}{2}\right)-c_{0}\right)^{2}-4\left(\frac{\theta_{3}^{\prime}(x)}{\theta_{3}(x)}\right)^{2}\wp^{\prime}\left(x+\frac{\tau-1}{2}\right)$
$\displaystyle\quad+8\left(\wp\left(x+\frac{\tau-1}{2}\right)-c_{0}\right)\wp^{\prime}\left(x+\frac{\tau-1}{2}\right)$
$\displaystyle\quad-4\frac{\theta_{3}^{\prime}(x)}{\theta_{3}(x)}\wp^{\prime\prime}\left(x+\frac{\tau-1}{2}\right)-\wp^{\prime\prime\prime}\left(x+\frac{\tau-1}{2}\right).$
Since $\wp\left(x+\frac{\tau-1}{2}\right)$ decreases on $0<x<1/2$, we have
$\wp^{\prime}\left(x+\frac{\tau-1}{2}\right)<0$. Define a function $F_{3}(x)$
as
$\displaystyle F_{3}(x)$
$\displaystyle:=\frac{1}{\wp^{\prime}(x+\frac{\tau-1}{2})}\frac{d}{dx}\left(\frac{\theta_{3}^{(4)}(x)}{\theta_{3}(x)}-\left(\frac{\theta_{3}^{\prime\prime}(x)}{\theta_{3}(x)}\right)^{2}\right)$
$\displaystyle=8\frac{\theta_{3}^{\prime}(x)}{\theta_{3}(x)}\frac{\left(\wp\left(x+\frac{\tau-1}{2}\right)-c_{0}\right)^{2}}{\wp^{\prime}\left(x+\frac{\tau-1}{2}\right)}-4\left(\frac{\theta_{3}^{\prime}(x)}{\theta_{3}(x)}\right)^{2}+8\left(\wp\left(x+\frac{\tau-1}{2}\right)-c_{0}\right)$
$\displaystyle\quad-4\frac{\theta_{3}^{\prime}(x)}{\theta_{3}(x)}\frac{\wp^{\prime\prime}\left(x+\frac{\tau-1}{2}\right)}{\wp^{\prime}\left(x+\frac{\tau-1}{2}\right)}-\frac{\wp^{\prime\prime\prime}\left(x+\frac{\tau-1}{2}\right)}{\wp^{\prime}\left(x+\frac{\tau-1}{2}\right)}.$
(4.2)
It suffices to prove that $F_{3}(x)>0$. We prove this by showing that
$F_{3}^{\prime}(x)<0$ and $F_{3}(1/2)>0$, because then by the mean value
theorem, for any $x\in(0,1/2)$, we have
$F_{3}(x)-F_{3}(1/2)=F_{3}^{\prime}(e)(x-1/2)$ for some $e\in(x,1/2)$ whence
$F_{3}(x)>0$. We first show that $F_{3}(1/2)>0$. Using the third differential
equation in (2), we have
$\displaystyle F_{3}(1/2)$
$\displaystyle=8(e_{3}-c_{0})^{2}\lim_{x\to{\frac{1}{2}}^{-}}\frac{\theta_{3}^{\prime}(x)/\theta_{3}(x)}{\wp^{\prime}\left(x+\frac{\tau-1}{2}\right)}-4\lim_{x\to{\frac{1}{2}}^{-}}\left(\frac{\theta_{3}^{\prime}(x)}{\theta_{3}(x)}\right)^{2}+8(e_{3}-c_{0})$
$\displaystyle\quad-4\wp^{\prime\prime}(\tau/2)\lim_{x\to{\frac{1}{2}}^{-}}\frac{\theta_{3}^{\prime}(x)/\theta_{3}(x)}{\wp^{\prime}\left(x+\frac{\tau-1}{2}\right)}-12e_{3}.$
(4.3)
Now [8, p. 358, Section 13.19]
$\displaystyle\frac{\theta_{3}^{\prime}(z)}{\theta_{3}(z)}=4\pi\sum_{n=1}^{\infty}(-1)^{n}\frac{q^{n}}{1-q^{2n}}\sin
2n\pi z$ (4.4)
implies that $\theta_{3}^{\prime}(x)/\theta_{3}(x)$ vanishes at $x=1/2$. Note
that $\wp^{\prime}\left(x+\frac{\tau-1}{2}\right)=0$ at $x=1/2$ too. Hence,
using L’Hôpital’s rule in (4.3), then (4.1), the second differential equation
in (2) and simplifying, we see that
$F_{3}(1/2)=\frac{16(e_{3}-c_{0})^{3}}{g_{2}-12e_{3}^{2}}-12c_{0}.$
Now using (2) and (2.6), note that
$\displaystyle g_{2}-12e_{3}^{2}$
$\displaystyle=-4(e_{1}e_{2}+e_{2}e_{3}+e_{3}e_{1})-12e_{3}^{2}$
$\displaystyle=4(e_{3}-e_{1})(e_{2}-e_{3})$ $\displaystyle<0.$
Thus, we need to show that $16(e_{3}-c_{0})^{3}-12c_{0}(g_{2}-12e_{3}^{2})<0$
or equivalently, $(e_{3}-c_{0})^{3}<3c_{0}(e_{3}-e_{1})(e_{2}-e_{3})$.
Consider two cases.
Case 1: $c_{0}\leq 0$.
By (2.6), the left-hand side is less than zero but the right-hand side is
greater than or equal to zero. This proves the required inequality.
Case 2: $c_{0}>0$.
Using (2),
$\displaystyle 3c_{0}(e_{3}-e_{1})(e_{2}-e_{3})-(e_{3}-c_{0})^{3}$
$\displaystyle=(e_{1}+e_{2}+c_{0})^{3}-3c_{0}(2e_{1}+e_{2})(e_{1}+2e_{2})$
$\displaystyle=\frac{1}{27}\left(((2e_{1}+e_{2})+(e_{1}+2e_{2})+3c_{0})^{3}-27\cdot
3c_{0}(2e_{1}+e_{2})(e_{1}+2e_{2})\right).$
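Spelling out the arithmetic mean-geometric mean step (a side note, not in the original): with $a=2e_{1}+e_{2}$, $b=e_{1}+2e_{2}$ and $c=3c_{0}$, all positive in this case,

```latex
\frac{a+b+c}{3}\;\geq\;\sqrt[3]{abc}
\quad\Longleftrightarrow\quad
(a+b+c)^{3}-27abc\;\geq\;0,
```

with equality only if $a=b=c$; here $a\neq b$ whenever $e_{1}\neq e_{2}$, so the inequality is strict.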
The last expression is clearly positive by the arithmetic mean-geometric mean
(AM-GM) inequality, since $2e_{1}+e_{2}$ and $e_{1}+2e_{2}$ are positive by
(2.6) and $3c_{0}$ is positive. From the above two cases, we conclude that
$F_{3}(1/2)>0$. Our next task is to show that $F_{3}^{\prime}(x)<0$. From
(4.2), we have
$\displaystyle\frac{F_{3}^{{}^{\prime}}(x)}{4}=\frac{\theta_{3}^{\prime}(x)}{\theta_{3}(x)}\frac{A_{2}(x+\frac{\tau}{2})}{{\wp^{\prime}}^{2}\left(x+\frac{\tau-1}{2}\right)}+\frac{A_{1}(x+\frac{\tau}{2})}{\wp^{\prime}\left(x+\frac{\tau-1}{2}\right)},$
where $A_{1}(x)$ and $A_{2}(x)$ are defined in (3). Now
$A_{2}^{\prime}\left(x+\frac{\tau}{2}\right)=\wp^{\prime}\left(x+\frac{\tau-1}{2}\right)\left(2\left(g_{2}-12c_{0}^{2}\right)\wp\left(x+\frac{\tau-1}{2}\right)+\left(6g_{3}+4g_{2}c_{0}\right)\right).$
From (2.6), (3.9) and the facts that
$e_{3}<\wp\left(x+\frac{\tau-1}{2}\right)<e_{2}$ and
$\wp^{\prime}\left(x+\frac{\tau-1}{2}\right)<0$ on $0<x<1/2$, we find that
$A_{2}^{\prime}(x+\frac{\tau}{2})>0$. Also by Lemma 2.2,
$A_{2}(\frac{\tau}{2})>0$. By the mean value theorem, for any $x\in(0,1/2)$,
we have
$A_{2}(x+\frac{\tau}{2})=A_{2}(\frac{\tau}{2})+xA_{2}^{\prime}(k+\frac{\tau}{2})>0$
for some $k\in(0,x)$.
Figure 5 shows the graphs of $A_{1}(x+\frac{\tau}{2})$ and
$A_{2}(x+\frac{\tau}{2})$ on $0<x<1/2$. Now define $G_{3}$ by
$\displaystyle G_{3}(x)$
$\displaystyle:=\frac{F_{3}^{\prime}(x){\wp^{\prime}}^{2}(x+\frac{\tau-1}{2})}{4A_{2}(x+\frac{\tau}{2})}$
$\displaystyle=\frac{\theta_{3}^{\prime}(x)}{\theta_{3}(x)}+\frac{\wp^{\prime}(x+\frac{\tau-1}{2})\left(\wp\left(x+\frac{\tau-1}{2}\right)+\frac{2g_{3}+4c_{0}^{3}+g_{2}c_{0}}{g_{2}-12c_{0}^{2}}\right)}{2\left(\wp^{2}(x+\frac{\tau-1}{2})+\wp(x+\frac{\tau-1}{2})\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}\right)}.$
(4.5)
From the above discussion, it suffices to show that $G_{3}(x)<0$. Now, from
(4.4) and the fact that
$\wp^{\prime}\left(\frac{\tau-1}{2}\right)=0=\wp^{\prime}\left(\frac{\tau}{2}\right)$,
it is easy to see that $G_{3}(0)=0=G_{3}(1/2)$. This implies that
$G_{3}^{\prime}(x)$ has at least one zero in $0<x<1/2$. Differentiating both
sides of (4.5) with respect to $x$ and simplifying, we observe that
$\displaystyle
G_{3}^{\prime}(x)=\frac{{\wp^{\prime}}^{2}(x+\frac{\tau-1}{2})(1-Q(x+\frac{\tau}{2}))}{\left(\wp^{2}(x+\frac{\tau-1}{2})+\wp(x+\frac{\tau-1}{2})\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}\right)},$
where $Q(x)$ is defined in (3.22). Now
$\displaystyle 1-Q\left(x+\frac{\tau}{2}\right)$
$\displaystyle=1-\frac{\left(\wp(x+\frac{\tau-1}{2})+\frac{2g_{3}+4c_{0}^{3}+g_{2}c_{0}}{g_{2}-12c_{0}^{2}}\right)\left(2\wp(x+\frac{\tau-1}{2})+\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}\right)}{2\left(\wp^{2}(x+\frac{\tau-1}{2})+\wp(x+\frac{\tau-1}{2})\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}\right)}$
$\displaystyle=\frac{2\wp(x+\frac{\tau-1}{2})\frac{g_{3}+g_{2}c_{0}-4c_{0}^{3}}{g_{2}-12c_{0}^{2}}+C}{2\left(\wp^{2}(x+\frac{\tau-1}{2})+\wp(x+\frac{\tau-1}{2})\frac{6g_{3}+4g_{2}c_{0}}{g_{2}-12c_{0}^{2}}+\frac{6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4}{g_{2}-12c_{0}^{2}}\right)},$
(4.6)
where
$C:=\frac{2(6g_{3}c_{0}+g_{2}c_{0}^{2}+g_{2}^{2}/4)}{g_{2}-12c_{0}^{2}}-\frac{(6g_{3}+4g_{2}c_{0})(2g_{3}+4c_{0}^{3}+g_{2}c_{0})}{(g_{2}-12c_{0}^{2})^{2}}.$
The numerator in the last expression of (4.6) has at most one zero, since it is
linear in $\wp(x+\frac{\tau-1}{2})$ and $\wp(x+\frac{\tau-1}{2})$ is monotone.
Hence, $G_{3}^{\prime}(x)$ has exactly one zero, say $x_{0}$, in $0<x<1/2$.
Thus we will be done if we can show that $G_{3}(x)<0$ at some point in the
interval $0<x<1/2$. In fact, we show that $G_{3}(x)<0$ on $(0,x_{0})$.
For any $x$ in $(0,x_{0})$, we have
$\wp\left(x+\frac{\tau-1}{2}\right)>\wp\left(x_{0}+\frac{\tau-1}{2}\right)$.
Also,
$\displaystyle\frac{g_{3}+g_{2}c_{0}-4c_{0}^{3}}{g_{2}-12c_{0}^{2}}=\frac{g_{3}+g_{2}c_{0}/2+2c_{0}^{3}}{g_{2}-12c_{0}^{2}}+\frac{c_{0}(g_{2}/2-6c_{0}^{2})}{g_{2}-12c_{0}^{2}}<\frac{-(e_{1}-c_{0})}{2}<0,$
where the last two inequalities follow from (3.9) and (2.6). Therefore
${2\wp\left(x+\frac{\tau-1}{2}\right)\frac{g_{3}+g_{2}c_{0}-4c_{0}^{3}}{g_{2}-12c_{0}^{2}}+C}<{2\wp\left(x_{0}+\frac{\tau-1}{2}\right)\frac{g_{3}+g_{2}c_{0}-4c_{0}^{3}}{g_{2}-12c_{0}^{2}}+C}=0,$
where the last equality comes from the fact that $G_{3}^{\prime}(x_{0})=0$.
Hence, $G_{3}^{\prime}(x)<0$ for $0<x<x_{0}$. Then it is clear by the mean
value theorem that for any $x\in(0,x_{0})$,
$G_{3}(x)=xG_{3}^{\prime}(x_{1})<0$ for some $x_{1}\in(0,x)$. So finally
$G_{3}(x)<0$ for $0<x<1/2$. This completes the proof. Figure 6 shows the graph
of $G_{3}(x)$ on $0<x<1/2$.
## References
* [1] A. Baernstein II, _On the harmonic measure of slit domains_ , Complex Var. Theory Appl. 9 (1987), 131–142.
* [2] B.C. Berndt, _Number theory in the spirit of Ramanujan_ , American Mathematical Society, Providence, RI, 2006.
* [3] D. Betsakos, _Geometric theorems and problems for harmonic measure_ , Rocky Mountain J. Math. 31 (2001), no.3, 773–795.
* [4] D.M. Campbell, J.G. Clunie and W.K. Hayman, _Research problem in complex analysis_ , in _Aspects of Contemporary Complex Analysis_ , D.A. Brannan, J.Clunie, Eds., Academic Press, London-New York, 1980.
* [5] C.-P. Chen, Complete monotonicity and logarithmically complete monotonicity properties for the gamma and psi functions, J. Math. Anal. Appl. 336 (2007), No. 2, 812–822.
* [6] W. Cheney and W. Light, _A course in Approximation Theory_ , Graduate Text in Mathematics, 101, American Mathematical Society, Providence, RI, 2009.
* [7] A. Dixit and A.Yu. Solynin, _Monotonicity of quotients of theta functions related to an extremal problem on harmonic measure_ , J. Math. Anal. Appl. 336, No. 2 (2007), 1042–1053.
* [8] A. Erdélyi, W. Magnus, F. Oberhettinger and F.G. Tricomi, _Higher Transcendental Functions_ , Vol. 2, McGraw-Hill, New York, 1955.
* [9] B.-N. Guo, S. Guo and F. Qi, _Complete monotonicity of some functions involving polygamma functions_ , J. Comput. Appl. Math., 233 (2010), No. 9, 2149–2160.
* [10] M.D. Hirschhorn, _Partial fractions and four classical theorems of number theory_ , Amer. Math. Monthly 107, No. 3 (2007), 260–264.
* [11] M.E.H. Ismail, _Integral representations and complete monotonicity of various quotients of Bessel functions_ , Can. J. Math. 29, No. 6 (1977), 1198–1207.
* [12] K.S. Miller and S.G. Samko, _Completely monotonic functions_ , Integr. Transf. and Spec. Funct. 12 (2001), No. 4, 389–402.
* [13] K. Schiefermayr, _Some new properties of Jacobi’s theta functions_ , J. Comput. Appl. Math. 178 (2005), 419–424.
* [14] A.Yu. Solynin, _Harmonic measure of radial segments and symmetrization_ (Russian) Mat. Sb. 189 (1998), No. 11, 121–138; translation in Sb. Math. 189 (1998), no. 11-12, 1701- 1718.
arXiv:1104.2401, submitted 2011-04-13 by Atul Dixit; authors: Atul Dixit, Arindam Roy, Alexandru Zaharescu. License: Creative Commons Attribution 3.0 (https://creativecommons.org/licenses/by/3.0/). https://arxiv.org/abs/1104.2401

1104.2487
# The optimal weighting function for cosmic magnification measurement through
foreground galaxy-background galaxy (quasar) cross correlation
Xiaofeng Yang1,2, Pengjie Zhang1
$1$Key Laboratory for Research in Galaxies and Cosmology, Shanghai
Astronomical Observatory, Chinese Academy of Sciences,
Nandan Road 80, Shanghai, 200030, China
$2$Graduate University of Chinese Academy of Sciences, 19A, Yuquan Road,
Beijing, 100049, China E-mail:xfyang@shao.ac.cn
###### Abstract
Cosmic magnification has been detected through cross correlation between
foreground and background populations (galaxies or quasars). It has been shown
that weighting each background object by its $\alpha-1$ can significantly
improve the cosmic magnification measurement [16, 23]. Here, $\alpha$ is the
logarithmic slope of the luminosity function of background populations.
However, we find that this weighting function is optimal only for sparse
background populations in which intrinsic clustering is negligible with
respect to shot noise. We derive the optimal weighting function for the
general case, including scale independent and scale dependent weights. The optimal
weighting function improves the S/N (signal to noise ratio) by $\sim 20\%$ for
a BigBOSS-like survey and the improvement can reach a factor of $\sim 2$ for
surveys with much denser background populations.
###### keywords:
cosmology: weak lensing–cosmic magnification–theory
## 1 Introduction
Weak gravitational lensing directly probes the matter distribution of the
universe (e.g.[20]) and is emerging as one of the most powerful probes of dark
matter, dark energy [1] and the nature of gravity [12]. By far the most
sophisticated way to measure weak lensing is through cosmic shear, which is
the lensing induced coherent distortion in galaxy shape (Fu et al. 6 and
references therein). Coordinated projects on precision weak lensing
reconstruction through galaxy shapes have been carried out extensively (STEP,
Heymans et al. 8; STEP2, Massey et al. 15; GREAT08, Bridle et al. 3; GREAT10,
Kitching et al. 13).
Alternatively, one can reconstruct weak lensing through cosmic magnification,
namely the lensing induced coherent distortion in galaxy number density (e.g.
Gunn [7], Ménard & Bartelmann [16], Jain et al. [10] and references therein).
Neglecting high order corrections, the lensed galaxy (quasar) number
overdensity $\delta^{\rm L}_{\rm g}$ is related to the intrinsic overdensity
$\delta_{\rm g}$ by
$\delta^{\rm L}_{\rm g}=\delta_{\rm g}+2(\alpha-1)\kappa.$ (1)
Here, $\kappa$ is the lensing convergence and $\alpha(m,z)=2.5~{}{\rm dlog}\
n(m,z)/{\rm d}m-1$ is a function of the galaxy apparent magnitude $m$ and
redshift $z$. The number count of galaxies brighter than $m$ is
$N(m)=\int^{m}n(m)dm$. Throughout the paper we use the superscript “L” to
denote the lensed quantity.
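As an aside (not from the paper), the slope $\alpha$ defined above can be estimated from binned number counts by finite differences. A minimal sketch for a pure power-law count model $n(m)\propto 10^{sm}$, where the slope $s=0.6$ is an arbitrary illustrative choice; in that case $\alpha=2.5s-1$ identically:

```python
import numpy as np

# alpha(m) = 2.5 dlog n(m)/dm - 1, estimated by finite differences.
# Toy model: n(m) proportional to 10^(s*m) with an arbitrary slope s.
s = 0.6
m = np.linspace(18.0, 24.0, 61)   # apparent-magnitude grid
n = 10.0**(s * m)                 # differential number counts
alpha = 2.5 * np.gradient(np.log10(n), m) - 1.0
# for an exact power law, alpha = 2.5*s - 1 at every magnitude
```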
Since cosmic magnification does not involve galaxy shape, weak lensing
reconstruction through cosmic magnification automatically avoids all problems
associated with galaxy shape. The key step in such reconstruction is to
eliminate $\delta_{\rm g}$, which is often orders of magnitude larger than the
lensing signal $\kappa$. Usually this is done by cross correlating foreground
population (galaxies) and background population (galaxies or quasars) with no
overlap in redshift [23, 9, 24]. The resulting cross correlation is
$\displaystyle\langle\delta^{\rm L}_{\rm g,f}(\hat{\theta})\delta^{\rm L}_{\rm
g,b}(\hat{\theta^{\prime}})\rangle\approx 2(\alpha_{\rm
b}-1)\langle\delta_{\rm g,f}(\hat{\theta})\kappa_{\rm
b}(\hat{\theta^{\prime}})\rangle\ .$ (2)
Throughout the paper we use the subscript “b” for quantities of background
population and “f” for that of foreground galaxies. The above relation
neglects a term proportional to $\langle\kappa_{\rm f}\kappa_{\rm b}\rangle$,
which is typically much smaller than the $\langle\delta_{\rm g,f}\kappa_{\rm
b}\rangle$ term.111This term can be non-negligible or even dominant for
sufficiently high redshift foreground galaxy samples [27].
It is important to weigh the cross correlation measurement appropriately to
improve the S/N (signal to noise ratio). Since the signal scales as
$\alpha-1$, Ménard & Bartelmann [16] first suggested maximizing the S/N by
weighting each galaxy by its own $\alpha-1$. This weighting significantly
improves the measurement and a robust $8\sigma$ detection of the cosmic
magnification was achieved for the first time [23].
Nevertheless, we find that, the $\alpha-1$ weighting is optimal only in the
limit where the background galaxy (quasar) intrinsic clustering is negligible
with respect to the shot noise in background distribution. The statistical
errors (noises) are contributed by both shot noise and intrinsic clustering of
foreground and background galaxies. In this letter, we derive the exact
expression of the weighting function optimal for the cosmic magnification
measurement through cross correlation. The new weighting can further improve
the S/N by $\sim 20\%$ for galaxy-quasar cross correlation measurement in a
BigBOSS-like survey. We can also employ high redshift galaxies instead of
quasars as background sources which can have much larger number density and
smaller bias. Smaller shot noise results into better performance for the
derived optimal weighting. The improvement over the $\alpha-1$ weighting can
reach a factor of $\sim 2$ for surveys with background population density of
$\sim 2/$arcmin2.
Throughout the paper, we adopt the fiducial cosmology as a flat $\Lambda$CDM
universe with $\Omega_{\Lambda}=0.728$, $\Omega_{m}=1-\Omega_{\Lambda}$,
$\sigma_{8}=0.807$, $h=0.702$ and initial power spectrum index $n=0.961$,
consistent with the WMAP seven-year best-fit parameters [14].
## 2 THE OPTIMAL WEIGHTING FUNCTION
We seek an optimal weighting function operating linearly on the
background galaxy (quasar) number overdensity in flux (magnitude) space. Let’s
denote the background galaxy number overdensity of the $i$-th magnitude bin as
$\delta^{(i)}_{\rm g,b}$ and the corresponding weighting function as $W_{i}$.
* •
The simplest weighting function is scale independent, so the weighted
background galaxy overdensity is
$\sum_{i}W_{i}\delta^{(i)}_{\rm g,b}\ .$ (3)
* •
We can further add scale dependence in $W_{i}$ to increase the S/N. The new
weighting function convolves the density field. For brevity, we express it in
Fourier space as $W_{i}(\ell)$. The Fourier transformation of the weighted
background overdensity is
$\sum_{i}\sum_{\ell}W_{i}(\ell)\tilde{\delta}^{(i)}_{\rm g,b}(\vec{\ell})\ .$
(4)
Here, $\tilde{\delta}_{\rm g,b}$ is the Fourier component of the overdensity
$\delta_{\rm g,b}$. The weighting function $W_{i}(\ell)$ is real and only
depends on the amplitude of the wavevector $\ell\equiv|\vec{\ell}|$. It
guarantees the weighted overdensity to be real.
The S/N of the background-foreground galaxy cross-correlation depends on the
weighting function, so we use the subscript “W” to denote the S/N after
weighting. The overall S/N can be conveniently derived in the Fourier space,
$\displaystyle\left(\frac{S}{N}\right)_{W}^{2}=$
$\displaystyle\sum_{\ell}\left[\frac{\langle C^{\rm
CM-g}_{\ell}\rangle_{W}}{\langle\Delta C^{\rm
CM-g}_{\ell}\rangle_{W}}\right]^{2}$ $\displaystyle=$
$\displaystyle\sum_{\ell}\frac{(2\ell+1)\Delta\ell f_{\rm sky}\langle C^{\rm
CM-g}_{\ell}\rangle_{W}^{2}}{\langle C^{\rm
CM-g}_{\ell}\rangle_{W}^{2}+(\langle C_{\rm g,b}\rangle_{W}+\langle C_{\rm
s,b}\rangle_{W})(C_{\rm g,f}+C_{\rm s,f})}$ $\displaystyle=$
$\displaystyle\sum_{\ell}\frac{(2\ell+1)\Delta\ell f_{\rm sky}}{1+(C_{\rm
g,f}+C_{\rm s,f})\frac{\langle b_{\rm g,b}W\rangle^{2}C_{\rm m,b}+\langle
W^{2}\rangle C_{\rm s,b}}{4\langle W(\alpha_{b}-1)\rangle^{2}C_{\kappa_{b}\rm
g}^{2}}}\ .$ (5)
Here, $C^{\rm CM-g}$ is the cosmic magnification-galaxy cross correlation
power spectrum and $\Delta C^{\rm CM-g}$ is the associated statistical error.
$\langle\cdots\rangle_{W}$ denotes the weighted average of the corresponding
quantity. We then have
$\langle C^{\rm CM-g}\rangle_{W}=\langle 2(\alpha_{\rm b}-1)W\rangle
C_{\kappa_{\rm b}\rm g}\ .$ (6)
Here, $C_{\kappa_{\rm b}\rm g}$ is the cross correlation power spectrum
between background lensing convergence and foreground galaxy overdensity.
$\langle uv\rangle$ is the averaged product of $uv$,
$\displaystyle\langle uv\rangle\equiv\frac{\sum_{i}u(m_{i})v(m_{i})N_{{\rm
b},i}}{\sum_{i}N_{{\rm b},i}}\ .$ (7)
Here, $N_{{\rm b},i}$ is the number of background galaxies (quasars) in the
given magnitude bin $m_{i}-\Delta m_{i}/2<m<m_{i}+\Delta m_{i}/2$.
The S/N scales with $f^{1/2}_{\rm sky}$ and $f_{\rm sky}$ is the fractional
sky coverage. $C_{\rm s}$ is the shot noise power spectrum and the weighted
one is $\langle C_{\rm s,b}\rangle_{W}=\langle W^{2}\rangle C_{\rm s,b}$.
$C_{\rm g}$ is the galaxy power spectrum. We adopt a simple bias model for the
foreground and background galaxies. We then have $C_{{\rm g},i}=b_{{\rm
g},i}^{2}C_{\rm m}$ where $b_{{\rm g},i}$ is the bias of the $i$-th magnitude
bin and $C_{\rm m}$ is the corresponding matter angular power spectrum. The
weighted background galaxy power spectrum is
$\langle C_{\rm g,b}\rangle_{W}=\langle b_{\rm g,b}W\rangle^{2}C_{\rm m,b}\ .$
(8)
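To make Eq. (5) concrete, here is a toy numerical sketch (not the paper's code; all input spectra and bin values below are invented) of one multipole's contribution to $(S/N)^{2}$, with the weighted averages taken per Eq. (7):

```python
import numpy as np

def snr2_ell(ell, dell, fsky, W, alpha_b, b_gb, N_b,
             C_mb, C_sb, C_gf, C_sf, C_kg):
    """One multipole's contribution to (S/N)^2, following Eq. (5).

    W, alpha_b, b_gb, N_b: per-magnitude-bin weight, slope, bias and counts;
    C_*: power-spectrum values at this ell (all invented for illustration).
    """
    avg = lambda u: np.sum(u * N_b) / np.sum(N_b)          # Eq. (7)
    signal = 4.0 * avg(W * (alpha_b - 1.0))**2 * C_kg**2
    noise = (avg(b_gb * W)**2 * C_mb + avg(W**2) * C_sb) * (C_gf + C_sf)
    return (2 * ell + 1) * dell * fsky / (1.0 + noise / signal)

alpha_b = np.array([2.0, 1.2, 0.8])     # toy magnitude bins
b_gb = np.array([1.0, 1.0, 1.1])
N_b = np.array([10.0, 20.0, 30.0])
term = snr2_ell(100, 10, 0.5, alpha_b - 1.0, alpha_b, b_gb, N_b,
                C_mb=1e-7, C_sb=1e-6, C_gf=1e-6, C_sf=1e-7, C_kg=1e-8)
```

Each term is bounded above by the cosmic-variance limit $(2\ell+1)\Delta\ell f_{\rm sky}$, reached only when the noise term vanishes.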
### 2.1 The scale independent optimal weighting function
The optimal weighting function $W$ can be obtained by varying the S/N (Eq. 5)
with respect to $W$ and maximizing it. The derivation is lengthy but
straightforward, so we leave details in the appendix and only present the
final result here.
The optimal weighting function is of the form

$\displaystyle W=(\alpha_{\rm b}-1)+\varepsilon b_{\rm g,b}\ ,$ (9)

where the scale independent constant $\varepsilon$ is determined by Eq. 12; it
is a fixed number for the given redshift bin of the given survey. (Note that
this scale independent weighting function implicitly assumes no scale
dependence in the galaxy bias. In reality, the galaxy bias is scale dependent,
so the applicability of Eq. 9 is limited; the exact optimal weighting function
for scale dependent bias is given by Eq. 10 below.) In the limit
that shot noise of background galaxies overwhelms their intrinsic clustering
($C_{\rm s,b}\gg C_{\rm g,b}$), $\varepsilon\rightarrow 0$. In this case, the
weighting function $\alpha-1$ proposed by Ménard & Bartelmann [16] becomes
optimal.
### 2.2 The scale dependent optimal weighting function
The weighting function $W$ (Eq. 9) is optimal under the condition that $W$ is
scale independent. If we relax this requirement and allow for scale dependence
in $W$, we are able to maximize the S/N of the cross power spectrum
measurement at each $\ell$ bin. Clearly, this further improves the overall
S/N.
In this case, the $W$ of different $\ell$ bins are independent of each other. This
significantly simplifies the derivation and we are now able to obtain an
analytical expression for $W$,
$\displaystyle W(\ell)=(\alpha_{\rm b}-1)+\left[-\frac{\langle(\alpha_{\rm
b}-1)b_{\rm g,b}\rangle C_{\rm m,b}(\ell)/C_{\rm s,b}}{1+\langle b_{\rm
g,b}^{2}\rangle C_{\rm m,b}(\ell)/C_{\rm s,b}}\right]b_{\rm g,b}(\ell).$ (10)
This form is similar to Eq. (9), except that it is now scale dependent. Here
again, in the limit $C_{\rm s,b}\gg C_{\rm m,b}\sim C_{\rm g,b}$,
$W\rightarrow\alpha-1$ and we recover the result of Ménard & Bartelmann [16].
This is indeed the case for SDSS background quasar sample.
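A small numerical sketch of Eq. (10) (illustrative only; the bins, biases and counts below are invented), which also checks the shot-noise-dominated limit $W\to\alpha-1$:

```python
import numpy as np

def optimal_weight(alpha_b, b_gb, N_b, C_mb_over_Csb):
    """Scale-dependent optimal weight of Eq. (10) at one multipole ell.

    alpha_b, b_gb : per-magnitude-bin slope and bias arrays
    N_b           : background counts per bin, defining <...> as in Eq. (7)
    C_mb_over_Csb : C_m,b(ell)/C_s,b, matter-to-shot-noise power ratio
    """
    avg = lambda u: np.sum(u * N_b) / np.sum(N_b)   # Eq. (7)
    r = C_mb_over_Csb
    eps = -avg((alpha_b - 1.0) * b_gb) * r / (1.0 + avg(b_gb**2) * r)
    return (alpha_b - 1.0) + eps * b_gb

# toy bins; in the sparse limit C_m/C_s -> 0 we recover the alpha-1 weighting
alpha = np.array([2.0, 1.5, 1.2, 0.8])
bias = np.array([1.1, 1.0, 0.9, 0.9])
N = np.array([10.0, 30.0, 50.0, 40.0])
W_sparse = optimal_weight(alpha, bias, N, 1e-8)
```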
### 2.3 The applicability
Are the derived weighting functions (Eq. 9 & 10) directly applicable to real
surveys? From Eq. 9 & 10, it seems that we need to figure out $b_{\rm g,b}$
first. Since $b^{2}_{\rm g,b}\equiv C_{\rm g,b}/C_{\rm m,b}$ and $C_{\rm m,b}$
is not directly given by the observation, cosmological priors or external
measurements (e.g. weak lensing) are required to obtain the absolute value of
$b_{\rm g,b}$. Hence it seems that the applicability of Eq. 9 & 10 is limited
by cosmological uncertainty.
However, this is not the case. Eq. 10 shows that it is the combination
$b_{\rm g,b}^{2}C_{\rm m,b}$ that determines $W$. Since $C_{\rm g,b}\equiv b_{\rm
g,b}^{2}C_{\rm m,b}$ and $\alpha_{b}$ are directly observable, Eq. 10 is
determined completely by observations. A closer look shows that Eq. 9 is also
determined by the combination $b_{\rm g,b}^{2}C_{\rm m,b}$, so the
corresponding weighting is determined completely by observations, too. Hence
the derived optimal weighting functions are indeed directly applicable to real
surveys.
For ongoing and planned surveys such as CFHTLS, COSMOS, DES, BigBOSS, LSST,
SKA, etc., the number density of background populations (galaxies) can be high
and the intrinsic clustering can be non-negligible or even dominant compared
to shot noise. In the next section we will quantify the improvement of the
weighting functions for a BigBOSS-like survey and briefly discuss implications
to surveys with even denser background populations.
## 3 The improvement
Table 1: Improving the cosmic magnification measurement by the optimal weighting function. The target survey is BigBOSS. The numbers on the left side of the arrows are the estimated S/N using the weighting $\alpha-1$. The ones on the right side are the S/N using the optimal weighting function: the value to the left of the "/" is for the scale independent weighting (Eq. 9), and the value to the right is for the scale dependent weighting (Eq. 10). The improvement depends on how the bias depends on galaxy luminosity. To demonstrate this dependence, we adopt a toy model $b_{Q}\propto F^{\beta}$ and investigate different values of the parameter $\beta$. In general, the optimal weighting function improves the S/N by $10\%$-$20\%$ for BigBOSS, whose background quasar density is $\sim 0.02/$arcmin$^{2}$. The improvement can reach a factor of $\sim 2$ for surveys with background (galaxy) populations reaching a surface density of $\sim 2/$arcmin$^{2}$.

| Flux dependence of quasar bias model | $\beta=0$ | $\beta=0.1$ | $\beta=0.2$ | $\beta=0.3$ |
|---|---|---|---|---|
| Detection significance of $\rm LRG\times quasar$ | 111.2 $\rightarrow$ 129.0/136.8 | 109.8 $\rightarrow$ 126.3/133.9 | 108.3 $\rightarrow$ 123.7/131.1 | 106.5 $\rightarrow$ 119.3/126.7 |
| Detection significance of $\rm ELG\times quasar$ | 94.3 $\rightarrow$ 106.4/110.3 | 93.2 $\rightarrow$ 104.5/108.6 | 92.0 $\rightarrow$ 102.2/106.7 | 90.5 $\rightarrow$ 99.1/104.1 |
BigBOSS (http://bigboss.lbl.gov/) is a planned spectroscopic redshift survey
covering $24000\ {\rm deg}^{2}$ (BigBOSS-North plus South). Cosmic magnification can
be measured by BigBOSS through LRG (luminous red galaxy)-quasar and ELG
(emission line galaxy)-quasar cross correlations. In principle, it can also be
measured through the LRG-ELG cross correlation, but the interpretation of the
measured cross correlation signal would be complicated by the intricate
selection function of ELGs [28]. In the current paper, we only consider the
LRG-quasar and ELG-quasar cross correlations.
There are uncertainties in the BigBOSS galaxy (quasar) redshift evolution,
flux distribution and intrinsic clustering. To proceed, we make a number of
simplifications, so the absolute S/N of the cross correlation measurement that
we calculate is by no means accurate. But our calculation should be
sufficiently robust to demonstrate the relative improvement of the exact
optimal weighting function over the previous one.
The LRG and ELG luminosity functions are calculated based on the BigBOSS white
paper [21]. The comoving number density of LRGs and ELGs is $3.4\times
10^{-4}(h/\rm Mpc)^{3}$, which gives $1.1\times 10^{7}$ LRGs in the redshift
range $z=0.2$-$1$ and $3.3\times 10^{7}$ ELGs in the redshift range
$z=0.7$-$1.95$. Clustering of LRGs evolves slowly, so we adopt an LRG bias
$b_{\rm g,f}(z)=1.7/D(z)$ [19], where $D(z)$ is the linear density growth
factor, normalized such that $D(z=0)=1$. Existing knowledge of ELG clustering
is rather limited, so we simply follow Padmanabhan et al. [19] and approximate
the ELG bias as $b_{\rm g,f}=0.8/D(z)$.
For the luminosity function of background quasars, we adopt the LDDE
(luminosity dependent density evolution) model with best-fit parameters from
Croom et al. [5]. With a magnitude limit of $g=23$, there are $2.1\times
10^{6}$ quasars in the redshift range $z=2$-$3.5$. We choose a redshift gap
($z_{\rm b,min}-z_{\rm f,max}=0.05$) such that the intrinsic cross correlation
between the foreground and background populations can be safely neglected. For
quasar clustering, we adopt the bias model $b_{\rm
Q}(z)=0.53+0.289(1+z)^{2}$ from the analysis of $3\times 10^{5}$ quasars [18].
The S/N depends on many factors and varies from $\sim 90$ to $\sim 140$ (Table
1). The S/Ns of the LRG-quasar and ELG-quasar correlations are comparable as a
consequence of several competing factors, including the lensing efficiency,
galaxy surface density and clustering. Nevertheless, a robust conclusion is
that BigBOSS can measure cosmic magnification through galaxy-quasar cross
correlation with high precision. Given such high S/N and the accurate
redshifts available in BigBOSS, it is feasible to directly measure the angular
diameter distance from such measurements with the methods of Jain & Taylor [11],
Zhang et al. [25] and Bernstein [2].
The unambiguous improvement in the cross correlation measurement from our
optimal weighting (Eqs. 9 & 10) is confirmed in Table 1. The S/N improves
by $\sim 15\%$ with the scale independent optimal weighting (Eq. 9) and by
$\sim 20\%$ with the scale dependent one (Eq. 10).
We further investigate how the above improvement depends on the flux
dependence of the quasar bias. We adopt a toy model $b_{\rm Q}(z,F)=b_{\rm
Q}(z)(F/F^{*})^{\beta}$, where $F^{*}$ is the flux corresponding to $M^{*}$ in
the quasar luminosity function and $\beta$ is an adjustable parameter. For
$\beta=0,0.1,0.2,0.3$, the corresponding parameters in the scale independent
weighting are $\varepsilon=0.049,0.048,0.047$ and $0.045$. Larger values of
$\beta$ ($\geq 0.4$) lead to unrealistically large quasar bias ($\geq 10$) and
are therefore not investigated here. Table 1 shows consistent improvement from
our optimal weighting functions. Hence, despite uncertainties in quasar
modeling, we expect our optimal weighting function to improve the cosmic
magnification measurement in the BigBOSS survey.
Nevertheless, the improvement is only moderate. The major reason is that, even
for BigBOSS, the quasar sample is sparse, with a surface number density of
$\sim 0.02/$arcmin2, so shot noise dominates over the intrinsic clustering.
Indeed, we find that typically $C_{\rm m,Q}/C_{\rm s,Q}\la 0.1$. For imaging
surveys like CFHTLS, COSMOS, DES and LSST, we can also use high redshift
galaxies as the background population to correlate with low redshift
foreground galaxies. In these surveys, the high redshift galaxy population
(e.g. with $z>1$-$2$) can reach a surface number density of $\sim
0.2$-$2/$arcmin2 or higher, so the shot noise can be suppressed by a factor of
$10$-$100$ or more, and the overall improvement from our optimal weighting
would be larger.
To demonstrate these further improvements, we hypothetically increase the
surface density of BigBOSS quasars by factors of 10 and 100, keeping $\beta=0$
and all other parameters unchanged; the shot noise decreases by the same
factors. The scale independent weighting parameter $\varepsilon$ then reaches
0.12 and 0.22, respectively. In the first case, the S/N improves by $\sim
38\%$ for the scale independent optimal weighting and by $\sim 51\%$ for the
scale dependent one. In the second case, the improvements are $\sim 72\%$ and
$\sim 94\%$, respectively. It is thus clear that, to measure cosmic
magnification through the cross correlation between foreground and background
galaxies in many existing and planned surveys, one should adopt the optimal
weighting function derived in this letter.
## 4 Summary
We have derived the optimal weighting functions for the cosmic magnification
measurement through the cross correlation between foreground and background
populations, for scale independent and scale dependent weights respectively.
Our weighting functions outperform the commonly used weighting function
$\alpha-1$ by $\sim 20\%$ for a BigBOSS-like survey and by larger factors for
surveys with denser background populations. Hence we recommend using our
optimal weighting function for cosmic magnification measurements in BigBOSS,
CFHTLS, COSMOS, DES, Euclid, LSST, SKA, WFIRST, etc.
## 5 Acknowledgment
This work is supported in part by the One-Hundred Talents program of the
Chinese Academy of Sciences, the National Science Foundation of China (grants
No. 10821302, 10973027 & 11025316), the CAS/SAFEA International Partnership
Program for Creative Research Teams and the 973 program (grant No.
2007CB815401).
## References
* Albrecht et al. [2006] Andreas Albrecht, et al. The Dark Energy Task Force Final Report. arXiv:astro-ph/0609591
* Bernstein [2006] Bernstein, G. 2006, ApJ, 637, 598
* Bridle et al. [2009] Bridle, S., et al. 2009, Annals of Applied Statistics, Vol. 3, No. 1, 6
* Bridle et al. [2009] Bridle, S., et al. arXiv:astro-ph/0908.0945
* Croom et al. [2009] Croom,S.M., et al. arXiv:astro-ph/0907.2727
* Fu et al. [2008] Fu, L., et al. 2008, AAP, 479, 9
* Gunn [1967] Gunn, J. E. 1967, ApJ, 150, 737
* Heymans et al. [2006] Heymans,C., et al. 2007, MNRAS, 368,1323
* Hildebrandt et al. [2009] Hildebrandt,H., et al. arXiv:astro-ph/0906.1580v2
* Jain et al. [2003] Jain, B., Scranton, R., & Sheth, R. K. 2003, MNRAS, 345, 62
* Jain & Taylor [2003] Jain, B., & Taylor, A. 2003, PRL, 91, 141302
* Jain & Zhang [2008] Jain, B.,& Zhang, P. 2008, PRD, 78,063503
* Kitching et al. [2010] Kitching, T., 2010, arXiv:astro-ph/1009.0779
* Komatsu et al. [2011] Komatsu, E., et al. 2011, Astrophys.J.Suppl.192,18
* Massey et al. [2007] Massey,R., 2007, MNRAS, 376,13
* Ménard & Bartelmann [2002] Ménard, B. & Bartelmann, M., 2002, AAP, 386, 784
* Menard et al. [2003] Ménard, B., Hamana, T., Bartelmann, M., & Yoshida, N. 2003, AAP, 403, 817
* Myers et al. [2007] Myers,A.D.,et al. 2007, ApJ, 658, 85
* Padmanabhan et al. [2006] Padmanabhan,N., et al. 2006, MNRAS,359,237
* Refregier [2003] Refregier,A. 2003, Annual Review of Astronomy & Astrophysics, 41, 645
* Schlegel et al. [2009] Schlegel,D., et al. arXiv:astro-ph/0904.0468
* Schrabback et al. [2009] Schrabback,T., et al. arXiv:astro-ph/0911.0053
* Scranton et al. [2005] Scranton,R.,et al. 2005, ApJ, 633, 589
* Wang et al. [2011] Wang,L., et al. arXiv:astro-ph/1101.4796
* Zhang et al. [2005] Zhang, J., Hui, L., & Stebbins, A. 2005, ApJ, 635, 806
* Zhang & Pen [2005] Zhang, P.,& Pen, U.-L. 2005, PRL, 95, 241302
* Zhang & Pen [2006] Zhang, P.,& Pen, U.-L. 2006, MNRAS, 367, 169
* Zhu et al. [2009] Zhu,G., et al. 2009, ApJ, 701, 86
## Appendix A Deriving the optimal weighting function
In Section 2, we gave the optimal weighting functions without derivation.
Here, we present a brief derivation of the scale independent weighting
function; the derivation of the scale dependent weighting function is similar,
but simpler.
Maximizing the S/N requires the variation of $(S/N)^{2}$ with respect to $W$
to be zero. From this condition, we obtain
$$W=\frac{\sum_{\ell}(2\ell+1)\Delta\ell\,\eta\left(\nu\langle b_{\rm g,b}W\rangle^{2}+\langle W^{2}\rangle\right)\big/(1+F)^{2}}{\langle W(\alpha_{\rm b}-1)\rangle\sum_{\ell}(2\ell+1)\Delta\ell\,\eta\big/(1+F)^{2}}(\alpha_{\rm b}-1)-\frac{\langle W(\alpha_{\rm b}-1)\rangle\langle b_{\rm g,b}W\rangle\sum_{\ell}(2\ell+1)\Delta\ell\,\eta\nu\big/(1+F)^{2}}{\langle W(\alpha_{\rm b}-1)\rangle\sum_{\ell}(2\ell+1)\Delta\ell\,\eta\big/(1+F)^{2}}b_{\rm g,b}.\quad(11)$$
Here, for brevity, we have denoted $\eta=C_{\rm s,b}(C_{\rm g,f}+C_{\rm
s,f})/C_{\kappa_{b}g}^{2}$, $\nu=C_{\rm m,b}/C_{\rm s,b}$ and
$F(W)=\eta[\nu\langle b_{\rm g,b}W\rangle^{2}+\langle
W^{2}\rangle]/[4\langle W(\alpha_{\rm b}-1)\rangle^{2}]$.
Noticing that the coefficients of $\alpha_{\rm b}-1$ and $b_{\rm g,b}$ involve
only weighted averages of $W$, and taking advantage of the fact that the
optimal $W$ remains optimal under a constant (flux independent) rescaling, we
can seek a solution of the form $W=(\alpha_{\rm b}-1)+\varepsilon b_{\rm
g,b}$, with $\varepsilon$ a constant to be determined. Plugging this into the
above equation, we obtain the equation for $\varepsilon$,
$\displaystyle\varepsilon=-\frac{\langle(\alpha_{\rm b}-1)b_{\rm
g,b}\rangle}{\sum_{\ell}\frac{(2\ell+1)\Delta\ell\eta}{(1+F)^{2}}\bigg{/}\sum_{\ell}\frac{(2\ell+1)\Delta\ell\eta\nu}{(1+F)^{2}}+\langle
b_{\rm g,b}^{2}\rangle}\ .$ (12)
Note that $F$ depends on $\varepsilon$, so the above equation must be solved
numerically to obtain $\varepsilon$.
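As a concrete illustration, Eq. (12) can be solved by fixed-point iteration. The sketch below uses invented toy inputs throughout: the flux bins, $\alpha(F)$, $b_{\rm g,b}(F)$, $\eta(\ell)$ and $\nu(\ell)$ are hypothetical placeholders, not BigBOSS values.

```python
import numpy as np

# --- Toy inputs (all hypothetical; a real survey supplies these from data) ---
# Background sample binned in flux: number fraction p, count slope alpha, bias b.
p = np.array([0.5, 0.3, 0.2])          # flux-bin fractions, sum to 1
alpha = np.array([1.5, 2.2, 3.0])      # alpha(F) per flux bin
b = np.array([1.8, 2.1, 2.5])          # b_g,b(F) per flux bin

def avg(x):
    """Flux average <x>, weighted by the flux distribution."""
    return np.sum(p * x)

# Multipole bins with toy spectra ratios.
ell = np.arange(100, 1100, 100)
dell = 100.0
eta = 1.0 / ell                        # eta(l) = C_s,b (C_g,f + C_s,f) / C_kg^2 (toy)
nu = 0.05 + 0.1 * np.exp(-ell / 500.0) # nu(l) = C_m,b / C_s,b (toy, ~0.1)

def F_of(eps):
    """F(W) of the appendix, for the ansatz W = (alpha - 1) + eps * b."""
    W = (alpha - 1.0) + eps * b
    return eta * (nu * avg(b * W)**2 + avg(W**2)) / (4.0 * avg(W * (alpha - 1.0))**2)

def rhs(eps):
    """Right-hand side of Eq. (12) evaluated at a trial epsilon."""
    F = F_of(eps)
    num = np.sum((2 * ell + 1) * dell * eta / (1 + F)**2)
    den = np.sum((2 * ell + 1) * dell * eta * nu / (1 + F)**2)
    return -avg((alpha - 1.0) * b) / (num / den + avg(b**2))

# Fixed-point iteration starting from eps = 0.
eps = 0.0
for _ in range(50):
    eps_new = rhs(eps)
    if abs(eps_new - eps) < 1e-14:
        break
    eps = eps_new

W_opt = (alpha - 1.0) + eps * b        # scale independent optimal weight
```

The iteration converges quickly here because $F$ depends on $\varepsilon$ only weakly, through the $(1+F)^{-2}$ weights in the $\ell$ sums.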
In the case of scale dependent weighting, the $W(\ell)$ at different $\ell$ are
independent of each other. Through a similar derivation, one can show that
$\displaystyle W(\ell)=(\alpha_{\rm b}-1)+\left[-\frac{\nu\langle(\alpha_{\rm
b}-1)b_{\rm g,b}\rangle}{1+\nu\langle b_{\rm g,b}^{2}\rangle}\right]b_{\rm
g,b}\ .$ (13)
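By contrast, the scale dependent weight of Eq. (13) is closed form and can be evaluated directly at each multipole. A minimal sketch, again with invented flux bins:

```python
import numpy as np

# Hypothetical flux bins: fractions p, number-count slope alpha, bias b.
p = np.array([0.5, 0.3, 0.2])
alpha = np.array([1.5, 2.2, 3.0])
b = np.array([1.8, 2.1, 2.5])

def avg(x):
    """Flux average <x>, weighted by the flux distribution."""
    return np.sum(p * x)

def W_scale_dependent(nu_l):
    """Eq. (13): optimal weight at one multipole, with nu(l) = C_m,b / C_s,b."""
    eps_l = -nu_l * avg((alpha - 1.0) * b) / (1.0 + nu_l * avg(b**2))
    return (alpha - 1.0) + eps_l * b

# Shot-noise dominated limit: nu -> 0.
W_shot = W_scale_dependent(0.0)
```

In the shot-noise dominated limit $\nu\rightarrow 0$ the weight reduces to the traditional $\alpha-1$, consistent with the moderate improvement found above for sparse quasar samples.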
|
arxiv-papers
| 2011-04-13T13:33:40 |
2024-09-04T02:49:18.233585
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Xiaofeng Yang and Pengjie Zhang",
"submitter": "Xiaofeng Yang",
"url": "https://arxiv.org/abs/1104.2487"
}
|
1104.2552
|
# Single-qubit-gate error below $10^{-4}$ in a trapped ion
K. R. Brown kbrown@iontrapping.com; present address: Georgia Tech Research
Institute, 400 10th Street Northwest, Atlanta, Georgia 30318, USA. A. C.
Wilson Y. Colombe C. Ospelkaus Present address: QUEST, Leibniz Universität
Hannover, Im Welfengarten 1, D-30167 Hannover and PTB, Bundesallee 100,
D-38116 Braunschweig, Germany. A. M. Meier E. Knill D. Leibfried D. J.
Wineland National Institute of Standards and Technology, 325 Broadway,
Boulder, CO 80305, USA
###### Abstract
With a 9Be+ trapped-ion hyperfine-state qubit, we demonstrate an error
probability per randomized single-qubit gate of $2.0(2)\times 10^{-5}$, below
the threshold estimate of $10^{-4}$ commonly considered sufficient for fault-
tolerant quantum computing. The 9Be+ ion is trapped above a microfabricated
surface-electrode ion trap and is manipulated with microwaves applied to a
trap electrode. The achievement of low single-qubit-gate errors is an
essential step toward the construction of a scalable quantum computer.
In theory, quantum computers can solve certain problems much more efficiently
than classical computers Nielsen and Chuang (2000). This has motivated
experimental efforts to construct and to verify devices that manipulate
quantum bits (qubits) in a variety of physical systems Ladd et al. (2010). The
power of quantum computers depends on the ability to accurately control
sensitive superposition amplitudes by means of quantum gates, and errors in
these gates are a chief obstacle to building quantum computers DiVincenzo
(2000). Small gate errors would enable fault-tolerant operation through the
use of quantum error correction protocols Preskill (1998). While the maximum
tolerable error varies between correction strategies, there is a consensus
that $10^{-4}$ is an important threshold to breach Preskill (1998); Knill
(2010). Single-qubit gates with errors slightly above this level have been
achieved with nuclear spins in liquid-state nuclear-magnetic resonance
experiments Ryan et al. (2009) and with neutral atoms confined in optical
lattices Olmschenk et al. (2010); here we demonstrate single-qubit error
probabilities of $2.0(2)\times 10^{-5}$, substantially below the threshold.
Reaching fault tolerance still requires reducing two-qubit-gate errors from
the current state of the art ($7\times 10^{-3}$ for laser-based Benhelm et al.
(2008) and 0.24 for microwave-based gates Ospelkaus et al. (2011)) to similar
levels.
To determine the average error per gate (EPG), we use the method of randomized
benchmarking Knill et al. (2008). Compared to other methods for evaluating
gate performance, such as quantum process tomography Poyatos et al. (1997),
randomized benchmarking offers the advantage that it can efficiently and
separately determine the EPG and the combined state-preparation and
measurement errors. Because it involves long sequences of random gates, it is
sensitive to errors occurring when gates are used in arbitrary computations.
In randomized benchmarking, the qubit, initialized close to a pure quantum
state, is subjected to predetermined sequences of randomly selected Clifford
gates Bravyi and Kitaev (2005) for which, in the absence of errors, the
measurement outcome is deterministic and efficiently predictable. Clifford
gates include the basic unitary gates of most proposed fault-tolerant quantum
computing architectures. Together with certain single-qubit states and
measurements, they suffice for universal quantum computing Bravyi and Kitaev
(2005); Knill (2005). To establish the EPG, the actual measurement and
predicted outcome are compared for many random sequences of different lengths.
Under assumptions presented in Ref. Knill et al. (2008), this yields an
average fidelity as a function of the number of gates that decreases
exponentially to $1/2$ and determines the EPG. Randomized benchmarking has
been used to quantify single-qubit EPGs in a variety of systems as summarized
in Table 1.
Reference | System | Gate error
---|---|---
This Rapid Communication (2011) | Single trapped ion | $2.0(2)\times 10^{-5}$
Ryan et al. (2009) | Nuclear magnetic resonance | $1.3(1)\times 10^{-4}$
Olmschenk et al. (2010) | Atoms in an optical lattice | $1.4(1)\times 10^{-4}$
Biercuk et al. (2009) | Trapped-ion crystal | $8(1)\times 10^{-4}$
Knill et al. (2008) | Single trapped ion | $4.8(2)\times 10^{-3}$
Chow et al. (2010) | Superconducting transmon | $7(5)\times 10^{-3}$
Table 1: Reported average EPG for Pauli-randomized $\pi/2$ gates in different
systems as determined by randomized benchmarking.
To improve on the results of Ref. Knill et al. (2008), we integrated a
microwave antenna into a surface-electrode trap structure Ospelkaus et al.
(2008). The use of microwave radiation instead of optical stimulated-Raman
transitions to drive qubit rotations suppresses decoherence from laser beam
pointing instability and power fluctuations and eliminates decoherence from
spontaneous emission. The microwave amplitude can be stabilized more easily
than laser power, and because the antenna is integrated into the trap
electrodes, unwanted motion of the trap does not affect the microwave-ion-
coupling strength. The small distance (40 $\mu$m) between the trap surface and
the ion permits transition rates comparable to those based on lasers. Improved
shielding from ambient magnetic-field fluctuations was achieved by locating
the trap inside a copper vacuum enclosure held at 4.2 K by a helium-bath
cryostat. The thickness of the walls, combined with the increase in electrical
conductivity of copper at 4.2 K, effectively shields against the ambient
magnetic field fluctuations that typically limit coherence in room-temperature
ion-trap experiments Knill et al. (2008). This shielding is evident when we
change the magnetic field external to the cryostat; the accompanying response
in ion fluorescence lags the change with an exponential time constant of
3.8(2) s. In addition, cryogenic operation decreases the background gas
pressure to negligible levels, thereby enabling long experimental runs with
the same ion, and it suppresses ion heating Deslauriers et al. (2006);
Labaziewicz et al. (2008); Brown et al. (2011).
The 9Be+ ion is trapped $40$ $\mu$m above a surface-electrode trap Seidelin et
al. (2006) constructed of 8-$\mu$m-thick gold electrodes electroplated onto a
crystalline quartz substrate and separated by 5-$\mu$m gaps (Fig. 1).
Figure 1: (Color online) Micrograph of the ion trap, showing radio-frequency
(rf) electrodes and control electrodes. The red sphere indicates the
approximate ion position in the $x$-$y$ plane. Also shown are the directions
of the static magnetic field $B_{0}$ and of the microwave current used to
drive hyperfine transitions. (Inset) Energy level diagram (not to scale) of
the 2s 2S1/2 hyperfine states in 9Be+. Blue dashed lines indicate the
transitions used to prepare and measure $|\downarrow\rangle$. The solid black
line indicates the qubit transition, and the red dashed line indicates one of
the transitions used to shelve $|\uparrow\rangle$ into a dark state.
A static magnetic field $B_{0}$, parallel to the trap surface and collinear
with a Doppler cooling laser beam, is applied to break the degeneracy of the
ground-state Zeeman sublevels (Fig. 1 inset). We drive 2s 2S1/2 hyperfine
transitions with microwave pulses near 1.25 GHz, coupled with a 4-nF capacitor
to one end of a trap control electrode. The microwave current is shunted to
ground at the other end of the electrode by the 4-nF capacitor of an RC
filter. Microwave pulses are created by frequency quadrupling the output of a
direct-digital synthesizer whose frequency and phase can be updated in less
than 1 $\mu$s by a field-programmable gate array (FPGA; 16-ns timing
resolution). An rf switch creates approximately rectangular-shaped pulses.
This signal is amplified and is delivered via a coaxial cable within the
cryostat and a feedthrough in the copper vacuum enclosure. In this Rapid
Communication, we use the clock transition
($|F=2,m_{F}=0\rangle\equiv|\downarrow\rangle\leftrightarrow|1,0\rangle\equiv|\uparrow\rangle$)
in 9Be+ for the qubit instead of the previously used
$|2,-2\rangle\leftrightarrow|1,-1\rangle$ transition Knill et al. (2008) (Fig.
1 inset). The clock transition is a factor of 20 less sensitive to magnetic-
field fluctuations (950 MHz/T at field $B_{0}=1.51\times 10^{-3}$ T, compared
to 21 GHz/T).
A benchmarking experiment proceeds as follows. The ion is Doppler cooled and
optically pumped to the $|2,-2\rangle$ state with $\sigma^{-}$-polarized laser
radiation near the
$|^{2}$S${}_{1/2},2,-2\rangle\leftrightarrow|^{2}$P${}_{3/2},3,-3\rangle$
cycling transition at 313 nm. Then, the qubit is initialized in
$|\downarrow\rangle$ with two microwave $\pi$ pulses, resonant with the
$|2,-2\rangle\leftrightarrow|1,-1\rangle$ and
$|1,-1\rangle\leftrightarrow|\downarrow\rangle$ transitions (blue lines in
Fig. 1 inset). Pulse duration is then controlled by a digital delay generator,
which has a 5-ps timing resolution. The frequency, phase, and triggering of
each pulse remain under control of the FPGA.
A predetermined sequence of randomized computational gates is then applied.
Each computational gate consists of a Pauli gate ($\pi$ pulse) followed by a
(non-Pauli) Clifford gate ($\pi/2$ pulse). The gate sequence is followed by
measurement randomization consisting of a random Pauli gate and a Clifford
gate chosen deterministically to yield an expected measurement outcome of
either $|\uparrow\rangle$ or $|\downarrow\rangle$. The Pauli gates are chosen
with equal probability from the set $e^{-i\pi\sigma_{\mathrm{p}}/2}$, where
$\sigma_{\mathrm{p}}\in$ {$\pm\sigma_{x}$, $\pm\sigma_{y}$, $\pm\sigma_{z}$,
$\pm I$}. The Clifford gates are chosen with equal probability from the set
$e^{-i\pi\sigma_{\mathrm{c}}/4}$, where $\sigma_{\mathrm{c}}\in$
{$\pm\sigma_{x}$, $\pm\sigma_{y}$}. In practice, a Clifford gate is
implemented as a single (rectangular-shaped) $\pi/2$ pulse of duration
$\tau_{\pi/2}\approx 21$ $\mu$s with appropriate phase. For calibration
simplicity, a Pauli gate is implemented for $\sigma_{\mathrm{p}}\in$
{$\pm\sigma_{x}$, $\pm\sigma_{y}$} as two successive $\pi/2$ pulses, and an
identity gate $\pm I$ is implemented as an interval of the same duration
without the application of microwaves. A gate $e^{-i\pi\sigma_{z}/2}$ is
implemented as an identity gate, but the logical frame of the qubit and
subsequent pulses are adjusted to realize the relevant change in phase. All
pulses are separated by a delay of 0.72 $\mu$s.
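The sequence construction described above can be sketched numerically. This is a toy, noiseless model: gates are exact unitaries, and the final random Pauli of the measurement randomization is omitted, since it only swaps the target outcome.

```python
import numpy as np

# Pauli matrices and identity for a single qubit.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(sigma, theta):
    """Single-qubit rotation exp(-i*theta*sigma/2) about a Pauli axis."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sigma

# Pauli gates exp(-i*pi*sigma_p/2) with sigma_p in {+-x, +-y, +-z, +-I};
# Clifford gates exp(-i*pi*sigma_c/4) with sigma_c in {+-x, +-y}.
paulis = [rot(s, sgn * np.pi) for s in (sx, sy, sz) for sgn in (1, -1)] + [I2, -I2]
cliffords = [rot(s, sgn * np.pi / 2) for s in (sx, sy) for sgn in (1, -1)]

rng = np.random.default_rng(1)
psi = np.array([1, 0], dtype=complex)    # qubit initialized in |down>
for _ in range(144):                     # one sequence of 144 computational gates
    psi = cliffords[rng.integers(4)] @ paulis[rng.integers(8)] @ psi

# Measurement randomization: a closing Clifford (or identity) chosen so the
# state returns to the z axis, making the measurement outcome deterministic.
p_down = [abs((g @ psi)[0]) ** 2 for g in cliffords + [I2]]
p_best = min(p_down, key=lambda q: min(q, 1 - q))
```

Because every gate is a quarter- or half-turn about a coordinate axis, the ideal state always lies on a coordinate axis of the Bloch sphere, so one of the five closing gates always yields an outcome probability of 0 or 1.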
To detect the final qubit state, $\pi$ pulses implement the transfer
$|\downarrow\rangle\rightarrow|1,-1\rangle\rightarrow|2,-2\rangle$ (blue lines
in Fig. 1 inset). Two additional pulses implement the transfer
$|\uparrow\rangle\rightarrow|\downarrow\rangle\rightarrow|1,1\rangle$ (black
and red lines in Fig. 1 inset). The ion is then illuminated for 400 $\mu$s by
313-nm light resonant with the cycling transition, and the resulting
fluorescence is detected with a photomultiplier. The entire sequence
experiment (from initialization through detection) is repeated 100 times (for
each sequence) to reduce statistical uncertainty. On average, approximately 13
photons are collected from an ion in the bright $|2,-2\rangle$ state, but only
$0.14$ are collected from an ion in the dark $|1,1\rangle$ state (due largely
to laser light scattered from the trap surface). To normalize the detection
and to eliminate errors due to slow fluctuations in laser power, each sequence
experiment is immediately followed by two reference experiments, where the ion
is prepared in the $|\downarrow\rangle$ and $|\uparrow\rangle$ states,
respectively, and the above detection protocol is implemented. From the
resulting bright and dark histograms [inset to Fig. 2(b)], we take the median
to establish a threshold for $|\downarrow\rangle$ and $|\uparrow\rangle$
detection.
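The thresholding scheme can be illustrated with a toy photon-count model, assuming plain Poisson statistics with the mean counts quoted above (~13 photons bright, ~0.14 dark); real histograms also contain background and optical-pumping effects not modeled here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Reference experiments: ion prepared bright (|2,-2>) or dark (|1,1>).
bright_ref = rng.poisson(13.0, size=1000)
dark_ref = rng.poisson(0.14, size=1000)

# Threshold = median of the summed reference histograms, as described above.
threshold = np.median(np.concatenate([bright_ref, dark_ref]))

def is_bright(counts):
    """Classify detection events by comparing photon counts to the threshold."""
    return counts > threshold

# Misclassification rates under this toy model.
err_bright = np.mean(~is_bright(rng.poisson(13.0, size=20000)))
err_dark = np.mean(is_bright(rng.poisson(0.14, size=20000)))
```

With means this well separated, the median threshold lands between the two distributions and both misclassification rates are far below a percent in the toy model.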
Results are shown in Fig. 2.
Figure 2: (Color online) Results of the single-qubit benchmarking experiments.
(a) Histogram of sequences of a given length with a given fidelity. Fidelity
is discretized to 0.01 precision because 100 experiments were performed for
each sequence. (b) Mean fidelity for each sequence length with error bars. The
black trace is a least-squares fit to Eq. (1) yielding an EPG of 2.0(2)
$\times$ 10-5. (Inset) Summed histogram of bright and dark calibration
experiments with a red line indicating the detection threshold.
Sequence length refers to the number of computational gates in a sequence. We
implement sequences of lengths 1, 3, 8, 21, 55, 144, 233, 377, 610, and 987,
with 100 different sequences at each length, for a total of 1000 unique
sequences. With the 21-$\mu$s $\pi/2$ duration used here, a sequence of 987
computational gates requires approximately 64 ms to complete. Our current
software limits the experiment to sequences of length $\lesssim 1300$ gates.
Theoretically, the average probability for obtaining a correct measurement
result (the fidelity) after a sequence of length $l$ is Knill et al. (2008)
$\bar{\mathcal{F}}=\frac{1}{2}+\frac{1}{2}(1-d_{\mathrm{if}})(1-2\mathcal{E}_{\mathrm{g}})^{l},$
(1)
where $d_{\mathrm{if}}$ describes errors in initialization and measurement and
$\mathcal{E}_{\mathrm{g}}$ is the EPG. A least-squares fit of the observed
decay in fidelity to Eq. (1) yields $\mathcal{E}_{\mathrm{g}}=2.0(2)\times
10^{-5}$ and $d_{\mathrm{if}}=2.7(1)\times 10^{-2}$. Here, $d_{\mathrm{if}}$
is limited by imperfect laser polarization caused by inhomogeneities in the
birefringence of the cryogenic windows of the vacuum enclosure.
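The fit can be sketched as follows. The data here are synthetic, generated from the paper's fitted values with a hypothetical binomial noise model, and a simple linearized least-squares fit of $\ln(2\bar{\mathcal{F}}-1)$ stands in for the actual fitting procedure.

```python
import numpy as np

# Sequence lengths used in the experiment.
lengths = np.array([1, 3, 8, 21, 55, 144, 233, 377, 610, 987], dtype=float)

def mean_fidelity(l, d_if, epg):
    """Eq. (1): average fidelity after a sequence of l computational gates."""
    return 0.5 + 0.5 * (1 - d_if) * (1 - 2 * epg) ** l

# Synthetic data around the paper's fitted values (EPG = 2.0e-5, d_if = 2.7e-2),
# with binomial counting noise for 100 runs x 100 sequences per length.
rng = np.random.default_rng(3)
n = 100 * 100
f_obs = rng.binomial(n, mean_fidelity(lengths, 2.7e-2, 2.0e-5)) / n

# Linearized fit: ln(2F - 1) = ln(1 - d_if) + l * ln(1 - 2 * epg).
slope, intercept = np.polyfit(lengths, np.log(2 * f_obs - 1), 1)
epg_fit = (1 - np.exp(slope)) / 2
d_if_fit = 1 - np.exp(intercept)
```

Even at an EPG of $2\times 10^{-5}$, the fidelity drop over the longest sequences (~2%) is an order of magnitude above the counting noise, so the decay rate is well constrained.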
The following systematic effects may contribute to the EPG: magnetic-field
fluctuations, microwave phase and frequency instability and resolution limits,
ac Zeeman shifts, pulse amplitude and duration fluctuations, microwave-ion-
coupling strength fluctuations, decoherence caused by unintended laser
illumination of the ion, and off-resonant excitation to other levels in the
ground-state hyperfine manifold.
During the benchmarking, we calibrate the qubit transition frequency
approximately every 60 s. The difference between each frequency recalibration
and the first calibration is plotted in Fig. 3(a) for the time period
corresponding to the data in Fig. 2.
Figure 3: Changes in (a) qubit transition frequency and (b) $\pi/2$ duration
during the benchmarking experiments. Change is defined as the difference
between the recalibrated value and the first calibration. Typical transition
frequencies and $\pi/2$ durations are approximately 1.2507385 GHz and 20.50
$\mu$s, respectively.
Monte Carlo simulations of the sequences indicate an EPG contribution of
$\mathcal{E}_{\mathrm{g}}=\beta\Delta^{2}$, where $\beta=1.91\times
10^{-8}/$Hz2 and $\Delta$ is the detuning of the microwave frequency from the
qubit frequency (assumed constant for all of the sequences). In the absence of
recalibrations, the root-mean-square (rms) difference of 25 Hz would give a
predicted EPG of $1.2\times 10^{-5}$. However, with regular recalibration, the
rms difference in frequency between adjacent calibration points (15 Hz) gives
a predicted contribution to the EPG of $0.4\times 10^{-5}$. The microwave
frequency and phase resolution are 0.37 Hz and $1.5$ mrad, respectively,
leading to a predicted EPG contribution of less than $10^{-7}$.
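The quoted contributions follow directly from this quadratic scaling; the short check below reproduces the $1.2\times 10^{-5}$ and $0.4\times 10^{-5}$ figures from the rms detunings of 25 Hz and 15 Hz.

```python
# Coefficient from the Monte Carlo simulations quoted in the text.
BETA = 1.91e-8  # EPG contribution per Hz^2 of constant detuning

def epg_from_detuning(delta_hz):
    """Quadratic EPG contribution E_g = beta * Delta^2, Delta in Hz."""
    return BETA * delta_hz ** 2

epg_no_recal = epg_from_detuning(25.0)  # rms drift without recalibration
epg_recal = epg_from_detuning(15.0)     # rms step between adjacent calibrations
```

The factor-of-three reduction from regular recalibration is just the ratio $(25/15)^2\approx 2.8$ of the squared detunings.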
A theoretical estimate for the expected ac Zeeman shift of the clock (qubit)
transition yields a value of less than $1$ Hz. In principle, this shift can be
determined by comparing the qubit frequency measured in a Ramsey experiment
with that of a Rabi experiment. Such back-to-back comparisons yielded values
ranging from +14 Hz to -10 Hz, each with errors of approximately 2 Hz. The
source of this variation and of the discrepancy with theory is not known, but
if we assume, as a worst-case scenario, a miscalibration of 15 Hz for the
frequency, we estimate an EPG contribution of $0.4\times 10^{-5}$.
One measure of errors caused by qubit-frequency fluctuations (e.g., from
fluctuating magnetic fields) is to characterize decoherence as an exponential
decay through a T2 process Olmschenk et al. (2010); Knill et al. (2008). To
check this, we implement a Ramsey experiment. The ion is prepared in
$|\downarrow\rangle$, and we apply a single $\pi/2$ pulse. After waiting for
an interval $\tau/2$, we apply a $\pi$ pulse to refocus the qubit, and
following another interval $\tau/2$, we apply an additional $\pi/2$ pulse,
ideally restoring the qubit back to $|\downarrow\rangle$. An exponential fit
of the resulting decay in the $|\downarrow\rangle$ state probability over
periods $\tau\lesssim 100$ ms gives $T_{2}=0.38(4)$ s. Assuming this value
of $T_{2}$ also describes frequency fluctuations on the time scale of the gate
pulses, we predict an EPG contribution of $9\times 10^{-5}$. Because this
exceeds the benchmark value, we believe that in this experiment the noise at
shorter time scales is smaller than predicted by a simple exponential fit at
longer durations.
We recalibrate the $\pi/2$ duration approximately every 120 s with a sequence
of 256 in-phase $\pi/2$ pulses [Fig. 3(b)]. Monte Carlo simulations indicate
an EPG contribution of $\mathcal{E}_{\mathrm{g}}=\gamma(\Delta\tau)^{2}$,
where $\gamma=2.7\times 10^{-3}/\mu$s2 and $\Delta\tau$ represents a
miscalibration in the $\pi/2$ time (assumed constant for all sequences). In
the absence of recalibration, the 23-ns rms drift would correspond to an EPG
of $0.1\times 10^{-5}$; from the estimated residual miscalibration between
points of 5 ns, we predict an EPG contribution of less than $10^{-7}$.
We characterize pulse-to-pulse microwave power fluctuations by turning on the
microwaves continuously and sampling the power every 10 ns. The integral of
the sampled power over a 25-$\mu$s interval is proportional to the total
rotation angle during a pulse of the same duration. We perform this integral
12 times, with each 25-$\mu$s interval following the previous one by 10 s.
Within 120 s after turning on the microwaves, we observe a 1% drift in the
power. If the pulse-to-pulse variation in microwave power is, in fact, this
large, it corresponds to an EPG contribution of $3\times 10^{-5}$. However,
after a 20-min warm-up interval, we measure a pulse-to-pulse power variation
of only 0.1%, corresponding to an EPG contribution of $0.03\times 10^{-5}$.
Because the duty cycle of the benchmarking experiment is not constant, with
sequences of different lengths and calibration experiments interspersed
throughout, it is difficult to assign a specific EPG contribution to this
effect. However, we do observe larger EPG at higher microwave powers,
consistent with temperature effects playing a role at these higher powers.
To investigate unintended laser light as a source of decoherence, (e.g., from
optical pumping), the ion is prepared in $|\downarrow\rangle$ and is allowed
to remain in the dark for varying durations. We observe no decay in the
$|\downarrow\rangle$ state probability with an uncertainty of $2\times
10^{-7}$/$\mu$s, corresponding to the absence of gate errors during the
65-$\mu$s randomized gate interval with an uncertainty of $1\times 10^{-5}$.
Similar results are obtained for an ion prepared in $|\uparrow\rangle$.
Microwave-induced transitions from the qubit levels into other Zeeman levels
within the ground-state hyperfine manifold can be inferred by observing an
asymmetry between sequences ending in $|\downarrow\rangle$ and those ending in
$|\uparrow\rangle$. While the $|2,-2\rangle$ state fluoresces with 13 photons
detected on average, other hyperfine states yield, at most, 1.3 photons during
the 400-$\mu$s detection period. Therefore, transitions from the qubit
manifold to other levels would show up as a loss of fidelity for sequences
ending in $|\downarrow\rangle$, while they would not affect the apparent
fidelity of sequences ending in $|\uparrow\rangle$. For the bright sequences
in Fig. 2, the EPG is $2.2(5)\times 10^{-5}$, while for the dark sequences it
is $2.0(5)\times 10^{-5}$. We conclude that qubit leakage contributes an EPG
of $<0.2(7)\times 10^{-5}$. Similarly, if ion heating contributes to the EPG,
it should appear as a deviation from exponential decay in the benchmarking
data, which we do not observe.
For future work, it seems likely that microwave power fluctuations could be
controlled passively through a suitable choice of amplifiers and switching
circuitry or actively via feedback. Shorter pulses at higher microwave powers
would diminish errors associated with fluctuating qubit frequency, but errors
due to off-resonant transitions become more of a concern in this regime. Off-
resonant transitions could be suppressed with the use of appropriately shaped
pulses, which concentrate the microwave spectrum near the qubit transition
frequency. Self-correcting pulse sequences Levitt (1986) could be used to
reduce the effects of errors in $\pi/2$ duration and transition frequency. In
a multizone trap array, single-qubit gates implemented with microwaves will be
susceptible to cross talk between zones; however, this effect can be mitigated
with careful microwave design, the use of nulling currents in spectator zones
Ospelkaus et al. (2008), and the use of composite pulses Levitt (1986). A
demonstration of two-qubit gates with errors small enough to enable scalable
quantum computing remains challenging, but high-fidelity single-qubit gates
should make this task easier. For example, many refocusing and decoupling
techniques are based on single-qubit gates and can reduce errors during two-
qubit gate operations Viola et al. (1999).
###### Acknowledgements.
This work was supported by IARPA, NSA, DARPA, ONR, and the NIST Quantum
Information Program. We thank U. Warring, M. Biercuk, A. VanDevender, J.
Amini, and R. B. Blakestad for their help in assembling parts of the
experiment, and we thank J. Britton, S. Glancy, A. Steane, and C. Bennett for
comments. This article is a contribution of the U.S. Government, not subject
to U.S. copyright.
## References
* Nielsen and Chuang (2000) M. A. Nielsen and I. L. Chuang, _Quantum Computation and Quantum Information_ (Cambridge University Press, Cambridge, UK, 2000).
* Ladd et al. (2010) T. D. Ladd et al., Nature (London) 464, 45 (2010).
* DiVincenzo (2000) D. P. DiVincenzo, Fortschr. Phys. 48, 771 (2000).
* Preskill (1998) J. Preskill, Proc. R. Soc. Lond., Ser. A 454, 385 (1998).
* Knill (2010) E. Knill, Nature (London) 463, 441 (2010).
* Ryan et al. (2009) C. A. Ryan, M. Laforest, and R. Laflamme, New J. Phys. 11, 013034 (2009).
* Olmschenk et al. (2010) S. Olmschenk et al., New J. Phys. 12, 113007 (2010).
* Benhelm et al. (2008) J. Benhelm et al., Nat. Phys. 4, 463 (2008).
* Ospelkaus et al. (2011) C. Ospelkaus et al., Nature (London) 476, 181 (2011).
* Knill et al. (2008) E. Knill et al., Phys. Rev. A 77, 012307 (2008).
* Poyatos et al. (1997) J. F. Poyatos, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. 78, 390 (1997).
* Bravyi and Kitaev (2005) S. Bravyi and A. Kitaev, Phys. Rev. A 71, 022316 (2005).
* Knill (2005) E. Knill, Nature (London) 434, 39 (2005).
* Biercuk et al. (2009) M. J. Biercuk et al., Quantum Inf. Comput. 9, 920 (2009).
* Chow et al. (2010) J. M. Chow et al., Phys. Rev. A 82, 040305 (2010).
* Ospelkaus et al. (2008) C. Ospelkaus et al., Phys. Rev. Lett. 101, 090502 (2008).
* Deslauriers et al. (2006) L. Deslauriers et al., Phys. Rev. Lett. 97, 103007 (2006).
* Labaziewicz et al. (2008) J. Labaziewicz et al., Phys. Rev. Lett. 100, 013001 (2008).
* Brown et al. (2011) K. R. Brown et al., Nature (London) 471, 196 (2011).
* Seidelin et al. (2006) S. Seidelin et al., Phys. Rev. Lett. 96, 253003 (2006).
* Levitt (1986) M. H. Levitt, Prog. Nucl. Magn. Reson. Spectrosc. 18, 61 (1986).
* Viola et al. (1999) L. Viola, S. Lloyd, and E. Knill, Phys. Rev. Lett. 83, 4888 (1999).
|
arxiv-papers
| 2011-04-13T17:06:48 |
2024-09-04T02:49:18.240080
|
{
"license": "Public Domain",
"authors": "K. R. Brown, A. C. Wilson, Y. Colombe, C. Ospelkaus, A. M. Meier, E.\n Knill, D. Leibfried, and D. J. Wineland",
"submitter": "Kenton Brown",
"url": "https://arxiv.org/abs/1104.2552"
}
|
1104.2575
|
version 30.3.2011
Two-boson field quantisation and flavour in $q^{+}q^{-}$ mesons
H.P. Morsch111postal address: Institut für Kernphysik, Forschungszentrum
Jülich, D-52425 Jülich, Germany
E-mail: h.p.morsch@gmx.de
Institute for Nuclear Studies, Pl-00681 Warsaw, Poland
###### Abstract
The flavour degree of freedom in non-charged $q\bar{q}$ mesons is discussed in
a generalisation of quantum electrodynamics including scalar coupling of gauge
bosons, which leads to an understanding of the confinement potential in
mesons. The known “flavour states” $\sigma$, $\omega$, $\Phi$, $J/\Psi$ and
$\Upsilon$ can be described as fundamental states of the $q\bar{q}$ meson
system, if a potential sum rule is applied, which is related to the structure
of the vacuum. This indicates a quantisation in fundamental two-boson fields,
connected directly to the flavour degree of freedom.
In comparison with potential models additional states are predicted, which
explain the large continuum of scalar mesons in the low mass spectrum and new
states recently detected in the charm region.
PACS/ keywords: 11.15.-q, 12.40.-y, 14.40.Cs, 14.40.Gx/ Generalisation of
quantum electrodynamics with massless elementary fermions (quantons, $q$) and
scalar two-boson coupling. Confinement potential. Flavour degree of freedom of
mesons described by fundamental $q^{+}q^{-}$ states. Masses of $\sigma$,
$\omega$, $\Phi$, $J/\Psi$ and $\Upsilon$.
The flavour degree of freedom has been observed not only in hadrons but also
in charged and neutral leptons, see e.g. ref. [1]. It is described in the
Standard Model of particle physics by elementary fermions of different flavour
quantum number. The fact that flavour is found in both strong and electroweak
interactions could point to a supersymmetry between these fundamental forces;
this would give rise to a variety of supersymmetric particles, which, in spite
of extensive searches, have not been observed.
A very different interpretation of the flavour degree of freedom is obtained
in an extension of quantum electrodynamics, in which the property of
confinement of mesons as well as their masses are well described. This is
based on a Lagrangian [2], which includes a scalar coupling of two vector
bosons
${\cal L}=\frac{1}{\tilde{m}^{2}}\bar{\Psi}\
i\gamma_{\mu}D^{\mu}(D_{\nu}D^{\nu})\Psi\ -\
\frac{1}{4}F_{\mu\nu}F^{\mu\nu}~{},$ (1)
where $\Psi$ is a massless elementary fermion (quanton, q) field,
$D_{\mu}=\partial_{\mu}-i{g_{e}}A_{\mu}$ the covariant derivative with vector
boson field $A_{\mu}$ and coupling $g_{e}$, and
$F^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}$ the field strength
tensor. Since our Lagrangian is an extension of quantum electrodynamics, the
coupling $g_{e}$ corresponds to a generalized charge coupling $g_{e}\geq e$
between charged quantons $q^{+}$ and $q^{-}$. Inserting the explicit form of
$D^{\mu}$ and $D_{\nu}D^{\nu}$ into eq. (1), and applying the Lorentz gauge
$\partial_{\mu}A^{\mu}=0$ and current conservation, leads to the following two
contributions with 2- and 3-boson ($2g$ and $3g$) coupling
${\cal L}_{2g}=\frac{-ig_{e}^{2}}{\tilde{m}^{2}}\
\bar{\Psi}\gamma_{\mu}\partial^{\mu}(A_{\nu}A^{\nu})\Psi\ $ (2)
and
${\cal L}_{3g}=\frac{-g_{e}^{3}\ }{\tilde{m}^{2}}\
\bar{\Psi}\gamma_{\mu}A^{\mu}(A_{\nu}A^{\nu})\Psi\ .$ (3)
Requiring that $A_{\nu}A^{\nu}$ corresponds to a background field, ${\cal
L}_{2g}$ and ${\cal L}_{3g}$ give rise to two first-order $q^{+}q^{-}$ matrix
elements
${\cal
M}_{2g}=\frac{-\alpha_{e}^{2}}{\tilde{m}^{3}}\bar{\psi}(\tilde{p}^{\prime})\gamma_{\mu}~{}\partial^{\mu}\partial^{\rho}w(q)g_{\mu\rho}~{}\gamma_{\rho}\psi(\tilde{p})\
$ (4)
and
${\cal
M}_{3g}=\frac{-\alpha_{e}^{3}}{\tilde{m}}\bar{\psi}(\tilde{p}^{\prime})\gamma_{\mu}~{}w(q)\frac{g_{\mu\rho}f(p_{i})}{p_{i}^{~{}2}}w(q)~{}\gamma_{\rho}\psi(\tilde{p})~{},$
(5)
in which $\alpha_{e}={g_{e}^{2}}/{4\pi}$ and $\psi(\tilde{p})$ is a two-
fermion wave function $\psi(\tilde{p})=\frac{1}{\tilde{m}^{3}}\Psi(p)\Psi(k)$.
The momenta have to respect the condition
$\tilde{p}^{\prime}-\tilde{p}=q+p_{i}=P$. Further, $w(q)$ is the two-boson
momentum distribution and $f(p_{i})$ the probability to combine $q$ and $P$ to
$-p_{i}$. Since $f(p_{i})\to 0$ for $\Delta p\to 0$ and $\infty$, there are no
divergences in ${\cal M}_{3g}$.
By contracting the $\gamma$ matrices by
$\gamma_{\mu}\gamma_{\rho}+\gamma_{\rho}\gamma_{\mu}=2g_{\mu\rho}$, reducing
eqs. (4) and (5) to three dimensions, and making a transformation to r-space
(details are given in ref. [2]), the following two potentials are obtained,
which are given in spherical coordinates by
$V_{2g}(r)=\frac{\alpha_{e}^{2}\hbar^{2}\tilde{E}^{2}}{\tilde{m}^{3}}\
\Big{(}\frac{d^{2}w(r)}{dr^{2}}+\frac{2}{r}\frac{dw(r)}{dr}\Big{)}\frac{1}{\
w(r)}\ ,$ (6)
where $\tilde{E}=<E^{2}>^{1/2}$ is the mean energy of scalar states of the
system, and
$V^{(1^{-})}_{3g}(r)=\frac{\hbar}{\tilde{m}}\int dr^{\prime}\rho(r^{\prime})\
V_{g}(r-r^{\prime})~{},$ (7)
in which $w(r)$ and $\rho(r)$ are the two-boson wave function and density (with
dimension $fm^{-2}$), respectively, related by $\rho(r)=w^{2}(r)$. Further,
$V_{g}(r-r^{\prime})$ is an effective boson-exchange interaction
$V_{g}(r)=-\alpha_{e}^{3}\hbar\frac{f(r)}{r}$. Since the quanton-antiquanton
parity is negative, the potential (7) corresponds to a binding potential for
vector states (with $J^{\pi}=1^{-}$). For scalar states angular momentum L=1
is needed, requiring a p-wave density, which is related to $\rho(r)$ by
$\rho^{p}(\vec{r})=\rho^{p}(r)\ Y_{1,m}(\theta,\Phi)=(1+\beta R\ d/dr)\rho(r)\
Y_{1,m}(\theta,\Phi)\ .$ (8)
$\beta R$ is determined from the condition $<r_{\rho^{p}}>\ =\int d\tau\
r\rho^{p}(r)=0$ (elimination of spurious motion). This yields a boson-exchange
potential given by
$V^{(0^{+})}_{3g}(r)=\frac{\hbar}{\tilde{m}}\int d\vec{r}\
^{\prime}\rho^{p}(\vec{r}\ ^{\prime})\ Y_{1,m}(\theta^{\prime},\Phi^{\prime})\
V_{g}(\vec{r}-\vec{r}^{\prime})=4\pi\frac{\hbar}{\tilde{m}}\int
dr^{\prime}\rho^{p}(r^{\prime})\ V_{g}(r-r^{\prime})~{}.$ (9)
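The spurious-motion condition below eq. (8) can be sketched numerically. Assuming the density form of eq. (12) with $\kappa=1.5$ and an illustrative slope parameter $b$ (both the normalization and the value of $b$ here are placeholders, not fitted values from the tables):

```python
import math

# Determine beta*R in eq. (8) from the condition
#   integral r * rho_p(r) * r^2 dr = 0,  rho_p = (1 + betaR * d/dr) rho,
# for rho(r) = exp(-2 (r/b)**kappa), kappa = 1.5 (illustrative b).
# Integration by parts gives betaR = (int r^3 rho) / (3 int r^2 rho).
kappa, b = 1.5, 0.5   # b in fm, placeholder value
rho = lambda r: math.exp(-2.0 * (r / b) ** kappa)
drho = lambda r: -2.0 * kappa / b * (r / b) ** (kappa - 1) * rho(r)

dr = 1e-3
rs = [dr * (i + 0.5) for i in range(int(10 * b / dr))]  # midpoint grid
I3 = sum(r ** 3 * rho(r) for r in rs) * dr
I2 = sum(r ** 2 * rho(r) for r in rs) * dr
betaR = I3 / (3 * I2)

# Verify the condition <r_{rho_p}> = 0 by direct integration
moment = sum(r ** 3 * (rho(r) + betaR * drho(r)) for r in rs) * dr
print(f"betaR = {betaR:.4f} fm, residual moment = {moment:.1e}")
```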
We require a matching of $V^{(0^{+})}_{3g}(r)$ and $\rho(r)$
$V^{(0^{+})}_{3g}(r)=c_{pot}\ \rho(r)\ ,$ (10)
where $c_{pot}$ is an arbitrary proportionality factor. Eq. (10) is a
consequence of the fact that $V_{g}(r)$ should be finite for all values of r.
This can be achieved by using a form
$V_{g}(r)=f_{as}(r)(-\alpha_{e}^{3}\hbar/r)\ e^{-cr}\ $ (11)
with $f_{as}(r)=(e^{(ar)^{\sigma}}-1)/(e^{(ar)^{\sigma}}+1)$, where the
parameters $c$, $a$ and $\sigma$ are determined from the condition (10).
Self-consistent two-boson densities are obtained assuming a form
$\rho(r)=\rho_{o}\ \left[\exp\\{-(r/b)^{\kappa}\\}\right]^{2}\quad\mathrm{with}\quad\kappa\simeq 1.5\ .$ (12)
The matching condition (10) is rather strict (see fig. 1) and determines the
parameter $\kappa$ of $\rho(r)$ quite well: for a pure exponential form
($\kappa$=1) a very steep rise of $\rho(r)$ is obtained for $r\to 0$, but an
almost negligible and flat boson-exchange potential, which cannot satisfy eq.
(10). For a Gaussian form ($\kappa$=2) no consistency is obtained either: the
deduced potential falls off more rapidly towards larger radii than the density
$\rho(r)$. The agreement between $<r^{2}_{\rho}>$ and $<r^{2}_{V_{3g}}(r)>$
cannot be enforced by using a different parametrisation for $f_{as}(r)$. Only
with a density with $\kappa\simeq 1.5$ is a satisfactory solution obtained.
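Why $\kappa\simeq 1.5$ produces a confinement-like potential can be seen directly from eq. (6): with $w(r)=\exp\{-(r/b)^{\kappa}\}$ (so $\rho=w^{2}$), the bracket $(w''+2w'/r)/w$ reduces analytically to $[\kappa^{2}(r/b)^{2\kappa}-\kappa(\kappa+1)(r/b)^{\kappa}]/r^{2}$, which for $\kappa=1.5$ grows linearly at large $r$. A small sketch (prefactors and units dropped; $b$ is an illustrative value):

```python
# Illustrative check of the radial bracket in eq. (6) for
# w(r) = exp(-(r/b)**kappa): at large r it grows like r**(2*kappa - 2),
# i.e. linearly for kappa = 1.5 -- a linear confinement potential.
kappa, b = 1.5, 0.5   # placeholder slope parameter

def bracket(r):
    u = (r / b) ** kappa
    return (kappa ** 2 * u ** 2 - kappa * (kappa + 1) * u) / r ** 2

# Doubling r at large r roughly doubles the bracket for kappa = 1.5
print(bracket(8.0) / bracket(4.0))  # -> close to 2
```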
For our solution (12) it is important to verify that $V_{g}(r)$ is quite
similar in shape to $\rho^{p}(r)$ required from the modification of the boson-
exchange propagator. This is indeed the case, as shown in the upper part of
fig. 2, which displays solution 4 in the tables. Further, the low radius cut-
off function $f_{as}(r)$ is shown by the dashed line, which falls off to zero
for $r\to 0$. A transformation to momentum ($Q$) space leads to $f_{as}(Q)\to
0$ for $Q\to\infty$. Interestingly, this decrease of $f_{as}(Q)$ at large
momenta is quite similar to the behaviour of quantum chromodynamics, whose
coupling strength $\alpha(Q)$ falls slowly with $Q$, a property related to
asymptotic freedom [3].
In the two lower parts of fig. 2 the resulting two-boson density and the
boson-exchange potential (9) are shown in r- and Q-space222In Q-space
multiplied by $Q^{2}$. for solution 4 in the tables; the two are in very good
agreement. In the Fourier transformation to Q-space the process $gg\rightarrow
q\bar{q}$ is elastic and consequently the created $q\bar{q}$-pair has no mass.
However, if we take a finite mass of the created fermions of 1.4 GeV (such a
mass has been assumed for a comparable system in potential models [4]), a
boson-exchange potential is obtained (given by the dashed line in the lower
part of fig. 2), which cannot be consistent with the density $\rho(r)$. Thus,
our solutions require massless fermions. This allows us to relate the generated
system to the absolute vacuum of fluctuating boson fields with energy
$E_{vac}=0$.
The mass of the system is given by
$M^{m}_{n}=-E_{3g}^{~{}m}+E_{2g}^{~{}n}\ ,$ (13)
where $E_{3g}^{~{}m}$ and $E_{2g}^{~{}n}$ are binding energies in $V_{3g}(r)$
and $V_{2g}(r)$, respectively, calculated by using a mass parameter
$\tilde{m}=1/4~{}\tilde{M}=1/4<Q^{2}_{\rho}>^{1/2}$, where $\tilde{M}$ is the
average mass generated, and $\tilde{E}$ given in table 2. The coupling
constant $\alpha_{e}$ is determined by the matching of the binding energies to
the mass, see eq. (13). The boson-exchange potential is attractive and has
negative binding energies, with the strongest bound state having the largest
mass and excited states having smaller masses. These energies do not increase
the mean energy $E_{vac}$ of the vacuum: writing the energy-momentum relation
$E_{vac}=0=\sqrt{<Q^{2}_{\rho}>}+E_{3g}$, this relation is conserved, if
$E_{3g}$ is compensated by the root mean square momentum of the deduced
density $<Q^{2}_{\rho}>^{1/2}$.
In contrast, the binding energy in the self-induced two-boson potential (6),
which does not appear in normal gauge theory applications (see ref. [1]), is
positive and corresponds to a real mass generation by increasing the total
energy by $E_{2g}$. Therefore, this potential allows a creation of stationary
$(q\bar{q})^{n}$ states out of the absolute vacuum of fluctuating boson
fields, if two rapidly fluctuating boson fields overlap and cause a quantum
fluctuation with energy $E_{2g}$. The two-boson potential $V_{2g}(r)$ (with
density parameters from solution 4 in the tables) is compared to the
confinement potential from lattice gauge calculations [5] in the upper part of
fig. 3, which shows good agreement. The corresponding potentials obtained from
the other solutions are very similar, if a small decrease of $\kappa$ is
assumed for solutions of stronger binding (as given in table 2).
Table 1: Deduced masses (in GeV) of scalar and vector $q^{+}q^{-}$ states in comparison with known $0^{++}$ and $1^{--}$ mesons [1] (for $V_{3g}(r)$ only the lowest bound state is given). Solution | (meson) | $M^{1}_{1}$ | $M^{1}_{2}$ | $M^{1}_{3}$ | $M_{1}^{exp}$ | $M_{2}^{exp}$ | $M_{3}^{exp}$
---|---|---|---|---|---|---|---
1 scalar | $\sigma$ | 0.55 | 1.28 | 1.88 | 0.60$\pm$0.2 | 1.35$\pm$0.2 |
2 scalar | $f_{o}$ | 1.38 | 2.25 | 2.9 | 1.35$\pm$0.2 | |
vector | $\omega$ | 0.78 | 1.65 | 2.3 | 0.78 | 1.65$\pm$0.02 |
3 scalar | $f_{o}$ | 2.68 | 3.34 | 3.9 | — | |
vector | $\Phi$ | 1.02 | 1.68 | 2.23 | 1.02 | 1.68$\pm$0.02 |
4 scalar | not seen | 11.7 | 12.3 | 12.8 | — | |
vector | $J/\Psi$ | 3.10 | 3.69 | 4.16 | 3.097 | 3.686 | (4.160)
5 scalar | not seen | 40.5 | 41.0 | 41.4 | — | |
vector | $\Upsilon$ | 9.46 | 9.98 | 10.38 | 9.46 | 10.023 | 10.355
We have seen in fig. 1 that the functional shape of the two-boson density (12)
(given by the parameter $\kappa$) is quite well determined. By contrast, we
find that the slope parameter $b$ (which governs the radial extent
$<r^{2}_{\rho}>$) is not constrained by the different conditions applied. This
allows a continuum of solutions with different radius. However, on the
fundamental level of overlapping boson fields quantum effects are inherent and
should give rise to discrete solutions. Such a (new) quantisation can only
arise from an additional constraint originating from the structure of the
vacuum. This may be formulated in the form of a vacuum potential sum rule.
Table 2: Parameters and deduced values of $<Q^{2}_{\rho}>^{1/2}$, $\tilde{E}$ in GeV and $<r^{2}>$ in $fm^{2}$ for the different solutions. $b$ is given in $fm$, $c$ and $a$ in $fm^{-1}$. The values of $<r^{2}>_{exp}$ are taken from ref. [6]. Sol. | $\kappa$ | $b$ | $\alpha_{e}$ | $c$ | $a$ | $\sigma$ | $<Q^{2}_{\rho}>^{1/2}$ | $\tilde{E}$ | $<r^{2}_{\rho}>$ | $<r^{2}>_{exp}$
---|---|---|---|---|---|---|---|---|---|---
1 | 1.50 | 0.77 | 0.26 | 2.4 | 6.4 | 0.86 | 0.59 | 0.9 | 0.65 | –
2 | 1.46 | 0.534 | 0.385 | 3.3 | 12.0 | 0.86 | 0.81 | 1.0 | 0.33 | 0.33
3 | 1.44 | 0.327 | 0.44 | 5.35 | 16.4 | 0.85 | 1.44 | 1.3 | 0.13 | 0.21
4 | 1.40 | 0.125 | 0.58 | 13.6 | 50.7 | 0.83 | 3.50 | 1.6 | 0.02 | 0.04
5 | 1.37 | 0.042 | 0.635 | 46.0 | 132.6 | 0.82 | 10.46 | 2.3 | 0.002 | –
We assume the existence of a global boson-exchange interaction in the vacuum
$V_{vac}(r)$, which has a radial dependence similar to the boson-exchange
interaction (11) discussed above, but with an additional $1/r$ fall-off, which
leads to $V_{vac}(r)\sim 1/r^{2}$. Further, we require that the different
potentials $V^{i}_{3g}(r)$ (where $i$ are discrete solutions) sum up to
$V_{vac}(r)$
$\sum_{i}V^{i}_{3g}(r)=V_{vac}(r)=\tilde{f}_{as}(r)(-\tilde{\alpha}_{e}^{3}\hbar\
r_{o}/{r^{2}})\ e^{-\tilde{c}r}\ ,$ (14)
where $\tilde{f}_{as}(r)$ and $e^{-\tilde{c}r}$ are cut-off functions as in
eq. (11). Actually, we expect that the cut-off functions should be close to
those for the state with the lowest mass. Interestingly, the radial forms of
$V_{g}(r)$ and $V_{vac}(r)$ are the only two forms that lead to equally
simple forms in Q-space: $1/r\to 1/Q^{2}$ and $1/r^{2}\to 1/Q$. This supports
our assumption.
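The stated Q-space correspondence can be checked with a damped radial Fourier transform (a rough numerical sketch; the damping parameter `eps` and the integration grid are arbitrary regularization choices, not part of the model):

```python
import math

# Numerical check that in 3D Fourier space 1/r -> ~1/Q**2 and 1/r**2 -> ~1/Q,
# using F(Q) = (4*pi/Q) * int_0^inf r f(r) sin(Q r) e^(-eps*r) dr with a
# small damping eps to regularize the oscillatory integral.
def ft3d(f, Q, eps=0.05, dr=5e-3, rmax=400.0):
    s, r = 0.0, dr / 2
    while r < rmax:
        s += r * f(r) * math.sin(Q * r) * math.exp(-eps * r)
        r += dr
    return 4 * math.pi / Q * s * dr

for f, name, expected in [(lambda r: 1 / r, "1/r", 0.25),
                          (lambda r: 1 / r ** 2, "1/r^2", 0.5)]:
    ratio = ft3d(f, 2.0) / ft3d(f, 1.0)
    print(f"{name}: F(2Q)/F(Q) = {ratio:.3f} (expect {expected})")
```

The ratio test isolates the power law: $1/Q^{2}$ gives $F(2Q)/F(Q)\to 1/4$, while $1/Q$ gives $1/2$.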
If we assume that the new quantisation is related to the flavour degree of
freedom, the different “flavour states” of mesons $\omega$, $\Phi$, $J/\Psi$,
and $\Upsilon$ should correspond to eigenstates of the sum rule (14). Indeed,
we find that the sum of the boson-exchange potentials with g.s. masses of
0.78, 1.02, 3.1 and 9.4 add up to a potential, which is in reasonable
agreement with the sum (14). However, the needed cut-off parameters $a$,
$\sigma$, and $c$ correspond to those for the $\sigma(600)$ solution (see ref.
[2]). This can be regarded as strong evidence for the $\sigma(600)$ being the
lowest flavour state. By including this solution as well, good agreement
with the sum rule (14) is obtained. This is shown in the lower part of fig. 3,
where the different potentials are given by dashed and dot-dashed lines with
their sum given by the solid line. The resulting masses of scalar and vector
states together with their excited states in $V_{2g}(r)$ are given in table 1,
which are in good agreement with experiment for the known states. The
corresponding density parameters are given in table 2 with mean square radii
in reasonable agreement with the meson radii extracted from experimental data
(see ref. [6]). It is evident that in this multi-parameter fit there are
ambiguities, which can be reduced only by detailed studies of the
contributions of the different states to the average mass $\tilde{E}$ and its
relation to $<Q^{2}_{\rho}>^{1/2}$. However, the reasonable account of the
experimental masses and the quantitative fit of the sum rule (14) in fig. 3
indicate that our results are essentially correct.
As compared to potential models using finite fermion (quark) masses (see e.g.
ref. [4]), we obtain significantly more states: bound states in $V_{2g}(r)$
and in $V_{3g}(r)$. The solutions in table 1 correspond only to the 1s level
in $V_{3g}(r)$; in addition there are Ns levels with N=2, 3, … Most of these
states, however, have a relatively small mass far below 3 GeV. As the boson-
exchange potential is Coulomb-like, it creates a continuum of Ns levels with
masses that range down to the threshold region. This is consistent with the
average energy $\tilde{E}$ of scalar excitations in table 2, which increases
much less for heavier systems as compared to the energy of the 1s-state. These
low energy states give rise to large phase shifts at low energies, in
particular large scalar phase shifts.
Concerning masses above 3 GeV, solution 5 yields additional scalar 2s and 3s
states at masses of about 12 and 8.8 GeV, respectively, whereas an extra
vector 2s state is obtained at a mass of about 4.2 GeV, between the most
likely $\Psi$(3s) and $\Psi$(4s) states at 4.160 GeV and 4.415 GeV. This state may
be identified with the recently discovered X(4260), see ref. [1].
Corresponding excited states in the confinement potential (6) should be found
at masses of 4.9, 5.3 and 5.5 GeV with uncertainties of 0.2-0.3 GeV.
In summary, the present model based on an extension of electrodynamics leads
to a good understanding of the confinement and the masses of fundamental
$q\bar{q}$ mesons. The flavour degree of freedom is described by stationary
states of different radial extent, whose potentials exhaust a vacuum potential
sum rule. In a forthcoming paper a similar description will be discussed for
neutrinos, which supports our conclusion that the flavour degree of freedom is
related to the structure of overlapping boson fields in the vacuum.
Fruitful discussions and valuable comments from P. Decowski, M. Dillig
(deceased), B. Loiseau and P. Zupranski among many other colleagues are
appreciated.
## References
* [1] Review of particle properties, C. Amsler et al., Phys. Lett B 667, 1 (2008);
http://pdg.lbl.gov/ and refs. therein
* [2] H.P. Morsch, “Inclusion of scalar boson coupling in fundamental gauge theory Lagrangians”, to be published
* [3] D.J. Gross and F. Wilczek, Phys. Rev. Lett. 30, 1343 (1973); H.D. Politzer, Phys. Rev. Lett. 30, 1346 (1973); and refs. therein
* [4] R. Barbieri, R. Kögerler, Z. Kunszt, and R. Gatto, Nucl. Phys. B 105, 125 (1976); E. Eichten, K.Gottfried, T. Kinoshita, K.D. Lane, and T.M. Yan, Phys. Rev. D 17, 3090 (1978); S. Godfrey and N. Isgur, Phys. Rev. D 32, 189 (1985); D. Ebert, R.N. Faustov, and V.O. Galkin, Phys. Rev. D 67, 014027 (2003); and refs. therein
* [5] G.S. Bali, K. Schilling, and A. Wachter, Phys. Rev. D 56, 2566 (1997);
G.S. Bali, B. Bolder, N. Eicker, T. Lippert, B. Orth, K. Schilling, and T.
Struckmann, Phys. Rev. D 62, 054503 (2000)
* [6] H.P. Morsch, Z. Phys. A 350, 61 (1994)
* [7] M. Ablikim, et al., hep-ex/0406038 (2004); see also D.V. Bugg, hep-ex/0510014 (2005) and refs. therein
Figure 1: Comparison of two-boson density for $<r^{2}_{\rho}>$=0.2 fm2 (dot-
dashed lines) and boson-exchange potential $|V_{3g}(r)/c_{pot}|$ (solid lines)
for $\kappa$=1.5, 1 and 2, respectively. Figure 2: Self-consistent solution
with $<r^{2}_{\rho}>$=0.013 fm2. Upper part: Low radius cut-off function
$f_{as}(r)$, shape of interaction (11) and $\rho^{p}(r)$ given by dashed,
solid and dot-dashed lines, respectively. Lower two parts: Two-boson density
and boson-exchange potential $|V_{3g}/c_{pot}|$ in r- and Q-space, for the
latter multiplied by $Q^{2}$, given by the overlapping dot-dashed and solid
lines, respectively. The dashed line in the lower part corresponds to a
calculation assuming fermion masses of 1.4 GeV. Figure 3: Upper part: Deduced
confinement potential $V_{2g}(r)$ (6) taken from solution 4 (solid line) in
comparison with lattice gauge calculations [5] (solid points) . Lower part:
Boson-exchange potentials for the different solutions in the tables (given by
dot-dashed and dashed lines) and sum given by solid line. This is compared to
the vacuum sum rule (14) given by the dot-dashed line overlapping with the
solid line. A pure potential $V=-const/r^{2}$ is shown also by the lower dot-
dashed line.
|
arxiv-papers
| 2011-04-01T11:30:51 |
2024-09-04T02:49:18.244783
|
{
"license": "Public Domain",
"authors": "Hans-Peter Morsch",
"submitter": "Hans-Peter Morsch",
"url": "https://arxiv.org/abs/1104.2575"
}
|
1104.2667
|
Tamura et al. Gas Bulk Motion in A 2256
cosmology: large-scale structure — galaxies: clusters: individual (A2256) —
intergalactic medium — X-rays: diffuse background
# Discovery of Gas Bulk Motion in the Galaxy Cluster Abell 2256 with Suzaku
Takayuki Tamura11affiliation: Institute of Space and Astronautical Science,
Japan Aerospace Exploration Agency,
3-1-1 Yoshinodai, Sagamihara, Kanagawa 229-8510 Kiyoshi
Hayashida22affiliation: Department of Earth and Space Science, Graduate
School of Science, Osaka University, Toyonaka 560-0043 Shutaro
Ueda22affiliationmark: Masaaki Nagai22affiliationmark:
tamura.takayuki@jaxa.jp
###### Abstract
The results from Suzaku observations of the galaxy cluster Abell 2256 are
presented. This cluster is a prototypical and well-studied merging system,
exhibiting substructures both in the X-ray surface brightness and in the
radial velocity distribution of member galaxies. There are main and sub
components separated by 3'.5 on the sky and by about 2000 km s-1 in the radial
velocity peaks of member galaxies. In order to measure Doppler shifts of iron
K-shell lines from the two gas components by the Suzaku XIS, the energy scale
of the instrument was evaluated carefully and found to be calibrated well. A
significant shift of the radial velocity of the sub component gas with respect
to that of the main cluster was detected. All three XIS sensors show the shift
independently and consistently. The difference is found to be
1500 $\pm 300$ (statistical) $\pm 300$ (systematic) km s-1. The X-ray
determined absolute redshifts of the main and sub components, and hence their
difference, are consistent with those of the member galaxies measured
optically. The observation indicates robustly that the X-ray emitting gas is moving together
with galaxies as a substructure within the cluster. These results along with
other X-ray observations of gas bulk motions in merging clusters are
discussed.
## 1 Introduction
Galaxy clusters are the largest and youngest gravitationally bound systems
in the hierarchical structure of the universe. Dynamical studies of
cluster galaxies have revealed that some systems are still forming and
unrelaxed. X-ray observations of the intracluster medium (ICM) have provided further
evidence for mergers through spatial variations of the gas properties.
Remarkably, sharp X-ray images obtained by Chandra have revealed shocks (e.g.
[Markevitch et al. (2002)]) and density discontinuities (or “cold fronts”;
e.g. [Vikhlinin et al. (2001)]). These are interpreted as various stages of
on-going or advanced mergers and suggest supersonic (for the shock) or
transonic (cold front) gas motions. Cluster mergers involve a large amount of
energy and hence influence many kinds of observations. In particular,
possible effects of gas bulk motions on the X-ray mass estimates have been
investigated extensively mostly based on numerical simulations (e.g. [Evrard
et al. (1996)], [Nagai et al. (2007)]). This is mainly because the cluster
mass distribution is one of the most powerful tools for precision cosmology.
Furthermore, cluster mergers heat the gas, develop gas turbulence,
and accelerate particles, which in turn generate diffuse radio and X-ray
halos.
To understand the physics of cluster mergers, gas dynamics in the system should be
studied. The gas bulk motion can be measured most directly using the Doppler
shift of X-ray line emission. These measurements are still challenging because
of the limited energy resolutions of current X-ray instruments. Dupke, Bregman
and their colleagues searched for bulk motions using ASCA in nearby bright
clusters. They claimed detections of large velocity gradients, such as that
consistent with a circular velocity of $4100^{+2200}_{-3100}$ km s-1 (90%
confidence) in the Perseus cluster (Dupke & Bregman, 2001a) and that of
$1600\pm 320$ km s-1 in the Centaurus cluster111The error confidence range is
not explicitly mentioned in the reference. (Dupke & Bregman, 2001b). These
rotations imply a large amount of kinetic energy comparable to the ICM thermal
one. Note that they used the ASCA instruments (GIS and SIS) which have gain
accuracies of about 1% (or 3000 km s-1). Dupke & Bregman (2006) used also
Chandra data and claimed a confirmation of the motion in the Centaurus
cluster. These important results, however, have not yet been confirmed by
other groups. For example, Ezawa et al. (2001) used the same GIS data of the
Perseus cluster and concluded no significant velocity gradient. In addition,
Ota et al. (2007) found that the Suzaku results of the Centaurus cluster are
difficult to reconcile with claims in Dupke & Bregman (2001b) and Dupke &
Bregman (2006). In short, previous results by Dupke and Bregman suggest bulk
motions in some clusters but with large uncertainties.
Currently the Suzaku XIS (Koyama et al., 2007a) would be the best X-ray
spectrometer for the bulk motion search, because of its good sensitivity and
calibration (Ozawa et al., 2009). In fact, Suzaku XIS data were already used
for this search in representative clusters. Tight upper limits on velocity
variations are reported from the Centaurus cluster (1400 km s-1; 90%
confidence; Ota et al. (2007)) and A 2319 (2000 km s-1; 90% confidence;
Sugawara et al. (2009)) among others.
In order to improve the accuracy of the velocity determination and to search
for gas bulk motions we analyzed Suzaku XIS spectra of the Abell 2256 cluster
of galaxies (A 2256, redshift of 0.058). This X-ray bright cluster is one of
the first systems showing substructures not only in the X-ray surface
brightness but also in the galaxy velocity distribution (Briel et al., 1991).
In the cluster central region, there are two systems separated by 3'.5 on the
sky. Motivated by this double-peaked structure in their ROSAT image, Briel et
al. (1991) integrated the velocity distribution of galaxies from Fabricant et
al. (1989) over the cluster, fitted it to two Gaussians, and found the two
separated peaks in the velocity distribution. The two structures are separated
by $\sim 2000$ km s-1 in radial velocity peaks of member galaxies, as given by
table 1. Berrington et al. (2002) added new velocity data to the Fabricant et
al. (1989) sample, used 277 member galaxies in total, and confirmed the two
systems along with an additional third component (table 1). This unique
finding motivated subsequent observations in multiple wavelengths. For
example, radio observations revealed a diffuse halo, relics, and tailed radio
emission from member galaxies (e.g. Rottgering et al. (1994)). The Chandra
observation by Sun et al. (2002) revealed detailed gas structures in and
around the main and second peaks. Furthermore, there are some attempts to
reproduce merger history of A 2256 using numerical simulations (e.g. Roettiger
et al. (1995)). Thus, A 2256 is a prototypical and well-studied merging system
and hence suitable to study the gas dynamics.
Table 1: Fitting parameters of radial velocity distribution of member galaxies. component number | Mean velocity | Dispersion
---|---|---
in the reference | (km s-1) | (km s-1)
Briel et al. (1991)
1 (main) | $17880\pm 205$ | $1270\pm 127$
2 (sub) | $15730\pm 158$ | $350\pm 123$
Berrington et al. (2002)**footnotemark: $*$
1 (53 galaxies; sub) | 15700 | 550
2 (196 galaxies; main) | 17700 | 840
3 (28 galaxies) | 19700 | 300
**footnotemark: $*$ Errors on velocities and dispersions are not given in the
paper.
We have carefully evaluated instrumental energy scales of the Suzaku XIS, used
iron K-shell line emission, and found a radial velocity shift of the second
gas component with respect to the main cluster. The X-ray determined redshifts
are consistent with those of galaxy components. This is the most robust
detection of a gas bulk motion in a cluster.
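For orientation, the line-energy shift implied by such a velocity can be estimated (an illustrative conversion, not the paper's fitting pipeline; the Fe XXV He-alpha rest energy of roughly 6.70 keV is an assumed input, not a value quoted here):

```python
# Order-of-magnitude sketch: the Fe-K centroid shift corresponding to the
# measured 1500 km/s bulk velocity at cluster redshift z = 0.058, assuming
# a rest-frame line energy of about 6.70 keV (Fe XXV He-alpha, approximate).
# From E_obs = E0/(1+z): |dE| = E0 * (dv/c) / (1+z)**2.
c_km_s = 299792.458
E0_keV = 6.70     # assumed rest energy
z = 0.058         # cluster redshift
dv = 1500.0       # measured radial-velocity difference, km/s

dE_eV = 1e3 * E0_keV * (dv / c_km_s) / (1 + z) ** 2
print(f"expected centroid shift: {dE_eV:.0f} eV")  # ~30 eV
```

This is why the XIS energy-scale calibration at the few-eV level, discussed next, is the critical systematic.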
Throughout this paper, we assume the following cosmological parameters:
$H_{0}=70$ km s-1Mpc-1, $\Omega_{\mathrm{m}}=0.3$, and
$\Omega_{\mathrm{\Lambda}}=0.7$. At the cluster redshift of 0.058, one arc-
minute corresponds to 67.4 kpc. We use the 68% ($1\sigma$) confidence level
for errors, unless stated otherwise.
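The quoted angular scale follows from the assumed cosmology; a brief numerical check (a sketch, not the authors' code):

```python
import math

# Verify the quoted 67.4 kpc per arcminute at z = 0.058 for H0 = 70 km/s/Mpc,
# Om = 0.3, OL = 0.7 (flat LCDM), using the angular diameter distance
# D_A = D_C/(1+z) with a simple midpoint integral for the comoving distance.
H0, Om, OL, z = 70.0, 0.3, 0.7, 0.058
c = 299792.458  # km/s

E = lambda zz: math.sqrt(Om * (1 + zz) ** 3 + OL)
n = 10000
Dc = (c / H0) * sum(1.0 / E(z * (i + 0.5) / n) for i in range(n)) * (z / n)  # Mpc
Da = Dc / (1 + z)
kpc_per_arcmin = Da * 1e3 * (math.pi / (180 * 60))
print(f"{kpc_per_arcmin:.1f} kpc per arcmin")  # ~67.4
```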
## 2 Observations
Suzaku observations of A 2256 were performed on 2006 November 10–13 (PI: K.
Hayashida). The XIS was in the normal window and the spaced-row charge
injection off modes. The observation log is shown in table 2. Figure 1 shows
an X-ray image of the cluster. Detailed descriptions of the Suzaku
observatory, the XIS instrument, and the X-ray telescope are found in Mitsuda
et al. (2007), Koyama et al. (2007a), and Serlemitsos et al. (2007), respectively.
To verify the XIS gain calibration, we have also used data from the Perseus
cluster observed in 2006 with the same XIS modes as those for A 2256 (table
2). These data were already used for the XIS calibration (e.g. Ozawa et al.
(2009)) and scientific analyses (Tamura et al. (2009); Nishino et al. (2010)).
Table 2: Suzaku observations of A 2256 and the Perseus cluster.

Target | Date | Sequence number | (RA, Dec) (degree, J2000.0) | P.A.$*$ (degree) | Net exposure (ks)
---|---|---|---|---|---
A 2256 | 2006/11/10–13 | 801061010 | (256.0138, 78.7112) | 208 | 94.4
Perseus | 2006/2/1 | 800010010 | (49.9436, 41.5175) | 260 | 43.7
Perseus | 2006/8/29 | 101012010 | (49.9554, 41.5039) | 66 | 46.6
$*$ The nominal position angle (north to the DETY axis).
Figure 1: Suzaku XIS contour image of A 2256 in the 0.3–10 keV band overlaid
on a DSS (Digital Sky Survey by the Space Telescope Science Institute) optical
image. The three XIS data are co-added. The XIS field of view is shown by a
square. The contour levels are linear from 10 to 160 counts pixel-1. Neither
vignetting nor background was corrected. The calibration-source regions at the
corners of the CCDs are excluded. North is up and east is to the left.
## 3 Analysis and Results
### 3.1 Data Reduction
We used version 2.1 processed data along with HEASOFT version 6.9. In
addition to the XIS standard event selection criteria, we screened the data
with the following conditions: geomagnetic cut-off rigidity $>6$ GV, elevation
angle above the sunlit earth limb $>20^{\circ}$, and above the dark earth limb
$>5^{\circ}$. We
used the latest calibration file as of July 2010. Using these files, we
corrected the XIS energy scale. The data from three CCDs (XIS 0, XIS 1, and
XIS 3) are used.
We examined the light curve, excluding events from the central bright region
($R<6^{\prime}$), to select stable-background periods. There was no flaring event in
the data. The instrumental (non-X-ray) background was estimated using the
night earth observation database and the software xisnxbgen (Tawa et al.,
2008).
We prepared X-ray telescope and CCD response functions for each spectrum using
software xissimarfgen (Ishisaki et al., 2007) and xisrmfgen (Ishisaki et al.,
2007), respectively. The energy bin size is 2 eV.
To describe the thermal emission from a collisional ionization equilibrium
(CIE) plasma, we use the APEC model (Smith & Brickhouse, 2001) with the solar
metal abundances taken from Anders & Grevesse (1989).
### 3.2 Energy Scale Calibration
The main purpose of this paper is to measure Doppler shifts of K-shell iron
lines in X-ray. In these analyses, the energy-scale calibration is crucial. In
§ 3.2.1, we summarise the calibration status. In § 3.2.2, we attempt to
confirm the calibration using calibration-source data collected during the A
2256 observation. The primary goal here is to find differences among line energies
observed at different positions within the field of view. The positional
dependence of the energy scale is therefore most important; it is
evaluated in § 3.2.3. Here we focus on the data obtained in the spaced-row charge
injection off mode and around the K-shell iron lines. Considering all
available information given here, as summarized in table 3, we assume that the
systematic uncertainty of the energy scale around the iron lines is most
likely 0.1%, and at most 0.2%, over the central $14^{\prime}.7\times
14^{\prime}.7$ region or among the three CCDs.
#### 3.2.1 Reported Status
Koyama et al. (2007b) estimated the systematic uncertainty of the absolute
energy in the iron band to be within $+0.1,-0.05$%, based on the observed
lines from the Galactic center along with Mn K$\alpha$ and K$\beta$ lines (at
5895 eV and 6490 eV respectively) from the built-in calibration source (55Fe).
Independently, Ota et al. (2007) investigated the XIS data of two bright and
extended sources (the A 1060 and Perseus clusters) and evaluated the
positional energy scale calibration in detail. They estimated the systematic
error of the spatial gain non-uniformity to be $\pm 0.13$%. Furthermore, Ozawa
et al. (2009) examined systematically the XIS data obtained from the start of
operation in July 2005 until December 2006. They reported that the positional
dependence of the energy scale is well corrected for charge-transfer
inefficiency and that the time-averaged uncertainty of the absolute energy is
$\pm$ 0.1%. In addition, the gradual change of the energy resolution is also
calibrated; the typical uncertainty of resolutions is 10–20 eV in full width
half maximum.
Table 3: Accuracy of the XIS energy scale calibration and the observed velocity shift.

Section$\dagger$ | Line used | $\Delta$$*$ (%) | Reference and remarks
---|---|---|---
3.2.1 | GCDX/Fe-K$\ddagger$ | $+0.1,-0.05$ | Koyama et al. (2007b); uncertainty of the absolute energy.
3.2.1 | Perseus/Fe-K | $\pm 0.13$ | Ota et al. (2007); spatial non-uniformity (at 68% confidence).
3.2.1 | Mn I K${\alpha}$ | $\pm 0.1$ | Ozawa et al. (2009); time-averaged uncertainty of the absolute energy.
3.2.2 | Mn I K${\alpha}$ | $\pm 0.1$ | This paper; standard deviation among CCD segments in the A 2256 data.
3.2.3 | Perseus/Fe-K | $-0.09$ to $+0.06$ | This paper; variation between the two regions.
3.4 | A 2256/Fe-K | $0.5\pm 0.1$ | This paper; velocity shift in A 2256, after correcting the inter-CCD gain
| | | and the statistical error.
$\dagger$ Section in this paper.
$*$ Possible uncertainty, velocity shift, or its error, all in percentage of the
line energy. At the iron K line, 0.1% corresponds to 7 eV for the energy shift
or 300 km s-1 for the radial velocity.
$\ddagger$ From the Galactic center diffuse emission.
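The conversion quoted in the table footnote is direct: a fractional energy-scale error maps linearly to an energy shift and, via the non-relativistic Doppler relation $\Delta v \simeq c\,\Delta E/E$, to a radial-velocity error. A minimal check (line energy rounded to $\sim 6700$ eV, our choice):

```python
# Sketch: 0.1% of the Fe-K line energy in eV and in radial velocity.
C_KM_S = 299792.458
e_line_ev = 6700.0   # approximate He-like Fe-K line energy (rest frame)
frac = 0.001         # 0.1% energy-scale uncertainty

delta_e = frac * e_line_ev   # energy shift, ~7 eV
delta_v = frac * C_KM_S      # velocity error, ~300 km/s
print(f"{delta_e:.1f} eV, {delta_v:.0f} km/s")
```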
#### 3.2.2 Absolute Scale
We extracted spectra of calibration sources which illuminate two corners of
each CCD (segment A and D). These spectra in the energy range of 5.3–7.0 keV
are fitted with two Gaussian lines for the Mn K${\alpha}$ and K${\beta}$ along
with a bremsstrahlung continuum component. Here we fixed the energy ratio
between the two lines to the expected one. The energy centroids of the Mn
K${\alpha}$ line thus obtained from the two corners of the three CCDs give an
average of 5904 eV (compared with the expected value of 5895 eV) and a standard
deviation (the scatter among the six centroids) of 6 eV. The statistical errors of
the line center were about 1–2 eV. This confirms that the absolute energy
scale averaged over all CCDs and the relative gain among CCD segments are
within $\pm 0.15$% and $\pm 0.10$%, respectively. Simultaneously we found that
the data can be fitted with no intrinsic line width for the Gaussian
components, meaning that the energy resolution is also well calibrated.
#### 3.2.3 Spatial Variation
We used the XIS data of the Perseus cluster, which would provide the highest-
quality XIS line spectra over the whole CCD field of view among all the Suzaku
observations. Here we assume no line shift intrinsic to the cluster within the
observed region. There were two exposures of the Perseus cluster in the normal
window and spaced-row charge injection off modes (table 2) in periods close to
the A 2256 observation. Note that Ota et al. (2007) used the same data, but
with early calibration (i.e. version 0.7 data). Here we re-examine the
accuracy with the latest and improved calibration.
Firstly, following Ota et al. (2007), we divided the XIS field of view into
$8\times 8$ cells of size $2^{\prime}.1\times 2^{\prime}.1$. Each spectrum in
the 6.2–7.0 keV band is fitted with two Gaussian lines for He-like K${\alpha}$
($\sim 6700$ eV) and H-like K${\alpha}$ ($\sim 6966$ eV) and a bremsstrahlung
continuum model. Here we fixed the ratio of the central energies between the
two lines to a model value of 1.040 and let the energy of the first line be a
fitting parameter. Because of the low statistics in the CCD peripheral
regions, we focus on the central $7\times 7$ cells (i.e., $14^{\prime}.7\times
14^{\prime}.7$). The typical statistical errors of the line energy range from
$\pm 4$ eV to $\pm 25$ eV. The central energies thus derived from the $7\times 7$
cells of each CCD are used to derive an average and a standard deviation.
The average values are 6575 eV, 6575 eV, and 6569 eV, for XIS 0, XIS 1 and XIS
3, respectively, which are consistent with the cluster redshift of
0.017–0.018. The standard deviations among $7\times 7$ cells are 7 eV, 13 eV,
and 10 eV for the three XISs, respectively. No cell deviates significantly
from the average value at more than the 2$\sigma$ level.
These deviations include not only systematics due to the instrumental gain
uncertainty but also statistics and systematics intrinsic to the Perseus
emission. Therefore the energy scale uncertainty should be smaller than this
range of deviations (0.1–0.2%).
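The uniformity check above amounts to comparing the cell-to-cell scatter of the fitted line centroids with the mean line energy. A sketch with made-up centroid values (the per-cell numbers are not tabulated in the paper):

```python
import statistics

# Sketch: fractional scatter of fitted line energies among cells bounds the
# spatial gain non-uniformity. The values below are synthetic placeholders.
cell_energies_ev = [6575, 6568, 6582, 6571, 6579, 6575, 6562, 6588]

mean_e = statistics.mean(cell_energies_ev)
scatter = statistics.pstdev(cell_energies_ev)   # population std dev, eV
frac_pct = 100.0 * scatter / mean_e             # scatter as % of the mean
print(f"scatter = {scatter:.1f} eV = {frac_pct:.2f}% of the mean")
```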
Secondly, we focus on the CCD regions specific to our analysis below. We have
extracted the Perseus spectra from the same detector regions (main and sub) as
used in the A 2256 analysis. The definitions of the regions are given in figure
2 and § 3.4. These spectra were fitted with the same model as above and used
to derive the redshift (from the line centroid). We found no significant difference in
redshift between the two regions. The redshift differences (between the two
regions from the three sensors) range from $-9\times 10^{-4}$ to $+6\times
10^{-4}$ with an average of $1.5\times 10^{-5}$. Accordingly we estimate the
possible instrumental gain shift to be within $\pm$0.1% with no systematic
bias between the two regions.
### 3.3 Energy sorted X-ray Images
In order to examine the spectral variation over the A 2256 central region, we
extracted two images in different energy bands including He-like and H-like
iron line emission as shown in figure 2. There appear at least two emission
components, which correspond to the main cluster in the east and the sub component
in the west discovered by Briel et al. (1991). We noticed a clear difference in
the distribution between the two images. In the He-like iron image, the sub
component exhibits brightness comparable to that of the main component. On the
other hand, in the H-like iron image, the sub has lower brightness. This
clearly indicates that the sub has cooler emission compared with the main.
Based on these contrasting spatial distributions, we define centers of the two
components to be ($256^{\circ}.1208$, $78^{\circ}.6431$) and ($255^{\circ}.7958$, $78^{\circ}.6611$), in
equatorial J2000.0 coordinates (RA, Dec), respectively, as shown in figure 2.
Figure 2: XIS images of A 2256 from the 6.10–6.48 keV energy band including
He-like iron emission (left) and from the 6.48–6.80 keV band including H-like
iron emission (right) in units of counts. Neither vignetting nor background was
corrected. Images have been smoothed with a Gaussian filter with
$\sigma=17^{\prime\prime}$. The spectra are extracted from the two regions
indicated by circles in yellow (the main region) and a circle in white (the
sub).
### 3.4 Radial Velocity Shift
#### 3.4.1 Separate Spectral Fitting
Our goal is to constrain the velocity shift of the sub component with respect
to the main cluster. We therefore extracted sets of spectra from the two
components and fitted them with different redshifts. More specifically, for the
main component emission, we integrated the data within $4^{\prime}$ in radius
from the main center but excluding the sub component region with a radius of
$2^{\prime}$ (figure 2). For the sub region, data within $1^{\prime}.5$ in
radius from the sub center are extracted.
We use the energy range of 5.5–7.3 keV around the iron line complex. In
this energy band the cosmic background fluxes are below a few % of the source
count in the two regions. Therefore we ignore this background contribution.
We use a CIE component to model the spectra. Free parameters are an iron line
redshift, a temperature, an iron abundance, and a normalization. Models for the
three XISs are allowed to have different redshifts and normalizations to
compensate for inter-CCD calibration uncertainties. Other parameters, temperature
and abundance, are fixed to be common among the CCDs. As shown in figure 3 and
table 4, these models describe the data well (FIT-1). The fits give different
redshifts between the two regions from all three XISs as shown in figure 4(a).
The statistical errors of the absolute redshift are less than $\pm 0.002$. The
redshift differences are 0.0048, 0.0041, and 0.0076 for the three XISs. These
are equivalent to shifts of 30–50 eV in energy or 1200–2300 km s-1 in radial
velocity.
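The conversion of the fitted redshift differences into the quoted energy and velocity shifts can be sketched as follows (line energy rounded to 6700 eV, our choice; the Doppler relation is non-relativistic):

```python
# Sketch: redshift differences between main and sub regions (FIT-1, per XIS)
# expressed as shifts of the Fe-K line energy and as radial velocities.
C_KM_S = 299792.458
E0_EV = 6700.0   # approximate rest-frame He-like Fe-K line energy

for dz in (0.0048, 0.0041, 0.0076):   # XIS 0, XIS 1, XIS 3
    de = E0_EV * dz                   # shift referred to the rest-frame energy
    dv = C_KM_S * dz                  # non-relativistic Doppler velocity
    print(f"dz={dz}: dE ~ {de:.0f} eV, dv ~ {dv:.0f} km/s")
```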
The spectral shapes of the two regions are different, as shown in figure 3.
Therefore, the obtained redshift difference may depend on the spectral
modeling. To check this possibility, instead of the CIE model, we use two
Gaussian lines and a bremsstrahlung continuum component (FIT-2). Here we
assume the He-like iron resonance line centered on 6682 eV and the H-like iron
Ly$\alpha$ centered on 6965 eV for the two lines and determine the redshift
common to these two lines. The Gaussian components are set to have no
intrinsic width. As given in table 4, these models generally give slightly better
fits to the data than the first model. For each XIS, we obtained
redshifts, and therefore a redshift difference, consistent with those from the
first fit within the statistical uncertainty. Therefore the redshift depends
only insignificantly on the spectral modeling.
Strictly speaking, the centroids of the iron line transitions could change
depending on the emission temperature. In the case of the observed region of A
2256, where the temperature varies within 4–8 keV (e.g. Sun et al. (2002)),
the strong iron line structure is dominated by the He-like triplet. Within
this temperature range, the emission centroid of the triplet stays within
6682–6684 eV based on the APEC model. This possible shift ($<2$ eV) is well
below the obtained redshift difference (30–50 eV) and should not be the main
origin of the difference.
#### 3.4.2 Gain-corrected Spectral Fitting
The obtained redshifts are systematically different among the three XISs
[figure 4(a)]. We attempt to correct this inter-CCD gain difference based on
the calibration source data. Following Fujita et al. (2008), we estimate a
gain correction factor, $f_{\rm gain}$, by dividing the obtained energy of the
Mn K line from the calibration source by the expected one. In the A 2256 data
as given in section 3.2, $f_{\rm gain}$ are 1.0018 (XIS 0), 1.0000 (XIS 1),
and 1.0016 (XIS 3). This factor, the redshift obtained from the fit ($z_{\rm
fit}$), and the corrected redshift ($z_{\rm cor}$) have a relation
$f_{\rm gain}=\frac{z_{\rm cor}+1}{z_{\rm fit}+1}.$ (1)
We apply this correction and fit the spectra with the two-Gaussian model using
a single redshift common to the three XISs (FIT-3). These models give slightly
poorer fits compared with the previous ones, because of the reduced number of
free parameters (table 4). Nevertheless, the fits are still acceptable. The
difference in the redshift between the two regions is $0.005\pm 0.0008$ (or
$1500\pm 240$ km s-1). In figure 4(b) we show the statistical $\chi^{2}$
distribution as a function of the redshift. This indicates clearly that the
data cannot be described by the same redshift for the main and sub regions.
We found that redshifts determined here by X-ray are consistent with those of
member galaxies in optical (table 1), as explained below. The X-ray redshift of
the main component, $z=0.0590$ or 17700 km s-1, is the same as the galaxy
redshift within the statistical error. That of the sub component, $z=0.0540$ or
16200 km s-1, is larger than the galaxy value of 15730 km s-1. Yet the
difference, 470 km s-1, is within the combined errors from X-ray statistical
(150 km s-1), systematic (300km s-1), and optical fitting (160 km s-1).
Besides this, the obtained spectra of the sub component could be contaminated
by the emission of the main component due to the projection and the
telescope point spread function (half power diameter of about $2^{\prime}$).
This contamination could make the obtained redshift of the sub larger than the
true one (or closer to that of the main).
To visualize the redshift difference in the sub region more directly, we show
spectral fits combining the two front-illuminated XIS (0 and 3) data in
figure 5. We fitted the spectra with the two Gaussian model as given above. In
this figure we compare the fitting results between the best-fit model
($z=0.051$) and one with the redshift fixed to the main cluster value
($z=0.058$). This comparison shows that we can reject the hypothesis that the
sub region has the same redshift as the main cluster.
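The strength of this rejection can be quantified from the $\chi^{2}$ values of the two fits in figure 5: fixing the redshift removes one free parameter, so the fit-statistic increase can be compared with a $\chi^{2}$ distribution with one degree of freedom, whose tail probability is $\mathrm{erfc}(\sqrt{\Delta\chi^{2}/2})$. This is our reading of the comparison, not a calculation stated in the paper:

```python
import math

# Sketch: significance of the chi-square increase when the sub-region
# redshift is fixed to the main-cluster value (figure 5 values).
chi2_free, chi2_fixed = 40.4, 80.8
d_chi2 = chi2_fixed - chi2_free
p = math.erfc(math.sqrt(d_chi2 / 2.0))   # tail probability for 1 d.o.f.
print(f"Delta chi2 = {d_chi2:.1f}, p ~ {p:.1e}")
```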
Figure 3: Cluster spectra along with the best-fit CIE model in histogram
(FIT-1). Plots (a) and (b) are from the main and sub regions, respectively.
The XIS 0, XIS 1, and XIS 3 data are shown by black, red, and green colors,
respectively. In lower panels fit residuals in terms of the data to model
ratio are shown.
Figure 4: (a) The redshifts obtained from the CIE fitting (FIT-1) of the main
and sub component regions for each XIS (X-axis). The gain-corrected results of
FIT-3 are shown at the position of XIS$=$1.5. The two horizontal dashed lines
indicate redshifts from the optical data (Briel et al. (1991); table 1). (b)
The statistic $\chi^{2}$ distributions of the gain-corrected Gaussian fitting
(FIT-3) of the two regions as a function of the redshift. The horizontal line
at $\chi^{2}=134$ indicates the degree of the freedom of the fits. The two
vertical dashed lines indicate the redshifts as in (a).
Figure 5: The spectra from the sub-component region, adding the two front-illuminated XIS (0 and 3), fitted with two Gaussians and a continuum model. (a) The redshift common to the two lines is left free. The model gives the best-fit redshift of 0.05115 and $\chi^{2}$ of 40.4 for 42 degrees of freedom. (b) The redshift is fixed to the main component value ($z=0.058$). The model gives $\chi^{2}$ of 80.8 for 43 degrees of freedom.

Table 4: Spectral fitting results from the main and sub regions.

Region | $kT$ (keV) | Fe (Solar) | $<z>$$*$ | $\chi^{2}/d.o.f.$
---|---|---|---|---
FIT-1, CIE component
main | 7.1$\pm 0.3$ | 0.25$\pm 0.02$ | 0.0583 | 121/139
sub | 5.0$\pm 0.4$ | 0.29$\pm 0.03$ | 0.0528 | 135/139
FIT-2, two Gaussian components
main | 6.2$\pm 0.7$ | - | 0.0580 | 112/137
sub | 4.5$\pm 0.9$ | - | 0.0528 | 131/137
FIT-3, two Gaussian with the gain correction
main | 6.2$\pm 0.7$ | - | 0.0590$\pm 0.0007$ | 118/137
sub | 4.6$\pm 0.9$ | - | 0.0540$\pm 0.0005$ | 137/137
$*$ Average values over the three XIS data are given for FIT-1 and FIT-2.
Statistical errors on these values are typically 0.001. In FIT-3, a single
redshift common to the three XISs is assumed.
## 4 Summary and Discussion
### 4.1 Summary of the Result
We found a bulk motion of the second gas component in A 2256. The difference in
the redshifts and hence radial velocities between the main and sub systems is
found to be 1500 $\pm 300$ (statistical) $\pm 300$ (systematic) km s-1 (§
3.4). This observed shift corresponds to only 0.5% in the energy, but is well
beyond the accuracy of the energy scale reported by the instrument team
(Koyama et al. (2007b); Ozawa et al. (2009)) and that by Ota et al. (2007).
Focusing on the present analysis of A 2256, we also have examined the
calibration systematics independently and confirmed the accuracy given above.
The obtained redshifts and hence the difference between the two X-ray emitting
gas components are consistent with those of the radial velocity distribution
in member galaxies. This consistency uniquely strengthens the reliability of
our X-ray measurement.
This is the first detection of a gas bulk motion not only in A 2256 but also
with Suzaku, which presumably carries the best X-ray spectrometer in operation
for this velocity measurement. As given in § 1, Dupke and Bregman (2001a, 2001b, 2006)
previously claimed detections of bulk motions in the Perseus and Centaurus
clusters. Compared with these and other attempts, our measurement is more
accurate and robust. This improvement is not only due to the better
sensitivity and calibration of the XIS but also due to the well-separated and
X-ray bright nature of the structure in A 2256.
Radial velocity distributions of cluster galaxies were used to evaluate
dynamics of cluster mergers. These studies, however, have limited capability.
First, largely because of the finite number of galaxies and the projection
effect, a galaxy sub-group within a main cluster is not straightforward to
identify. Second, in most systems the major baryonic component of a cluster is
not the galaxies but the gas; therefore the dynamical energy of the hot gas
cannot be ignored. Third, galaxies and gas do not necessarily move together,
especially during merging phases, because of the different nature of the two
(collisionless galaxies and collisional gas). In practice, some clusters in the violent
merging phase such as 1E 0657-558 (Clowe et al. (2006)) and A 754 (Markevitch
et al. (2003)) show a spatial separation between the galaxy and gas
distributions. Therefore, it is important to measure dynamics of galaxies and
gas simultaneously. The present result is one of the first such attempts.
### 4.2 Dynamics of A 2256
The determined radial velocity, $v_{r}\sim 1500$ km s-1, gives a lower limit
on the three-dimensional true velocity, $v=v_{r}/\sin\alpha$, where $\alpha$
denotes the angle between the motion and the plane of the sky. Given a gas
temperature of 5 keV (§ 3.4) and an equivalent sound speed of 1100 km s-1
around the sub component, this velocity corresponds to a Mach number $M>1.4$.
Therefore, at least around the sub component, the gas kinematic pressure (or
energy) can be $(1.4)^{2}\sim 2$ times larger than the thermal one. In this
environment, the gas departs from hydrostatic equilibrium. Does this motion
then affect the estimation of the mass of the primary cluster? As argued by
the two components. In the case of A 2256, the two are probably not connected
closely enough to disturb the hydrostatic condition around the primary, as
estimated below. However, to weigh the total mass within the larger volume
including the sub component, we should consider not only the mass of the sub
itself but also the dynamical pressure associated with the relative motion.
This kind of a departure from hydrostatic equilibrium was predicted generally
at cluster outer regions in numerical simulations (e.g. Evrard et al. (1996)).
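The Mach number quoted above follows from the adiabatic sound speed $c_{s}=\sqrt{\gamma kT/(\mu m_{p})}$; a minimal sketch with $\gamma=5/3$ and mean molecular weight $\mu=0.6$ (standard values, assumed here rather than stated in the paper):

```python
import math

# Sketch: adiabatic sound speed of the ICM and the implied lower limit on the
# Mach number of the sub component, using the radial velocity only.
KEV_ERG = 1.602e-9   # keV in erg
M_P_G = 1.673e-24    # proton mass in g

def sound_speed_km_s(kt_kev, gamma=5.0 / 3.0, mu=0.6):
    return math.sqrt(gamma * kt_kev * KEV_ERG / (mu * M_P_G)) / 1.0e5

c_s = sound_speed_km_s(5.0)   # ~1150 km/s; quoted as ~1100 km/s in the text
mach = 1500.0 / c_s           # lower limit, since only v_r is measured
print(f"c_s ~ {c_s:.0f} km/s, M > {mach:.1f}")
```

The small difference from the quoted 1100 km s-1 reflects rounding of the constants and of $\mu$.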
We now compare our measurement with other studies of the ICM dynamics.
Markevitch et al. (2002) discovered a bow shock in the Chandra image of 1E
0657-56 (the bullet cluster) and estimated its $M$ to be $3.0\pm 0.4$ (4700 km
s-1) based on the observed density jump under the shock front condition. Using
a similar analysis, Markevitch et al. (2005) derived $M$ of $2.1\pm 0.4$ (2300
km s-1) in A 520. Based on a more specific configuration in the “cold front”
in A 3667, Vikhlinin et al. (2001) estimated its $M$ to be $1.0\pm 0.2$ (1400
km s-1). In these cases, velocity directions are assumed to be in the plane of
the sky. See Markevitch & Vikhlinin (2007) for a detailed discussion and other
examples. These measurements are unique but require certain configurations of
the collision to apply to observations. In contrast, X-ray Doppler shift
measurements as given in the present paper are more direct and commonly
applicable to merging systems. This method is sensitive to the motion parallel
to the line of sight. These two measurements are complementary and could
provide a direct measurement of three dimensional motion if applied to a
merging system simultaneously.
We measured the gas velocity parallel to the line of sight ($v_{r}$). How
about the velocity in the plane of the sky ($v_{\rm sky}$) and the true
velocity $v$? The ROSAT X-ray image of A 2256 shows a steeper brightness
gradient between the main and second components (Briel et al., 1991).
Furthermore, from the Chandra spectroscopic data Sun et al. (2002) found a
hint of a temperature jump across the two components. They argued that this
feature is similar to the “cold fronts” discovered in other clusters. The
temperature and hence pressure jumps in A 2256 (from about 4.5 keV to 8.5 keV)
are similar to those found in A 3667, in which the jump indicates a gas motion
with $v_{\rm sky}$ of about 1400 km s-1. Therefore we expect a similar $v_{\rm
sky}$ in A 2256, which is comparable to $v_{r}$. Further assuming that the
second component moves toward the main cluster center, $v$ is estimated as
$\sqrt{v_{r}^{2}+v_{\rm sky}^{2}}\sim\sqrt{2}v_{r}\sim 2000$ km s-1. Instead
of assuming $v_{\rm sky}$, we may instead use the mean $\sin\alpha$ factor of
$2/\pi$, which gives $v\sim 2400$ km s-1 on average. Based on the simple assumption that
the two system started to collide from rest, we can estimate the velocity as
$v\sim(\frac{2GM}{R})^{1/2}$, where $G,M$ and $R$ are the gravitational
constant, the total mass of the system, and the separation, respectively. The
total mass of A 2256 of $8\times 10^{14}$ M⊙ (Markevitch & Vikhlinin (1997))
and an assumed final separation $R$ of 1 Mpc give $v\sim 2800$ km s-1, which
is comparable to but larger than the estimated current velocity. Putting
$v=2000$–$2400$ km s-1 into the relation, we obtain a current separation $R$ of
1.4–2 Mpc. The time to the final collision can then be estimated to be about
0.2–0.4 Gyr [$(R-1)$ Mpc divided by $v$]. Here we assume that the system is
going to the final collision. This assumption is consistent with a lack of
evidence for strong disturbances in the X-ray structure, as argued previously
for A 2256 in general.
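The collision-from-rest estimate above can be sketched numerically with the quoted mass and separation (constants rounded to cgs values; with this rounding the numbers land a few percent below the $\sim 2800$ km s-1 and 1.4–2 Mpc quoted in the text):

```python
import math

# Sketch: v ~ sqrt(2 G M / R) with M = 8e14 Msun (Markevitch & Vikhlinin 1997)
# and an assumed final separation R = 1 Mpc, then inverted for the current
# separation at the estimated present-day velocities.
G = 6.674e-8      # cm^3 g^-1 s^-2
MSUN = 1.989e33   # g
MPC = 3.086e24    # cm
M_G = 8.0e14 * MSUN

def infall_velocity_km_s(m_g, r_mpc):
    return math.sqrt(2.0 * G * m_g / (r_mpc * MPC)) / 1.0e5

v = infall_velocity_km_s(M_G, 1.0)   # ~2600 km/s with these constants
print(f"v(R = 1 Mpc) ~ {v:.0f} km/s")
for v_now in (2000.0, 2400.0):
    r = 2.0 * G * M_G / (v_now * 1.0e5) ** 2 / MPC
    print(f"v = {v_now:.0f} km/s -> R ~ {r:.1f} Mpc")
```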
### 4.3 Future Prospect
Our observations provided the first detection of gas bulk motion in a cluster.
To understand in general the cluster formation which is dominated by non-
linear processes, systematic measurements in a sample of clusters are
required. For example, some clusters at violent merging stages, such as 1E
0657-558 (Clowe et al. (2006)), show segregation of the gas from the galaxies
and possibly from the dark matter component. In these systems, we expect the
gas and galaxy dynamics to differ from those found in A 2256.
Given the capabilities of current X-ray instruments such as the Suzaku XIS, A 2256
is presumably a unique target, with an X-ray flux high enough and a velocity
separation clear enough to resolve the structure. Accordingly, such a systematic
study requires new instruments with higher spectral resolution and sufficient
sensitivity. In fact, this kind of assessment is one of the primary goals
for planned X-ray instruments such as SXS (Mitsuda et al. (2010)) onboard
ASTRO-H (Takahashi et al. (2010)). Using the SXS with an energy resolution
better than 7 eV we could measure gas bulk motions in a fair number of X-ray
bright clusters. Furthermore, we may find line broadening originating from
gas turbulence, produced by mergers, related shocks, or activity of the
massive black holes at cluster centers. In addition, the SXS will potentially
constrain for the first time the line broadening from the thermal motion
of ions (and hence the ion temperature). The present result shows that A 2256
is one of the prime targets for ASTRO-H. The expected spectra of the two
components in A 2256 with the SXS are shown in Fig. 11 of Takahashi et al.
(2010).
We thank the anonymous referee for useful comments. We thank all the Suzaku
team members for their support. To analyze the data, we used the ISAS Analysis
Servers provided by ISAS/C-SODA. KH acknowledges support by a Grant-in-Aid
for Scientific Research, No. 21659292, from the MEXT.
## References
* Anders & Grevesse (1989) Anders, E., & Grevesse, N. 1989, Geochimica et Cosmochimica Acta, 53, 197
* Berrington et al. (2002) Berrington, R. C., Lugger, P. M., & Cohn, H. N. 2002, AJ, 123, 2261
* Briel et al. (1991) Briel, U. G., et al. 1991, A&A, 246, L10
* Clowe et al. (2006) Clowe, D., Bradač, M., Gonzalez, A. H., Markevitch, M., Randall, S. W., Jones, C., & Zaritsky, D. 2006, ApJ, 648, L109
* Dupke & Bregman (2001a) Dupke, R. A., & Bregman, J. N. 2001a, ApJ, 547, 705
* Dupke & Bregman (2001b) Dupke, R. A., & Bregman, J. N. 2001b, ApJ, 562, 266
* Dupke & Bregman (2006) Dupke, R. A., & Bregman, J. N. 2006, ApJ, 639, 781
* Evrard et al. (1996) Evrard, A. E., Metzler, C. A., & Navarro, J. F. 1996, ApJ, 469, 494
* Ezawa et al. (2001) Ezawa, H., et al. 2001, PASJ, 53, 595
* Fabricant et al. (1989) Fabricant, D. G., Kent, S. M., & Kurtz, M. J. 1989, ApJ, 336, 77
* Fujita et al. (2008) Fujita, Y., et al. 2008, PASJ, 60, 1133
* Ishisaki et al. (2007) Ishisaki, Y., et al. 2007, PASJ, 59, S113
* Koyama et al. (2007a) Koyama, K., et al. 2007a, PASJ, 59, S23
* Koyama et al. (2007b) Koyama, K., et al. 2007b, PASJ, 59, 245
* Markevitch & Vikhlinin (1997) Markevitch, M., & Vikhlinin, A. 1997, ApJ, 491, 467
* Markevitch et al. (2002) Markevitch, M., Gonzalez, A. H., David, L., Vikhlinin, A., Murray, S., Forman, W., Jones, C., & Tucker, W. 2002, ApJ, 567, L27
* Markevitch et al. (2003) Markevitch, M., et al. 2003, ApJ, 586, L19
* Markevitch et al. (2005) Markevitch, M., Govoni, F., Brunetti, G., & Jerius, D. 2005, ApJ, 627, 733
* Markevitch & Vikhlinin (2007) Markevitch, M., & Vikhlinin, A. 2007, Phys. Rep., 443, 1
* Mitsuda et al. (2007) Mitsuda, K., et al. 2007, PASJ, 59, S1
* Mitsuda et al. (2010) Mitsuda, K., et al. 2010, Proc. SPIE, 7732
* Nagai et al. (2007) Nagai, D., Vikhlinin, A., & Kravtsov, A. V. 2007, ApJ, 655, 98
* Nishino et al. (2010) Nishino, S., Fukazawa, Y., Hayashi, K., Nakazawa, K., & Tanaka, T. 2010, PASJ, 62, 9
* Ota et al. (2007) Ota, N., et al. 2007, PASJ, 59, 351
* Ozawa et al. (2009) Ozawa, M., et al. 2009, PASJ, 61, 1
* Roettiger et al. (1995) Roettiger, K., Burns, J. O., & Pinkney, J. 1995, ApJ, 453, 634
* Rottgering et al. (1994) Rottgering, H., Snellen, I., Miley, G., de Jong, J. P., Hanisch, R. J., & Perley, R. 1994, ApJ, 436, 654
* Serlemitsos et al. (2007) Serlemitsos, P. et al. 2007, PASJ, 59, S9
* Smith & Brickhouse (2001) Smith, R. K., & Brickhouse, N. S. 2001, ApJ, 556, L91
* Sugawara et al. (2009) Sugawara, C., Takizawa, M., & Nakazawa, K. 2009, PASJ, 61, 1293
* Sun et al. (2002) Sun, M., Murray, S. S., Markevitch, M., & Vikhlinin, A. 2002, ApJ, 565, 867
* Takahashi et al. (2010) Takahashi, T., et al. 2010, Proc. SPIE, 7732
* Tamura et al. (2009) Tamura, T., et al. 2009, ApJ, 705, L62
* Tawa et al. (2008) Tawa, N., et al. 2008, PASJ, 60, 11
* Vikhlinin et al. (2001) Vikhlinin, A., Markevitch, M., & Murray, S. S. 2001, ApJ, 551, 160
|
arxiv-papers
| 2011-04-14T05:30:22 |
2024-09-04T02:49:18.250587
|
{
"license": "Public Domain",
"authors": "Takayuki Tamura, Kiyoshi Hayashida, Shutaro Ueda, Masaaki Nagai",
"submitter": "Takayuki Tamura",
"url": "https://arxiv.org/abs/1104.2667"
}
|
1104.2766
|
# Natural Diagonal Riemannian Almost Product and Para-Hermitian Cotangent
Bundles
S. L. Druţă-Romaniuc
_The paper will appear in Czechoslovak Mathematical Journal_
Abstract: We obtain the natural diagonal almost product and locally product
structures on the total space of the cotangent bundle of a Riemannian
manifold. We find the Riemannian almost product (locally product) and the
(almost) para-Hermitian cotangent bundles of natural diagonal lift type. We
prove the characterization theorem for the natural diagonal (almost) para-
Kählerian structures on the total spaces of the cotangent bundle.
Key words: natural lift, cotangent bundle, almost product structure, para-
Hermitian structure, para-Kähler structure.
Mathematics Subject Classification 2000: primary 53C05, 53C15, 53C55.
## 1\. Introduction
Some new interesting geometric structures on the total space $T^{*}M$ of the
cotangent bundle of a Riemannian manifold $(M,g)$ were obtained for example in
[7], [19], [21]–[23] by considering the natural lifts of the metric from the
base manifold to $T^{*}M$. Extensive literature, concerning the cotangent
bundles of natural bundles, may be found in [12].
The fundamental differences between the geometry of the cotangent bundle and
that of the tangent bundle, its dual, are due to the construction of the lifts
to $T^{*}M$, which is not similar to the definition of the lifts to $TM$ (see
[27]).
In papers such as [2]–[6], [9]–[11], [17], [24], and [26], some almost
product structures and almost para-Hermitian structures (also called almost
hyperbolic Hermitian structures) were constructed on the total spaces of the
tangent and cotangent bundles.
In 1965, K. Yano initiated in [26] the study of the Riemannian almost product
manifolds. A. M. Naveira gave in 1983 a classification of these manifolds with
respect to the covariant derivative of the almost product structure (see
[20]). In the paper [25] in 1992, M. Staikova and K. Gribachev obtained a
classification of the Riemannian almost product manifolds, for which the trace
of the almost product structure vanishes, the basic class being that of the
almost product manifolds with nonintegrable structure (see [16]).
A classification of the almost para-Hermitian manifolds was made in 1988 by C.
Bejan, who obtained in [3] 36 classes, up to duality, and the
characterizations of some of them. P. M. Gadea and J. Muñoz Masqué gave in
1991 a classification à la Gray-Hervella, obtaining 136 classes, up to duality
(see [8]). Perhaps the best-known class of (almost) para-Hermitian manifolds
is that of the (almost) para-Kähler manifolds, characterized by the closedness
of the associated 2-form, and studied for example in [1] and [15].
In the present paper we consider a $(1,1)$-tensor field $P$ obtained as a
natural diagonal lift of the metric $g$ from the base manifold $M$ to the
total space $T^{*}M$ of the cotangent bundle. This tensor field depends on
four coefficients which are smooth functions of the energy density $t$. We
first determine the conditions under which the tensor field constructed in
this way is an almost product structure on $T^{*}M$. We obtain some simple
relations between the coefficients of $P$. From the study of the integrability
conditions of the determined almost product structure, it follows that the
base manifold must be a space form, and two coefficients may be expressed as
simple rational functions of the other two coefficients, their first order
derivatives, the energy density, and the constant sectional curvature of the
base manifold. Then we prove the characterization theorems for the cotangent
bundles which are Riemannian almost product (locally product) manifolds, or
(almost) para-Hermitian manifolds, with respect to the obtained almost product
structure, and a natural diagonal lifted metric $G$. Finally we obtain the
(almost) para-Kähler cotangent bundles of natural diagonal lift type.
Throughout this paper, the manifolds, tensor fields and other geometric
objects are assumed to be differentiable of class $C^{\infty}$ (i.e. smooth).
The Einstein summation convention is used, the indices
$h,i,j,k,l,m,r$ always ranging over $\\{1,\dots,n\\}$.
## 2\. Preliminary results
The cotangent bundle of a smooth $n$-dimensional Riemannian manifold may be
endowed with a structure of $2n$-dimensional smooth manifold, induced by the
structure on the base manifold. If $(M,g)$ is a smooth Riemannian manifold of
dimension $n$, we denote its cotangent bundle by $\pi:T^{*}M\rightarrow M$.
Every local chart on $M$, $(U,\varphi)=(U,x^{1},\dots,x^{n})$ induces a local
chart $(\pi^{-1}(U),\Phi)=(\pi^{-1}(U),q^{1},\dots,q^{n},$
$p_{1},\dots,p_{n})$, on $T^{*}M$, as follows. For a cotangent vector
$p\in\pi^{-1}(U)\subset T^{*}M$, the first $n$ local coordinates
$q^{1},\dots,q^{n}$ are the local coordinates of its base point $x=\pi(p)$ in
the local chart $(U,\varphi)$ (in fact we have
$q^{i}=\pi^{*}x^{i}=x^{i}\circ\pi,\ i=1,\dots,n)$. The last $n$ local
coordinates $p_{1},\dots,p_{n}$ of $p\in\pi^{-1}(U)$ are the vector space
coordinates of $p$ with respect to the basis
$(dx^{1}_{\pi(p)},\dots,dx^{n}_{\pi(p)})$, defined by the local chart
$(U,\varphi)$, i.e. $p=p_{i}dx^{i}_{\pi(p)}$.
The concept of $M$-tensor field on the cotangent bundle of a Riemannian
manifold was defined by the present author in [7], in the same manner as the
$M$-tensor fields were introduced on the tangent bundle (see [18]).
We recall the splitting of the tangent bundle to $T^{*}M$ into the vertical
distribution $VT^{*}M={\rm Ker}\ \pi_{*}$ and the horizontal one determined by
the Levi Civita connection $\dot{\nabla}$ of the metric $g$:
(2.1)
$\displaystyle TT^{*}M=VT^{*}M\oplus HT^{*}M.$
If $(\pi^{-1}(U),\Phi)=(\pi^{-1}(U),q^{1},\dots,q^{n},p_{1},\dots,p_{n})$ is a
local chart on $T^{*}M$, induced from the local chart
$(U,\varphi)=(U,x^{1},\dots,x^{n})$, the local vector fields
$\frac{\partial}{\partial p_{1}},\dots,\frac{\partial}{\partial p_{n}}$ on
$\pi^{-1}(U)$ define a local frame for $VT^{*}M$ over $\pi^{-1}(U)$ and the
local vector fields $\frac{\delta}{\delta q^{1}},\dots,\frac{\delta}{\delta
q^{n}}$ define a local frame for $HT^{*}M$ over $\pi^{-1}(U)$, where
$\frac{\delta}{\delta q^{i}}=\frac{\partial}{\partial
q^{i}}+\Gamma^{0}_{ih}\frac{\partial}{\partial p_{h}},\
\Gamma^{0}_{ih}=p_{k}\Gamma^{k}_{ih},$ and $\Gamma^{k}_{ih}(\pi(p))$ are the
Christoffel symbols of $g$.
The set of vector fields $\\{\frac{\partial}{\partial
p_{i}},\frac{\delta}{\delta q^{j}}\\}_{i,j=\overline{1,n}}$, denoted by
$\\{\partial^{i},\delta_{j}\\}_{i,j=\overline{1,n}}$, defines a local frame on
$T^{*}M$, adapted to the direct sum decomposition (2.1).
We consider the energy density defined by $g$ in the cotangent vector $p$:
$\displaystyle\begin{array}[]{c}t=\frac{1}{2}\|p\|^{2}=\frac{1}{2}g^{-1}_{\pi(p)}(p,p)=\frac{1}{2}g^{ik}(x)p_{i}p_{k},\
\ \ p\in\pi^{-1}(U).\end{array}$
We have $t\in[0,\infty)$ for all $p\in T^{*}M$.
In the sequel we shall use the following lemma, which may be proved easily.
###### Lemma 2.1.
If $n>1$ and $u,v$ are smooth functions on $T^{*}M$ such that
$\displaystyle ug_{ij}+vp_{i}p_{j}=0,\quad ug^{ij}+vg^{0i}g^{0j}=0,\quad\text{or}\quad
u\delta^{i}_{j}+vg^{0i}p_{j}=0,$
on the domain of any induced local chart on $T^{*}M$, then $u=0,\ v=0$. We
used the notation $g^{0i}=p_{h}g^{hi}$.
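Lemma 2.1 can be sanity-checked pointwise with a computer algebra system. The sketch below (an illustration, not part of the proof) takes $n=2$, $g$ the identity in a flat chart, and generic $p\neq 0$, and solves the first relation for $u$ and $v$:

```python
# Pointwise sanity check of Lemma 2.1 for n = 2, g = identity, generic p:
# if u*g_ij + v*p_i*p_j = 0 for all i, j, then u = v = 0.
import sympy as sp

u, v, p1, p2 = sp.symbols('u v p1 p2')
g = sp.eye(2)                # metric components g_ij in a flat chart
p = sp.Matrix([p1, p2])      # covector components p_i

eqs = [u*g[i, j] + v*p[i]*p[j] for i in range(2) for j in range(2)]
sol = sp.solve(eqs, [u, v], dict=True)[0]   # generic solution in u, v
```

For generic nonzero $p_{1},p_{2}$ the only solution is $u=v=0$; the other two relations in the lemma are handled the same way.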
## 3\. Almost product structures of natural diagonal lift type on the
cotangent bundle
In this section we shall find the almost product structures on the (total
space of the) cotangent bundle, which are natural diagonal lifts of the metric
from the base manifold $M$ to $T^{*}M$. Then we shall study the integrability
conditions for the determined structures, obtaining the natural diagonal
locally product structures on $T^{*}M$.
An _almost product structure_ $J$ on a differentiable manifold $M$ is a
$(1,1)$-tensor field on $M$ such that $J^{2}=I$. The pair $(M,J)$ is called
an _almost product manifold_. When the almost product structure $J$ is
integrable, it is called a _locally product structure_ , and the manifold
$(M,J)$ is a _locally product manifold_.
An _almost paracomplex manifold_ is an almost product manifold $(M,J)$, such
that the two eigenbundles associated to the two eigenvalues $+1$ and $-1$ of
$J$, respectively, have the same rank. Equivalently, a splitting of the
tangent bundle $TM$ into the Whitney sum of two subbundles $T^{\pm}M$ of the
same fiber dimension is called an _almost paracomplex structure_ on $M$.
V. Cruceanu presented in [6] two simple almost product structures on the total
space $T^{*}M$ of the cotangent bundle, obtained by considering on the base
manifold $M$ a linear connection $\nabla$ and a non-degenerate $(0,2)$-tensor
field $g$. If $\alpha$ is a differentiable 1-form and $X$ is a vector field on
$M$, $\alpha^{V}$ denotes the vertical lift of $\alpha$ and $X^{H}$ the
horizontal lift of $X$ to $T^{*}M$, one can consider
(3.1)
$P(X^{H})=-X^{H},\ P(\alpha^{V})=\alpha^{V},$ (3.2)
$Q(X^{H})=(X^{\flat})^{V},\ Q(\alpha^{V})=(\alpha^{\sharp})^{H},$
where $X^{\flat}=g_{X}$ is the 1-form on $M$ defined by
$X^{\flat}(Y)=g_{X}(Y)=g(X,Y),\forall Y\in\mathcal{T}^{1}_{0}(M)$,
$\alpha^{\sharp}=g^{-1}_{\alpha}$ is a vector field on $M$ defined by
$g(\alpha^{\sharp},Y)=\alpha(Y),~{}\forall Y\in\mathcal{T}^{1}_{0}(M)$. $P$ is
a paracomplex structure if and only if $\nabla$ has vanishing curvature, while
$Q$ is paracomplex if and only if the curvature of $\nabla$ and the exterior
covariant differential $Dg$ of $g$, given by
$(Dg)(X,Y)=\nabla_{X}(Y^{\flat})-\nabla_{Y}(X^{\flat})-[X,Y]^{\flat},$
vanish.
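In an adapted frame, writing a tangent vector to $T^{*}M$ as a pair of horizontal and vertical components, the structures (3.1) and (3.2) become the block matrices $P=\mathrm{diag}(-I,I)$ and $Q=\left(\begin{smallmatrix}0&g^{-1}\\ g&0\end{smallmatrix}\right)$. The following sketch (a flat-chart illustration with a sample metric, not taken from the paper) checks that both square to the identity:

```python
# Block-matrix illustration of (3.1)-(3.2) in an adapted frame:
# first block = horizontal components, second block = vertical components.
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
g = A @ A.T + n * np.eye(n)          # sample nondegenerate symmetric g_ij

P = np.block([[-np.eye(n), np.zeros((n, n))],
              [np.zeros((n, n)), np.eye(n)]])        # (3.1)
Q = np.block([[np.zeros((n, n)), np.linalg.inv(g)],  # alpha^V -> (alpha^sharp)^H
              [g, np.zeros((n, n))]])                # X^H -> (X^flat)^V

P2, Q2 = P @ P, Q @ Q                # both should be the 2n x 2n identity
```

Note that $Q^{2}=I$ here only because the sample $g$ is symmetric and nondegenerate, mirroring the statement in the text.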
The results from [13] and [14] concerning the natural lifts allow us to
introduce a $(1,1)$-tensor field $P$ on $T^{*}M$, which is a natural diagonal
lift of the metric $g$ from the base manifold to the total space $T^{*}M$ of
the cotangent bundle. Using the adapted frame
$\\{\partial^{i},\delta_{j}\\}_{i,j=\overline{1,n}}$ on $T^{*}M$, we define
$P$ by the relations:
(3.3) $\displaystyle P\delta_{i}=P^{(1)}_{ij}\partial^{j},\quad
P\partial^{i}=P_{(2)}^{ij}\delta_{j},$
where the $M-$tensor fields involved as coefficients have the forms
(3.4)
$\displaystyle P^{(1)}_{ij}=a_{1}(t)g_{ij}+b_{1}(t)p_{i}p_{j},\quad P_{(2)}^{ij}=a_{2}(t)g^{ij}+b_{2}(t)g^{0i}g^{0j},$
where $a_{1},\ b_{1},\ a_{2}$, and $b_{2}$ are smooth functions of the energy
density $t$.
The invariant expression of the defined structure is
(3.8)
$\displaystyle\begin{array}[]{c}PX^{H}_{p}=a_{1}(t)(X^{\flat})^{V}_{p}+b_{1}(t)p(X)p_{p}^{V},\\\
\\\
P\alpha^{V}_{p}=a_{2}(t)(\alpha^{\sharp})_{p}^{H}+b_{2}(t)g^{-1}_{\pi(p)}(p,\alpha)(p^{\sharp})_{p}^{H},\end{array}$
at every point $p$ of the induced local chart $(\pi^{-1}(U),\Phi)$ on $T^{*}M$,
$\forall~{}X\in\mathcal{T}^{1}_{0}(M),\forall~{}\alpha\in\mathcal{T}^{0}_{1}(M)$.
The vector $p^{\sharp}$ is tangent to $M$ at $\pi(p)$,
$p^{V}=p_{i}\partial^{i}$ is the Liouville vector field on $T^{*}M$, and
$(p^{\sharp})^{H}=g^{0i}\delta_{i}$ is the geodesic spray on $T^{*}M$.
###### Example 3.1.
When $a_{1}=a_{2}=1$ and $b_{1},\ b_{2}$ vanish, we have the structure $Q$
given by (3.2).
The following theorems present the conditions under which the above tensor
field $P$ defines an almost product (locally product) structure on the total
space of the cotangent bundle.
###### Theorem 3.2.
The tensor field $P$, given by (3.3) or (3.8), defines an almost product
structure of natural diagonal lift type on $T^{*}M$, if and only if its
coefficients satisfy the relations
(3.9)
$a_{1}=\frac{1}{a_{2}},\ \ \ a_{1}+2tb_{1}=\frac{1}{a_{2}+2tb_{2}}.$
###### Proof.
The condition $P^{2}=I$ in the definition of the almost product structure may
be written in the following form, using (3.3):
$P^{(1)}_{ij}P_{(2)}^{il}=\delta^{l}_{j},~{}~{}~{}~{}~{}~{}P_{(2)}^{ij}P^{(1)}_{il}=\delta^{j}_{l},$
and, after substituting (3.4), it becomes
$(a_{1}a_{2}-1)\delta^{l}_{j}+[b_{1}(a_{2}+2tb_{2})+a_{1}b_{2}]g^{0l}p_{j}=0.$
Using Lemma 2.1, the above expression vanishes if and only if
$a_{1}=\frac{1}{a_{2}},\ b_{1}=-\frac{a_{1}b_{2}}{a_{2}+2tb_{2}},$
which also imply the second relation in (3.9). ∎
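The computation in the proof can be replayed symbolically. The sketch below treats $a_{2}$, $b_{2}$ as free symbols, defines $a_{1}$, $b_{1}$ by the solved relations, and confirms that both coefficients vanish and that the second relation in (3.9) follows:

```python
# Symbolic check of Theorem 3.2: with a1 = 1/a2 and
# b1 = -a1*b2/(a2 + 2*t*b2), the two coefficients from P^2 = I vanish
# and (a1 + 2*t*b1)(a2 + 2*t*b2) = 1, i.e. the second relation in (3.9).
import sympy as sp

t, a2, b2 = sp.symbols('t a2 b2')
a1 = 1 / a2
b1 = -a1 * b2 / (a2 + 2*t*b2)

c_delta = sp.simplify(a1*a2 - 1)                      # coefficient of delta^l_j
c_mixed = sp.simplify(b1*(a2 + 2*t*b2) + a1*b2)       # coefficient of g^{0l} p_j
second  = sp.simplify((a1 + 2*t*b1) * (a2 + 2*t*b2))  # should equal 1
```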
###### Remark 3.3.
If $a_{1}=\frac{1}{\beta},\ a_{2}=\beta$, $b_{1}=\frac{u}{\alpha\beta}$ and
$b_{2}=-\frac{u\beta}{\alpha+2tu}$, where $\alpha$ and $\beta$ are real
constants and $u$ is a smooth function of $t$, the statements of Theorem 3.2
are satisfied, so the structure considered in [24] is an almost product
structure on the total space $T^{*}M$ of the cotangent bundle.
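The verification behind Remark 3.3 is a two-line computation; the sketch below checks both relations (3.9) for these particular coefficients:

```python
# Symbolic check of Remark 3.3: a1 = 1/beta, a2 = beta, b1 = u/(alpha*beta),
# b2 = -u*beta/(alpha + 2*t*u) satisfy both relations (3.9).
import sympy as sp

t, u, alpha, beta = sp.symbols('t u alpha beta')
a1, a2 = 1/beta, beta
b1 = u / (alpha*beta)
b2 = -u*beta / (alpha + 2*t*u)

rel1 = sp.simplify(a1 * a2)                            # should equal 1
rel2 = sp.simplify((a1 + 2*t*b1) * (a2 + 2*t*b2))      # should equal 1
```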
###### Theorem 3.4.
The natural diagonal almost product structure $P$ on the total space of the
cotangent bundle of an $n$-dimensional connected Riemannian manifold $(M,g)$,
with $n>2$, is a locally product structure on $T^{*}M$ (i.e. $P$ is
integrable) if and only if the base manifold is of constant sectional
curvature $c$, and the coefficients $b_{1},\ b_{2}$ are given by:
(3.10)
$b_{1}=\frac{a_{1}a_{1}^{\prime}+c}{a_{1}-2ta_{1}^{\prime}},\ b_{2}=\frac{a_{1}a_{2}^{\prime}-a_{2}^{2}c}{a_{1}+2cta_{2}}.$
###### Proof.
The almost product structure $P$ on $T^{*}M$ is integrable if and only if the
vanishing condition for the Nijenhuis tensor field $N_{P}$,
$N_{P}(X,Y)=[PX,PY]-P[PX,Y]-P[X,PY]+P^{2}[X,Y],\forall
X,Y\in\mathcal{T}^{1}_{0}(T^{*}M),$
is satisfied.
Studying the components of $N_{P}$ with respect to the adapted frame on
$T^{*}M$, $\\{\partial^{i},\delta_{j}\\}_{i,j=\overline{1,n}}$, we first
obtain:
$N_{P}(\partial^{i},\partial^{j})=[P^{(1)}_{km}(\partial^{j}P_{(2)}^{mi}-\partial^{i}P_{(2)}^{mj})+R^{0}_{kml}P_{(2)}^{mi}P_{(2)}^{lj}]\partial^{k}.$
Replacing the values (3.4), after some tensorial computations, the vanishing
condition for the above component becomes
(3.11)
$a_{1}(a_{2}^{\prime}-b_{2})(\delta^{h}_{i}p_{j}-\delta^{h}_{j}p_{i})-a_{2}^{2}R^{h}_{kij}+a_{2}b_{2}(R^{h}_{kjl}p_{i}-R^{h}_{kil}p_{j})g^{0k}g^{0l}=0.$
We differentiate (3.11) with respect to $p_{h}$. Since the curvature of the
base manifold does not depend on $p$, we take the value of this derivative at
$p=0$, and we obtain
(3.12)
$R^{h}_{kij}=c(\delta^{h}_{i}g_{kj}-\delta^{h}_{j}g_{ki}),$
where
$c=\frac{a_{1}(0)}{a_{2}^{2}(0)}(a^{\prime}_{2}(0)-b_{2}(0))$
is a priori a function of $q^{1},\dots,q^{n}$ only. Schur’s theorem implies
that $c$ must be a constant, since $M$ is connected and of dimension $n>2$.
Moreover, by using (3.12), the relation (3.11) becomes
(3.13)
$[a_{1}a_{2}^{\prime}-a_{2}^{2}c-b_{2}(a_{1}+2ta_{2}c)](\delta^{h}_{i}g_{kj}-\delta^{h}_{j}g_{ki})=0.$
Solving (3.13) with respect to $b_{2}$, we obtain the second relation in
(3.10).
The Nijenhuis tensor field computed for both horizontal arguments is
$N_{P}(\delta_{i},\delta_{j})=(P^{(1)}_{li}\partial^{l}P^{(1)}_{hj}-P^{(1)}_{lj}\partial^{l}P^{(1)}_{hi}+R^{0}_{hij})\partial^{h},$
which vanishes if and only if
$[b_{1}(2ta_{1}^{\prime}-a_{1})+a_{1}^{\prime}a_{1}+c](g_{hj}p_{i}-g_{hi}p_{j})=0,$
namely when $b_{1}$ has the expression in (3.10).
The mixed components of the Nijenhuis tensor field have the form
$N_{P}(\delta_{i},\partial^{j})=-N_{P}(\partial^{j},\delta_{i})=(P^{(1)}_{mi}\partial^{m}P_{(2)}^{hj}+P_{(2)}^{hl}\partial^{j}P^{(1)}_{li}-P_{(2)}^{lh}P_{(2)}^{jm}R^{0}_{lim})\delta_{h},$
which after replacing (3.4) and (3.12) become
$(a_{1}a_{2}^{\prime}+a_{2}b_{1}-a_{2}^{2}c+2ta_{2}^{\prime}b_{1})g^{hj}p_{i}$
$+(a_{2}b_{1}+a_{1}b_{2}+2tb_{1}b_{2})\delta^{j}_{i}g^{0h}+(a_{1}^{\prime}a_{2}+a_{1}b_{2}+a_{2}^{2}c+2cta_{2}b_{2})\delta^{h}_{i}g^{0j}$
$+(a_{2}b_{1}^{\prime}+a_{1}^{\prime}b_{2}+3b_{1}b_{2}+a_{1}b_{2}^{\prime}-a_{2}b_{2}c+2tb_{1}^{\prime}b_{2}+2tb_{1}b_{2}^{\prime})p_{i}g^{0h}g^{0j}.$
Taking (3.9) into account, the above expression takes the form
$\frac{(a_{1}-2a_{1}^{\prime}t)b_{1}-a_{1}a_{1}^{\prime}-c}{a_{1}^{2}}g^{hj}p_{i}+\frac{a_{1}a_{1}^{\prime}+c-(a_{1}-2a_{1}^{\prime}t)b_{1}}{a_{1}(a_{1}+2tb_{1})}\delta^{h}_{i}g^{0j}$
$+\frac{(a_{1}a_{1}^{\prime}+c)b_{1}-(a_{1}-2ta_{1}^{\prime})b_{1}^{2}}{a_{1}^{2}(a_{1}+2tb_{1})}p_{i}g^{0h}g^{0j}$
and it vanishes if and only if $b_{1}$ is expressed by the first relation in
(3.10).
One can verify that all the components of the Nijenhuis tensor field vanish
under the same conditions, so the almost product structure $P$ on $T^{*}M$ is
integrable, i.e. $P$ is a locally product structure on $T^{*}M$. ∎
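The two obstruction equations solved in the proof can be re-checked symbolically. In the sketch below, $a_{1},a_{2}$ and their derivatives $a_{1}^{\prime},a_{2}^{\prime}$ are treated as independent symbols (written `a1p`, `a2p`), and $b_{1},b_{2}$ are taken from (3.10):

```python
# Symbolic check of Theorem 3.4: the coefficients (3.10) annihilate the
# obstructions met in the proof (the horizontal component of N_P and the
# bracket in (3.13)).
import sympy as sp

t, c, a1, a2, a1p, a2p = sp.symbols('t c a1 a2 a1p a2p')
b1 = (a1*a1p + c) / (a1 - 2*t*a1p)              # first relation in (3.10)
b2 = (a1*a2p - a2**2*c) / (a1 + 2*c*t*a2)       # second relation in (3.10)

horiz = sp.simplify(b1*(2*t*a1p - a1) + a1p*a1 + c)         # from N_P(delta_i, delta_j)
vert  = sp.simplify(a1*a2p - a2**2*c - b2*(a1 + 2*t*a2*c))  # bracket in (3.13)
```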
###### Remark 3.5.
If the coefficients involved in the definition of $P$ have the values
presented in Remark 3.3, the relations (3.10) take the form
$u=c\alpha\beta^{2}$, so Theorem 3.4 implies the results stated in [24,
Theorem 4.2].
## 4\. Natural diagonal Riemannian almost product and almost para-Hermitian
structures on $T^{*}M$
Authors like M. Anastasiei, C. Bejan, V. Cruceanu, H. Farran, A. Heydari, S.
Ishihara, I. Mihai, G. Mitric, C. Nicolau, V. Oproiu, L. Ornea, N. Papaghiuc,
E. Peyghan, K. Yano, and M. S. Zanoun considered almost product structures and
almost para-Hermitian structures (also called almost hyperbolic Hermitian
structures) on the total spaces of the tangent and cotangent bundles.
A Riemannian manifold $(M,g)$, endowed with an almost product structure $J$,
satisfying the relation
(4.1)
$g(JX,JY)=\varepsilon g(X,Y),\ \forall X,Y\in\mathcal{T}^{1}_{0}(M),$
is called a _Riemannian almost product manifold_ if $\varepsilon=1$, or an
_almost para-Hermitian manifold_ (also called an _almost hyperbolic Hermitian
manifold_) if $\varepsilon=-1$.
In the following we shall find the Riemannian almost product (locally product)
and the (almost) para-Hermitian cotangent bundles of natural diagonal lift
type.
To this aim, we consider a natural diagonal lifted metric on the total space
$T^{*}M$ of the cotangent bundle, defined by:
(4.2)
$\left\\{\begin{array}[]{l}G_{p}(X^{H},Y^{H})=c_{1}(t)g_{\pi(p)}(X,Y)+d_{1}(t)p(X)p(Y),\\\
\\\
G_{p}(\alpha^{V},\omega^{V})=c_{2}(t)g^{-1}_{\pi(p)}(\alpha,\omega)+d_{2}(t)g^{-1}_{\pi(p)}(p,\alpha)g^{-1}_{\pi(p)}(p,\omega),\\\
\\\ G_{p}(X^{H},\alpha^{V})=G_{p}(\alpha^{V},X^{H})=0,\end{array}\right.$
$\forall~{}X,Y\in\mathcal{T}^{1}_{0}(M),$
$\forall~{}\alpha,\omega\in\mathcal{T}^{0}_{1}(M),\forall~{}p\in T^{*}M$,
where the coefficients $c_{1},\ c_{2},\ d_{1},\ d_{2}$ are smooth functions of
the energy density.
The metric $G$ is nondegenerate provided that
$c_{1}c_{2}\neq 0,\ (c_{1}+2td_{1})(c_{2}+2td_{2})\neq 0.$
The metric $G$ is positive definite if
$c_{1}+2td_{1}>0,\quad c_{2}+2td_{2}>0.$
Using the adapted frame $\\{\partial^{i},\delta_{j}\\}_{i,j=\overline{1,n}}$
on $T^{*}M$, (4.2) becomes
(4.3)
$\displaystyle\begin{cases}G(\delta_{i},\delta_{j})=G^{(1)}_{ij}=c_{1}(t)g_{ij}+d_{1}(t)p_{i}p_{j},\\\
G(\partial^{i},\partial^{j})=G_{(2)}^{ij}=c_{2}(t)g^{ij}+d_{2}(t)g^{0i}g^{0j},\\\
G(\partial^{i},\delta_{j})=G(\delta_{i},\partial^{j})=0,\end{cases}$
where $c_{1},\ c_{2},\ d_{1},\ d_{2}$ are smooth functions of the energy
density $t$.
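The nondegeneracy and positivity conditions above reflect the spectrum of the two blocks in (4.3): in a chart where $g$ is the identity, the horizontal block $c_{1}g+d_{1}\,p\,p^{T}$ has the eigenvalue $c_{1}$ with multiplicity $n-1$ and the eigenvalue $c_{1}+2td_{1}$ on the line spanned by $p$, and similarly for the vertical block. A small numerical illustration with assumed sample values, not taken from the paper:

```python
# Spectrum of the horizontal block of (4.3) in a flat chart (g = identity):
# eigenvalues are c1 (multiplicity n-1) and c1 + 2*t*d1, with t = |p|^2/2.
import numpy as np

rng = np.random.default_rng(1)
n = 4
p = rng.standard_normal(n)
t = p @ p / 2                      # energy density
c1, d1 = 2.0, 0.3                  # sample coefficient values at this t

H = c1 * np.eye(n) + d1 * np.outer(p, p)    # G^{(1)}_{ij} with g = I
eig = np.sort(np.linalg.eigvalsh(H))        # ascending; here d1 > 0
```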
Next we shall prove the following characterization theorem:
###### Theorem 4.1.
Let $(M,g)$ be an $n$-dimensional connected Riemannian manifold, with $n>2$,
and $T^{*}M$ the total space of its cotangent bundle. Let $G$ be a natural
diagonal lifted metric on $T^{*}M$, defined by (4.2), and $P$ an almost
product structure on $T^{*}M$, characterized by Theorem 3.2. Then
$(T^{*}M,G,P)$ is a Riemannian almost product manifold, or an almost para-
Hermitian manifold if and only if the following proportionality relations
between the coefficients hold:
(4.4)
$\displaystyle\frac{c_{1}}{a_{1}}=\varepsilon\frac{c_{2}}{a_{2}}=\lambda,\ \frac{c_{1}+2td_{1}}{a_{1}+2tb_{1}}=\varepsilon\frac{c_{2}+2td_{2}}{a_{2}+2tb_{2}}=\lambda+2t\mu,$
where $\varepsilon$ takes the corresponding values from the definition
(4.1), and the proportionality coefficients $\lambda>0$ and
$\lambda+2t\mu>0$ are some functions depending on the energy density $t$.
If moreover, the relations stated in Theorem 3.4 are fulfilled, then
$(T^{*}M,G,$ $P)$ is a Riemannian locally product manifold for
$\varepsilon=1$, or a para-Hermitian manifold for $\varepsilon=-1$.
###### Proof.
With respect to the adapted frame
$\\{\partial^{i},\delta_{j}\\}_{i,j=\overline{1,n}}$, the relation (4.1)
becomes:
(4.5) $~{}~{}~{}~{}G(P\delta_{i},P\delta_{j})=\varepsilon
G(\delta_{i},\delta_{j}),\ G(P\partial^{i},P\partial^{j})=\varepsilon
G(\partial^{i},\partial^{j}),\ G(P\partial^{i},P\delta_{j})=0,$
and using (3.3) and (4.3) we have
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\begin{array}[]{c}(-\varepsilon
c_{1}+a_{1}^{2}c_{2})g_{ij}+[-\varepsilon
d_{1}+a_{1}^{2}d_{2}+2b_{1}c_{2}(a_{1}+tb_{1})+4tb_{1}d_{2}(a_{1}+tb_{1})]p_{i}p_{j}=0,\\\
\\\ (a_{2}^{2}c_{1}-\varepsilon c_{2})g^{ij}+[-\varepsilon
d_{2}+a_{2}^{2}d_{1}+2b_{2}c_{1}(a_{2}+tb_{2})+4tb_{2}d_{1}(a_{2}+tb_{2})]g^{0i}g^{0j}=0.\end{array}$
Taking into account Lemma 2.1, all the coefficients appearing in the above
expressions must vanish. Setting the coefficients of $g_{ij}$ and $g^{ij}$
equal to zero and using the first relation in (3.9), we obtain the first
relation in (4.4).
Moreover, multiplying by $2t$ the coefficients of $p_{i}p_{j}$ and
$g^{0i}g^{0j}$ and adding them to the coefficients of $g_{ij}$ and $g^{ij}$,
respectively, we obtain:
(4.10)
$\displaystyle\begin{array}[]{l}-\varepsilon(c_{1}+2td_{1})+(a_{1}+2tb_{1})^{2}(c_{2}+2td_{2})=0,\\\
\\\
(a_{2}+2tb_{2})^{2}(c_{1}+2td_{1})-\varepsilon(c_{2}+2td_{2})=0.\end{array}$
Using the second relation in (3.9), (4.10) leads to the second relation in
(4.4). ∎
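Theorem 4.1 can also be verified numerically at a point: fixing sample values of $g$, $p$, $a_{2}$, $b_{2}$, $\lambda$, $\mu$ and $\varepsilon=-1$, and defining the remaining coefficients through (3.9) and (4.4), the blockwise compatibility $G(P\cdot,P\cdot)=\varepsilon G(\cdot,\cdot)$ holds to machine precision. A sketch with assumed sample values:

```python
# Pointwise numerical check of Theorem 4.1: with (3.9) for a1, b1 and
# (4.4) for c1, c2, d1, d2, the component matrices satisfy
# P1 @ P2 = I,  P1 @ G2 @ P1 = eps*G1,  P2 @ G1 @ P2 = eps*G2.
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
g = A @ A.T + n * np.eye(n)         # sample metric g_ij
gi = np.linalg.inv(g)               # g^{ij}
p = rng.standard_normal(n)          # sample covector p_i
gp = gi @ p                         # g^{0i} = p_h g^{hi}
t = p @ gp / 2                      # energy density

a2, b2 = 1.7, 0.4                   # free coefficients of P at this t
lam, mu, eps = 2.0, 0.5, -1.0       # proportionality values, para-Hermitian case

a1 = 1 / a2                                         # (3.9)
b1 = -a1 * b2 / (a2 + 2*t*b2)                       # (3.9)
c1, c2 = lam * a1, eps * lam * a2                   # first relation in (4.4)
d1 = ((lam + 2*t*mu)*(a1 + 2*t*b1) - c1) / (2*t)    # second relation in (4.4)
d2 = (eps*(lam + 2*t*mu)*(a2 + 2*t*b2) - c2) / (2*t)

P1 = a1 * g + b1 * np.outer(p, p)       # P^{(1)}_{ij}
P2 = a2 * gi + b2 * np.outer(gp, gp)    # P_{(2)}^{ij}
G1 = c1 * g + d1 * np.outer(p, p)       # G^{(1)}_{ij}
G2 = c2 * gi + d2 * np.outer(gp, gp)    # G_{(2)}^{ij}
```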
###### Remark 4.2.
When the coefficients of the almost product structure $P$ have the expressions
in Remark 3.3, and the coefficients of the metric $G$ on $T^{*}M$ are
$c_{1}=a_{1},$ $d_{1}=b_{1},$ $c_{2}=-a_{2}$ and $d_{2}=-b_{2},$ Theorem 4.1
implies that $T^{*}M$ endowed with the almost product structure and with the
metric considered in [24] is an almost para-Hermitian manifold.
## 5\. Natural diagonal para-Kähler structures on $T^{*}M$
In the sequel we shall study the cotangent bundles endowed with para-Kähler
structures of natural diagonal lift type. This class of almost para-Hermitian
structures, studied for example in [1] and [15], is characterized by the
closedness of the associated 2-form $\Omega$.
The 2-form $\Omega$ associated to the almost para-Hermitian structure $(G,P)$
of natural diagonal lift type on the total space of the cotangent bundle is
given by the relation
$\Omega(X,Y)=G(X,PY),\forall X,Y\in\mathcal{T}^{1}_{0}(T^{*}M).$
Studying the closure of $\Omega$, we may prove the following theorem:
###### Theorem 5.1.
The almost para-Hermitian structure $(G,P)$ of natural diagonal lift type on
the total space $T^{*}M$ of the cotangent bundle of a Riemannian manifold
$(M,g)$ is almost para-Kählerian if and only if
$\mu=\lambda^{\prime}.$
###### Proof.
The 2-form $\Omega$ on $T^{*}M$ has the following expression with respect to
the local adapted frame $\\{\partial^{i},\delta_{j}\\}_{i,j=1,\dots,n}$:
$\Omega(\partial^{i},\partial^{j})=\Omega(\delta_{i},\delta_{j})=0,\quad\Omega(\partial^{j},\delta_{i})=G_{(2)}^{jh}P^{(1)}_{hi},\quad\Omega(\delta_{i},\partial^{j})=G^{(1)}_{ih}P_{(2)}^{hj}.$
By substituting (3.4) and (4.3) in the above expressions, and taking into
account the conditions for $(T^{*}M,G,P)$ to be an almost para-Hermitian
manifold (see Theorem 4.1), we have
(5.1)
$\Omega(\delta_{i},\partial^{j})=-\Omega(\partial^{j},\delta_{i})=\lambda\delta^{j}_{i}+\mu p_{i}g^{0j},$
which has the invariant expression
$\Omega\left(X_{p}^{H},\alpha_{p}^{V}\right)=\lambda\alpha(X)+\mu
p(X)g^{-1}_{\pi(p)}(p,\alpha),$
for every $X\in\mathcal{T}^{1}_{0}(M),\ \alpha\in\mathcal{T}^{0}_{1}(M),\ p\in
T^{*}M$.
Taking into account the relation (5.1), the $2$-form $\Omega$ associated
to the natural diagonal almost para-Hermitian structure has the form
(5.2)
$\Omega=(\lambda\delta^{j}_{i}+\mu p_{i}g^{0j})dq^{i}\wedge Dp_{j},$
where $Dp_{j}=dp_{j}-\Gamma^{0}_{jh}dq^{h}$ is the absolute differential of
$p_{j}$.
Moreover, the differential of $\Omega$ will be
$d\Omega=(d\lambda\delta^{j}_{i}+d\mu g^{0j}p_{i}+\mu dg^{0j}p_{i}+\mu
g^{0j}dp_{i})\wedge dq^{i}\wedge Dp_{j}-(\lambda\delta^{j}_{i}+\mu
p_{i}g^{0j})dq^{i}\wedge dDp_{j}.$
Let us compute the expressions of $d\lambda,\ d\mu,\ dg^{0i}$ and $dDp_{i}$:
$d\lambda=\lambda^{\prime}g^{0h}Dp_{h},\quad
d\mu=\mu^{\prime}g^{0h}Dp_{h},\quad
dg^{0i}=g^{hi}Dp_{h}-\Gamma^{i}_{j0}dq^{j},$
$dDp_{i}=\frac{1}{2}R^{0}_{ijh}dq^{h}\wedge dq^{j}-\Gamma^{h}_{ij}Dp_{h}\wedge
dq^{j}.$
Then, by substituting these relations into the expression of $d\Omega$, taking
into account the properties of the exterior product, the symmetry of $g^{ij}$
and $\Gamma^{h}_{ij}$ and the Bianchi identities, we obtain
$d\Omega=(\mu-\lambda^{\prime})p_{k}g^{kh}\delta^{i}_{j}Dp_{h}\wedge
Dp_{i}\wedge dq^{j},$
which, due to the antisymmetry of $\delta^{i}_{j}Dp_{i}\wedge dq^{j}$, may be
written as
$d\Omega=\frac{1}{2}(\mu-\lambda^{\prime})p_{k}(g^{kh}\delta^{j}_{i}-g^{kj}\delta^{h}_{i})Dp_{h}\wedge
Dp_{j}\wedge dq^{i},$
and it vanishes if and only if $\mu=\lambda^{\prime}$. ∎
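The closure computation can be cross-checked in the flat special case $g_{ij}=\delta_{ij}$, $\Gamma^{h}_{ij}=0$, where $Dp_{j}=dp_{j}$ and $\Omega=A_{ij}\,dq^{i}\wedge dp_{j}$ with $A_{ij}=\lambda\delta_{ij}+\mu p_{i}p_{j}$; then $d\Omega=0$ amounts to $\partial A_{ij}/\partial p_{k}-\partial A_{ik}/\partial p_{j}=0$. A symbolic sketch of this special case (an illustration, not the general proof):

```python
# Flat-chart check of Theorem 5.1: with mu = lam', the antisymmetrized
# p-derivative of A_ij = lam*delta_ij + mu*p_i*p_j vanishes, so dOmega = 0.
import sympy as sp

n = 3
p = sp.symbols('p1:4')
tval = sum(pi**2 for pi in p) / 2          # energy density t = |p|^2 / 2
s = sp.Symbol('s')
lam = sp.Function('lam')

lam_t = lam(tval)                          # lambda(t)
mu_t = lam(s).diff(s).subs(s, tval)        # mu = lambda'(t)

def A(i, j):
    # coefficient of dq^i /\ dp_j in Omega, flat chart
    return lam_t * sp.KroneckerDelta(i, j) + mu_t * p[i] * p[j]

obstruction = [sp.simplify(A(i, j).diff(p[k]) - A(i, k).diff(p[j]))
               for i in range(n) for j in range(n) for k in range(n)]
```

With $\mu\neq\lambda^{\prime}$ the same antisymmetrization leaves the nonzero residue $(\lambda^{\prime}-\mu)(p_{k}\delta_{ij}-p_{j}\delta_{ik})$, matching the factor $\mu-\lambda^{\prime}$ in $d\Omega$ above.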
Using Theorems 3.2, 3.4 and 5.1, we immediately prove:
###### Theorem 5.2.
An almost para-Hermitian structure $(G,P)$ of natural diagonal lift type on
the total space $T^{*}M$ of the cotangent bundle of a Riemannian manifold
$(M,g)$ is para-Kählerian if and only if $P$ is a locally product structure
(see Theorem 3.4) and $\mu=\lambda^{\prime}$.
###### Remark 5.3.
The almost para-Kählerian structures of natural diagonal lift type on $T^{*}M$
depend on three essential coefficients $a_{1},\ b_{1}$ and $\lambda$, while
the natural diagonal para-Kählerian structures on $T^{*}M$ depend on two
essential coefficients $a_{1}$ and $\lambda$, which in both cases must satisfy
the supplementary conditions $a_{1}>0,\ a_{1}+2tb_{1}>0,\ \lambda>0,\
\lambda+2t\lambda^{\prime}>0$, where $b_{1}$ is given by (3.10).
###### Remark 5.4.
Taking into account Remark 4.2, the constant $\lambda$ equals $1$ and $\mu$
vanishes, so Theorem 5.1 leads to the statements in [24, Theorem 3.1]: the
structure constructed in [24] is almost para-Kählerian on $T^{*}M$. Moreover,
taking into account Remark 3.5, it follows
that the relations (14) in [24] are fulfilled in the case when the constructed
structure is para-Kähler.
Acknowledgements. The author would like to thank Professor V. Oproiu for the
techniques learned and Professor M. I. Munteanu for the critical remarks
during the preparation of this work.
The paper was supported by the Program POSDRU/89/1.5/S/49944, “Al. I. Cuza”
University of Iaşi, Romania.
## References
* [1] D. V. Alekseevsky, C. Medori, and A. Tomassini: Homogeneous para-Kahler Einstein manifolds. Russ. Math. Surv. 64 (2009), no. 1, 1–43.
* [2] M. Anastasiei: Some Riemannian almost product structures on tangent manifold. Proceedings of the 11th National Conference on Finsler, Lagrange and Hamilton Geometry (Craiova, 2000), Algebras Groups Geom. 17 (2000), no. 3, 253-262.
* [3] C. Bejan: A classification of the almost parahermitian manifolds. Proc. Conference on Diff. Geom. and Appl., Dubrovnik, 1988, 23–27.
* [4] C. Bejan: Almost parahermitian structures on the tangent bundle of an almost paracohermitian manifold. Proc. Fifth Nat. Sem. Finsler and Lagrange spaces, Braşov, 1988, 105–109.
* [5] C. Bejan and L. Ornea: An example of an almost hyperbolic Hermitian manifold. Internat. J. Math. Math. Sci. 21 (1998), no. 3, 613–618.
* [6] V. Cruceanu: Selected Papers, Editura PIM, Iaşi, 2006.
* [7] S. L. Druţă: Cotangent bundles with general natural Kähler structures. Rév. Rou. Math. Pures Appl. 54 (2009), 13–23.
* [8] P. M. Gadea and J. Muñoz Masqué: Classification of almost para-Hermitian manifolds. Rend. Mat. Appl. (7) 11 (1991), 377–396.
* [9] H. Farran and M. S. Zanoun: On hyperbolic Hermite manifolds. Publ. Inst. Math. (Beograd) 46 (1989) 173–182.
* [10] A. Heydari and E. Peyghan: A characterization of the infinitesimal conformal transformations on tangent bundles. Bulletin of the Iranian Mathematical Society 34 (2008), no. 2, 59–70.
* [11] S. Ivanov and S. Zamkovoy: Parahermitian and paraquaternionic manifolds. Differential Geom. Appl. 23 (2005), no. 2, 205–234.
* [12] I. Kolář: On cotangent bundles of some natural bundles. Geometry and physics (Zdíkov, 1993). Rend. Circ. Mat. Palermo (2) Suppl. No. 37 (1994), 115–120.
* [13] I. Kolář, P. Michor, and J. Slovak: Natural Operations in Differential Geometry. Springer-Verlag, Berlin, 1993.
* [14] O. Kowalski and M. Sekizawa: Natural transformations of Riemannian metrics on manifolds to metrics on tangent bundles - a classification. Bull. Tokyo Gakugei Univ. (4) 40 (1988), 1–29.
* [15] D. Luczyszyn and Z. Olszak: On paraholomorphically pseudosymmetric para-Kählerian manifolds. J. Korean Math. Soc. 45 (2008), no. 4, 953–963.
* [16] D. Mekerov: On Riemannian almost product manifolds with nonintegrable structure. J. Geom. 89 (2008), 119–129.
* [17] I. Mihai and C. Nicolau: Almost product structures on the tangent bundle of an almost paracontact manifold. Demonstratio Math. 15 (1982), 1045–1058.
* [18] K. P. Mok, E. M. Patterson, and Y. C. Wong: Structure of symmetric tensors of type (0,2) and tensors of type (1,1) on the tangent bundle. Trans. Amer. Math. Soc. 234 (1977), 253–278.
* [19] M. I. Munteanu: CR-structures on the unit cotangent bundle. An. Şt. Univ. Al. I. Cuza Iaşi, Math. 44 (1998), I, no. 1, 125–136.
* [20] A. M. Naveira: A classification of Riemannian almost product manifolds. Rend. Math. 3 (1983) 577–592.
* [21] V. Oproiu and N. Papaghiuc: A pseudo-Riemannian structure on the cotangent bundle. An. Şt. Univ. Al. I. Cuza Iaşi, Math. 36 (1990), 265–276.
* [22] V. Oproiu, N. Papaghiuc, and G. Mitric: Some classes of parahermitian structures on cotangent bundles. An. Şt. Univ. Al. I. Cuza Iaşi, Math. 43 (1996), 7–22.
* [23] V. Oproiu and D. D. Poroşniuc: A class of Kaehler Einstein structures on the cotangent bundle of a space form. Publ. Math. Debrecen 66 (2005), 457–478.
* [24] E. Peyghan and A. Heydari: A class of locally symmetric para-Kähler Einstein structures on the cotangent bundle. International Mathematical Forum 5 (2010), no. 3, 145–153.
* [25] M. Staikova and K. Gribachev: Canonical connections and conformal invariants on Riemannian almost product manifolds. Serdica Math. J. 18 (1992), 150–161.
* [26] K. Yano: Differential geometry of a complex and almost complex spaces. Pergamon Press, 1965.
* [27] K. Yano and S. Ishihara: Tangent and Cotangent Bundles. M. Dekker Inc., New York, 1973.
Author’s address: Simona-Luiza Druţă-Romaniuc, Department of Sciences, 54
Lascar Catargi Street, RO-700107, “Al. I. Cuza” University of Iaşi, Romania,
e-mail: simonadruta@yahoo.com.
|
arxiv-papers
| 2011-04-14T13:56:51 |
2024-09-04T02:49:18.257600
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Simona-Luiza Druta-Romaniuc",
"submitter": "Simona Druta-Romaniuc",
"url": "https://arxiv.org/abs/1104.2766"
}
|
1104.2778
|
# A New Limit on Time-Reversal Violation in Beta Decay
H.P. Mumm National Institute of Standards and Technology, Gaithersburg, MD
20899 CENPA and Physics Department, University of Washington, Seattle, WA
98195 T.E. Chupp Physics Department, University of Michigan, Ann Arbor, MI,
48104 R.L. Cooper Physics Department, University of Michigan, Ann Arbor, MI,
48104 K.P. Coulter Physics Department, University of Michigan, Ann Arbor,
MI, 48104 S.J. Freedman Physics Department, University of California at
Berkeley and Lawrence Berkeley National Laboratory, Berkeley, CA 94720 B.K.
Fujikawa Physics Department, University of California at Berkeley and
Lawrence Berkeley National Laboratory, Berkeley, CA 94720 A. García CENPA
and Physics Department, University of Washington, Seattle, WA 98195
Department of Physics, University of Notre Dame, Notre Dame, IN 46556 G.L.
Jones Physics Department, Hamilton College, Clinton, NY 13323 J.S. Nico
National Institute of Standards and Technology, Gaithersburg, MD 20899 A.K.
Thompson National Institute of Standards and Technology, Gaithersburg, MD
20899 C.A. Trull Physics Department, Tulane University, New Orleans, LA
70118 J.F. Wilkerson Department of Physics and Astronomy, University of
North Carolina, Chapel Hill, NC 27599 CENPA and Physics Department,
University of Washington, Seattle, WA 98195 F.E. Wietfeldt Physics
Department, Tulane University, New Orleans, LA 70118
###### Abstract
We report the results of an improved determination of the triple correlation
$D\,{\bf P}\cdot({\bf p}_{e}\times{\bf p}_{\nu})$ that can be used to limit possible
violation of time-reversal invariance in the beta decay of polarized neutrons and constrain
extensions to the Standard Model. Our result is $D=(-0.96\pm 1.89(stat)\pm
1.01(sys))\times 10^{-4}$. The corresponding phase between $g_{A}$ and $g_{V}$
is $\phi_{AV}=180.013^{\circ}\pm 0.028^{\circ}$ (68% confidence level). This
result represents the most sensitive measurement of $D$ in nuclear beta decay.
###### pacs:
24.80.+y, 11.30.Er, 12.15.Ji, 13.30.Ce
The existence of charge-parity (CP) symmetry violation in nature is
particularly important in that it is necessary to explain the preponderance of
matter over antimatter in the universe SAK91 . Thus far, CP violation has been
observed only in the K and B meson systems CHR64 ; AUB01 ; ABE01 and can be
entirely accounted for by a phase in the Cabibbo-Kobayashi-Maskawa matrix in
the electroweak Lagrangian. This phase is insufficient to account for the
known baryon asymmetry in the context of Big Bang cosmology RIO99 , so there
is good reason to search for CP violation in other systems. As CP and time-
reversal (T) violation can be related to each other through the CPT theorem,
experimental limits on electric dipole moments and T-odd observables in
nuclear beta decay place strict constraints on some, but not all, possible
sources of new CP violation.
Figure 1: A schematic of the emiT detector illustrating the alternating
electron and proton detector segments. The darker shaded proton detectors
indicate the paired-ring at $z=\pm 10$ cm. The cross section view
illustrates, in an exaggerated manner, the effect of the magnetic field on the
particle trajectories and average opening angle. A P2E3 coincidence event is
shown.
The decay probability distribution for neutron beta decay, $dW$, can be
written in terms of the beam polarization $\bf{P}$ and the momenta (energies)
of the electron ${\bf p}_{e}$ ($E_{e}$) and antineutrino ${\bf p}_{\nu}$
($E_{\nu}$) as JAC57
$dW\propto 1+a\frac{{\bf p}_{e}\cdot{\bf p}_{\nu}}{E_{e}E_{\nu}}+b\frac{m_{e}}{E_{e}}+{\bf P}\cdot\left(A\frac{{\bf p}_{e}}{E_{e}}+B\frac{{\bf p}_{\nu}}{E_{\nu}}+D\frac{{\bf p}_{e}\times{\bf p}_{\nu}}{E_{e}E_{\nu}}\right).$ (1)
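As an illustration only (not part of the published analysis), the decay distribution of Eq. (1) can be evaluated numerically. The correlation coefficients below are illustrative values close to the PDG neutron averages, and $b$ and $D$ default to the Standard Model expectation of zero:

```python
import numpy as np

def decay_weight(p_e, p_nu, P, E_e, E_nu, m_e=0.511,
                 a=-0.105, b=0.0, A=-0.1173, B=0.9807, D=0.0):
    """Relative decay probability of Eq. (1) (Jackson-Treiman-Wyld form).

    p_e, p_nu, P are 3-vectors; momenta and energies share the same units.
    a, A, B are illustrative PDG-style neutron values; b and D default to
    the Standard Model expectation of zero.
    """
    term_a = a * np.dot(p_e, p_nu) / (E_e * E_nu)
    term_b = b * m_e / E_e
    spin_terms = np.dot(P, A * np.asarray(p_e) / E_e
                           + B * np.asarray(p_nu) / E_nu
                           + D * np.cross(p_e, p_nu) / (E_e * E_nu))
    return 1.0 + term_a + term_b + spin_terms
```

For perpendicular momenta and an unpolarized beam the weight reduces to 1; a nonzero $D$ contributes only through the spin-dependent triple product.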
A contribution of the parity-even triple correlation $D{\bf P}\cdot({\bf
p}_{e}\times{\bf p}_{\nu})$ above the level of calculable final-state
interactions (FSI) implies T-violation. The PDG average of recent measurements
is $D=(-4\pm 6)\times 10^{-4}$ PDG:2010 ; LIS00 ; SOL04 , while the FSI for
the neutron are $\sim 10^{-5}$ CAL67 ; Ando2009 . Complementary limits can be
set on other T-violating correlations, and recently a limit on $R$ has been
published Kozela:2009ni . Various theoretical models that extend the SM, such
as left-right symmetric theories, leptoquarks, and certain exotic fermions
could give rise to observable effects that are as large as the present
experimental limits HER98 . Calculations performed within the Minimal
Supersymmetric Model, however, predict $D\lesssim 10^{-7}$ Drees2003 .
In the neutron rest frame, the triple correlation can be expressed as $D{\bf
P}\cdot({\bf p}_{p}\times{\bf p}_{e})$, where ${\bf p}_{p}$ is the proton
momentum. Thus one can extract $D$ from the spin dependence of proton-electron
coincidences in the decay of cold polarized neutrons. Our measurement was
carried out at the National Institute of Standards and Technology Center for
Neutron Research MUM04 . The detector, shown schematically in Fig. 1,
consisted of an octagonal array of four electron-detection planes and four
proton-detection planes concentric with a longitudinally polarized beam. The
beam, with a neutron capture fluence rate at the detector of $1.7\times
10^{8}$ cm${}^{-2}$ s${}^{-1}$, was defined using a series of ${}^{6}$LiF apertures and polarized
to $>91\%$ (95% C.L.) by a double-sided bender-type supermirror MUM04 . A 560
$\mu$T guide field maintained the polarization direction throughout the
fiducial volume and a current-sheet spin-flipper was used to reverse the
neutron spin direction every 10 s. The symmetric octagonal geometry was chosen
to maximize sensitivity to $D$ while approximately canceling systematic
effects stemming from detector efficiency variations or coupling to the spin
correlations $A$ and $B$ WAS94 ; LIS00 . Each of the four proton segments
consisted of a $2\times 8$ array of silicon surface-barrier detectors (SBDs)
with an active area of 300 mm${}^{2}$ and a thickness of 300 $\mu$m. Each SBD was contained within
an acceleration and focusing cell consisting of a 94% transmitting grounded
wire-mesh box through which the recoil protons entered. Each SBD, situated
within a field-shaping cylindrical tube, was held at a fixed voltage in the
range $-25$ kV to $-32$ kV. The sensitive regions of the beta detectors were
plastic scintillator measuring 50 cm by 8.4 cm by 0.64 cm thick, with
photomultiplier tube (PMT) readout at both ends. This thickness is sufficient
to stop electrons at the decay endpoint energy of 782 keV. The proton and beta
detectors were periodically calibrated in situ with gamma and beta sources
respectively. Details of the apparatus are presented elsewhere MUM03 ; MUM04 ;
LIS00 .
Data were acquired in a series of runs from October 2002 through December
2003. Typical count rates were 3 s${}^{-1}$ and 100 s${}^{-1}$ for single proton and beta
detectors, respectively, while the coincidence rate for the entire array was
typically 25 s${}^{-1}$. Of the raw events, 12% were eliminated by filtering on
various operational parameters (e.g. coil currents) and by requiring equal
counting time in each spin-flip state. A beta-energy software threshold of 90
keV eliminated detection efficiency drifts due to changes in PMT gain coupled
with the hardware threshold. This was the largest single cut, eliminating 14%
of the raw events. A requirement that a single beta be detected in coincidence
with each proton eliminated 7% of events. All cuts were varied to test for
systematic effects.
Figure 2: Intensity log plot of SBD-scintillator coincidence data showing
proton energy vs delay time. Events near $\Delta t=0$ are prompt coincidences
due primarily to beam-related backgrounds.
The remaining coincidence events were divided into two timing windows: a
preprompt window from -12.3 $\mu$s to -0.75 $\mu$s that was used to determine
the background from random coincidences, and the decay window from -0.5 $\mu$s
to 6.0 $\mu$s as shown in Fig. 2. The recoil proton has an endpoint of 750 eV.
On average it is delayed by $\sim$ 0.5 $\mu$s. The average signal-to-
background ratio was $\sim$ 30/1. The energy-loss spectrum produced by minimum
ionizing particles in 300 $\mu$m of silicon is peaked at approximately 100 keV
and, being well separated from the proton energy spectrum, yielded an
estimated contamination below 0.1%. The final data set consisted of
approximately 300 million accepted coincidence events.
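The two-window background estimate described above can be sketched as follows. The window boundaries are those quoted in the text; the event list and the simple scaling of the preprompt count by the ratio of window widths are illustrative assumptions:

```python
import numpy as np

def split_coincidences(dt_us, pre=(-12.3, -0.75), decay=(-0.5, 6.0)):
    """Split proton-electron delay times (microseconds) into the
    preprompt (random-background) and decay windows quoted in the text."""
    dt = np.asarray(dt_us)
    pre_mask = (dt >= pre[0]) & (dt < pre[1])
    decay_mask = (dt >= decay[0]) & (dt < decay[1])
    return pre_mask, decay_mask

def background_subtracted_counts(dt_us):
    """Scale the preprompt count by the ratio of window widths to estimate
    the random-coincidence contamination in the decay window."""
    pre_mask, decay_mask = split_coincidences(dt_us)
    width_pre = 12.3 - 0.75
    width_decay = 6.0 - (-0.5)
    n_bkg = pre_mask.sum() * (width_decay / width_pre)
    return decay_mask.sum() - n_bkg
```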
A detailed Monte Carlo simulation was used to estimate a number of systematic
effects. The program Penelope penelope , which has been tested against data in
a variety of circumstances of relevance to neutron decay martin06 , was
embedded within a custom tracking code. All surfaces visible to decay
particles were included. The Monte Carlo was based on the measured beam
distribution upstream and downstream of the fiducial volume MUM04 and
incorporated the magnetic field and electron energy threshold. A separate
Monte Carlo based on the package SIMION SIMION , incorporating the detailed
geometry of the proton cells, was used to model the proton detection response
function.
Achieving the desired sensitivity to $D$ in the presence of the much larger
spin-asymmetries due to $A$ and $B$ depends critically on the measurement
symmetry. To the extent that this symmetry is broken, corrections must be
applied to the measured result. These corrections are listed in Table 1 and
are discussed below. To extract $D$, coincident events were first combined
into approximately efficiency-independent asymmetries
$w^{p_{i}e_{j}}=\frac{N^{p_{i}e_{j}}_{+}-N^{p_{i}e_{j}}_{-}}{N^{p_{i}e_{j}}_{+}+N^{p_{i}e_{j}}_{-}},$
(2)
where $N^{p_{i}e_{j}}_{+}$ is the integrated number of coincident events in
proton detector $i=1...64$, beta detector $j=1...4$, with neutron spin + ($-$)
aligned (anti-aligned) with the guide field. For uniform polarization, ${\bf
P}$, the asymmetries, $w^{p_{i}e_{j}}$, can be written in terms of decay
correlations as
$w^{p_{i}e_{j}}\approx{\bf P}\cdot(A\tilde{\bf K}^{p_{i}e_{j}}_{A}+B\tilde{\bf
K}^{p_{i}e_{j}}_{B}+D\tilde{\bf K}^{p_{i}e_{j}}_{D}),$ (3)
where the $\bf K$’s are obtained from Eqn. 1 by integrating the normalized
kinematic terms over the phase space of the decay, the neutron beam volume,
and the acceptance of the indicated detectors LIS00 . $\tilde{\bf
K}_{A}\propto\langle{\bf p}_{e}/E_{e}\rangle$ and $\tilde{\bf
K}_{B}\propto\langle{\bf p}_{\nu}/E_{\nu}\rangle$ are primarily transverse to
the detector axis but have roughly equal longitudinal components for
coincidence events involving the two beta detectors opposite from the
indicated proton detector (E2 and E3 for P2 as shown in Fig. 1). The
$\tilde{\bf K}_{D}$’s, however, are primarily along the detector axis and are
opposite in sign for the two beta detectors. Thus for each proton detector we
can choose an appropriate combination of detector pairs that is sensitive to
the $D$-correlation but that largely cancels the parity-violating $A$ and $B$
correlations. One such combination is
$v^{p_{i}}=\frac{1}{2}(w^{p_{i}e_{R}}-w^{p_{i}e_{L}}),$ (4)
where $e_{R}$ and $e_{L}$ label the electron detectors at approximately $135^{\circ}$,
giving a positive and a negative cross-product ${\bf p}_{p}\times{\bf p}_{e}$,
respectively (P2E3 vs P2E2 as shown in Fig. 1). Proton cells at the detector
ends accept decays with larger longitudinal components of $\tilde{\bf K}_{A}$
and are more sensitive to a range of effects that break the detector symmetry.
We therefore define $\bar{v}$ as the average of the values of $v$ from the
sixteen proton-cells at the same $|z|$, i.e. $z=\pm$2, $\pm$6, $\pm$10, and
$\pm$14 cm. Each set of detectors corresponds to a paired-ring with the same
symmetry as the full detector, e.g. the shaded detectors in Fig. 1. We then
define
$\tilde{D}=\frac{\bar{v}}{P\bar{K}_{D}},$ (5)
where $\bar{K}_{D}=0.378$ is the average of ${\hat{z}}\cdot(\tilde{\bf
K}^{p_{i}e_{R}}_{D}-\tilde{\bf K}^{p_{i}e_{L}}_{D})$ determined by Monte
Carlo. The experiment provides four independent measurements corresponding to
each of the four paired-rings. The systematic corrections to $\tilde{D}$
presented in Table I yield our final value for $D$. Eqn. 5 is based on the
following: 1) accurate background corrections, 2) uniform proton and electron
detection efficiencies, 3) cylindrical symmetry of the neutron beam and
polarization, and 4) accurate determination of $\bar{K}_{D}$, $P$, and spin
state.
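A toy numerical version of the chain from Eq. (2) to Eq. (5), assuming ideal detectors, a single paired-ring, and the $P$ and $\bar{K}_{D}$ values quoted in the text, might look like:

```python
import numpy as np

def extract_D(N_plus, N_minus, P=0.91, K_D=0.378):
    """Toy version of Eqs. (2)-(5).

    N_plus / N_minus: (n_proton_cells, 2) arrays of coincidence counts for
    the (e_R, e_L) beta-detector pairings in the two spin states, for the
    cells of one paired-ring. P and K_D are illustrative values from the text.
    """
    w = (N_plus - N_minus) / (N_plus + N_minus)   # Eq. (2): spin asymmetry
    v = 0.5 * (w[:, 0] - w[:, 1])                 # Eq. (4): A,B-cancelling combo
    v_bar = v.mean()                              # average over the paired-ring
    return v_bar / (P * K_D)                      # Eq. (5)
```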
Table 1: Systematic corrections and combined standard uncertainties (68%
confidence level). Values should be multiplied by $10^{-4}$.
Source | Correction | Uncertainty
---|---|---
Background asymmetry | $0$a | $0.30$
Background subtraction | $0.03$ | $0.003$
Electron backscattering | $0.11$ | $0.03$
Proton backscattering | $0$a | $0.03$
Beta threshold | $0.04$ | $0.10$
Proton threshold | $-0.29$ | $0.41$
Beam expansion, magnetic field | $-1.50$ | $0.40$
Polarization non-uniformity | $0$a | $0.10$
ATP - misalignment | $-0.07$ | $0.72$
ATP - Twist | $0$a | $0.24$
Spin-correlated fluxb | $0$a | $3\times 10^{-6}$
Spin-correlated pol. | $0$a | $5\times 10^{-4}$
Polarizationc | | $0.04$d
$\bar{K}_{D}$c | | $0.05$
Total systematic corrections | $-1.68$ | $1.01$
a Zero indicates no correction applied.
b Includes spin-flip time, cycle asymmetry, and flux variation.
c Included in the definition of $\tilde{D}$.
d Assumed polarization uncertainty of 0.05.
Backgrounds not properly accounted for contribute two systematic errors: 1)
multiplicative errors due to dilution of the asymmetries, and 2) spin-
dependent backgrounds that can lead to a false $D$. Errors in background
subtraction, as well as possible spin-dependent asymmetries in this
background, have a small effect. The multiplicative correction to the value of
$w^{p_{i}e_{j}}$ due to backscattered electrons was determined using the Monte
Carlo. The uncertainty given in Table 1 reflects the 20% uncertainty assigned
to the backscattering fractions due to limitations of the detector and beam
model and due to limited knowledge of backscattering at energies below a few
hundred keV. Proton backscattering, though observable, produces a negligible
effect on $\tilde{D}$.
In principle, the values of $w^{p_{i}e_{j}}$ are independent of the absolute
efficiencies of the proton and electron detectors; however, they do depend on
any energy dependence of the efficiencies through the factors $\langle{\bf
p}_{e}/E_{e}\rangle$. Spatial variation of the efficiencies breaks the
symmetry assumed in combining proton-cell data into paired-rings. Beta energy
thresholds were observed to vary less than 20 keV across the detector,
implying the almost negligible correction given in Table 1. Proton detector
efficiency variations, however, were more significant. Lower energy proton
thresholds varied across the detector and over the course of the experiment.
These thresholds combined with the spin-dependence of the accelerated proton
energy spectra can result in significant deviations in the values of
$w^{p_{i}e_{j}}$, though the effect on the value of $v^{p_{i}}$ is largely
mitigated because the low-energy portion of the proton energy spectrum is
roughly the same for the $e_{R}$ and $e_{L}$ coincidence pairs. To estimate
the proton-threshold-nonuniformity effect on $\tilde{D}$, spin-dependent
proton energy spectra were generated by Monte Carlo for all proton-detector-
electron-detector pairings and convoluted with model detector response
functions based on fits to the average proton-SBD spectra. The average fit
parameters were varied over a range characteristic of the observed variations
during the run. Representative thresholds were then applied to determine the
effect on $\tilde{D}$. An alternative and consistent estimate was derived by
correcting the values of $w^{p_{i}e_{j}}$ on a day-by-day basis using the
spectrum centroid shift and empirically determined functional form of the
spectrum at the threshold.
Beam expansion from a radius of 2.5 cm to 2.75 cm combined with the magnetic
field breaks the symmetry of the detector because the average proton-electron
opening angle for each proton-electron detector pair is modified. Monte Carlo
calculations using measured upstream and downstream density profile maps were
used to calculate the correction given in Table 1. Possible inaccuracies in
the determination of the beam density were estimated and their implications
explored with the Monte Carlo code. The effect on the value of $v^{p_{i}}$ as
a function of position is illustrated in Fig. 3.
Figure 3: Solid (open) squares show the values of $v$ averaged over the four
planes for proton cells on the even (odd) side of the proton detection plane
(the side with P2 vs P1 in Fig. 1). Monte Carlo results are indicated by
lines. The broken symmetry, due to the combination of the magnetic field and
beam expansion, is evident in the shift in the crossing point from the
detector center.
For a symmetric beam, contributions to $v^{p_{i}}$ due to transverse
polarization cancel for opposing proton planes; however small azimuthal beam
asymmetries can affect this cancellation. This asymmetric-beam transverse-
polarization (ATP) effect is proportional to both $\sin\theta_{P}$, the angle
of the average neutron-spin orientation with respect to the detector axis, and
$\sin(\phi_{P}-\phi^{p_{i}})$, where $\phi^{p_{i}}$ is the effective azimuthal
position of the proton cell, and $\phi_{P}$ is the azimuthal direction of the
neutron polarization. To study this effect transverse-polarization calibration
runs with $\theta_{P}=90^{\circ}$ and several values of $\phi_{P}$ were taken
over the course of the experiment. In these runs the ATP effect was amplified
by $\approx 200$. The values of $\sin\theta_{P}$ and $\phi_{P}$ for the
experiment were determined using the calibration runs and Monte Carlo
corrections for the beam density variations. To estimate the effect, the
extreme value of sin$\theta_{P}=12.8\times 10^{-3}$ and the range of
$-31.5^{\circ}<\phi_{P}<112.2^{\circ}$ were used. The uncertainty is due to
uncertainties in the angles $\theta_{P}$ and $\phi_{P}$. The effect of
nonuniform beam polarization is also given in Table 1. Time-dependent
variations in flux, polarization, and the spin-flipper, as well as the
uncertainty in the instrumental constant $\bar{K}_{D}$, can be shown to
produce asymmetries proportional to $\tilde{D}$. These effects are listed in
Table 1.
Correlations of $\tilde{D}$ with a variety of experimental parameters were
studied by varying the cuts and by breaking the data up into subsets taken
under different conditions of proton acceleration voltage and number of live
SBDs as shown in Fig. 4. A linear correlation of $\tilde{D}$ with high-
voltage, revealed by the cuts study, yields $\chi^{2}=5.6$ with 11 DOF
compared to 10.4 for 12 DOF for no correlation. The acceleration-voltage
dependence of the focusing properties was extensively studied by Monte Carlo
with no expected effect, and we interpret the 2.1 sigma slope as a statistical
fluctuation.
To improve the symmetry of the detector, we combine counts for the entire
run to determine the values of $v^{p_{i}}$ and to extract $\tilde{D}$ for each
paired-ring. A blind analysis was performed by adding a constant hidden factor
to Eqn. 2. The blind was only removed once all analyses of systematic effects
were complete and had been combined into a final result. The weighted average
of $\tilde{D}$ for the four paired-rings is $0.72\pm 1.89$ with
$\chi^{2}=0.73$ for 3 DOF.
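The combination of the four paired-ring values into a single result is a standard inverse-variance weighted average; a minimal sketch follows (the blind offset mentioned above would simply be a constant subtracted before this step):

```python
import numpy as np

def weighted_average(values, sigmas):
    """Inverse-variance weighted mean of per-ring D estimates, with the
    chi-square of the combination (3 DOF for four paired-rings)."""
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * values) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    chi2 = np.sum(w * (values - mean) ** 2)
    return mean, sigma, chi2
```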
Figure 4: Results for $\tilde{D}$ by run subset. Uncertainties are
statistical. The fully functioning paired-rings used for each subset are
indicated: 1-4 indicates all paired-rings were used.
The result including all corrections from Table 1 is
$D=(-0.96\pm 1.89(stat)\pm 1.01(sys))\times 10^{-4}.$
Our result represents the most sensitive measurement of the $D$ coefficient in
nuclear beta decay. Assuming purely vector and axial-vector currents,
$\phi_{AV}=180.013^{\circ}\pm 0.028^{\circ}$ which is the best direct
determination of a possible CP-violating phase between the axial and vector
currents in nuclear beta decay. Previously the most sensitive measurement was
in ${}^{19}$Ne, with $D=(1\pm 6)\times 10^{-4}$ CAL85 .
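The quoted phase can be reproduced to the stated precision from $D$ under a common V,A-only parameterization, $D=2|\lambda|\sin\phi_{AV}/(1+3\lambda^{2})$ with $\lambda=|g_{A}/g_{V}|$. In this sketch the value of $\lambda$ is an assumed PDG-style input and final-state interactions are neglected:

```python
import math

def phi_AV_from_D(D, lam=1.2694):
    """Convert a measured D into the phase between g_A and g_V, assuming
    purely V and A currents: D = 2*lam*sin(phi) / (1 + 3*lam**2), with
    lam = |g_A/g_V| (assumed PDG-style value). Returns phi in degrees,
    taking the solution near 180 degrees."""
    k = 2.0 * lam / (1.0 + 3.0 * lam ** 2)
    return 180.0 - math.degrees(math.asin(D / k))
```

With the central value $D=-0.96\times 10^{-4}$ this gives a phase close to the $180.013^{\circ}$ quoted in the text.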
The authors acknowledge the support of the National Institute of Standards and
Technology, U.S. Department of Commerce, in providing the neutron facilities
used in this work. This research was made possible in part by grants from the
U.S. Department of Energy Office of Nuclear Physics (DE-FG02-97ER41020, DE-
AC02-05CH11231, and DE-FG02-97ER41041) and the National Science Foundation
(PHY-0555432, PHY-0855694, PHY-0555474, and PHY-0855310).
## References
* (1) A. Sakharov, Sov. Phys. Usp. 34, 417 (1991).
* (2) J. H. Christenson et al., Phys. Rev. Lett. 13, 138 (1964).
* (3) B. Aubert et al., Phys. Rev. Lett. 87, 091801 (2001).
* (4) K. Abe et al., Phys. Rev. Lett. 87, 091802 (2001).
* (5) A. Riotto and M. Trodden, Ann. Rev. Nucl. Part. Sci. 49, 35 (1999).
* (6) J. Jackson, S. Treiman, and H. Wyld, Phys. Rev. 106, 517 (1957).
* (7) K. Nakamura et al., J. Phys. G 37, 075021 (2010), (Particle Data Group).
* (8) L. J. Lising et al., Phys. Rev. C 62, 055501 (2000).
* (9) T. Soldner et al., Phys. Lett. B 581, 49 (2004).
* (10) C. G. Callan and S. B. Treiman, Phys. Rev. 162, 1494 (1967).
* (11) S. Ando, J. McGovern, and T. Sato, Phys. Lett. B 677, 109 (2009).
* (12) A. Kozela et al., Phys. Rev. Lett. 102, 172301 (2009).
* (13) P. Herczeg, in Proc. of the 6th Int. PASCOS-98, edited by P. Nath (World Scientific, Singapore, 1998).
* (14) M. Drees and M. Rauch, Eur. Phys. J. C 29, 573 (2003).
* (15) H. P. Mumm et al., Rev. Sci. Instrum. 75, 5343 (2004).
* (16) E. G. Wasserman, Ph.D. thesis, Harvard University, 1994.
* (17) H. P. Mumm, Ph.D. thesis, University of Washington, 2004.
* (18) J. Sempau et al., Nucl. Instrum. Meth. B 132, 377 (1997).
* (19) J. Martin et al., Phys. Rev. C 73, 015501 (2006).
* (20) D. A. Dahl, Int. J. Mass Spectrom. 200, 3 (2000).
* (21) F. Calaprice, in Hyperfine Interactions (Springer, Netherlands, 1985), Vol. 22.
[arXiv:1104.2778, Public Domain]
[arXiv:1104.2785]
# A UV flux drop preceding the X-ray hard-to-soft state transition during the
2010 outburst of GX 339$-$4
Zhen Yan1 and Wenfei Yu1
1Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical
Observatory, Chinese Academy of Sciences,
80 Nandan Road, Shanghai 200030, China. E-mail: wenfei@shao.ac.cn
###### Abstract
The black hole X-ray transient GX 339$-$4 was observed with the Swift
satellite across the hard-to-soft state transition during its 2010 outburst.
The ultraviolet (UV) flux measured with the filter UVW2 of the Swift/UVOT
started to decrease nearly 10 days before the drop in the hard X-ray flux when
the hard-to-soft state transition started. The UV flux $F_{\mathrm{UV}}$
correlated with the X-ray flux $F_{\mathrm{X}}$ as $F_{\mathrm{UV}}\propto
F_{\mathrm{X}}^{0.50\pm 0.04}$ before the drop in the UV flux. During the UV
drop lasting about 16 days, the X-ray flux in 0.4–10 keV was increasing. The
drop in the UV flux indicates that the jet started to quench 10 days before
the hard-to-soft state transition seen in X-rays, which is unexpected.
###### keywords:
accretion, accretion disks — black hole physics —ISM: jets and outflows—
X-rays:binaries — X-rays:individual (GX 339$-$4)
††pagerange: A UV flux drop preceding the X-ray hard-to-soft state transition
during the 2010 outburst of GX 339$-$4–References††pubyear: 2011
## 1 Introduction
Most known black hole (BH) X-ray binaries (XRB) are transient sources. They
usually stay in quiescence and occasionally undergo dramatic X-ray outbursts.
These BH transients are known to experience different X-ray spectral states
with particular X-ray spectral, timing and radio properties during an outburst
(McClintock & Remillard, 2006), including two major spectral states. One is
called the soft state, the X-ray spectrum of which is dominated by a multi-
color blackbody from an accretion disk (Shakura & Sunyaev, 1973). The other is
called the hard state, the X-ray spectrum of which is dominated by a powerlaw
with a photon index of $1.5<\Gamma<2.1$ (McClintock & Remillard, 2006). Radio
emission with a flat spectrum, which is thought as the self-absorbed
synchrotron emission from a compact steady jet, is also a characteristic of
the hard state (Fender, 2001). Suppressed radio emission in the soft state is
thought to indicate the jet is quenched (e.g. Fender et al., 1999). During the
transition from the hard state to the soft state, the X-ray spectral
properties are rather transitional and complex. This state is called the
intermediate state or the steep powerlaw state, the X-ray spectrum of which is
composed of a steep powerlaw component ($\Gamma\geq 2.4$) sometimes combined
with a comparable disk component (McClintock & Remillard, 2006). Many studies
attempted to explain spectral state transitions (e.g. Esin et al., 1997), but
to date there are still discrepancies between theories and observations
probably due to the limitation of the underlying stationary accretion
assumption, which is inconsistent with the observational evidence that non-
stationary accretion plays a dominating role in determination of the
luminosity threshold at which the hard-to-soft state transition occurs (Yu &
Yan, 2009).
In the hard state, the radio flux and X-ray flux, if not considering outliers,
display a correlation in a form of $L_{\mathrm{radio}}\propto
L_{\mathrm{X}}^{0.5-0.7}$(e.g. Corbel et al., 2000, 2003, 2008; Gallo et al.,
2003), indicating the coupling between the jet and the accretion flow. A
similar correlation also holds between the optical/near-infrared (NIR) and the
X-ray flux (Homan et al., 2005; Russell et al., 2006; Coriat et al., 2009),
which suggests that the optical/NIR emission has the same origin as that of
the radio emission, i.e. the optically thick part of the jet spectrum can
extend to NIR or even optical band (Corbel & Fender, 2002; Buxton & Bailyn,
2004; Homan et al., 2005; Russell et al., 2006). In contrast to the extensive
studies of the relations between the radio, optical/NIR emission and the X-ray
emission, there have been far fewer studies of the ultraviolet (UV) emission in
different X-ray spectral states because of limited UV observations. Swift/UVOT
has three filters to cover the UV band. Simultaneous observations with the
UVOT and the XRT onboard Swift provide unprecedented opportunities for us to
study the UV emission in different X-ray spectral states.
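The power-law index of such flux-flux correlations is usually obtained as the slope of a linear fit in log-log space. A minimal sketch on synthetic data (the normalization and the input index of 0.6 are arbitrary choices for illustration):

```python
import numpy as np

def correlation_index(L_x, L_band):
    """Least-squares slope of log(L_band) vs log(L_x). For hard-state black
    hole binaries a slope near 0.5-0.7 corresponds to the radio/X-ray (and
    optical-NIR/X-ray) correlations discussed in the text."""
    slope, intercept = np.polyfit(np.log10(L_x), np.log10(L_band), 1)
    return slope

# Synthetic luminosities drawn from L_band = const * L_x**0.6 (illustrative).
L_x = np.logspace(34, 38, 20)
L_r = 1e-10 * L_x ** 0.6
```

Fitting the synthetic pair recovers the input index of 0.6.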
GX 339$-$4 is one of the best-studied BH X-ray transients, having undergone
frequent outbursts over the past decades. Because its donor star is very faint
(Shahbaz et al., 2001), GX 339$-$4 during outbursts is therefore an excellent
target to investigate the UV/optical/NIR emission from the accretion and
ejection in the various X-ray spectral states in detail. In this paper, we
focus on reporting the results obtained from our proposed Swift observations
of GX 339$-$4 during the 2010 outburst. We found the UV flux decreased well
before the hard-to-soft state transition occurred.
## 2 Observations and data reduction
In the past two decades GX 339$-$4 has undergone rather frequent outbursts.
There is a unique correlation found between the peak flux in the hard X-ray
and the outburst waiting time in this source (Yu et al., 2007; Wu et al.,
2010). This empirical relation allowed us to predict the peak flux in the hard
X-ray in an on-going outburst if the time when the hard X-ray flux reaches its
peak can be determined (Wu et al., 2010). GX 339$-$4 entered a new outburst
at the beginning of 2010 (Yamaoka et al., 2010; Tomsick, 2010; Lewis et
al., 2010), and then experienced an extended flux plateau of about 50 days
which was unusual. The accretion of disk mass during the extended plateau
period was expected to make the actual hard X-ray peak lower than the
prediction. However, we were still able to predict the time of the hard-to-soft
state transition based on a simple linear extrapolation of the rising trend
and the relation predicted in Wu et al. (2010). We requested a series of
target of opportunity (TOO) observations with Swift spanning about 20 days,
which indeed covered the hard-to-soft state transition. These observations,
together with other TOO observations immediately before or after (under
Target IDs 30943 and 31687) were analysed, which covered the period from January
21, 2010 to June 19, 2010 (MJD 55217–55366).
Due to high count rates, all the XRT observations of GX 339$-$4 during this
period were automatically taken in the windowed timing (WT) mode which only
provided one-dimensional imaging. We processed all the initial event data with
the task xrtpipeline to generate cleaned event files by standard quality cuts
for the observations. No cleaned event data were generated for
observation 00030943009, so we excluded it from our XRT data
analysis. We extracted all the cleaned WT mode events using XSELECT v2.4 as
part of the HEASOFT v6.11 package. Source extraction was performed with a
rectangular region of 20 pixels $\times$ 40 pixels centered at the coordinate
of GX 339$-$4 and the background extraction was performed with two rectangular
regions of 20 pixels $\times$20 pixels whose centers are 50 pixels away from
the coordinate of GX 339$-$4. Some observations with high count rates ($>$
150 c/s) are affected by the pile-up. The easiest way to avoid distortions
produced by the pile-up is to extract spectra by eliminating the events in the
core of the point spread function (PSF). The method by investigating the grade
distribution (Pagani et al., 2006; Mineo et al., 2006) was used to determine
the excluded region, which was chosen as the central 2 pixels$\times$20
pixels, 4 pixels$\times$20 pixels, 6 pixels$\times$20 pixels and 8
pixels$\times$20 pixels when the count rate was in the range of 150–200 c/s,
200–300 c/s, 300–400 c/s and 400–500 c/s, respectively. After events
selection, we produced the ancillary response files (ARFs) by the task
xrtmkarf. The process included inputting exposure maps with xrtexpomap and
applying the vignetting and PSF corrections. The standard redistribution
matrix file (RMF, swxwt0to2s6_20010101v014.rmf) in the CALDB database was used
for the spectral analysis. All the spectra were grouped to at least 20 counts
bin-1 using grppha to ensure valid results using $\chi^{2}$ statistical
analysis.
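The rate-dependent pile-up exclusion described above amounts to a simple lookup; a sketch, with widths in pixels and the count-rate thresholds as quoted in the text (each excluded region is width $\times$ 20 pixels):

```python
def pileup_exclusion_width(count_rate):
    """Width (pixels) of the central column excluded from WT-mode source
    extraction, following the rate-dependent prescription in the text.
    Rates at or below 150 c/s need no exclusion."""
    for limit, width in ((150, 0), (200, 2), (300, 4), (400, 6), (500, 8)):
        if count_rate <= limit:
            return width
    raise ValueError("count rate above 500 c/s not covered in the text")
```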
Swift/UVOT has six filters: V, B, U, UVW1, UVM2 and UVW2. We applied aspect
corrections to all the UVOT data by using uvotskycorr. For each observation,
the sky image of each filter was summed using uvotimsum in order to increase
photon statistics; spectral files compatible with XSPEC were then created
with uvot2pha. We performed aperture photometry on the summed
images using uvotsource with an aperture radius of 5″. The background
was estimated from a larger source-free region away from the target. The
photometry results in AB magnitude system are shown in the upper panel of Fig.
2.
## 3 Data analysis and results
Figure 1: From top to bottom, Swift/XRT spectral parameters column density
$N_{H}$, inner disk temperature $T_{in}$, photon index, reduced $\chi^{2}$,
the RXTE/ASM and Swift/BAT lightcurves, and the hardness ratio between the BAT
and the ASM fluxes. The vertical dotted and dashed lines indicate the time
interval during which the source transitioned between the hard and the soft state.
All the XRT spectra were fitted in XSPEC v12.7.0. The photons with energies
below 0.4 keV were ignored during the spectral fits because of hot pixel
problems at lower energy (Kim Page, private communication). The 0.4–10 keV
X-ray spectra during the period of MJD 55217–55293 can be fitted well by a
model composed of an absorbed powerlaw with a photon index $\sim$ 1.61–1.79,
which indicates the source was in the hard state. Then the photon index
steepened ($\Gamma\sim$ 2.09–2.49) in the period of MJD 55297–55324 and an
additional disk component was required after MJD 55302, indicating the source
was leaving from the hard state to the soft state (see also Motta et al.,
2010). The fitted $N_{H}$ shows a sudden increase during the state transition.
A possible reason for the high absorption is that the powerlaw model does not
exhibit a low energy cut-off which is caused by the seed photon of the
Comptonization. We fitted the XRT spectra in order to get the photon index and
to estimate the X-ray flux, and the discussion of the physical spectral
components is beyond the scope of this paper. So we used the simple powerlaw
model rather than the Comptonization models for simplicity. The 0.4–10 keV
X-ray spectra taken after MJD 55334 can be fitted with an absorbed multi-color
disk blackbody model alone ($T_{in}\sim$ 0.84–0.87 keV), indicating that the
source had entered the soft state. The best-fit spectral parameters are
plotted in Fig. 1. These model parameters were used to show the spectral
states the source resided in (see Fig. 1) and to estimate the 0.4–10 keV X-ray
flux (see Fig. 2). We also plotted the public daily lightcurves of GX 339$-$4
obtained from the RXTE/ASM and the Swift/BAT in Fig. 1. The X-ray spectral
states are distinguishable by the hardness ratio between the count rates in
the BAT (15–50 keV) and in the ASM (2–12 keV) (see also Yu & Yan, 2009; Tang
et al., 2011), consistent with the XRT spectral results (see Fig. 1).
Figure 2: The X-ray and UV/optical lightcurves of GX339$-$4 across the hard-
to-soft state transition during the 2010 outburst. Spectral state boundaries
are indicated by the vertical lines. The UVOT photometry results are plotted
in the upper panel. The UVW2 filter was used through all the observations. The
UV flux initially increased with the X-ray flux, and then decreased after MJD
55285, while the X-ray flux was still increasing.
The upper panel of Fig. 2 shows the magnitudes of the six UVOT filters in
different X-ray spectral states. The UVW2 filter was used throughout all the
observations. The flux in the UVW2 band increased with the X-ray flux at the
beginning, and then began to decline about 10 days before the hard-to-soft
state transition started. The UVW2 flux decreased by about 62% over a period
of 16 days starting at MJD 55285 and then remained at a steady level (see Fig.
2). The Yale SMARTS XRB project (http://www.astro.yale.edu/buxton/GX339/) has
provided daily optical/NIR photometric coverage of this outburst. The
optical/NIR flux also showed a drop (see also Russell et al., 2010a; Cadolle
Bel et al., 2011; Buxton et al., 2012), which was simultaneous with the UV
flux drop within the uncertainties of the sampling. The flux decreased by
about 80% in the V band, 84% in the I band, 92% in the J band and 94% in the H
band. The drop amplitudes were thus wavelength dependent, i.e., the flux
decreased more at lower frequencies than at higher ones (see also Homan et
al., 2005). Moreover, a similar drop was also seen in the radio flux (Zhang et
al., 2012), implying a common physical origin for the flux drops across these
bands.
We further investigated how the UV flux correlated with the X-ray flux in the
hard state and across the hard-to-soft state transition during the 2010
outburst of GX 339$-$4 (see Fig. 3). The UV flux correlated with the
unabsorbed 0.4–10 keV X-ray flux in a power-law form with an index of $\sim
0.50\pm 0.04$ before its drop. The X-ray flux and the UV flux in the soft
state did not follow this correlation (see Fig. 3), indicating that the
origins of the UV emission differ between the hard state and the soft state.
Figure 3: UVW2 band flux vs. 0.4–10 keV X-ray flux. The circles represent the
data when the UV flux increased, the diamonds represent the data when the UV
flux decreased, the triangles represent the data in the soft state, and the
squares represent the rest of the data. The data point with maximal UV flux
marks both the end of the UV rise and the start of the UV decline, so two
symbols (a circle and a diamond) are plotted for it. The solid line shows the
best power-law fit to the data during the UV flux rise (circles), with an
index of $0.50\pm 0.04$. The dotted line shows how the X-ray and UV fluxes
evolve, and the arrow indicates the beginning.
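The index of such a flux-flux correlation is simply the slope of a straight-line fit in log-log space. A minimal sketch with synthetic fluxes (the actual fit used the measured UVW2 and unabsorbed 0.4–10 keV fluxes; the normalization and noise level here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
f_x = np.logspace(-10, -8, 20)  # synthetic X-ray fluxes (erg s^-1 cm^-2)
# inject F_UV ∝ F_X^0.5 with 2% multiplicative scatter
f_uv = 1e-5 * f_x**0.5 * rng.normal(1.0, 0.02, f_x.size)

# the slope of log F_UV vs log F_X is the power-law index
index, lognorm = np.polyfit(np.log10(f_x), np.log10(f_uv), 1)
print(round(index, 2))  # recovers a value close to the injected 0.5
```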
## 4 Discussion
The origin of the UV emission of black hole X-ray binaries in the hard state
is not well understood. The UV emission could come from the jet, the cool
disk, the hot ADAF, or X-ray reprocessing in the accretion disk (Markoff et
al., 2003; Yuan et al., 2005; Rykoff et al., 2007; Maitra et al., 2009). Let
us examine these options in detail. The correlation of the form of
$F_{\mathrm{UV}}\propto F_{\mathrm{X}}^{0.50\pm 0.04}$ (see Fig. 3) before the
UV flux drop is quantitatively inconsistent with a disk origin of the UV
emission, since the UV flux $F_{\mathrm{UV}}$ should be proportional to the
X-ray flux as $F_{\mathrm{X}}^{0.3}$ for a simple viscously heated disk (see
Russell et al., 2006). X-ray reprocessing in the accretion disk would be a
promising option were it not for the UV flux drop before the hard-to-soft
transition that we report in this paper. van Paradijs & McClintock (1994)
predicted that the emission from X-ray reprocessing should be proportional to
$L_{\mathrm{X}}^{0.5}$; this was seen in XTE J1817$-$330 during the decay of
its 2006 outburst, in which the UV luminosity was found to correlate with the
2–10 keV X-ray luminosity as $L_{X}^{\alpha}$ with $\alpha=0.47\pm 0.02$
(Rykoff et al., 2007). In the case of GX 339$-$4, however, the UV flux drop
was accompanied by an X-ray increase, arguing against the X-ray reprocessing
interpretation. Therefore an origin of the pre-drop UV emission in X-ray
reprocessing by the outer disk is also ruled out.
The flux drop in the radio or optical/NIR bands that is usually associated
with the hard-to-soft state transition is thought to be due to jet quenching
(Fender et al., 1999; Homan et al., 2005; Coriat et al., 2009). In the
observations we analyzed, the drop of the UV flux started about 10 days before
the transition. This flux drop is similar to those in the radio, NIR, and
optical bands, which is highly suggestive of a common origin, so the drop in
the UV flux should be due to jet quenching as well.
After the jet quenched, the UV/optical/NIR fluxes declined to a steady flux
level, indicating that components other than the compact jet also contributed
to the UV/optical/NIR emission. The likely source is X-ray irradiation, since
the steady UV/optical/NIR fluxes persist through the soft state, and we found
that the irradiation model diskir (Gierliński et al., 2009) can provide an
acceptable fit to the UVOT and XRT data in the soft state. Moreover, when the
hard X-ray flux reached its maximum, the UV/optical emission was at a very low
level (see Fig. 2), indicating that X-ray irradiation contributed little to
the UV/optical/NIR emission at that time. Therefore, the dominant
UV/optical/NIR emission during the rising phase of the outburst was from the
compact jet.
The flux-density drops in the UV/optical/NIR bands (peak flux density minus
the steady flux density after the jet quenching) should all be contributed by
the compact jet, if the X-ray irradiation also contributed a steady level of
UV/optical/NIR flux density during the jet quenching. The dereddened
flux-density drops in the UV/optical/NIR bands can then be used to construct a
spectral energy distribution (SED) of the emission from the compact jet.
However, the observed SED in the UV/optical/NIR bands is strongly affected by
extinction, which is not well constrained for GX 339$-$4. We therefore
dereddened the flux-density drops using several different color excesses with
the extinction law of Cardelli et al. (1989). To convert the magnitudes
obtained from SMARTS to flux densities, we used the zero-point fluxes of
Bessell et al. (1998). We then constructed the SED over the wavelength range
from radio to UV/optical/NIR in order to investigate the jet spectrum. The
UVW2 flux density reached its peak on MJD 55285; the closest radio observation
(on MJD 55286) showed radio flux densities of 22.38 mJy and 25.07 mJy at 5.5
GHz and 9 GHz, respectively (Zhang et al., 2012). The two radio flux densities
can be well fitted by a power-law with an index of $\sim$ 0.19, which is
consistent with optically thick synchrotron emission from the jet. Using the
color excess of $E(B-V)=1.2$ given by Zdziarski et al. (1998), the
extrapolation of the radio spectrum coincides with the dereddened flux-density
drop in the UVW2 band, indicating that the optically thick spectrum of the jet
might have extended to the UVW2 band. But the dereddened flux-density drops in
the V, I, J and H bands all lie below the extrapolation of the radio spectrum.
We found the UV-NIR SED can be well fitted with two power-law components: from
the H band to the I band the power-law index is about $-0.27$, and from the I
band to the UVW2 band it is about $+1.16$. The data at different wavelengths
were not exactly simultaneous, so this complex SED may be caused by large
variations of the jet spectrum in the optical/NIR bands on timescales of less
than one day (Gandhi et al., 2011). Using instead the color excess of
$E(B-V)=0.94$ given by Homan et al. (2005), the dereddened flux-density drops
in the UV/optical/NIR bands all lie below the extrapolation of the radio
spectrum and also show a turnover feature in the I band. This turnover can be
well fitted with two power-law components, with an index of $\sim-0.66$ from
the H band to the I band and $\sim+0.16$ from the I band to the UVW2 band.
Using the much smaller color excess of $E(B-V)=0.66$ given by Motch et al.
(1985), the dereddened flux-density drops in the UV/optical/NIR bands can be
well fitted by a single power-law with an index of $-1.06$, which is roughly
consistent with the optically thin part of the jet spectrum. Therefore,
different color excesses lead to different conclusions about the jet spectral
properties: the jet spectrum cannot be well constrained in the UV/optical/NIR
bands because of the large uncertainties in the estimated extinction.
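The dereddening step itself is a one-line correction per band, and the sensitivity to the adopted color excess can be sketched directly. The ratio $A_{\mathrm{UVW2}}/E(B-V)$ below is an arbitrary placeholder, not a value from the text or from Cardelli et al. (1989):

```python
def deredden_flux(f_obs_mJy, a_lambda_mag):
    """Correct an observed flux density for a_lambda magnitudes of
    extinction: F_dered = F_obs * 10^(0.4 * A_lambda)."""
    return f_obs_mJy * 10 ** (0.4 * a_lambda_mag)

# Hypothetical ratio A_UVW2 / E(B-V); the real value depends on the
# adopted extinction law and is not quoted in the text.
k_uvw2 = 8.0
for ebv in (0.66, 0.94, 1.2):  # the three color excesses discussed above
    a_uvw2 = k_uvw2 * ebv
    print(ebv, round(deredden_flux(1.0, a_uvw2), 1))
```

Even for a fixed observed flux, the three color excesses change the dereddened UVW2 flux by orders of magnitude, which is why the jet SED slope flips sign between the cases discussed above.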
However, because there is a positive correlation between the UV and the X-ray
fluxes during the UV flux rise ($F_{\mathrm{UV}}\propto
F_{\mathrm{X}}^{0.50\pm 0.04}$, see also Fig. 3), we suggest that the
optically thick part of the jet spectrum could extend to the UVW2 band (see,
e.g., the discussion in Coriat et al., 2009). If this is true, the break
frequency between the optically thick and the optically thin parts of the jet
spectrum should have been in the UVW2 band or higher frequencies. Then a lower
limit of $-1.17$ on the spectral slope of the optically thin synchrotron
emission of the jet can be given by a power-law fit from the UVW2 band to 0.4
keV, which is steeper than the typical spectral slope ($\sim-0.7$) of the
optically thin synchrotron jet emission. Spectral slopes steeper than $-0.7$
have been found before, for example, $-0.95$ in GX 339$-$4 (Dinçer et al.,
2012) and $-1.5$ in XTE J1550$-$564 (Russell et al., 2010b).
If the UV flux is mainly from optically thick synchrotron emission of the
compact jet, the jet should be very energetic. We estimated that the radiative
flux of the jet at the UV peak was $3.21\times
10^{-9}~{}\mathrm{erg}~{}\mathrm{s}^{-1}~{}\mathrm{cm}^{-2}$ by extrapolating
the radio spectrum from 5.5 GHz to the UVW2 band. The corresponding 0.1–100
keV X-ray flux, extrapolated from the 0.4–10 keV spectrum, was about
$2.53\times 10^{-8}~{}\mathrm{erg}~{}\mathrm{s}^{-1}~{}\mathrm{cm}^{-2}$,
corresponding to $\sim 0.15~{}L_{\mathrm{Edd}}$ for a distance of 8 kpc and a
mass of $10~{}M_{\sun}$ (Zdziarski et al., 2004). Taking a reasonable jet
radiative efficiency of about 0.05 and a Doppler correction factor of about 1
(Fender, 2001; Corbel & Fender, 2002), the jet power $P_{\mathrm{jet}}$ is
about 2.5 times the X-ray luminosity $L_{\mathrm{X}}$. This is larger than
previous estimates of $P_{\mathrm{jet}}/L_{\mathrm{X}}$ in the hard state of
GX 339$-$4, which were all less than 1 (e.g. Corbel & Fender, 2002; Dinçer et
al., 2012). Moreover, the jet power at the UV flux peak was greater than the
X-ray luminosity during a bright hard state of more than 0.1
$L_{\mathrm{Edd}}$, much higher than the critical value of $4\times
10^{-5}L_{\mathrm{Edd}}$ determined by Fender et al. (2003). It is worth
noting that a much lower jet power ($P_{\mathrm{jet}}/L_{\mathrm{X}}\sim
0.05$) was seen in GX 339$-$4 at a similar X-ray luminosity of $\sim
0.12~{}L_{\mathrm{Edd}}$ (Corbel & Fender, 2002). Similar X-ray luminosities
therefore correspond to a large range of jet powers in the same source. This
phenomenon is similar to the hysteresis effect of spectral state transitions,
in which the mass accretion rate is not the dominant parameter (Yu & Yan,
2009). This suggests that the jet power might be determined by non-stationary
accretion, so that it depends only weakly on the mass accretion rate or X-ray
luminosity.
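The jet-power comparison above is straightforward arithmetic and can be checked directly (fluxes taken from the text; the radiative efficiency of 0.05 and Doppler factor of $\sim$1 are the stated assumptions):

```python
f_jet_rad = 3.21e-9   # radiative jet flux at the UV peak (erg s^-1 cm^-2)
f_x       = 2.53e-8   # extrapolated 0.1-100 keV X-ray flux (erg s^-1 cm^-2)
eta       = 0.05      # assumed jet radiative efficiency
doppler   = 1.0       # assumed Doppler correction factor

# Total jet power scales as radiative flux / efficiency; at a common
# distance, the flux ratio equals the luminosity (power) ratio.
p_jet_over_l_x = (f_jet_rad / eta) * doppler / f_x
print(round(p_jet_over_l_x, 1))  # ~2.5, as quoted in the text
```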
## Acknowledgements
We would like to thank the anonymous referees for valuable comments and
suggestions. We thank Kim Page, Chris Done, Albert Kong and Wenda Zhang for
help with the Swift/XRT data analysis, Andrzej A. Zdziarski and Feng Yuan for
useful discussions, and the Swift team for scheduling the ToO observations.
We also acknowledge the use of data obtained through the High Energy
Astrophysics Science Archive Research Center Online Service, provided by the
NASA/Goddard Space Flight Center. This work is supported by the National
Natural Science Foundation of China (10773023, 10833002,11073043), the One
Hundred Talents project of the Chinese Academy of Sciences, the National Basic
Research Program of China (973 project 2009CB824800), and the starting funds
at the Shanghai Astronomical Observatory.
## References
* Bessell et al. (1998) Bessell M. S., Castelli F., Plez B., 1998, A&A, 333, 231
* Buxton & Bailyn (2004) Buxton M. M., Bailyn C. D., 2004, ApJ, 615, 880
* Buxton et al. (2012) Buxton M. M., Bailyn C. D., Capelo H. L., Chatterjee R., Dinçer T., Kalemci E., Tomsick J. A., 2012, AJ, 143, 130
* Cadolle Bel et al. (2011) Cadolle Bel M. et al., 2011, A&A, 534, A119
* Cardelli et al. (1989) Cardelli J. A., Clayton G. C., Mathis J. S., 1989, ApJ, 345, 245
* Corbel & Fender (2002) Corbel S., Fender R. P., 2002, ApJ, 573, L35
* Corbel et al. (2000) Corbel S., Fender R. P., Tzioumis A. K., Nowak M., McIntyre V., Durouchoux P., Sood R., 2000, A&A, 359, 251
* Corbel et al. (2008) Corbel S., Koerding E., Kaaret P., 2008, MNRAS, 389, 1697
* Corbel et al. (2003) Corbel S., Nowak M. A., Fender R. P., Tzioumis A. K., Markoff S., 2003, A&A, 400, 1007
* Coriat et al. (2009) Coriat M., Corbel S., Buxton M. M., Bailyn C. D., Tomsick J. A., Körding E., Kalemci E., 2009, MNRAS, 400, 123
* Dinçer et al. (2012) Dinçer T., Kalemci E., Buxton M. M., Bailyn C. D., Tomsick J. A., Corbel S., 2012, ApJ, 753, 55
* Esin et al. (1997) Esin A. A., McClintock J. E., Narayan R., 1997, ApJ, 489, 865
* Fender et al. (1999) Fender R. et al., 1999, ApJ, 519, L165
* Fender (2001) Fender R. P., 2001, MNRAS, 322, 31
* Fender et al. (2003) Fender R. P., Gallo E., Jonker P. G., 2003, MNRAS, 343, L99
* Gallo et al. (2003) Gallo E., Fender R. P., Pooley G. G., 2003, MNRAS, 344, 60
* Gandhi et al. (2011) Gandhi P. et al., 2011, ApJ, 740, L13
* Gierliński et al. (2009) Gierliński M., Done C., Page K., 2009, MNRAS, 392, 1106
* Homan et al. (2005) Homan J., Buxton M., Markoff S., Bailyn C. D., Nespoli E., Belloni T., 2005, ApJ, 624, 295
* Lewis et al. (2010) Lewis F., Russell D. M., Cadolle Bel M., 2010, The Astronomer’s Telegram, 2459
* Maitra et al. (2009) Maitra D., Markoff S., Brocksopp C., Noble M., Nowak M., Wilms J., 2009, MNRAS, 398, 1638
* Markoff et al. (2003) Markoff S., Nowak M., Corbel S., Fender R., Falcke H., 2003, A&A, 397, 645
* McClintock & Remillard (2006) McClintock J. E., Remillard R. A., 2006, Black hole binaries, Lewin, W. H. G. & van der Klis, M., ed., pp. 157–213
* Mineo et al. (2006) Mineo T. et al., 2006, Nuovo Cimento B Serie, 121, 1521
* Motch et al. (1985) Motch C., Ilovaisky S. A., Chevalier C., Angebault P., 1985, Space Sci. Rev., 40, 219
* Motta et al. (2010) Motta S., Belloni T., Muñoz Darias T., 2010, The Astronomer’s Telegram, 2545
* Pagani et al. (2006) Pagani C. et al., 2006, ApJ, 645, 1315
* Russell et al. (2010a) Russell D. M., Buxton M., Lewis F., Altamirano D., 2010a, The Astronomer’s Telegram, 2547
* Russell et al. (2006) Russell D. M., Fender R. P., Hynes R. I., Brocksopp C., Homan J., Jonker P. G., Buxton M. M., 2006, MNRAS, 371, 1334
* Russell et al. (2010b) Russell D. M., Maitra D., Dunn R. J. H., Markoff S., 2010b, MNRAS, 405, 1759
* Rykoff et al. (2007) Rykoff E. S., Miller J. M., Steeghs D., Torres M. A. P., 2007, ApJ, 666, 1129
* Shahbaz et al. (2001) Shahbaz T., Fender R., Charles P. A., 2001, A&A, 376, L17
* Shakura & Sunyaev (1973) Shakura N. I., Sunyaev R. A., 1973, A&A, 24, 337
* Tang et al. (2011) Tang J., Yu W.-F., Yan Z., 2011, Research in Astronomy and Astrophysics, 11, 434
* Tomsick (2010) Tomsick J. A., 2010, The Astronomer’s Telegram, 2384
* van Paradijs & McClintock (1994) van Paradijs J., McClintock J. E., 1994, A&A, 290, 133
* Wu et al. (2010) Wu Y. X., Yu W., Yan Z., Sun L., Li T. P., 2010, A&A, 512, A32
* Yamaoka et al. (2010) Yamaoka K. et al., 2010, The Astronomer’s Telegram, 2380
* Yu et al. (2007) Yu W., Lamb F. K., Fender R., van der Klis M., 2007, ApJ, 663, 1309
* Yu & Yan (2009) Yu W., Yan Z., 2009, ApJ, 701, 1940
* Yuan et al. (2005) Yuan F., Cui W., Narayan R., 2005, ApJ, 620, 905
* Zdziarski et al. (2004) Zdziarski A. A., Gierliński M., Mikołajewska J., Wardziński G., Smith D. M., Harmon B. A., Kitamoto S., 2004, MNRAS, 351, 791
* Zdziarski et al. (1998) Zdziarski A. A., Poutanen J., Mikolajewska J., Gierlinski M., Ebisawa K., Johnson W. N., 1998, MNRAS, 301, 435
* Zhang et al. (2012) Zhang et al., 2012, in preparation
arxiv-papers | 2011-04-14T14:48:46 | 2024-09-04T02:49:18.266818
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Zhen Yan and Wenfei Yu",
"submitter": "Zhen Yan",
"url": "https://arxiv.org/abs/1104.2785"
}
|
1104.2977
|
# Near-Infrared Imaging Polarimetry Toward Serpens South: Revealing the
Importance of the Magnetic Field
K. Sugitani,1 F. Nakamura,2 M. Watanabe,3 M. Tamura,2 S. Nishiyama,4 T.
Nagayama,5 R. Kandori,2 T. Nagata,4 S. Sato,5 R. A. Gutermuth,6 G. W. Wilson,7
and R. Kawabe8

1. Graduate School of Natural Sciences, Nagoya City University, Mizuho-ku, Nagoya 467-8501, Japan; sugitani@nsc.nagoya-cu.ac.jp
2. National Astronomical Observatory, Mitaka, Tokyo 181-8588, Japan
3. Department of Cosmosciences, Hokkaido University, Kita-ku, Sapporo, Hokkaido 060-0810, Japan
4. Department of Astronomy, Kyoto University, Sakyo-ku, Kyoto 606-8502, Japan
5. Department of Astrophysics, Nagoya University, Chikusa-ku, Nagoya 464-8602, Japan
6. Smith College, University of Massachusetts, Northampton, MA 01063
7. Department of Astronomy, University of Massachusetts, Amherst, MA 01003
8. Nobeyama Radio Observatory, Nobeyama, Minamimaki, Minamisaku, Nagano 384-1305, Japan
###### Abstract
The Serpens South embedded cluster, which is located at the constricted part
of a long filamentary infrared dark cloud, is believed to be in a very early
stage of cluster formation. We present results of near-infrared ($JHK$s)
polarization observations toward the filamentary cloud. Our polarization
measurements of near-infrared point sources indicate a well-ordered global
magnetic field that is perpendicular to the main filament, implying that the
magnetic field is likely to have controlled the formation of the main
filament. On the other hand, the sub-filaments, which converge on the central
part of the cluster, tend to run along the magnetic field. The global magnetic
field appears to be curved in the southern part of the main filament. Such
morphology is consistent with the idea that the global magnetic field is
distorted by gravitational contraction along the main filament toward the
northern part, which contains the larger mass. Applying the
Chandrasekhar-Fermi method, the magnetic field strength is roughly estimated
to be a few $\times$100 $\mu$G, suggesting that the filamentary cloud is close
to magnetically critical as a whole.
polarization — stars: formation — ISM: magnetic fields — ISM: structure — open
clusters and associations: individual (Serpens South) — infrared: stars
## 1 Introduction
It is now widely accepted that stars form predominantly within clusters inside
dense clumps of molecular clouds that are turbulent and magnetized. However,
how clusters form in such dense clumps remains poorly understood. This is due
in part to the lack of the observational characterization of processes
involved in cluster formation. In particular, the role of magnetic fields in
cluster formation is a matter of debate.
Recent numerical simulations of cluster formation suggest that a moderately-
strong magnetic field is needed to impede star formation in molecular clouds
in order for the simulated star formation rates to match observed values
(Vázquez-Semadeni et al., 2005; Li & Nakamura, 2006; Nakamura & Li, 2007;
Price & Bate, 2008). In contrast, Padoan et al. (2001) claim that the magnetic
field in molecular clouds should be significantly weak (a few $\mu$G) and that
the turbulent compression largely controls the structure formation in
molecular clouds on scales of a few to several parsecs. Such a very weak
magnetic field is necessary to reproduce the core mass spectrum that resembles
the stellar initial mass spectrum in shape, in the context of their turbulent
fragmentation scenario (Padoan & Nordlund, 2002). In this picture, strong
magnetic fields sometimes observed in dense clumps and cores are due to local
amplification by the turbulent compression. Recent Zeeman measurements in
molecular clouds and cores appear to support this idea (Crutcher et al.,
2010). If this is the case, the magnetic fields associated with cluster-
forming clumps are expected to be distorted significantly by the supersonic
turbulent flows.
In order to characterize the magnetic structure of cluster-forming clumps, it
is important to uncover the global magnetic field structures associated with
as many cluster forming clumps as possible. Polarimetry of background star
light is one of the techniques suitable for mapping the magnetic fields in
molecular clouds and cores (Tamura et al., 1987) and the technique is now
strengthened with wide-field polarimetry by employing large array detectors
(e.g., Tamura et al., 2007). As a continuation of our recent polarimetry study
of the Serpens cloud core (Sugitani et al., 2010), which revealed an hour-
glass shaped magnetic field that implies the global gravitational contraction
of the cluster forming clump, we have made near-infrared polarization
observations toward Serpens South.
Serpens South is a nearby ($d\sim$260 pc) embedded cluster, discovered by
Gutermuth et al. (2008) in the context of the Spitzer Gould Belt Legacy
Survey. The Serpens South cluster appears to be located at the constricted
region of a long filamentary cloud, or at the joint region of multiple less
dense filaments, now referred to as a "hub-filament" structure (Myers, 2009).
The Serpens cloud core resembles such morphology, although the filamentary
structures are much more prominent around Serpens South. In the central part
of the cluster, the fraction of protostars (Class 0/I sources) reaches 77%
with a high surface density of 430 pc$^{-2}$. This fraction is the largest
among the cluster-forming regions within the nearest 400 pc. Recently,
Bontemps et al. (2010) discovered 5 Class 0 sources in the central core as
part of the Herschel Gould Belt Survey. These observations strongly suggest a
very recent onset of star formation in this region. Therefore, Serpens South
is one of the best targets for characterizing the initial conditions of
cluster formation.
Here, we present the results of our near-infrared polarization measurements,
and discuss the role of magnetic fields in the formation of this region.
## 2 Observations and Data Reduction
Simultaneous polarimetric observations were carried out toward Serpens South
in $JHK$s-bands on 2009 August 28 and September 3, and 2010 June 25, August 9,
12, 13, 14, and 15 UT with the imaging polarimeter SIRPOL (polarimetry mode of
the SIRIUS camera: Kandori et al., 2006) mounted on the IRSF 1.4 m telescope
at the South Africa Astronomical Observatory. The SIRIUS camera is equipped
with three 1024 $\times$ 1024 HgCdTe (HAWAII) arrays, $JHK$s filters, and
dichroic mirrors, which enables simultaneous $JHK$s observations (Nagashima et
al., 1999; Nagayama et al., 2003). The field of view at each band is
$7^{\prime}.7\times 7^{\prime}.7$ with a pixel scale of $0\arcsec.45$, and a
3$\times$3 mosaic area centered at (R.A., decl.)$_{\rm J2000}$ = (18h30m05s,
$-$02°03′) was covered, including duplicate field observations.
We obtained 10 dithered exposures, each 10 or 15 s long, at four wave-plate
angles (0$\arcdeg$, 22$\arcdeg$.5, 45$\arcdeg$, and 67$\arcdeg$.5 in the
instrumental coordinate system) as one set of observations, and repeated this
nine or six times, respectively. Thus, the total on-target exposure time for
each field was 900 s per wave-plate angle. Sky images were also obtained in
between target observations. Self-sky images were combined with these sky
images to construct better median sky frames. The average seeing was
$\sim$1.$\arcsec$4 at $K$s during the observations, with a range of
$\sim$1.$\arcsec$2–1.$\arcsec$6. Twilight flat-field images were obtained at
the beginning and/or end of the observations.
Standard reduction procedures were applied with IRAF. Aperture polarimetry was
performed at $H$ and $K$s with an aperture of $\sim$FWHM using APHOT of the
DAOPHOT package. No polarimetry was performed for $J$-band sources because
they are far fewer in number than the $H$ and $K$s band sources. The 2MASS
catalog (Skrutskie et al., 2006) was used for absolute photometric
calibration. See Sugitani et al. (2010) for more details of the data reduction
procedure and of the method used to derive the polarization degree ($P$) and
its error ($\Delta P$), and the polarization angle ($\theta$ in P.A.) and its
error ($\Delta\theta$).
## 3 Results and Discussion
### 3.1 Magnetic Field Structure toward Serpens South
We have measured $H$ and $K$s polarization for point sources, in order to
examine the magnetic field structure. Only the sources with photometric errors
of $<$0.1 mag and $P/{\mathit{\Delta}}P>3.0$ were used for analysis.
Figures 1a and 2a present the polarization degree versus the $H-K$s color for
sources having polarization errors of $<0.3\%$ at $H$ and $K$s, respectively.
There is a tendency for the upper envelope of the plotted points to increase
with $H-K$s, and the average polarization degree is slightly smaller at $K$s
than at $H$ for the same $H-K$s color. These are consistent with the origin of
the polarization being dichroic absorption. Therefore, here we assume the
polarization vectors as the directions of the local magnetic field average
over the line of sight of the sources. These sources, except those of low
$H-K$s colors, appear to have the maximum polarization efficiencies of
$P_{H}/([H-K\rm{s}]-0.2)=6.6$ and $P_{K_{\rm s}}/([H-K\rm{s}]-0.2)=4.4$, where
we adopt $H-K\rm{s}=0.2$ as an offset of the intrinsic $H-K\rm{s}$ color,
because the 2MASS point sources with good photometric qualities are mostly
located on the reddening belt that begins from the position of ($J-H$,
$H-K$s)=($\sim$0.7, $\sim$0.2) on the $J-H$ versus $H-K$s diagram, within a
$1\arcdeg\times 1\arcdeg$ area centered at Serpens South.
Figures 3 and 4 present $H$ and $K$s-band polarization vectors of
$P/{\mathit{\Delta}}P>3.0$, excluding sources with polarizations larger than
the above maximum polarization, superposed on 3$\times$3 mosaic $H$ and
$K$s-band images, respectively. In general, the global magnetic field
structures deduced from the $H$ and $K$s-band data seem to be the same. Most
of the vectors are roughly aligned with the NE-SW direction with the exception
of those appearing in the NE and SW corners of the map. Two distribution peaks
are clearly seen in the histogram of the $K$s-band vector angles, while the
sub-peak is only barely visible at $H$ (Figure 5). Our two-Gaussian fit
analysis for the $K$s-band vectors indicates that the mean angle of the main
peak is $52.9\arcdeg\pm 3.0\arcdeg$ with a standard deviation of
$22.8\arcdeg\pm 2.8\arcdeg$ and that the mean angle of the sub-peak is
$0.0\arcdeg\pm 5.2\arcdeg$ with a standard deviation of $16.6\arcdeg\pm
4.3\arcdeg$. The low number density of background stars toward the NE corner
area suggests another molecular cloud or filament, distinct from the Serpens
South cloud, with a different magnetic field structure. Toward the SW corner
area, by contrast, the density of background stars is high and no clear signs
of clouds are seen.
To investigate this further, we examined the sources within the zone enclosed
by the dotted lines toward the SW corner of the maps. Unlike in the main
filament, there is no tendency for the degree of polarization in this zone to
increase with $H-K$s color, and the degrees of polarization are relatively
low, with values $\lesssim$3% at $H$ and $\lesssim$2% at $K$s (except for a
few sources) in Figures 1b and 2b. Thus, the polarization in this SW area may
not be dominated by dichroic absorption in the Serpens South cloud and is
likely contaminated by either foreground or background interstellar
polarization, although it is also possible that the line-of-sight component of
the magnetic field is dominant toward the SW corner area.
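The two-Gaussian decomposition of the angle histogram described above can be sketched with scipy. The angles below are synthetic draws near the fitted peaks, not the actual vector angles:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussians, the model fitted to the angle histogram."""
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

rng = np.random.default_rng(1)
# synthetic vector angles: main peak near 53 deg, sub-peak near 0 deg
angles = np.concatenate([rng.normal(53, 23, 800), rng.normal(0, 17, 200)])
hist, edges = np.histogram(angles, bins=36, range=(-90, 180))
centers = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(two_gauss, centers, hist, p0=(80, 50, 20, 20, 0, 15))
print(round(popt[1]), round(popt[4]))  # recovered means near 53 and 0
```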
We apply a cut on the degree of polarization of at least 3% in $P_{H}$ and 2%
in $P_{K{\rm s}}$ in order to ensure that we are sampling the magnetic field
associated with the dichroic absorption of the material in Serpens South.
Figures 6 and 7 present the polarization vectors for the sources with
$P_{H}>3.0$% and $P_{K{\rm s}}>2.0$%, respectively, superposed on the 1.1 mm
dust continuum image (Gutermuth et al., 2011), which was taken with the
144-pixel bolometer camera AzTEC (Wilson et al., 2008) mounted on the Atacama
Submillimeter Telescope Experiment (ASTE). Figure 8 presents a schematic
drawing of the filaments and the magnetic field directions toward the Serpens
South region. The illustrations of the filaments were deduced from this
continuum image, and the directions of the magnetic field from the $H$-band
polarization vectors in Figure 6.
The 1.1 mm dust continuum image clearly shows a main filament that is
elongated in the NW and SE directions from the central part of Serpens South.
The magnetic field is roughly perpendicular to this main filament along its
length, although some small deviations are seen. This ordered magnetic
configuration suggests that the magnetic field is strong enough to regulate
the whole structure of the main filament and, therefore, that the formation of
the main filament has proceeded under the influence of the magnetic field.
The 1.1 mm dust continuum image also shows sub-filaments that converge on the
central part of the cluster or intersect the main filament (Figures 6, 7 and
8). These sub-filaments are also seen in the column density map of Aquila
(André et al., 2010). In contrast to the main filament, the magnetic field
appears to be nearly parallel to the elongation directions of the sub-
filaments, except in some parts of the sub-filaments. The southern sub-
filament has a more complicated structure than a simple elongated structure
and its global elongation seems parallel to the magnetic field, although some
parts seem perpendicular or diagonal to the magnetic field. The east-southeast
sub-filament is a long filament that stretches from the central part of the
cluster toward the east-southeast direction, and appears to have some parts
parallel to the magnetic field and some other parts diagonal to the magnetic
field. Near the convergent point on the main filament, this ESE sub-filament
appears to change its elongation direction from the ESE-WNW direction to the
E-W direction and to split into a few, thinner filaments that are connected to
the main filament (see Figure 9, and also Fig. 1 of Gutermuth et al., 2008).
Toward this convergent point, the magnetic field also seems to be nearly
perpendicular to the main filament, just as for the other sub-filaments.
These results suggest that all the sub-filaments intersect the main filament
along the magnetic field, i.e., these sub-filaments could be outflows from
the cluster or inflows toward the main filament. Recent CO (3-2) observations
toward Serpens South suggest that CO (3-2) outflow lobes are anti-correlated
with the sub-filaments (Nakamura et al., 2011), favoring the inflow
interpretation. If they are inflows, these sub-filaments could be important
as mass supply agents for the cluster.
Looking at the overall magnetic field structure in the entire observed region,
we can recognize that the magnetic field is curved toward the cluster,
particularly in the southern area of the observed region (Figures 6, 7 and 8).
Although the origin of this large-scale curved magnetic field remains unclear,
such morphology is consistent with the idea that the global magnetic field is
distorted by gravitational contraction along the main filament toward the
northern part of the main filament, which probably contains the majority of
the mass in the Serpens South cloud. However, we should await a detailed
analysis of the dust continuum data (e.g., Gutermuth et al., 2011) and/or
molecular line data in order to know whether the northern part has enough
mass to cause the large-scale curved magnetic field observed here.
### 3.2 Rough Estimate of the Magnetic Field Strength
We roughly estimate the magnetic field strength toward two (north, and south)
zones enclosed by dotted lines in Figure 8, where in the $H$-band polarization
map (Figure 6) the local number density of the polarization vectors is
relatively large and the polarization vectors seem to be locally well-ordered,
using the Chandrasekhar-Fermi (CF) method (Chandrasekhar & Fermi, 1953). Here,
we calculate the plane-of-the-sky component of the magnetic field strength,
$B_{\parallel}$, using the equation of the CF method (e.g., eq. 4 of Houde,
2004) and a correction factor, $C$, for the CF method (Houde, 2004; Houde et
al., 2009), where we adopt $C\sim 0.3$ following Sugitani et al. (2010). In
this calculation, we use the $H$-band sources in Figure 6, because their
number is larger than that of the $K$s-band sources in Figure 7.
For 21 sources toward the north zone, an average $\theta$ in P.A. is
calculated to be 51.1$\arcdeg\pm$9.6$\arcdeg$, and an average $H-K$s color to
be 1.09$\pm$0.15 mag. After removing the dispersion due to the measurement
uncertainty of the polarization angle of 4.7$\arcdeg$, an intrinsic
dispersion of the polarization angle ($\sigma_{\theta}$) of 8.4$\arcdeg$ is
obtained. From the average $H-K$s color, we estimate $A_{\rm V}$ by adopting
the color excess equation of the $H-K$ color, $E(H-K)=[H-K]_{\rm
observed}-[H-K]_{\rm intrinsic}$ (e.g., Lada et al., 1994), and the reddening
law, $E(H-K)=0.065\times A_{\rm V}$ (Cohen et al., 1981). With the standard
gas-to-extinction ratio, $N_{\rm H_{2}}/A_{\rm V}\sim 1.25\times 10^{21}$ cm-2 mag-1
(Dickman, 1978), H2 column density can be roughly obtained as $N\sim 1.9\times
$10^{22}\times E(H-K)$ cm-2. Adopting a distance from the filament axis of
$\sim$2’ ($l/2\sim$0.15 pc at $d\sim 260$ pc) as half the depth of this
area, and $H-K=0.2$ as the intrinsic $H-K$ color, we approximately derive a
column density of $\sim 1.7\times 10^{22}$ cm-2 and a number density ($n\sim
N/l$) of $\sim 1.5\times 10^{4}$ cm-3. On the basis of HCO+ (4-3)
observations, we estimate a typical FWHM velocity width of $\sim$1.5–2 km
s-1 near the cluster (Nakamura et al., 2011). Adopting this value to derive
the velocity dispersion ($\sigma_{v}$), a mean molecular mass ($\mu$=2.3), and
the mass of a hydrogen atom ($m_{\rm H}$), we obtain $B_{\parallel}\sim$150
$\mu$G. For 25 sources toward the south zone, with
$\overline{\theta}=33.8\arcdeg\pm 6.9\arcdeg$, $\overline{[H-K_{\rm
s}]}=1.05\pm 0.18$ mag, the $\theta$ measurement uncertainty of
5.2$\arcdeg$, and $l\sim 0.45$ pc, $B_{\parallel}\sim$200 $\mu$G is obtained.
Here we adopted $d=260$ pc, following Gutermuth et al. (2008) and Bontemps et
al. (2010). However, they also mentioned the possibility of a larger
distance, up to 700 pc. In the case of the larger distance, $B$ becomes
smaller by a factor of $(d/260~{\rm pc})^{-0.5}$ in these estimates.
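As a sanity check, the chain of estimates above can be reproduced with a short script. This is only an illustrative sketch, not the authors' actual pipeline: it takes the quoted north-zone inputs ($\sigma_{\theta}=8.4\arcdeg$, $n\sim 1.5\times 10^{4}$ cm-3, a FWHM of 1.75 km s-1 as the midpoint of the quoted 1.5–2 km s-1 range, and $C=0.3$) and evaluates the CF formula $B=C\sqrt{4\pi\rho}\,\sigma_{v}/\sigma_{\theta}$ in CGS units.

```python
import math

def cf_field_strength(sigma_theta_deg, n_cm3, fwhm_kms, mu=2.3, C=0.3):
    """Chandrasekhar-Fermi estimate of the plane-of-sky field strength,
    B = C * sqrt(4*pi*rho) * sigma_v / sigma_theta, in CGS units (gauss)."""
    m_H = 1.6726e-24                    # mass of a hydrogen atom [g]
    rho = mu * m_H * n_cm3              # mass density [g cm^-3]
    sigma_v = fwhm_kms * 1.0e5 / 2.355  # FWHM -> 1D velocity dispersion [cm s^-1]
    sigma_theta = math.radians(sigma_theta_deg)
    return C * math.sqrt(4.0 * math.pi * rho) * sigma_v / sigma_theta

# North zone: sigma_theta = 8.4 deg, n ~ 1.5e4 cm^-3, FWHM ~ 1.5-2 km/s
B = cf_field_strength(8.4, 1.5e4, 1.75)
print(f"B ~ {B * 1e6:.0f} microgauss")  # of order 100-150 microgauss
```

With these round inputs the result lands near the quoted $\sim$150 $\mu$G for the north zone; the exact value depends on the adopted FWHM within the quoted range.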
The dynamical state of a magnetized cloud core is measured by the ratio
between the cloud mass and the magnetic flux, i.e., the mass-to-flux ratio,
which is given by $M_{\rm cloud}/\Psi=\mu m_{\rm H}N/B$ $\sim$ 0.5–0.7
$\times(M_{\rm cloud}/\Psi)_{\rm critical}$ for these two zones, where
$(M_{\rm cloud}/\Psi)_{\rm critical}$ is the critical value for the magnetic
stability of the cloud ($=(4\pi^{2}G)^{-1/2}$; Nakano & Nakamura, 1978).
Here, we assumed that the magnetic field is almost perpendicular to the line
of sight. The estimated mass-to-flux ratios are close to critical, implying
that the magnetic field is likely to play an important role in the cloud
dynamics and, thus, in star formation.
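The mass-to-flux comparison can likewise be sketched numerically. Again this is only an illustrative check using the quoted round numbers (here the north-zone values $N\sim 1.7\times 10^{22}$ cm-2 and $B\sim 150$ $\mu$G), not the paper's own computation.

```python
import math

def mass_to_flux_ratio(N_cm2, B_gauss, mu=2.3):
    """Mass-to-flux ratio in units of the critical value
    (M/Phi)_crit = (4*pi^2*G)^(-1/2) (Nakano & Nakamura 1978),
    assuming the field lies in the plane of the sky."""
    G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
    m_H = 1.6726e-24  # mass of a hydrogen atom [g]
    return (mu * m_H * N_cm2 / B_gauss) * math.sqrt(4.0 * math.pi**2 * G)

# North zone: N ~ 1.7e22 cm^-2, B ~ 150 microgauss
print(round(mass_to_flux_ratio(1.7e22, 150e-6), 2))  # ~0.7: close to critical
```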
### 3.3 Conclusions
We have presented near-infrared imaging polarimetry toward the Serpens South
cloud. The main findings are summarized as follows.
1\. The $H$ and $K$s-band polarization measurements of near-infrared point
sources indicated a well-ordered global magnetic field that is nearly
perpendicular to the main filament with a mean position angle of $\sim
50\arcdeg$. This implies that the magnetic field is likely to have controlled
the formation of the main filament. On the other hand, the sub-filaments,
which converge on the central part of the cluster, tend to run along the
magnetic field, indicating a possibility that they are inflows or outflows
along the magnetic field.
2\. The global magnetic field appears to be curved in the southern part of the
observed region. This curved morphology suggests that the global magnetic
field is distorted by gravitational contraction along the main filament
toward the northern part, where the mass of the Serpens South cloud seems to
be mostly concentrated.
3\. Applying the Chandrasekhar-Fermi method, the magnetic field strength is
roughly estimated to be a few $\times$100 $\mu$G in two zones along the main
filament. The mass-to-flux ratios in these zones indicate that the
filamentary cloud is close to magnetically critical as a whole.
4\. All the above results show that the magnetic field appears to
significantly influence the dynamics of the Serpens South cloud, which is
associated with the cluster that is considered to be in the very early stage
of cluster formation. This does not appear to support the weak magnetic field
models of molecular cloud evolution/cluster formation (e.g., Padoan et al.,
2001), at least not for the Serpens South cloud.
We are grateful for the support of the staff members of SAAO during the
observation runs. K. S. thanks Y. Nakajima for assisting in the data reduction
with the SIRPOL pipeline package. This work was partly supported by Grant-in-
Aid for Scientific Research (19204018, 20403003) from the Ministry of
Education, Culture, Sports, Science and Technology of Japan.
## References
* André et al. (2010) André, P., et al. 2010, A&A, 518, L102
* Bontemps et al. (2010) Bontemps, S., et al. 2010, A&A, 518, L85
* Chandrasekhar & Fermi (1953) Chandrasekhar, S., & Fermi, E. 1953, ApJ, 118, 113
* Cohen et al. (1981) Cohen, J. G., Frogel, J. A., Persson, S. E., & Elias, J. H. 1981, ApJ, 249, 481
* Crutcher et al. (2010) Crutcher, R. M., Wandelt, B., Heiles, C., Falgarone, E., & Troland, T. H. 2010, ApJ, 725, 466
* Dickman (1978) Dickman, R. L. 1978, ApJS, 37, 407
* Gutermuth et al. (2011) Gutermuth, R. A., et al. 2011, in preparation
* Gutermuth et al. (2008) Gutermuth, R. A., et al. 2008, ApJ, 673, L151
* Houde (2004) Houde, M. 2004, ApJ, 616, L111
* Houde et al. (2009) Houde, M., Vaillancourt, J. E., Hildebrand, R. H., Chitsazzadeh, S., & Kirby, L. 2009, ApJ, 706, 1504
* Kandori et al. (2006) Kandori, R., et al. 2006, Proc. SPIE, 6269, 626951
* Lada et al. (1994) Lada, C., Lada, E., Clemens, D., & Bally, J. 1994, ApJ, 429, 694
* Li & Nakamura (2006) Li, Z.-Y., & Nakamura, F., 2006, ApJ, 640, L187
* Myers (2009) Myers, P. C. 2009, ApJ, 700, 1609
* Nagashima et al. (1999) Nagashima, C., et al. 1999, in Star Formation 1999, ed. T. Nakamoto (Nobeyama: Nobeyama Radio Observatory), 397
* Nagayama et al. (2003) Nagayama, T., et al. 2003, Proc. SPIE, 4841, 459
* Nakamura & Li (2007) Nakamura, F., & Li, Z.-Y., 2007, ApJ, 662, 395
* Nakamura et al. (2011) Nakamura, F., et al. 2011, in preparation
* Nakano & Nakamura (1978) Nakano, T., & Nakamura, T. 1978, PASJ, 30, 671
* Padoan et al. (2001) Padoan, P., Juvela, M., Goodman, A. A., & Nordlund, A. 2001, ApJ, 553, 227
* Padoan & Nordlund (2002) Padoan, P., & Nordlund, A. 2002, ApJ, 576, 870
* Price & Bate (2008) Price, D. J., & Bate, M. R., 2008, MNRAS, 385, 1820
* Ridge et al. (2003) Ridge, N. A., Wilson, T. L., Megeath, S. T., Allen, L. E., & Myers, P. C. 2003, AJ, 126, 286
* Skrutskie et al. (2006) Skrutskie, M. F., et al. 2006, AJ, 131, 1163
* Sugitani et al. (2010) Sugitani, K., et al. 2010, ApJ, 716, 299
* Tamura et al. (1987) Tamura, M., Nagata, T., Sato, S., & Tanaka, M. 1987, MNRAS, 224, 413
* Tamura et al. (2007) Tamura, M., et al. 2007, PASJ, 59, 467
* Vázquez-Semadeni et al. (2005) Vázquez-Semadeni, E., Kim, J., & Ballesteros-Paredes, J., 2005, ApJ, 630, L49
* Wilson et al. (2008) Wilson, G. W., et al., 2008, MNRAS, 386, 807
Figure 1: Polarization degree at $H$ versus $H-K$s color diagram for sources
having polarization errors of $<0.3\%$ in (a) the whole region and (b) the SW
area. YSOs/YSO candidates identified by Gutermuth et al. (2008) and Bontemps
et al. (2010) are not included. The lines of the adopted maximum polarization
efficiency of $P_{H}=6.6([H-K{\rm s}]-0.2)$ are shown in both the top and
bottom panels, and the line of $P_{H}=3.0$ is shown in the bottom panel.

Figure 2: Polarization degree at $K$s versus $H-K$s color diagram for sources
having polarization errors of $<0.3\%$ in (a) the whole region and (b) the SW
area. YSOs/YSO candidates identified by Gutermuth et al. (2008) and Bontemps
et al. (2010) are not included. The lines of the adopted maximum polarization
efficiency of $P_{K{\rm s}}=4.4([H-K{\rm s}]-0.2)$ are shown in both the top
and bottom panels, and the line of $P_{K{\rm s}}=2.0$ is shown in the bottom
panel.

Figure 3: $H$-band polarization vector map toward Serpens South for point
sources having $P/{\mathit{\Delta}}P>3.0$ and $P<6.6([H-K{\rm s}]-0.2)$,
superposed on the $H$-band image. The Serpens South cluster is located toward
the center of the image.

Figure 4: $K$s-band polarization vector map toward Serpens South for point
sources having $P/{\mathit{\Delta}}P>3.0$ and $P<4.4([H-K{\rm s}]-0.2)$,
superposed on the $K$s-band image. The Serpens South cluster is located
toward the center of the image.

Figure 5: Histograms of the position angles of a) $H$-band vectors for point
sources of $P/{\mathit{\Delta}}P>3.0$ and $P<6.6([H-K{\rm s}]-0.2)$, and b)
$K$s-band vectors for point sources of $P/{\mathit{\Delta}}P>3.0$ and
$P<4.4([H-K{\rm s}]-0.2)$. A two-Gaussian fit is shown on the $K$s-band
histogram.

Figure 6: $H$-band polarization vector map toward Serpens South for point
sources having $P/{\mathit{\Delta}}P>3.0$, $P<6.6([H-K{\rm s}]-0.2)$, and
$P>3.0$%, superposed on the 1.1 mm dust continuum image of ASTE/AzTEC
(Gutermuth et al., 2011). YSOs identified by Gutermuth et al. (2008) and
Bontemps et al. (2010) are not included, but those of Gutermuth et al. (2008)
are indicated by red (Class 0/I) and blue (Class II) open circles.

Figure 7: $K$s-band polarization vector map toward Serpens South for point
sources having $P/{\mathit{\Delta}}P>3.0$, $P<4.4([H-K{\rm s}]-0.2)$, and
$P>2.0$%, superposed on the 1.1 mm dust continuum image of ASTE/AzTEC
(Gutermuth et al., 2011). YSOs identified by Gutermuth et al. (2008) and
Bontemps et al. (2010) are not included, but those of Gutermuth et al. (2008)
are indicated by red (Class 0/I) and blue (Class II) open circles.

Figure 8: Schematic drawing of the main and sub-filaments of the Serpens
South cloud. The outlines of the filaments and the magnetic field lines are
shown by black lines and green dotted lines, respectively. The outlines of
the filaments were drawn based on the 1.1 mm dust continuum image (Gutermuth
et al., 2011), and the magnetic field lines were deduced from the $H$-band
polarization vectors of Figure 6. In this figure, the magnetic field lines
represent only the direction of the magnetic field, not its strength. The
boxes outlined by black dotted lines indicate the zones where the strength of
the magnetic field is roughly estimated.

Figure 9: $H$-band polarization vector map, superposed on a closeup $JHK$s
composite color image of Serpens South (R: $K$s, G: $H$, B: $J$). Only the
vectors of the sources with $P/\Delta P>3.0$, $P<6.6([H-K{\rm s}]-0.2)$, and
$P>3.0$% are shown. YSOs identified by Gutermuth et al. (2008) are indicated
by red (Class 0/I) and blue (Class II) open circles, but their polarization
vectors are not shown.
|
arxiv-papers
| 2011-04-15T08:08:11 |
2024-09-04T02:49:18.274148
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "K. Sugitani, F. Nakamura, M. Watanabe, M. Tamura, S. Nishiyama, T.\n Nagayama, R. Kandori, T. Nagata, S. Sato, R. A. Gutermuth, G. W. Wilson, R.\n Kawabe",
"submitter": "Koji Sugitani",
"url": "https://arxiv.org/abs/1104.2977"
}
|
1104.3031
|
Lemaître’s Hubble relationship
The detection of the expansion of the Universe is one of the most important
scientific discoveries of the 20th century. It is still widely held that in
1929 Edwin Hubble discovered the expanding Universe (Hubble 1929) and that
this discovery was based on his extended observations of redshifts in spiral
nebulae. Both statements are incorrect. There is little excuse for this, since
there exists sufficient well-supported evidence about the circumstances of the
discovery. The circumstances have been well documented even recently with the
publication of two books: Bartusiak (2010), Nussbaumer & Bieri (2009). Both
were positively reviewed in the December 2009 issue of PHYSICS TODAY (page
51). Other writers have stated the facts correctly as well (e.g. Peebles
1984).
Friedman (1922) was the first to publish non-static solutions to Albert
Einstein’s field equations. However, he did not extend that into a
cosmological model built on astronomical observations. Some five years later,
Georges Lemaître also discovered dynamical solutions (Lemaître 1927). In the
same publication in which he reported his discovery, he extracted (on
theoretical grounds) the linear relationship between velocity $v$ and distance
$r$: $v=Hr$. Combining redshifts published by Strömberg (1925) (who relied
mostly on redshifts from Vesto Slipher (e.g. Slipher 1917)) and Hubble’s
distances from magnitudes (Hubble 1926), he calculated two values for the
“Hubble constant” H, 575 and 670 km sec-1 Mpc-1, depending on how the data is
grouped. Lemaître concluded from those results that the Universe was
expanding. Two years later Hubble found the same velocity–distance
relationship on observational grounds (Hubble 1929) from practically the same
observations that Lemaître had used. However, Hubble did not credit anyone for
the redshifts, most of which again came from Slipher.
Several of today’s professional astronomers and popular authors (e.g. Singh
2005) believe that the entirety of Lemaître’s 1927 French-language paper was
re-published in English (Lemaître 1931) with the help of Arthur Eddington.
That is also incorrect; the two pages from the 1927 paper that contain
Lemaître’s estimates of the Hubble constant are not in the 1931 paper, for
reasons that have never been properly explained.
Unfortunately several prominent people writing in the popular press continue
to promote Hubble’s discovery of the expansion of the Universe. See, for
example, Brian Greene’s Op-Ed piece in the New York Times on 15 January 2011.
There is a great irony in these falsehoods still being promoted today. Hubble
himself never came out in favor of an expanding Universe; on the contrary, he
doubted it to the end of his days. It was Lemaître who was the first to
combine theoretical and observational arguments to show that we live in an
expanding Universe.
## References
* Bartusiak (2010) Bartusiak, M. 2010, “The Day We Discovered the Universe”, Vintage, ISBN-10: 9780307276605
* Friedman (1922) Friedman. A. 1922, “Über die Krümmung des Raumes”, Zeitschrift für Physik, 10, 377.
* Hubble (1926) Hubble, E. 1926, “Extra-Galactic Nebulae”, Astrophysical Journal, 64, 321
* Hubble (1929) Hubble, E. 1929, “A Relation between Distance and Radial Velocity among Extra-Galactic Nebulae”, Proceedings of the National Academy of Sciences of the United States of America, Vol. 15, Issue 3, p.168
* Lemaître (1927) Lemaître , G. 1927, “Un univers homogène de masse constante et de rayon croissant, rendant compte de la vitesse radiale des nébuleuses extragalactiques”, Annales de la Société scientifique de Bruxelles, série A, vol. 47, p. 49
* Lemaître (1931) Lemaître , G. 1931, “A Homogeneous Universe of Constant Mass and Increasing Radius accounting for the Radial Velocity of Extra-galactic Nebulae”, Monthly Notices of the Royal Astronomical Society, 91, 483
* Nussbaumer & Bieri (2009) Nussbaumer, H. & Bieri, L. 2009 “Discovering the Expanding Universe”, Cambridge University Press, ISBN-10: 9780521514842
* Peebles (1984) Peebles, P.J.E. 1984, “Impact of Lemaître’s ideas on modern cosmology”, in The Big Bang and Georges Lemaître ; Proceedings of the Symposium Louvain-la-Neuve, Belgium, October 10-13, 1983 (A85-48726 24-90). Dordrecht, D. Reidel Publishing Co., 1984, p. 23-30
* Singh (2005) Singh, S. 2005, “Big Bang: The Origin of the Universe”, Harper Perennial, ISBN-10: 9780007162215
* Slipher (1917) Slipher, V. M. 1917, “Nebulae”, Proceedings of the American Philosophical Society, vol. 56, p. 403
* Strömberg (1925) Strömberg, G. 1925, “Analysis of Radial Velocities of Globular Clusters and Non-Galactic Nebulae”, Astrophysical Journal, 61, 353
Michael Way is a research scientist at the NASA Goddard Institute for Space
Studies in New York City and an Adjunct Professor at Hunter College, City
University of New York, USA
Harry Nussbaumer is an Astronomy professor emeritus at ETH Zurich,
Switzerland. He recently published as first author the book “Discovering the
Expanding Universe”.
|
arxiv-papers
| 2011-04-15T12:19:15 |
2024-09-04T02:49:18.279255
|
{
"license": "Public Domain",
"authors": "M.J. Way (NASA/GISS), Harry Nussbaumer (ETH, Switzerland)",
"submitter": "Michael Way",
"url": "https://arxiv.org/abs/1104.3031"
}
|
1104.3103
|
# Noncooperatively Optimized Tolerance: Decentralized Strategic Optimization
in Complex Systems
Yevgeniy Vorobeychik Jackson R. Mayo
Robert C. Armstrong Joseph R. Ruthruff
Sandia National Laboratories
P.O. Box 969
Livermore, CA 94551
###### Abstract
We introduce noncooperatively optimized tolerance (NOT), a generalization of
highly optimized tolerance (HOT) that involves strategic (game theoretic)
interactions between parties in a complex system. We illustrate our model in
the forest fire (percolation) framework. As the number of players increases,
our model retains features of HOT, such as robustness, high yield combined
with high density, and self-dissimilar landscapes, but also develops features
of self-organized criticality (SOC) when the number of players is large
enough. For example, the forest landscape becomes increasingly homogeneous and
protection from adverse events (lightning strikes) becomes less closely
correlated with the spatial distribution of these events. While HOT is a
special case of our model, the resemblance to SOC is only partial; for
example, the distribution of cascades, while becoming increasingly heavy-
tailed as the number of players increases, also deviates more significantly
from a power law in this regime. Surprisingly, the system retains considerable
robustness even as it becomes fractured, due in part to emergent cooperation
between neighboring players. At the same time, increasing homogeneity promotes
resilience against changes in the lightning distribution, giving rise to
intermediate regimes where the system is robust to a particular distribution
of adverse events, yet not very fragile to changes.
## 1 Introduction
Highly optimized tolerance (HOT) and self-organized criticality (SOC) have
received considerable attention as alternative explanations of emergent power-
law cascade distributions [1, 2]. The SOC model [1, 3, 4] posits that systems
can naturally arrive at criticality and power-law cascades, independently of
initial conditions, by following simple rule-based processes. Among the
important features of SOC are (a) self-similarity and homogeneity of the
landscape, (b) fractal structure of cascades, (c) a small power-law exponent
(i.e., heavier tails), and (d) low density and low yield (e.g., in the context
of the forest fire model, described below). HOT [2, 5, 6, 7], in contrast,
models complex systems that emerge as a result of optimization in the face of
persistent threats. While SOC is motivated by largely mechanical processes,
the motivation for HOT comes from evolutionary processes and deliberately
engineered systems, such as the electric power grid. The key features of HOT
are (a) a highly structured, self-dissimilar landscape, (b) a high power-law
exponent, and (c) high density and high yield [6].
HOT and SOC can be cleanly contrasted in the context of the forest fire
(percolation) model [2, 4]. The forest fire model features a grid, usually
two-dimensional, with each cell being a potential site for a tree.
Intermittently, lightning strikes one of the cells according to some
probability distribution. If there is a tree in the cell, it is set to burn.
At that point, a cascade begins: fires spread recursively from cells that are
burning to neighboring cells that contain trees, engulfing the entire
connected component in which they begin. The main distinction between HOT and
SOC in the forest fire model is how they conceive the process of growing trees
in the grid. In the classical forest fire model (SOC) a tree sprouts in every
empty cell with some fixed probability $p$. The lightning strike distribution
is usually conceived as being uniform. As the process of tree growth
interleaves with burnouts, the system reaches criticality, at which burnout
cascades (equivalently, sizes of connected components) follow a power-law
distribution. At criticality, the tree landscape is homogeneous and self-
similar, and burnouts follow a fractal pattern. Additionally, both the
fraction of cells with trees before lightning (the density) and after
lightning (the yield) are relatively low at criticality. In contrast, the HOT
model conceives of a global optimizer charged with deciding the configuration
of each cell (i.e., whether a tree will grow or not). What emerges globally as
a consequence is a collection of large connected components of trees separated
by “barriers” of no trees that prevent fires from spreading outside
components. This pattern, which is self-dissimilar, adapts to the specific
spatial distribution of lightning strikes. In certain cases, for example, when
the lightning distribution is exponential or Gaussian, out of this adaptation
emerges a precise balance of connected component sizes and fire probabilities
so as to yield a power-law distribution of cascades (burnouts), with a higher
exponent than the distribution of cascades in the SOC model. The HOT model
features both high yield and high density, as it is deliberately robust to
lightning strikes with the specified distribution; however, it is also
extremely fragile to changes in the lightning distribution, whereas SOC does
not exhibit such fragility. The HOT landscape tends to have a highly non-
uniform distribution of “fire breaks”, or areas where no trees are planted,
whereas the SOC landscape is homogeneous.
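For concreteness, the classical (SOC) dynamics described above can be sketched in a few lines of Python. This is a toy illustration with arbitrary parameters, not tuned to the critical regime studied in the literature: trees sprout in empty cells with probability $p$, lightning then strikes a uniformly random cell, and a struck tree burns out its entire connected component.

```python
import random
from collections import deque

def burn_component(grid, n, start):
    """Return the 4-connected component of trees containing `start` (BFS)."""
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n and grid[rr][cc] and (rr, cc) not in seen:
                seen.add((rr, cc))
                queue.append((rr, cc))
    return seen

def soc_forest_fire(n=32, p=0.05, steps=500, seed=0):
    """Classical forest-fire (SOC) dynamics: each step, trees sprout in empty
    cells with probability p, then lightning strikes a uniformly random cell;
    a struck tree burns out its whole connected component.
    Returns the list of cascade (burnout) sizes."""
    rng = random.Random(seed)
    grid = [[False] * n for _ in range(n)]
    cascades = []
    for _ in range(steps):
        for r in range(n):
            for c in range(n):
                if not grid[r][c] and rng.random() < p:
                    grid[r][c] = True
        r, c = rng.randrange(n), rng.randrange(n)
        if grid[r][c]:
            burned = burn_component(grid, n, (r, c))
            cascades.append(len(burned))
            for rr, cc in burned:
                grid[rr][cc] = False
    return cascades

sizes = soc_forest_fire()
print("cascades:", len(sizes))
```

On large grids and in the appropriate parameter limits, the distribution of the recorded cascade sizes is where the power-law behavior discussed above would be measured.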
A natural criticism of the HOT paradigm is that, in complex systems, it is
difficult to conceive of a single designer that manages to optimally design
such a system. As a partial response, much work demonstrates that HOT yields
qualitatively similar results when heuristic optimization or an evolutionary
process is used [5, 8, 9]. Still, most complex systems, particularly those
that are engineered, are not merely difficult to design globally, but are
actually _decentralized_ , with many entities responsible for parts (often
small) of the whole system. Each entity is generally not motivated by global
concerns, but is instead responding to local incentives, which may or may not
align with global goals. For example, the Internet is fundamentally a
combination of autonomous entities, each making its own decisions about
network topology, protocols, and composition, with different decisions made at
different levels of granularity (some by large Internet service providers,
some by large organizations connected to the Internet, some by small
organizations and individual users). Likewise, the electric grid emerges as a
by-product of complex interactions among many self-interested parties,
including electric utilities, electricity users (which are themselves
businesses or individuals), and various government and regulatory entities
that have their own interests in mind. Most moderately complex engineered
products are manufactured from components produced by different firms, each
with its own goals driven primarily by the market within which it competes,
and many of these components are further broken down and produced by their own
set of suppliers, and so on.
Our central contribution is to model complex systems as complex patterns of
strategic interactions among self-interested players making independent
decisions. We conceive that out of _strategic interactions_ of such self-
interested players emerges a system that is optimized _jointly_ by all
players, rather than _globally_ by a single “engineer”. Thus, we call our
model _noncooperatively optimized tolerance_ (NOT). Formally, our model is
game theoretic, and we seek to characterize emergent properties of the system
in a Nash equilibrium [10]. Our model strictly generalizes the HOT framework,
with HOT being the special case of a game with a single player.
## 2 A Game Theoretic Forest Fire Model
We begin by introducing some general game theoretic notions, and then
instantiate them in the context of a forest fire model. A _game_ is described
by a set of players $I$, numbering $m=|I|$ in all, where each player $i\in I$
chooses actions from a strategy set $S_{i}$ so as to maximize his _utility_
$u_{i}(\cdot)$. Notably, each player’s utility function depends on the actions
of other players as well as his own, and so we denote by
$u_{i}(s)=u_{i}(s_{i},s_{-i})$ the utility to player $i$ when he plays a
strategy $s_{i}$ and others jointly play
$s_{-i}\equiv(s_{1},\ldots,s_{i-1},s_{i+1},\ldots,s_{m})$, where these combine
to form a joint strategy profile $s=(s_{1},\ldots,s_{i},\ldots,s_{m})$. In our
context, each player controls a portion of a complex system and is responsible
for engineering his “domain of influence” against perceived threats, just as
in the HOT model. The distinction with the HOT model is that the interests of
different players may be opposed if, say, an action that is desirable for one
has a negative impact on another (for example, one player may dump his trash
on another’s territory). Such interdependencies are commonly referred to as
_externalities_ [11], and form a central aspect of our model. However, HOT
arises as a special case of our construction, when the game has a single
player.
We implement the game theoretic conception of complex system engineering in
the familiar two-dimensional forest fire model, thereby allowing direct
contrast with the now mature literature on HOT and SOC. In the NOT forest fire
model, each player is allotted a portion of the square grid over which he
optimizes his yield less the cost of planting trees.111We note the resemblance of
our grid division into subplots to the framework studied by Kauffman et al.
[12], which divides a lattice in a similar manner, but with the goal of
studying joint optimization of a global objective, rather than strategic
interactions among players controlling different plots and having different
goals. Let $G_{i}$ be the set of grid cells under player $i$’s direct control,
let $s_{i}$ be player $i$’s strategy expressed as a vector in which
$s_{i,g}=1$ if $i$ plants a tree in grid cell $g$ and $s_{i,g}=0$ otherwise,
and let $\Pr\\{\,g=1\mid s,s_{i,g}=1\,\\}$ be the probability (with respect to
the lightning distribution) that a tree planted in cell $g$ survives a fire
given the joint strategy (planting) choices of all players. Since exactly one
player controls each grid cell, we simplify notation and use $s_{g}=s_{i,g}$
where $i$ is the player controlling grid cell $g$. Let $N_{i}=|G_{i}|$ be the
number of grid cells under $i$’s control and $\rho_{i}$ be the density of
trees planted by $i$,
$\rho_{i}=\frac{1}{N_{i}}\sum_{g\in G_{i}}s_{g}.$
Define the yield for player $i$ to be
$Y_{i}(s)=\sum_{g\in G_{i}}\Pr\\{\,g=1\mid s\,\\}s_{g}$
(it is convenient to define the yield as an absolute number of trees). We
assume further that each tree planted by a player incurs a fixed cost $c$. The
utility of player $i$ is then
$u_{i}(s)=\sum_{g\in G_{i}}(\Pr\\{\,g=1\mid
s\,\\}-c)s_{i,g}=Y_{i}(s)-cN_{i}\rho_{i}.$
The result of joint decisions by all players is a grid that is partially
filled by trees, with overall density $\rho(s)$ and overall yield $Y(s)$ given
by a sum ranging over the entire grid $G$, i.e., $Y(s)=\sum_{g\in
G}\Pr\\{\,g=1\mid s\,\\}s_{g}$. Let $N$ be the number of cells in the entire
grid. We then define _global utility (welfare)_ as
$W(s)=\sum_{i\in I}u_{i}(s)=Y(s)-cN\rho(s).$
Note that when $m=1$, $W(s)$ coincides with the lone player’s utility. A part
of our endeavor below is to characterize $W(s^{*})$ and $\rho(s^{*})$ when
$s^{*}$ is a Nash equilibrium, defined as a configuration of joint decisions
by all players such that no individual player can gain by choosing an
alternative strategy $s_{i}^{\prime}$ (alternative configuration of trees
planted) _keeping the decisions of other players $s_{-i}^{*}$ fixed_.
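As an illustration of these definitions, the following sketch (our own, not the authors' code) computes survival probabilities, yield, and welfare for a given joint planting $s$, under the assumption standard in this setting that a tree survives a single lightning strike exactly when the strike misses its 4-connected component of trees.

```python
from collections import deque

def survival_probs(planted, lightning):
    """Pr{g = 1 | s} for each cell: a planted tree survives a single strike
    iff the strike misses its 4-connected component of trees."""
    n = len(planted)
    prob = [[0.0] * n for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    for r0 in range(n):
        for c0 in range(n):
            if planted[r0][c0] and not seen[r0][c0]:
                comp = []
                queue = deque([(r0, c0)])
                seen[r0][c0] = True
                while queue:                      # BFS one component of trees
                    r, c = queue.popleft()
                    comp.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < n and 0 <= cc < n and planted[rr][cc] and not seen[rr][cc]:
                            seen[rr][cc] = True
                            queue.append((rr, cc))
                p_hit = sum(lightning[r][c] for r, c in comp)  # strike hits component
                for r, c in comp:
                    prob[r][c] = 1.0 - p_hit
    return prob

def welfare(planted, lightning, cost):
    """Global utility W(s) = Y(s) - cost * (number of planted cells)."""
    n = len(planted)
    prob = survival_probs(planted, lightning)
    yield_total = sum(prob[r][c] for r in range(n) for c in range(n) if planted[r][c])
    trees = sum(planted[r][c] for r in range(n) for c in range(n))
    return yield_total - cost * trees

# Toy 2x2 grid: one two-tree component, uniform lightning, cost c = 0.25
planted = [[1, 1], [0, 0]]
uniform = [[0.25, 0.25], [0.25, 0.25]]
print(welfare(planted, uniform, 0.25))  # 2*(1 - 0.5) - 0.25*2 = 0.5
```

Per-player utilities $u_{i}(s)$ are obtained the same way by restricting the sums to each player's cells $G_{i}$.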
We systematically vary several parameters of the model. The first, which is
the main subject of this work, is the number of players $m$. Fixing the size
of the grid at $N=128\times 128$,222This was the largest grid size on which we
could approximate equilibria in reasonable time. we vary the number of players
between the two extremes, from $m=1$ to $m=N$. The former extreme corresponds
precisely to the HOT setting, while in the latter the players are entirely
myopic in their decision problems, each concerned with only a single cell of
the grid. The negative externalities of player decisions are clearly strongest
in the latter case. The entire range of player variation is
$m\in\\{1,2^{2},4^{2},8^{2},16^{2},32^{2},64^{2},128^{2}\\}$. The second
parameter that we vary is the cost of planting trees:
$c\in\\{0,0.25,0.5,0.75,0.9\\}$. Finally, we vary the scale of the lightning
distribution, which is always a truncated Gaussian centered at the top left
corner of the grid. We let the variance (of the Gaussian before truncation) be
$N/v$, and vary $v\in\\{0.1,1,10,100\\}$. For example, at $v=1$ the standard
deviation of the Gaussian covers, roughly, the entire size of the grid, and at
$v=0.1$ the distribution of lightning strikes is approximately uniform over
the grid. In contrast, $v=100$ gives a distribution with lightning strikes
highly concentrated in the top left corner of the grid.
The question yet to be addressed is how to partition the grid into regions of
influence for a given number of players $m$. We do this in the most natural
way by partitioning the grid into $m$ identical square subgrids, ensuring
throughout that $m$ is a power of 4.
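This partition can be sketched as a cell-to-player map (an illustrative sketch assuming row-major cell coordinates; `owner_of` is our hypothetical helper, not part of the original implementation):

```python
import math

def owner_of(row, col, grid_side, m):
    """Player controlling cell (row, col) when the grid_side x grid_side grid
    is split into m identical square subgrids (m a power of 4), arranged in a
    sqrt(m) x sqrt(m) layout."""
    per_side = math.isqrt(m)            # number of subgrids per side
    assert per_side * per_side == m
    sub = grid_side // per_side         # side length of each square subgrid
    return (row // sub) * per_side + (col // sub)
```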
## 3 Analysis of the NOT Forest Fire Model
To build some intuition about our model, consider first a one-dimensional
forest fire setting. Since in one dimension sequences of planted cells (1’s)
are interleaved with unplanted sequences (0’s), we define $k$ to be the length
of a planted sequence and $l$ to be the length of an unplanted sequence, and
assume that $1\ll k\ll N$. First, consider the case with $m=1$ and assume that
$c<1-1/N$. We further assume that $k$ is identical for all sequences of 1’s
(when $k\ll N$, this is almost without loss of generality, since 1’s can be
swapped, keeping the density constant, without changing the utility) and note
that in an optimal solution $l=1$. The utility of the player (and global
utility) is then
$u_{i}(k)=W(k)=N\rho(k)\left(1-\frac{k}{N}-c\right),$
where $\rho(k)=k/(k+1)$. If we view $k$ as a continuous variable, we can
obtain a maximizer, $k^{*}=O(\sqrt{N(1-c)})$. In contrast, if we consider the
case with each player occupying a single grid cell (i.e., $m=N$),
$k^{E}=O(N(1-c))$. While the density of planting approaches 1 in both the
optimal and equilibrium configurations as $N$ increases (as long as $N(1-c)\gg
1$), it turns out that the equilibrium density is generally higher than
optimal (all the results discussed here are derived in supporting online
material). This agrees with our intuition on the consequence of negative
externalities of decentralized planting decisions: when a player decides
whether to plant a tree, he takes into account only the concomitant chance of
his own tree burning down, and not the global impact the decision has on the
sizes of cascades.
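A quick numeric illustration of these 1-D scalings, using the closed forms derived in the appendix (a sketch; the function names are ours):

```python
import math

def k_opt(N, c):
    # Optimal run length for a single planner: k* = sqrt(N(1-c)+1) - 1
    return math.sqrt(N * (1 - c) + 1) - 1

def k_eq_best(N, c):
    # Best-case equilibrium run length when m = N: k^E = (N(1-c) - 1) / 2
    return (N * (1 - c) - 1) / 2

def density(k):
    # Runs of k trees separated by single empty cells: rho(k) = k / (k + 1)
    return k / (k + 1)
```

For example, with $N=10000$ and $c=0.5$ the optimal run length is about 70 while the best-case equilibrium run length is about 2500, so the equilibrium density exceeds the optimal density.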
Armed with some intuition based on the one-dimensional model, we now turn to
our main subject, the two-dimensional forest fire model—varying systematically
the number of players, planting cost, and lightning distribution as described
above. A full analysis of the two-dimensional model in all the relevant
parameters is beyond mathematical tractability. Furthermore, the problem of
computing exact equilibria, or even exact _optima_ for any player, is
intractable, as the size of the space of joint player strategies in our
setting is $2^{16384}$ (for example, at one extreme, we need to compute or
approximate an equilibrium in a game with 16384 players, each having binary
strategies).
Despite the daunting size of the problem, it turns out that simple iterative
algorithms for approximating equilibria as well as optimal decisions by
individual players are extremely effective. Specifically, we use the following
procedure for approximating Nash equilibria, building on previous
methodological work in simulation-based game theoretic analysis [13, 14, 15,
16, 17, 18]:
1. Start with no trees planted as the initial joint strategy profile $s$.
2. For a fixed number of iterations:
    (a) For each player $i$:
        i. Fix the decisions of other players $s_{-i}$.
        ii. With probability $p$, compute $\hat{s}_{i}\leftarrow\mathrm{OPT}(s_{-i})$, the optimal decision of player $i$ given $s_{-i}$; with probability $1-p$, let $\hat{s}_{i}\leftarrow s_{i}$.
        iii. Set $s_{i}\leftarrow\hat{s}_{i}$.
This procedure is a variant of _best response dynamics_ , which is a well-
known simple heuristic for learning in games [19]. While convergence
properties of this heuristic are somewhat weak [19, 16], it has proved to be
quite effective at approximating equilibria computationally [13, 16], and has
the additional feature of being a principled model for adaptive behavior of
goal-driven agents in a strategic setting [19].
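The outer loop of this procedure can be sketched generically (a simplified illustration; `best_response` is a placeholder for the player's approximate best-response computation, i.e., the role played by OPT above):

```python
import random

def best_response_dynamics(players, initial_profile, best_response, iters, p=0.9):
    """Asynchronous best-response dynamics (sketch).

    best_response(i, s) -> player i's (approximately) optimal strategy
    against the fixed profile s of the other players.
    """
    s = dict(initial_profile)
    for _ in range(iters):
        for i in players:
            if random.random() <= p:      # update player i with probability p
                s[i] = best_response(i, s)
    return s
```

In a two-player coordination game (each player's best response is to copy the other), the dynamics converge to a matched profile within one pass.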
The inner loop of the algorithm involves computing an optimal (best) response
of a player $i$, which we already noted is in general intractable. Carlson and
Doyle [5] used a deterministic greedy heuristic to compute an optimum over an
entire grid of size $64\times 64$, starting with a grid devoid of trees and
iteratively choosing a grid cell that maximizes global utility given the
planting choices from the previous iterations. However, we use a larger
($128\times 128$) grid, and additionally must run the optimization heuristic
multiple times as an inner loop of equilibrium approximation, so their
approach is too computationally intensive to be practical in our setting.
Instead, we utilize the somewhat lighter-weight method of _sampled fictitious
play_ [20, 21], which allows us to more finely control the tradeoff between
the amount of optimization search and the incremental impact of additional
search on solution quality. In sampled fictitious play, each grid cell
controlled by player $i$ becomes a “player” in a cooperative subgame (where
each cell has $i$’s utility as its goal), and random subsets of cells are
iteratively chosen to make simultaneous optimizing decisions given a uniform
random sample of choices by the rest of the grid from a fixed window of
previous iterations. Random _exploration_ is introduced by occasionally
replacing historical actions of “players” (cells controlled by $i$) with
randomly chosen actions. In our implementation, it turned out to be most
effective to let the history window size be 1, which makes sampled fictitious
play resemble myopic best response dynamics. Since each grid cell has only two
actions, we choose the myopically best action, determined by the size of the
connected component of trees to which the cell belongs.
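A minimal sketch of this per-cell myopic rule, assuming a uniform lightning distribution for simplicity (the actual implementation weights cells by the lightning distribution; the helper names are ours):

```python
from collections import deque

def component_size(grid, r0, c0):
    """Size of the 4-connected component of 1's containing (r0, c0)."""
    if grid[r0][c0] == 0:
        return 0
    n, m = len(grid), len(grid[0])
    seen = {(r0, c0)}
    q = deque([(r0, c0)])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < m and grid[rr][cc] == 1 \
                    and (rr, cc) not in seen:
                seen.add((rr, cc))
                q.append((rr, cc))
    return len(seen)

def plant_is_better(grid, r, c, cost):
    """Myopic rule for an empty cell under uniform lightning: plant iff
    1 - k/N - cost > 0, where k is the size of the component the new tree
    would join (larger components burn with higher probability)."""
    N = len(grid) * len(grid[0])
    grid[r][c] = 1
    k = component_size(grid, r, c)
    grid[r][c] = 0                       # undo the tentative planting
    return 1 - k / N - cost > 0
```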
Insofar as the results below are the outcomes of the above algorithm, they
represent, approximately, principled predictions of coevolution of goal-
directed players whose incentives may not align with the global objective. As
such, our simulation results have an additional advantage over the 1-D
mathematical characterization, which does not allow direct insight into
_which_ of the many possible equilibrium configurations is likely to be
reached by adaptive players.
### 3.1 Global Utility
Our first question concerns the variation of global utility $W(s^{*})$ with
the number of players $m$, the cost $c$, and the parameter $v$ governing
variance of the Gaussian lightning distribution. First, note that $W(s^{*})$
will be no better than optimal for $m>1$, and it seems intuitive that it is a
non-increasing function of $m$. Additionally, when $c=0$ and $m=N$, we
anticipate a global utility of 0, since the only equilibria involve either all
players or all but one planting trees (we argue this formally in the online
supplement). The questions we address next are: what happens when $1<m<N$ and
when $c>0$? Figure 1 provides some answers. First, when $c=0$, we notice that
the initial drop in global utility is quite shallow for $m<256$, particularly
when the lightning distribution is relatively diffuse ($v<100$). However, once
the number of players is relatively large, global utility drops dramatically,
and nearly reaches 0 already when $m=4096$, even though this does not directly
follow from an argument above. For $c>0$, the dropoff in global utility with
the number of players becomes less dramatic.
Figure 1: Global utility $W(s^{*})$ as a function of $m$ for
$c\in\\{0,0.25,0.5,0.75,0.9\\}$. Left: $v=0.1$ (nearly uniform distribution).
Right: $v=100$ (highly concentrated distribution).
### 3.2 Density and Fire Break Distribution
Our next task is to consider how the density changes with our parameters of
interest. Based on the observation above, we expect the density to be $1$, or
nearly so, when $c=0$ and $m=N$. The density should be appreciably below $1$
when $m=1$. Furthermore, the density should decrease with increasing cost $c$.
In general, our intuition, based on all previous analysis, would suggest that
density should increase with the number of players: after all, each player’s
decision to plant a tree does not account for the negative impact it has on
other players.
Working from this intuition, the results in Figure 2 are highly
counterintuitive: the overall density _falls_ with increasing number of
players until $m$ reaches $1024$, and only when the number of players is very
high ($4096$ and $N$) is it generally higher than the optimal density. This
dip is especially apparent for a highly concentrated lightning distribution
($v=100$). To understand this phenomenon we must refer to Figure 3, showing
actual (approximate) equilibrium grid configurations for varying numbers of
players when $c=0$ and $v=100$. We can observe that each player’s myopic self-
interest induces him to construct _fire breaks_ in his territory where none
exist in a globally superior single-player configuration. Thus, for example,
contrast Figure 3 (a) and (b). In the former, most of the grid is filled with
trees, and much of the action happens in the upper left corner (the epicenter
of the lightning distribution), which is filled with fire breaks that confine
fires to relatively small fractions of the grid. In the latter, the upper left
corner is now under the control of a single player, and other players find it
beneficial to plant fire breaks of their own, since the “wasted” land amounts
to only a small fraction of their landmass, and offers some protection against
fire spread to the protected areas from “poorly” protected neighboring
territories. With more players, we see coordination between neighbors emerge,
as they jointly build mutually beneficial fire breaks, but such cooperation is
not global, and becomes increasingly diffuse with greater number of players.
Nevertheless, increasing the number of players results in a greater amount of
total territory devoted to fire breaks by individual players or small local
neighborhoods, and, as a result, an overall loss in planting density, which we
have observed in Figure 2.
Figure 2: Density $\rho$ as a function of $m$ for
$c\in\\{0,0.25,0.5,0.75,0.9\\}$. Left: $v=0.1$ (nearly uniform distribution).
Right: $v=100$ (highly concentrated distribution).
Since the density is decreasing for intermediate numbers of players, a natural
hypothesis is that the fire breaks are distributed suboptimally. We can
observe this visually in Figure 3.
Figure 3: Sample equilibrium grid configurations with $c=0$, $v=100$, and the
number of players varied between $1$ and $N=16384$. Blank cells are planted
and marked cells are unplanted. Player domains of influence are shaded in a
checkerboard pattern. (a) $1$ player, equivalent to HOT; (b) $4$ players; (c)
$16$ players; (d) $64$ players; (e) $256$ players; (f) $1024$ players; (g)
$4096$ players; (h) $16384$ players. To avoid clutter, we omit the
checkerboard pattern with $N$ players. Here, the grid is blank, which
indicates that every grid cell contains a tree.
Specifically, the equilibrium grid configurations suggest that the location of
fire breaks becomes less related to the lightning distribution as the number
of players grows. To measure this formally, we compute
$C=\frac{\sum_{g\in G}p_{g}(1-s_{g})}{1-\rho}.$
The numerator is the probability that lightning strikes an empty (no tree)
cell, where $p_{g}$ is the probability of lightning hitting cell $g$, and
$s_{g}$ is the indicator that is 1 when $g$ has a tree and 0 otherwise. The
denominator is the fraction of the grid that is empty. The intuition behind
this measure is that when fire breaks (i.e., empty cells) lie largely in
regions with a high probability of lightning, $C$ will be much larger than 1,
whereas if empty cells are distributed uniformly on the grid, $E[C]=1$ (both
facts are formally shown in the supporting online material).
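The measure $C$ is straightforward to compute; a minimal sketch (assuming flattened per-cell arrays and $\rho<1$):

```python
def lightning_correlation(p, s):
    """C = (sum_g p_g (1 - s_g)) / (1 - rho): the probability that lightning
    strikes an empty cell, normalized by the empty fraction of the grid.

    p : lightning-strike probabilities per cell (sums to 1)
    s : 0/1 planting indicators per cell (must not all be 1)
    """
    rho = sum(s) / len(s)
    empty_hit = sum(pg * (1 - sg) for pg, sg in zip(p, s))
    return empty_hit / (1 - rho)
```

For instance, with uniform lightning and uniformly scattered empty cells $C=1$, while empty cells concentrated where strikes are likely give $C>1$.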
Figure 4 (left) confirms our hypothesis: initially, $C$ is quite high, but as
the number of players increases, $C$ approaches 1. Interestingly, when the
number of players is very large ($m=4096$) this result reverses, with $C$
jumping abruptly. To understand this phenomenon, note that when $m=4096$, each
player controls only a $2\times 2$ subgrid, which is simply too small for a
local fire break to be worthwhile unless the fire risk is very high. Thus, the
only players with any incentive to build fire breaks are those close to the
epicenter of lightning.
Considering the spatial distribution of empty grid cells apart from lightning
strikes, we see in Figure 4 (right) that the centroid of the empty cells
begins near the $(0,0)$ point, but approaches the center of the grid with an
increasing number of players (here again the center shifts back to near the
$(0,0)$ point when $m=N/4$, for the same reasons we just outlined).
Interestingly, even for a moderate number of players ($m=16$), the
distribution of fire breaks is nearly homogeneous and almost unrelated to the
lightning distribution. This suggests that global utility would remain
relatively robust to changes in the lightning distribution compared to the HOT
model. To verify this, we show in Figure 5 average global utility of
equilibrium configuration _after the lightning distribution is randomly
changed_. Whether the cost of planting trees is high or low, the figure shows
significantly reduced fragility for an intermediate number of players (between
16 and 1024). Indeed, when cost is high, the system remains less fragile than
HOT even in the limiting case of $m=N$. If we now recall that global utility
remains relatively close to optimal across a wide range of settings when $m$
is below 256, our results suggest that the regime of intermediate numbers of
players retains the robustness of HOT, while developing some features of SOC
that make it less fragile to changes in the environment.
Figure 4: Left: a measure of correlation ($C$, defined in the text) between the lightning distribution and the fire breaks (empty cells) across subgrids for $c=0$ and $c=0.9$. As $C$ approaches 1, the locations of empty cells become essentially unrelated to the distribution of lightning strikes. Right: centroid coordinates of the empty grid cells when $c=0$ (the results are similar when $c=0.9$).
Figure 5: Fragility of NOT configurations. Given the (approximate) equilibrium
configurations generated for a lightning distribution centered at the upper
left corner of the grid, we changed the lightning distribution by generating
the center of the Gaussian uniformly randomly from all grid locations. We then
evaluated expected global utility given the altered lightning distribution.
The graph plots averages of repeating this process 30-80 times, as compared to
global utility for the original environment. Left: $c=0$. Right: $c=0.9$.
### 3.3 Distribution of Burnout Cascades
One of the central results of both SOC and HOT models is a power-law
distribution of burnout cascades. Since our model generalizes HOT, we should
certainly expect to find this power-law distribution in the corresponding
special case of $m=1$, at least approximately (since the power-law result in
HOT is asymptotic and presumes exact, not approximate, optimization). We
therefore study in some detail how the burnout distribution behaves with
respect to the parameters of interest.
Figure 6 shows fire cascade distributions on the usual log-log plot for
$v=10$. First, when $m=1$ (red points), the results align with the expectation
of a near straight line (near power-law distribution) across a range of
scales. Additionally, even when $m$ is greater than 1 but relatively small
(green points), the distribution appears linear across a range of scales,
suggesting that the power law is likely not unique to the HOT setting. Once
the number of players is large, however, the distribution of cascades bears
less resemblance to a power law, and begins to feature considerable curvature even in
the intermediate scales. In that sense, the NOT setting with many players is
unlike both HOT and SOC.
The most important aspect of the cascade distributions is that their tails
systematically become heavier as the number of players grows, in all observed settings
(this remains the case for Gaussians with greater and smaller variance, not
shown here). We study this in detail in Figure 7, which shows the 90th
percentile of the burnout distribution as a function of the number of players
$m$ for varying cost and variance of the lightning distribution. The 90th
percentile consistently increases with the number of players, confirming the
phenomenon of heavier tails with more players that we already observed. As is
intuitive, increasing either the cost of planting trees or the variance of the
lightning distribution has a dampening effect on burnout tails: in both cases,
more fire breaks are constructed, making very large cascades less likely.
Figure 6: Distribution of tree burnout cascades, shown on a log-log plot with $\Pr\\{X\geq x\\}$ on the vertical axis and $x$ on the horizontal axis, where $X$ is the random variable representing cascade size. The plots feature (bottom to top) $m=1$ (red), $m=16$ (green), $m=256$ (blue), and $m=4096$ (purple), with the left plot corresponding to $c=0$ and the right plot corresponding to $c=0.9$. Both plots correspond to $v=10$.
Figure 7: Left: 90th percentile as a function of $m$ (plotted on a log scale)
for varying values of cost $c$, with $v=1$. Right: 90th percentile for
$c=0.75$ with $v=10$ (top, red) and $v=1$ (bottom, blue).
## 4 Discussion
The results described in the previous section show features of both HOT and
SOC. When the number of players is small, the NOT setting closely resembles
HOT, and, indeed, HOT is a special case when there is a single player. Perhaps
surprisingly, features of HOT persist even when the number of players becomes
larger, but as the number of players increases, we also begin to observe many
features identified with SOC. The system retains its robustness to the
lightning strikes—a key feature of HOT—even when the number of players is
relatively large. It achieves this robustness in part due to the emergence of
cooperation between neighboring players, who jointly build fire breaks
spanning several players’ territories. The cooperation required to retain
near-optimal performance becomes increasingly difficult, however, as the
system becomes highly fractured among small domains of influence.
As cooperation becomes less effective, players fall back on protecting their
own domain of influence by surrounding it (or parts of it) with deforested
land, so long as the fraction of land covered by trees is large enough to make
this endeavor worthwhile. This gives rise to the counterintuitive result that
the density of trees initially falls as the number of players increases.
Since even a moderately fractured landscape requires each player to focus on
protecting his or her own domain, we observe decreasing correlation between
locations of frequent lightning strikes and locations of fire breaks. With
increasing number of players, this correlation systematically decreases, and
the spatial distribution of empty cells becomes increasingly
homogeneous—striking features of SOC that emerge even when the number of
players is not very large and the global performance is still highly robust to
lightning strikes. Thus, the intermediate range of players appears to exhibit
both the robustness of HOT and the lack of fragility to changes in the
lightning distribution associated with SOC.
Another feature of SOC in contrast to HOT is a heavier-tailed distribution of
burnout cascades. We in fact observe that the tail of the burnout distribution
becomes heavier with increasing number of players, superficially appearing to
shift to an SOC regime. However, these distributions begin to substantially
deviate from a power law even visually, and the setting is therefore in that
respect entirely unlike the criticality observed in SOC.
## Acknowledgements
Sandia is a multiprogram laboratory operated by Sandia Corporation, a wholly
owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of
Energy under contract DE-AC04-94AL85000.
## References
* [1] P. Bak, C. Tang, K. Wiesenfeld, Physical Review Letters 59, 381 (1987).
* [2] J. M. Carlson, J. Doyle, Physical Review E 60, 1412 (1999).
* [3] S. Clar, B. Drossel, F. Schwabl, Journal of Physics: Condensed Matter 8, 6803 (1996).
* [4] C. L. Henley, Physical Review Letters 71, 2741 (1993).
* [5] J. M. Carlson, J. Doyle, Physical Review Letters 84, 2529 (2000).
* [6] J. M. Carlson, J. Doyle, Proceedings of the National Academy of Sciences 99, 2538 (2002).
* [7] M. E. J. Newman, M. Girvan, J. D. Farmer, Physical Review Letters 89, 028301 (2002).
* [8] T. Zhou, J. M. Carlson, J. Doyle, Proceedings of the National Academy of Sciences 99, 2049 (2002).
* [9] T. Zhou, J. M. Carlson, J. Doyle, Journal of Theoretical Biology 236, 438 (2005).
* [10] M. J. Osborne, A. Rubinstein, A Course in Game Theory (The MIT Press, 1994).
* [11] A. Mas-Colell, M. D. Whinston, J. R. Green, Microeconomic Theory (Oxford University Press, 1995).
* [12] S. Kauffman, W. G. Macready, E. Dickinson, Divide to coordinate: Coevolutionary problem solving (1994). Unpublished manuscript.
* [13] A. Sureka, P. R. Wurman, Fourth International Joint Conference on Autonomous Agents and Multiagent Systems (2005), pp. 1023–1029.
* [14] M. P. Wellman, et al., Computational Intelligence 21, 1 (2005).
* [15] D. A. Seale, J. E. Burnett, International Game Theory Review 8, 437 (2006).
* [16] Y. Vorobeychik, M. P. Wellman, Seventh International Joint Conference on Autonomous Agents and Multiagent Systems (2008), pp. 1055–1062.
* [17] Y. Vorobeychik, D. M. Reeves, International Journal of Electronic Business 6, 172 (2008).
* [18] Y. Vorobeychik, Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (2009), pp. 583–590.
* [19] D. Fudenberg, D. K. Levine, The Theory of Learning in Games (The MIT Press, 1998).
* [20] T. J. Lambert III, M. Epelman, R. L. Smith, Operations Research 53, 477 (2005).
* [21] M. Epelman, A. Ghate, R. L. Smith, Computers and Operations Research (2011). Forthcoming.
* [22] E. Koutsoupias, C. Papadimitriou, Sixteenth Annual Conference on Theoretical Aspects of Computer Science (1999), pp. 404–413.
* [23] T. Roughgarden, Selfish Routing and the Price of Anarchy (The MIT Press, 2005).
* [24] N. Nisan, T. Roughgarden, E. Tardos, V. V. Vazirani, eds., Algorithmic Game Theory (Cambridge University Press, 2007).
* [25] E. Anshelevich, et al., SIAM Journal on Computing 38, 1602 (2008).
## Appendix A Supporting Online Material
### A.1 Characterization of $1$\- and $N$-player Settings in the 1-D Case
We begin the analysis by considering the two extremes, $m=1$ and $m=N$, in a
simpler model where the forest fire grid is one-dimensional (i.e., a line) and
the lightning distribution is uniform. This analysis will provide some initial
findings and intuition that we then carry over into the more complex two-
dimensional case.
Without loss of generality, let $k$ be the length of a sequence of planted
cells (1’s) followed by $l$ unplanted cells (0’s) and suppose that $1\ll k\ll
N$.
First, consider the case with $m=1$ and assume that $c<1-1/N$. Assume that $k$
is identical for all sequences of 1’s (when $k\ll N$, this is almost without
loss of generality, since 1’s can be swapped, keeping the density constant,
without changing the utility) and note that in an optimal solution $l=1$. The
utility of the player (and global utility) is then
$u_{i}(k)=W(k)=\sum_{g\in G}(\Pr\\{\,g=1\mid
s\,\\}-c)s_{g}=N\rho(k)\left(1-\frac{k}{N}-c\right),$
where $\rho(k)=k/(k+1)$. This function is concave in $k$. To see this rewrite
$u_{i}$ as
$u_{i}(k)=\frac{Nk(1-c)-k^{2}}{k+1}.$
Taking the first derivative, we get
$u_{i}^{\prime}=\frac{N(1-c)-k^{2}-2k}{(k+1)^{2}}.$
Differentiating again we get
$u_{i}^{\prime\prime}=-\frac{2(1+N(1-c))}{(k+1)^{3}}<0,$
and, hence, $u_{i}$ is concave in $k$.
Thus, treating $k$ as a continuous variable, which is approximately correct
when $k\gg 1$, the first-order condition gives us the necessary and sufficient
condition for the optimal $k^{*}$. This condition is equivalent to
$k^{2}+2k-N(1-c)=0.$
The solutions to this quadratic equation are
$k=\frac{-2\pm\sqrt{4+4N(1-c)}}{2}.$
Since $k$ must be positive, we can discard one of the solutions, leaving us
with
$k^{*}=\sqrt{N(1-c)+1}-1.$
Evaluating $\rho$ and $W$ at $k^{*}$, we get
$\rho(k^{*})=\frac{\sqrt{N(1-c)+1}-1}{\sqrt{N(1-c)+1}}$
and
$u_{i}(k^{*})=W(k^{*})=\rho(k^{*})(N(1-c)-\sqrt{N(1-c)+1}+1).$
We can observe that $\rho(k^{*})$ tends to 1 as $N$ grows, while $W(k^{*})$
tends to $N(1-c)$.
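The closed form for $k^{*}$ can be sanity-checked by brute force over integer $k$ (an illustrative sketch, not part of the original analysis):

```python
import math

def u(k, N, c):
    # u_i(k) = N * rho(k) * (1 - k/N - c), with rho(k) = k/(k+1)
    return N * (k / (k + 1)) * (1 - k / N - c)

def brute_force_k(N, c):
    # Exhaustively maximize u over integer run lengths 1..N-1.
    return max(range(1, N), key=lambda k: u(k, N, c))
```

Since $u_{i}$ is concave in $k$, the integer maximizer lies within one unit of $k^{*}=\sqrt{N(1-c)+1}-1$.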
Consider next the case with $m=N$. While there are many equilibria, we can
precisely characterize upper and lower bounds on $k$ and $l$, and,
consequently, the set of equilibria. First, we note that $l$ must be either 1
or 2; otherwise, by the assumption that $c<1-1/N$, the player governing any
grid cell that is not adjacent to a sequence of 1’s will prefer to plant a
tree. Formally, we first note that by definition, $l>0$. Suppose $l>2$ and,
thus, there is a player not planting a tree who is not adjacent to another
with $s_{g}=1$. Then his utility from planting is $1-1/N-c$, and he (weakly)
prefers not to plant as long as $1-1/N-c\leq 0$ or $c\geq 1-1/N$, which is
ruled out by our assumption that $c<1-1/N$.
Second, we can get an upper bound on $k$ by considering the incentive of a
player that is part of the sequence of 1’s. This player will prefer to plant
as long as $1-k/N-c\geq 0$, giving us $k^{E}\leq N(1-c)$. A well-known measure
of the impact of equilibrium behavior on global utility is the “price of
anarchy”, the ratio of optimal global utility, here $W(k^{*})$, to global
utility at the worst-case equilibrium [22, 23, 24]. The upper bound on $k^{E}$
gives us the worst-case equilibrium from the perspective of global utility,
with $W(k^{E})=0$ resulting in an infinite price of anarchy (that is, global
utility in the worst-case equilibrium is arbitrarily worse than optimal for a
large enough number of players and grid cells $N$).
Looking now at the lower bound on $k^{E}$, we can distinguish two cases, $l=1$
and $l=2$. When $l=2$, either player not planting a tree prefers not to plant
as long as $1-(k+1)/N-c\leq 0$, and, therefore, $k^{E}\geq N(1-c)-1$. For
$l=1$, suppose that the two sequences of 1’s on either side of the non-
planting player have lengths $k$ and $k^{\prime}$. The player will prefer not
to plant as long as $1-(k+k^{\prime}+1)/N-c\leq 0$, where we are adding $k$
and $k^{\prime}$ since he will be joining the two sequences together if he
plants. This gives us $k+k^{\prime}\geq N(1-c)-1$. Since we are after a lower
bound, suppose without loss of generality that $k\leq k^{\prime}$. We then get
$k^{E}\geq[N(1-c)-1]/2$. It is instructive to apply now another measure of the
impact of equilibrium behavior, the “price of stability”, defined as the ratio
of optimal global utility to global utility at the _best-case_ equilibrium
[24, 25]. The best-case equilibrium in our case has $l=1$ and
$k^{E}=[N(1-c)-1]/2$, and the asymptotic price of stability is 2.
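The asymptotic price of stability of 2 can be checked numerically (a sketch using the closed forms above; the function names are ours):

```python
import math

def W(k, N, c):
    # Global utility in the 1-D model: W(k) = N * rho(k) * (1 - k/N - c)
    return N * (k / (k + 1)) * (1 - k / N - c)

def price_of_stability(N, c):
    """Ratio of optimal welfare to welfare at the best-case equilibrium,
    using k* = sqrt(N(1-c)+1) - 1 and k^E = (N(1-c) - 1) / 2 with l = 1."""
    k_star = math.sqrt(N * (1 - c) + 1) - 1
    k_eq = (N * (1 - c) - 1) / 2
    return W(k_star, N, c) / W(k_eq, N, c)
```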
Now we compare the density at equilibrium and at the optimal configuration. We
are looking for the conditions under which the equilibrium density is strictly
higher. Notice that it certainly isn’t always the case. For example, if
$c>1-1/N$, no trees will be planted at all in equilibrium or in an optimal
configuration. Consequently, the density will be 0 in both cases. When
$N(1-c)\gg 1$ and $N$ is large, the density in the best-case equilibrium is
$\rho(k^{E})=\frac{N(1-c)-1}{N(1-c)+1}.$
Thus, $\rho(k^{E})>\rho(k^{*})$ iff
$\frac{N(1-c)-1}{N(1-c)+1}>\frac{\sqrt{N(1-c)+1}-1}{\sqrt{N(1-c)+1}}$
$\Leftrightarrow(N(1-c)-1)\sqrt{N(1-c)+1}>(N(1-c)+1)\left(\sqrt{N(1-c)+1}-1\right)$
$\Leftrightarrow 2\sqrt{N(1-c)+1}<N(1-c)+1$
$\Leftrightarrow 4(N(1-c)+1)<(N(1-c))^{2}+2N(1-c)+1$
$\Leftrightarrow 2N(1-c)+3<(N(1-c))^{2}.$
Solving the corresponding quadratic inequality gives us the condition that
$N(1-c)>3.$
Since we assume $N(1-c)\gg 1$ throughout, we effectively have that
$\rho(k^{E})>\rho(k^{*})$ under the assumptions operational here.
### A.2 Equilibria When $c=0$ and $m=N$
Suppose that each player controls a single grid cell, i.e., $m=N$. When cost
of planting trees is 0, there are only two Nash equilibria: one with every
player planting a tree, and another with a single player not planting. Indeed,
planting is a weakly dominant strategy for every player. To see this, suppose
that the number of players planting is $z<N-1$, and consider a player who is
not planting a tree. If he decides to plant, the probability of his tree
burning down is at most $(N-1)/N<1$, and so the player has a strict incentive
to plant. Furthermore, since there is no cost of planting, any player who is
planting a tree does not lose anything by doing so. Thus, every player
strictly prefers to plant as long as $z<N-1$, and weakly prefers to plant when
$z=N-1$ (in which case expected utility is zero whether he plants or not).
Finally, every player planting is clearly an equilibrium, and the only other
equilibrium has a single player who does not plant (since he is indifferent,
and every other player strictly prefers to plant if that player does not).
### A.3 Details of Equilibrium Approximation
We now present the details of the algorithms we used to approximate
equilibria. First, we show the “outer loop” algorithm for best response
dynamics as Algorithm 1.
Algorithm 1 BestResponseDynamics($T_{br}$, $p_{player}$)
$s_{g}\leftarrow 0\ \forall g\in G$
for $n=1$ to $T_{br}$ do
for $i=1$ to $m$ do
Fix $s_{-i}$
if RAND $\leq p_{player}$ then
$\hat{s}_{i}\leftarrow\mathrm{OPT}(s_{-i})$
else
$\hat{s}_{i}\leftarrow s_{i}$
end if
$s_{i}\leftarrow\hat{s}_{i}$
end for
end for
The parameter $T_{br}$ varies depending on the number of players. For example,
if there is just one player, $T_{br}=1$, whereas $T_{br}=50$ when $m=N$. The
variation is a consequence of extensive experimentation looking at sensitivity
of results to increasing the number of iterations. Our values are high enough
that results do not change appreciably when the number of iterations
increases. We set $p_{player}=0.9$.
For each player selected by the random biased coin flip (“RAND” is a uniform
random number on the unit interval), the algorithm calls OPT() to approximate
the best response of the player to a fixed grid configuration chosen by the
others. Our choice for this procedure is sampled fictitious play, which is
shown in pseudocode as Algorithm 2.
Algorithm 2 OPT($s_{-i},T_{opt},p_{cell},\alpha,h$)
$s_{g}\leftarrow 0\ \forall g\in G_{i}$
$H\leftarrow()$ // Initialize history of past choices $H$ to an empty list
for $n=1$ to $T_{opt}$ do
  $s^{\prime}_{i}\leftarrow\mathrm{ChooseActions}(i,\alpha,H)$
  $\hat{s}_{i}\leftarrow s_{i}$
  for $g\in G_{i}$ do
    if RAND $\leq p_{cell}$ OR $|G_{i}|=1$ then
      if $u_{i}(s_{g}=1,s^{\prime}_{i},s_{-i})>u_{i}(s_{g}=0,s^{\prime}_{i},s_{-i})$ then
        $\hat{s}_{g}\leftarrow 1$
      else
        $\hat{s}_{g}\leftarrow 0$
      end if
    end if
  end for
  $\mathrm{append\\_back}(H,\hat{s}_{i})$ // Add $\hat{s}_{i}$ at the end of list $H$
  if $|H|>h$ then
    $\mathrm{remove\\_front}(H)$ // Remove the first element
  end if
  if $u_{i}(\hat{s}_{i},s_{-i})>u_{i}(s_{i},s_{-i})$ then
    $s_{i}\leftarrow\hat{s}_{i}$
  end if
end for
return $s_{i}$
Here, RAND() when called with a list argument picks a uniformly random element
of the list. $u_{i}()$ is a call to an oracle (a simulator) to determine $i$’s
utility in a particular grid configuration.
$u_{i}(s_{g}=a,s^{\prime}_{i},s_{-i})$ denotes utility when $i$ plays
according to $s^{\prime}_{i}$, except he sets $s_{g}=a$. We set history size
$h=1$ and exploration parameter $\alpha=0$. Thus, each grid cell at iteration
$t$ is always best-responding to the grid configuration from iteration $t-1$.
We set $p_{cell}=\max\\{0.05,1/N_{i}\\}$; thus, on average, one grid cell
best-responds in each iteration. Our parameters for both the optimization
routine and the best response routine were chosen based on extensive
experimentation.
Algorithm 2 uses the subroutine ChooseActions(), which is specified as
Algorithm 3.
Algorithm 3 ChooseActions($i,\alpha,H$)
for $g\in G_{i}$ do
  if RAND $\leq\alpha$ OR $H=()$ then
    $s_{g}\leftarrow\mathrm{RAND}((0,1))$
  else
    $s_{g}\leftarrow\mathrm{RAND}(H)_{g}$
  end if
end for
return $s_{i}$
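A minimal executable sketch of the inner loop (Algorithms 2 and 3). The names are ours, the utility oracle is abstracted into a callable, and a strategy is represented as a dict mapping the player's cells to {0, 1}:

```python
import random

def choose_actions(cells, alpha, H):
    """Algorithm 3 (sketch): explore with probability alpha, otherwise copy
    each cell's value from a uniformly random profile in the history H."""
    s = {}
    for g in cells:
        if random.random() <= alpha or not H:
            s[g] = random.choice((0, 1))   # explore: uniform in {0, 1}
        else:
            s[g] = random.choice(H)[g]     # replay cell g of a past profile
    return s

def opt(cells, utility, T_opt, p_cell, alpha, h):
    """Algorithm 2 (sketch): sampled fictitious play over one player's cells.
    utility(s) gives the player's payoff for own profile s, with the other
    players' configuration held fixed inside the oracle."""
    s_i = {g: 0 for g in cells}
    H = []                                  # bounded history of past choices
    for _ in range(T_opt):
        s_prime = choose_actions(cells, alpha, H)
        s_hat = dict(s_i)
        for g in cells:
            if random.random() <= p_cell or len(cells) == 1:
                # best-respond on cell g against s_prime on the other cells
                better = utility({**s_prime, g: 1}) > utility({**s_prime, g: 0})
                s_hat[g] = 1 if better else 0
        H.append(s_hat)
        if len(H) > h:
            H.pop(0)                        # keep only the last h profiles
        if utility(s_hat) > utility(s_i):
            s_i = s_hat
    return s_i
```

Setting `alpha = 0` and `h = 1`, as in the text, makes each cell best-respond to the configuration produced in the previous iteration.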
In Table 1 we specify the number of iterations used for the outer loop (best
response dynamics) and inner loop (approximate optimization).
# players | $T_{br}$ | $T_{opt}$
---|---|---
1 | 1 | 200
4 | 5 | 120
16 | 20 | 80
64 | 20 | 80
256 | 20 | 80
1024 | 40 | 80
4096 | 20 | 35
16384 | 50 | 1
Table 1: Numbers of iterations of best response dynamics and sampled
fictitious play in the 2nd and 3rd column respectively.
### A.4 Relationship Between Lightning Distribution and Empty Cells
Recall our measure that captures the relationship between the distribution of
empty cells (fire breaks) on the grid and the lightning distribution:
$C=\frac{\sum_{g\in G}p_{g}(1-s_{g})}{1-\rho}.$
We now formally demonstrate that (a) $C>1$ when empty cells have the largest
probability of lightning and (b) $E[\sum_{g\in G}p_{g}(1-s_{g})]=1-\rho$ when
empty cells are chosen uniformly randomly on the grid. First, suppose that the
cells that have the $L$ highest lightning probabilities on the grid are empty,
and let probabilities be ranked from highest to lowest such that $p_{l}$
indicates $l$th highest probability of lightning on the corresponding grid
cell $g_{l}$. Further, suppose that no two $p_{l}$ are the same. Then
$\sum_{g\in
G}p_{g}(1-s_{g})=\sum_{l=1}^{N}p_{l}(1-s_{g_{l}})=\sum_{l=1}^{L}p_{l}>\frac{L}{N},$
where the last inequality holds because the $L$ largest of $N$ values summing
to one must exceed $L$ times their average $1/N$ unless all the values are
equal, which the no-ties assumption rules out.
Since $\frac{L}{N}=1-\rho$, the result follows.
Next we show that if empty cells are uniformly distributed, $E[\sum_{g\in
G}p_{g}(1-s_{g})]=1-\rho$. First, note that $1-\rho=L/N$ if $L$ cells are
empty and the rest have a tree. Now, suppose that each cell is empty with
probability $q=L/N$. Then
$E[\sum_{g\in G}p_{g}(1-s_{g})]=\sum_{g\in G}p_{g}E[1-s_{g}]=q\sum_{g\in
G}p_{g}=q.$
Since $q=L/N=1-\rho$, the result follows.
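Both facts are easy to confirm numerically. The sketch below (all variable names ours) draws an arbitrary lightning distribution, checks (a) directly, and checks (b) by Monte Carlo:

```python
import random

def coverage(p, s):
    """C = sum_g p_g (1 - s_g) / (1 - rho), where rho is the tree density."""
    rho = sum(s) / len(s)
    return sum(pg * (1 - sg) for pg, sg in zip(p, s)) / (1 - rho)

random.seed(0)
N, L = 1000, 250                           # grid size and number of empty cells
w = [random.random() for _ in range(N)]
total = sum(w)
p = [x / total for x in w]                 # lightning distribution, sums to 1

# (a) empty the L cells with the highest lightning probability: expect C > 1
top = set(sorted(range(N), key=lambda g: p[g], reverse=True)[:L])
C_top = coverage(p, [0 if g in top else 1 for g in range(N)])

# (b) empty cells uniform at random: E[sum_g p_g (1 - s_g)] = L/N = 1 - rho,
#     so the Monte Carlo mean should be close to L/N
trials = 2000
mean_uniform = sum(
    sum(p[g] for g in random.sample(range(N), L)) for _ in range(trials)
) / trials
```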
### A.5 Tree Fines
Another interesting question concerns policy: can imposing a
fine on planting trees alleviate the impact of negative externalities when the
number of players is large? To this end, suppose that $p$ is a penalty for
planting a tree. Global utility is then redefined as
$W(p)=Y(p)-cN\rho(p),$
where $Y(p)$ and $\rho(p)$ are the equilibrium global yield and density
respectively when the true cost of planting a tree is $c$ but each player
perceives it to be $c+p$. As Figure 8 suggests, the results are somewhat
mixed. First, considering just the plot on the right, we note that a small
penalty can have a large impact when $m=N$. Increasing the penalty further,
however, seems to improve global utility only slightly when the number of
players is large. On the other hand, the plot on the left suggests that when
the number of players is small, increasing player costs is at best
ineffective, and at worst may actually lower global utility. In either case,
outcomes never quite reach the optimum, although they come quite close when
the number of players is small. The simple policy of raising costs of players
via fines is therefore a relatively ineffective instrument here, and can at
times be counterproductive.
Figure 8: Variation of global utility $W(p)$ relative to its optimal value, as
a function of penalty amount $p$, with actual cost of planting a tree $c=0$.
Left: “few” players, that is, $m\in\\{4,16,64\\}$. Right: “many” players, that
is, $m\in\\{N/64,N/16,N\\}$.
|
arxiv-papers
| 2011-04-15T16:26:08 |
2024-09-04T02:49:18.285767
|
{
"license": "Public Domain",
"authors": "Yevgeniy Vorobeychik, Jackson Mayo, Robert Armstrong, Joseph Ruthruff",
"submitter": "Yevgeniy Vorobeychik",
"url": "https://arxiv.org/abs/1104.3103"
}
|
1104.3334
|
# Hamiltonicity, independence number, and pancyclicity
Choongbum Lee Department of Mathematics, UCLA, Los Angeles, CA, 90095. Email:
choongbum.lee@gmail.com. Research supported in part by a Samsung Scholarship.
Benny Sudakov Department of Mathematics, UCLA, Los Angeles, CA 90095. Email:
bsudakov@math.ucla.edu. Research supported in part by NSF grant DMS-1101185,
NSF CAREER award DMS-0812005 and by USA-Israeli BSF grant.
###### Abstract
A graph on $n$ vertices is called pancyclic if it contains a cycle of length
$\ell$ for all $3\leq\ell\leq n$. In 1972, Erdős proved that if $G$ is a
Hamiltonian graph on $n>4k^{4}$ vertices with independence number $k$, then
$G$ is pancyclic. He then suggested that $n=\Omega(k^{2})$ should already be
enough to guarantee pancyclicity. Improving on his and some other later
results, we prove that there exists a constant $c$ such that $n>ck^{7/3}$
suffices.
## 1 Introduction
A Hamilton cycle of a graph is a cycle which passes through every vertex of
the graph exactly once, and a graph is called Hamiltonian if it contains a
Hamilton cycle. Determining whether a given graph is Hamiltonian is one of the
central questions in graph theory, and there are numerous results which
establish sufficient conditions for Hamiltonicity. For example, a celebrated
result of Dirac asserts that every graph of minimum degree at least $\lceil
n/2\rceil$ is Hamiltonian. A graph is pancyclic if it contains a cycle of
length $\ell$ for all $3\leq\ell\leq n$. By definition, every pancyclic graph
is Hamiltonian, but it is easy to see that the converse is not true.
Nevertheless, these two concepts are closely related and many nontrivial
conditions which imply Hamiltonicity also imply pancyclicity of a graph. For
instance, extending Dirac’s Theorem, Bondy [2] proved that every graph of
minimum degree at least $\left\lceil n/2\right\rceil$ either is the complete
bipartite $K_{\lceil n/2\rceil,\lfloor n/2\rfloor}$, or is pancyclic.
Moreover, in [3], he made a meta conjecture in this context which says that
almost any non-trivial condition on a graph which implies that the graph is
Hamiltonian also implies that the graph is pancyclic (there may be a simple
family of exceptional graphs).
Let the independence number $\alpha(G)$ of a graph $G$ be the order of a
maximum independent set of $G$. A classical result of Chvátal and Erdős [4]
says that every graph $G$ whose vertex connectivity (denoted $\kappa(G)$) is
at least as large as its independence number is Hamiltonian. Motivated by
Bondy’s metaconjecture, Amar, Fournier, and Germa [1] obtained several results
on the lengths of cycles in a graph $G$ that satisfies the Chvátal-Erdős
condition $\kappa(G)\geq\alpha(G)$, and conjectured that if such a graph $G$
is not bipartite then either $G=C_{5}$, or $G$ contains cycles of length
$\ell$ for all $4\leq\ell\leq n$ (Lou [9] made some partial progress towards
this conjecture). In a similar context, Jackson and Ordaz [7] conjectured
that every graph $G$ with $\kappa(G)>\alpha(G)$ is pancyclic. Keevash and
Sudakov [8] proved that there exists an absolute constant $c$ such that
$\kappa(G)\geq c\alpha(G)$ is sufficient for pancyclicity.
In this paper, we study a relation between Hamiltonicity, pancyclicity, and
the independence number of a graph. Such a relation was first studied by
Erdős [5]. In 1972, he proved a conjecture of Zarins by showing that
every Hamiltonian graph $G$ on $n\geq 4k^{4}$ vertices with $\alpha(G)\leq k$
is pancyclic (see also [6] for a different proof of a weaker bound). Erdős
also suggested that the bound $4k^{4}$ on the number of vertices is probably
not tight, and that the correct order of magnitude should be $\Omega(k^{2})$.
The following graph shows that this, if true, is indeed best possible. Let
$K_{1},\cdots,K_{k}$ be disjoint cliques of size $k-2$, where each $K_{i}$ has
two distinguished vertices $v_{i}$ and $w_{i}$. Let $G$ be the graph obtained
by connecting $v_{i}\in K_{i}$ and $w_{i+1}\in K_{i+1}$ by an edge (here
addition is modulo $k$). One can easily show that this graph is Hamiltonian,
has $k(k-2)$ vertices, and independence number $k$. However, this graph does
not contain a cycle of length $k-1$ (thus is not pancyclic), since every cycle
either is a subgraph of one of the cliques, or contains at least one vertex
from each clique $K_{i}$. The former type of cycle has length at most $k-2$,
and the latter type has length at least $2k$.
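For a small case, these claims about the construction can be checked by brute force. The following sketch (all names ours) builds the graph for $k=5$ and records whether it is Hamiltonian, has independence number exactly $k$, and has no cycle of length $k-1$:

```python
from itertools import combinations

def erdos_graph(k):
    """Lower-bound construction: k cliques K_0,...,K_{k-1} of size k-2, with an
    edge from v_i in K_i to w_{i+1} in K_{i+1} (indices mod k).  Vertex (i, j)
    is the j-th vertex of K_i; j = 0 plays v_i and j = 1 plays w_i."""
    V = [(i, j) for i in range(k) for j in range(k - 2)]
    E = set()
    for i in range(k):
        for a, b in combinations(range(k - 2), 2):
            E.add(frozenset([(i, a), (i, b)]))        # edges inside clique K_i
        E.add(frozenset([(i, 0), ((i + 1) % k, 1)]))  # connector v_i -- w_{i+1}
    return V, E

def independence_number(V, E):
    """Brute force; fine for the small instance below."""
    for r in range(len(V), 0, -1):
        for S in combinations(V, r):
            if all(frozenset(pair) not in E for pair in combinations(S, 2)):
                return r
    return 0

def has_cycle_of_length(V, E, L):
    """Search for a cycle on exactly L vertices (L >= 3)."""
    adj = {v: {u for u in V if frozenset([u, v]) in E} for v in V}
    def grow(start, cur, used):
        if len(used) == L:
            return start in adj[cur]
        return any(grow(start, nxt, used | {nxt})
                   for nxt in adj[cur] if nxt > start and nxt not in used)
    return any(grow(v, v, {v}) for v in V)

k = 5
V, E = erdos_graph(k)
# explicit Hamilton cycle: enter K_i at w_i, cross the clique, leave at v_i
ham = [x for i in range(k) for x in [(i, 1), (i, 2), (i, 0)]]
ham_ok = (len(ham) == len(V) and
          all(frozenset([ham[t], ham[(t + 1) % len(ham)]]) in E
              for t in range(len(ham))))
```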
Recently, Keevash and Sudakov [8] improved Erdős’ result and showed that
$n>150k^{3}$ already implies pancyclicity. Our main theorem further improves
this bound.
###### Theorem 1.1.
There exists a constant $c$ such that for every positive integer $k$, every
Hamiltonian graph on $n\geq ck^{7/3}$ vertices with $\alpha(G)\leq k$ is
pancyclic.
Suppose that one can show that, for some function $f(k)$, every Hamiltonian
graph on $n\geq f(k)$ vertices with $\alpha(G)\leq k$ contains a cycle of
length $n-1$. Then, by iteratively applying this result, one can easily see
that for every constant $C\geq 1$, every Hamiltonian graph on $n\geq Cf(k)$
vertices with $\alpha(G)\leq k$ contains cycles of all lengths between
$\frac{n}{C}$ and $n$. This simple observation was used in both [5] and [8],
where cycles of length linear in $n$ were found by this method, and cycles of
smaller lengths were found by other means. Thus the problem of finding a
cycle of length $n-1$ is a key step in proving pancyclicity. Keevash and
Sudakov suggested that if one is interested only in this problem, then the
bound relating the number of vertices to the independence number can be
significantly improved. More precisely, they asked whether there is an
absolute constant $c$ such that every Hamiltonian graph on $n\geq ck$ vertices
with independence number $k$ contains a cycle of length $n-1$.
Despite the fact that this bound suggested by Keevash and Sudakov is only
linear in $k$, even improving Erdős’ original estimate of $n=\Omega(k^{3})$
was not an easy task. Moreover, as we will explain in the concluding remarks,
currently the bottleneck of proving pancyclicity lies in this step of finding
a cycle of length $n-1$. Thus in order to prove Theorem 1.1, we partially
answer Keevash and Sudakov’s question for the range $n\geq ck^{7/3}$, and
combine this result with tools developed in [8]. Therefore, the main focus of
our paper will be to prove the following theorem.
###### Theorem 1.2.
There exists a constant $c$ such that for every positive integer $k$, every
Hamiltonian graph on $n\geq ck^{7/3}$ vertices with $\alpha(G)\leq k$ contains
a cycle of length $n-1$.
In Section 2, we state a slightly stronger form of Theorem 1.2, and use it to
deduce Theorem 1.1. Then in Sections 3 and 4, we prove the strengthened
version Theorem 1.2. To simplify the presentation, we often omit floor and
ceiling signs whenever these are not crucial and make no attempts to optimize
absolute constants involved.
## 2 Pancyclicity
In order to prove Theorem 1.1, we use the following slightly stronger form of
Theorem 1.2 whose proof will be given in the next two sections.
###### Theorem 2.1.
There exists a constant $c$ such that for every positive integer $k$, every
Hamiltonian graph on $n\geq ck^{7/3}$ vertices with $\alpha(G)\leq k$ contains a cycle
of length $n-1$. Moreover, for an arbitrary fixed set of vertices $W$ of size
$|W|\leq 20k^{2}$, we can find such a cycle which contains all the vertices of
$W$.
As mentioned in the Introduction, Theorem 2.1 will be used to find cycles of
linear length. The following two results from [8, Theorem 1.3 and Lemma 3.2]
allow us to find cycle lengths in the range not covered by Theorem 2.1.
###### Theorem 2.2.
If $G$ is a graph with $\delta(G)\geq 300\alpha(G)$ then $G$ contains a cycle
of length $\ell$ for all $3\leq\ell\leq\delta(G)/81$.
###### Lemma 2.3.
Suppose $G$ is a graph with independence number $\alpha(G)\leq k$ and $V(G)$
is partitioned into two parts $A$ and $B$ such that
1. (i)
$G[A]$ is Hamiltonian,
2. (ii)
$|B|\geq 9k^{2}+k+1$, and
3. (iii)
every vertex in $B$ has at least 2 neighbors in $A$.
Then $G$ contains a cycle of length $\ell$ for all
$2k+1+\left\lfloor\log_{2}(2k+1)\right\rfloor\leq\ell\leq|A|/2$.
Proof of Theorem 1.1. Note that the conclusion is immediate if $k=1$. Thus we
may assume that $k\geq 2$. Let $c$ be the maximum of the constant from
Theorem 2.1 and 300, and let $G$ be a Hamiltonian graph on $n=3ck^{7/3}$
vertices such that $\alpha(G)\leq k$. By repeatedly applying Theorem 2.1 with
$W=\emptyset$, we can find cycles of every length between $ck^{7/3}$ and
$3ck^{7/3}$.
Moreover, as we will see, by carefully using Theorem 2.1 in the previous step,
we can prepare a setup for applying Lemma 2.3. Let $C_{1}$ be the cycle of
length $n-1$ obtained by Theorem 2.1, and let $v_{1}$ be the vertex not
contained in $C_{1}$. We know that $v_{1}$ has at least 2 neighbors in
$C_{1}$. Let $W_{1}$ be arbitrary two vertices out of them. By applying
Theorem 2.1 with $W=W_{1}$, we can find a cycle $C_{2}$ of length $n-2$ which
contains $W_{1}$. Let $v_{2}$ be the vertex contained in $C_{1}$ but not in
$C_{2}$, and let $W_{2}$ be the union of $W_{1}$ and arbitrary two neighbors
of $v_{2}$ in $C_{2}$. We can repeat this $10k^{2}$ times (note that we
maintain $|W|\leq 20k^{2}$) to obtain a cycle $C_{10k^{2}}$ of length
$n-10k^{2}$, and
vertices $v_{1},\cdots,v_{10k^{2}}$ so that each $v_{i}$ has at least 2
neighbors in the cycle $C_{10k^{2}}$. Since $10k^{2}\geq 9k^{2}+k+1$, by Lemma
2.3, $G$ contains a cycle of length $\ell$ for all
$2k+1+\left\lfloor\log_{2}(2k+1)\right\rfloor\leq\ell\leq(n-10k^{2})/2$.
Now we find all the remaining cycle lengths. From the graph $G$, repeatedly
pick a vertex of degree less than $ck^{4/3}$ and remove it together with its
neighbors. Note that since the picked vertices form an independent set in $G$,
at most $k$ vertices will be removed. Therefore, when there are no more
vertices to pick, at least $3ck^{7/3}-k\cdot(ck^{4/3}+1)>ck^{7/3}$ vertices
remain, and the induced subgraph of $G$ on these vertices will be of minimum
degree at least $ck^{4/3}$. Since $ck^{4/3}\geq 300k\geq 300\alpha(G)$, by
Theorem 2.2, $G$ contains a cycle of length $\ell$ for all
$3\leq\ell\leq(c/81)k^{4/3}$.
Noting the inequalities $(n-10k^{2})/2=(3ck^{7/3}-10k^{2})/2\geq ck^{7/3}$
and $(c/81)k^{4/3}\geq 2k+1+\left\lfloor\log_{2}(2k+1)\right\rfloor$, we see
that the three ranges of cycle lengths overlap, and hence $G$ contains cycles
of all possible lengths. ∎
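As a quick sanity check, both closing inequalities can be verified numerically over a range of $k$; the constant $c=300$ below is only an illustrative choice, since the paper does not fix $c$:

```python
import math

c = 300  # illustrative; the proof needs c at least max(constant of Thm 2.1, 300)
ok = True
for k in range(2, 2001):
    n = 3 * c * k ** (7 / 3)
    # Lemma 2.3 reaches up to (n - 10k^2)/2, which must meet the long cycles
    # starting at c k^{7/3}:
    ok = ok and (n - 10 * k ** 2) / 2 >= c * k ** (7 / 3)
    # Theorem 2.2 reaches up to (c/81) k^{4/3}, which must meet the lower end
    # 2k + 1 + floor(log2(2k + 1)) of Lemma 2.3:
    ok = ok and (c / 81) * k ** (4 / 3) >= 2 * k + 1 + math.floor(math.log2(2 * k + 1))
```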
## 3 A structural lemma
In Sections 3 and 4, we will prove Theorem 2.1. Given a Hamiltonian graph on
$n$ vertices, one can easily see that there are many ways one can find a cycle
of length $n-1$, if certain ‘chords’ are present in the graph. Our strategy is
to find such chords that are ‘nicely’ arranged. In particular, in this
section, we consider pairs of chords and the way they cross each other in
order to deduce some structure of our graph. Then in the next section, we
prove the main theorem by considering certain triples of chords, which we call
semi-triangles.
Throughout this section, let $G$ be a fixed graph on $n\geq 80k^{2}$ vertices
such that $\alpha(G)\leq k$, and let $W$ be a fixed set of vertices such that
$|W|\leq 20k^{2}$. Note that the bound on the number of vertices is weaker
than that of Theorem 2.1. The results developed in this section still hold
under this weaker bound, and we only need the stronger bound $n\geq ck^{7/3}$
in the next section. Since our goal is to prove the existence of a cycle of
length $n-1$, assume to the contrary that $G$ does not contain a cycle of
length $n-1$. Under these assumptions, we will prove a structural lemma on the
graph $G$ which will immediately imply a slightly weaker form of Theorem 2.1
where the bound on the number of vertices is replaced by $\Omega(k^{5/2})$. In
the next section, we will apply this structural lemma more carefully to prove
Theorem 2.1.
One of the main ingredients of the proof is the following proposition proved
by Erdős [5], whose idea has its origin in [4].
###### Proposition 3.1.
For all $1\leq i\leq 12k$, $G$ does not have a cycle of length $n-i$
containing $W$ for which all the vertices not in this cycle have degree at
least $13k$.
###### Proof.
Assume that $C$ is the vertex set of a cycle given as above and let
$X=V(G)\setminus C$. We will show that there exists a cycle of length $|C|+1$
which contains $C$. By repeatedly applying the same argument, we can show the
existence of a cycle of length $n-1$. Since this contradicts our hypothesis,
we can conclude that $G$ cannot contain a cycle as above.
Consider a vertex $x\in X$. Since $|X|\leq 12k$ and $\textrm{d}(x)\geq 13k$,
the number of neighbors of $x$ in $C$ is at least $k$. Without loss of
generality, let $C=\\{1,2,\cdots,n-i\\}$, and assume that the vertices are
labeled in the order in which they appear on the cycle. Let
$w_{1},\cdots,w_{k}$ be distinct neighbors of $x$ in $C$. If $x$ is adjacent
to some $w_{a}-1$ (subtraction is modulo $n-i$), then replacing the edge
$\\{w_{a}-1,w_{a}\\}$ of $C$ by the path $w_{a}-1,x,w_{a}$ yields a cycle of
length $n-i+1$ containing $C$. Otherwise, $\\{x,w_{1}-1,\cdots,w_{k}-1\\}$ is
a set of $k+1$ vertices, and since $\alpha(G)\leq k$ there exist two adjacent
vertices $w_{a}-1,w_{b}-1$ with $a<b$. Then $G$ contains a cycle
$x,w_{a},w_{a}+1,\cdots,w_{b}-1,w_{a}-1,w_{a}-2,\cdots,w_{b},x$ of length
$n-i+1$. ∎
In view of Proposition 3.1, we make the following definition.
###### Definition 3.2.
Let a contradicting cycle be a cycle containing $W$, of length $n-i$ for some
$1\leq i\leq 12k$, for which all the vertices not in this cycle have degree at
least $13k$.
Thus Proposition 3.1 is equivalent to saying that $G$ does not contain a
contradicting cycle (under the assumption that $G$ does not contain a cycle of
length $n-1$). By considering several cases, we will show that there always
exists a contradicting cycle, contradicting our assumption that there is no
cycle of length $n-1$ in $G$. The next simple
proposition will provide a set-up for this argument.
###### Proposition 3.3.
$G$ contains at most $13k^{2}$ vertices of degree less than $13k$.
###### Proof.
Assume that there exists a set $U$ of at least $13k^{2}+1$ vertices of degree
less than $13k$, and let $G^{\prime}\subset G$ be the subgraph of $G$ induced
by $U$. Take a vertex of $G^{\prime}$ of degree less than $13k$, remove it and
all its neighbors from $G^{\prime}$, and repeat the process. This produces an
independent set of size at least $\lceil(13k^{2}+1)/13k\rceil=k+1$ which is a
contradiction. ∎
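The greedy step in this proof (repeatedly keeping a vertex of small degree and discarding its neighborhood) is a standard device, and the same argument reappears in the proof of Theorem 1.1. A minimal sketch, with names of our own choosing:

```python
def greedy_independent_set(adj, max_deg):
    """Extract an independent set from a graph given as {v: set of neighbors},
    assuming every vertex has degree < max_deg.  Each step keeps one vertex and
    discards it together with its < max_deg neighbors, so the result has size
    at least ceil(n / max_deg), as in the counting above with max_deg = 13k."""
    remaining = set(adj)
    chosen = []
    while remaining:
        v = remaining.pop()        # any vertex works: induced degrees only shrink
        chosen.append(v)
        remaining -= adj[v]        # discard v's neighbors as well
    return chosen

# demo: a path on 6 vertices, maximum degree 2 < 3, so we get >= 6/3 = 2 vertices
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
ind = greedy_independent_set(path, 3)
```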
Assume that we are given a Hamilton cycle of $G$. Place the vertices of $G$ on
a circle in the plane according to the order they appear in the Hamilton cycle
and label the vertices by elements in $[n]$ accordingly. Consider the
$40k^{2}$ intervals
$[1+(i-1)\lfloor\frac{n}{40k^{2}}\rfloor,i\lfloor\frac{n}{40k^{2}}\rfloor]$
for $i=1,\cdots,40k^{2}$, consisting of consecutive vertices on the cycle.
Keep only those intervals all of whose vertices lie outside $W$ and have
degree at least $13k$. Let $t$ be the number of such intervals and let
$I_{1},I_{2},\cdots,I_{t}$ be these intervals (see Figure 1). By Proposition
3.3, the number of intervals which contain a vertex from $W$ or a vertex of
degree less than $13k$ is at most $|W|+13k^{2}$, and therefore
$t\geq 40k^{2}-13k^{2}-|W|\geq 7k^{2}.$
Figure 1: Intervals $I_{i}$, and the type of edges in the graph $H$.
For each interval $I_{j}$, let $I_{j}^{\prime}$ be the set of the first at
most $k+1$ odd vertices in it (thus $I_{j}^{\prime}$ is the set of all odd
vertices
in $I_{j}$ if $|I_{j}|\leq 2(k+1)$). If there exists an edge inside
$I_{j}^{\prime}$ then since $I_{j}^{\prime}$ lies in an interval of length at
most $2k+2$, we can find a contradicting cycle. Therefore $I_{j}^{\prime}$ is
an independent set of size at least
$\min\\{k+1,\lfloor\frac{n}{80k^{2}}\rfloor\\}$. However, since the
independence number of the graph is at most $k$, the first case
$|I_{j}^{\prime}|=k+1$ gives us a contradiction. Therefore, we may assume that
$|I_{j}^{\prime}|\leq k$, and thus $I_{j}^{\prime}$ lies in an interval of
length at most $2k$.
Consider an auxiliary graph $H$ on the vertex set $[t]$ in which $i,j$ are
adjacent if and only if there exists an edge between $I_{i}^{\prime}$ and
$I_{j}^{\prime}$. Furthermore, color the edges of $H$ with three colors
according to the following rule (see Figure 1).
1. (i)
Red if there exists $x_{1},x_{2}\in I_{i}^{\prime},y_{1},y_{2}\in
I_{j}^{\prime}$ such that $x_{1}<x_{2}$, $y_{1}<y_{2}$ and $x_{1}$ is adjacent
to $y_{1}$, and $x_{2}$ is adjacent to $y_{2}$.
2. (ii)
Blue if not colored red, and there exists $x_{1},x_{2}\in
I_{i}^{\prime},y_{1},y_{2}\in I_{j}^{\prime}$ such that $x_{1}<x_{2}$,
$y_{1}<y_{2}$ and $x_{1}$ is adjacent to $y_{2}$, and $x_{2}$ is adjacent to
$y_{1}$.
3. (iii)
Green if not colored red nor blue.
A red edge in the graph $H$ gives a cycle $x_{1}-y_{1}-x_{2}-y_{2}-x_{1}$,
see Figure 2. The length of the cycle is at least $n-4k$ since each
$I_{i}^{\prime}$ lies in an interval of length at most $2k$, and is at most
$n-2$ since there always exist vertices between $x_{1},x_{2}$ and between
$y_{1},y_{2}$. Moreover, the cycle contains the set $W$ since $W$ does not
intersect the intervals $I_{i}$. Therefore it is a contradicting cycle, and
we may assume that there are no red edges in $H$.
Consider the following drawing of the subgraph of $H$ induced by the blue
edges. First place all the vertices of the graph $G$ on the circle in the
given order. A vertex of $H$, which corresponds to an interval $I_{i}$, is
placed on the circle in the middle of the interval $I_{i}$. Draw a straight
line segment between $I_{i}$ and $I_{j}$ whenever they are joined by a blue
edge. Assume that there exists a crossing in this drawing. Then we obtain a
situation as in Figure 2, which gives the cycle
$x_{1}-y_{2}-x_{3}-y_{4}-y_{1}-x_{2}-y_{3}-x_{4}-x_{1}$. This cycle has
length at least $n-4\cdot 2k\geq n-8k$ and at most $n-4$, hence is a
contradicting cycle.
Figure 2: Contradicting cycles for a red edge, and two crossing blue edges in
$H$.
Therefore, the subgraph of $H$ induced by the blue edges forms a planar graph. This
implies that there exists a subset of $[t]$ of size at least $t/5$ which does
not contain any blue edge (note that here we use the fact that every planar
graph is 5-colorable). By slightly abusing notation, we will only consider
these intervals, and relabel the intervals as $I_{1},\cdots,I_{s}$ where
$s\geq t/5>k^{2}$.
###### Lemma 3.4.
Let $a_{1},\cdots,a_{p}\in[s]$ be distinct integers and let $X_{a_{i}}\subset
I_{a_{i}}^{\prime}$ for all $i$. Then $X_{a_{1}}\cup\cdots\cup X_{a_{p}}$
contains an independent set of size at least
$\sum_{i=1}^{p}|X_{a_{i}}|-\frac{p(p-1)}{2}.$
###### Proof.
The proof of this lemma relies on a fact about green edges in the auxiliary
graph $H$. Assume that there exists a green edge between $i$ and $j$ in $H$.
Then by definition, since the edge is neither red nor blue, there is no
matching of size 2 in $G$ between $I_{i}^{\prime}$ and
$I_{j}^{\prime}$. Therefore there exists a vertex $v$ which covers all the
edges between $I_{i}^{\prime}$ and $I_{j}^{\prime}$.
Now consider the following process of constructing an independent set $J$.
Take $J=\emptyset$ in the beginning. At step $i$, add $X_{a_{i}}$ to the set
$J$. By the previous observation, for $j<i$, all the edges between $X_{a_{i}}$
and $X_{a_{j}}$ can be deleted by removing at most one vertex (either from
$X_{a_{i}}$ or $X_{a_{j}}$). Therefore $J\cup X_{a_{i}}$ can be made into an
independent set by removing at most $i-1$ vertices. By iterating the process,
we can obtain an independent set of size at least
$\sum_{i=1}^{p}\left(|X_{a_{i}}|-(i-1)\right)\geq\sum_{i=1}^{p}|X_{a_{i}}|-\frac{p(p-1)}{2}$.
∎
Remark. As mentioned before, this lemma already implies a weaker version of
Theorem 2.1 where the bound is replaced by $n\geq 240k^{5/2}$. To see this, assume
that we have a graph on at least $240k^{5/2}$ vertices. Take
$X_{i}=I_{i}^{\prime}$ for $i=1,\cdots,\lceil k^{1/2}\rceil$ in this lemma and
notice that $|I_{i}^{\prime}|\geq\min\\{k+1,\lfloor 3k^{1/2}\rfloor\\}$. As we
have seen before, $|I_{i}^{\prime}|=k+1$ cannot happen. On the other hand,
$|I_{i}^{\prime}|=\lfloor 3k^{1/2}\rfloor$ implies the existence of an
independent set of size at least
$\lfloor 3k^{1/2}\rfloor\cdot\lceil k^{1/2}\rceil-\frac{\lceil
k^{1/2}\rceil(\lceil
k^{1/2}\rceil-1)}{2}\geq(3k^{1/2}-1)k^{1/2}-\frac{k+k^{1/2}}{2}>k,$
which gives a contradiction.
## 4 Proof of Theorem 2.1
In this section, we will prove Theorem 2.1 which says that there exists a
constant $c$ such that every Hamiltonian graph on $n\geq ck^{7/3}$ vertices with
$\alpha(G)\leq k$ contains a cycle of length $n-1$. We will first focus on
proving the following relaxed statement: there exists $k_{0}$ such that for
$k\geq k_{0}$, every Hamiltonian graph on $n\geq 960k^{7/3}$ vertices with
$\alpha(G)\leq k$ contains a cycle of length $n-1$. Note that for the range
$k<k_{0}$, there exists a constant $c^{\prime}$ such that
$c^{\prime}k^{7/3}\geq 240k^{5/2}$ for all such $k$, so by the remark at the
end of the previous section, the bound $n\geq c^{\prime}k^{7/3}$ already
implies the conclusion of Theorem 2.1.
Therefore by taking $\max\\{960,c^{\prime}\\}$ as our final constant, the
result we prove in this section will in fact imply Theorem 2.1. By relaxing
the statement as above, we may assume that $k$ is large enough. This will
simplify many calculations. In particular, it allows us to ignore the floor
and ceiling signs in this section.
Now we prove the above relaxed statement using the tools we developed in the
previous section. Assume that $n\geq 960k^{7/3}$ and $k$ is large enough.
Recall that we have independent sets $I_{1}^{\prime},\cdots,I_{s}^{\prime}$
such that $s>k^{2}$ and
$|I_{i}^{\prime}|\geq\lfloor\frac{n}{80k^{2}}\rfloor\geq 12k^{1/3}$ for all
$i$. For each $i$, let $M_{i}$ and $L_{i}$ be the smaller $|I_{i}^{\prime}|/2$
vertices and larger $|I_{i}^{\prime}|/2$ vertices of $I_{i}^{\prime}$ in the
cycle order given in the previous section, and call them the main set and
leftover set, respectively. Note that $M_{i}$ and $L_{i}$ both have size at
least $6k^{1/3}$. For a vertex $v$, call a set $M_{j}$ (or an index $j$) a
neighboring main set of $v$ if $v$ has a neighbor in $M_{j}$.
###### Lemma 4.1.
There exists a subcollection of indices $S\subset[s]$ such that the following
holds. For every $i\in S$, the set $M_{i}$ contains at least $3k^{1/3}$
vertices which each have at least $k$ neighboring main sets whose indices lie
in $S$.
###### Proof.
In order to find the set of indices $S$ described in the statement, consider
the process of removing the main sets which do not satisfy the condition one
by one. If the process ends before we run out of sets, then the remaining
indices will satisfy the condition.
Let $J=\emptyset$. Pick the first set $M_{i}$ which has been removed. It
contains fewer than $3k^{1/3}$ vertices which have at least $k$ neighboring
main sets. Since there are at least $6k^{1/3}$ vertices in $M_{i}$, we can
pick $3k^{1/3}$ vertices in $M_{i}$ which have fewer than $k$ neighboring
main sets and add them to $J$. For each such vertex added to $J$, remove all
of its neighboring main sets. In this way, at each step we increase the size
of $J$ by $3k^{1/3}$ and remove at most $1+(k-1)\cdot 3k^{1/3}$ main sets.
Now pick the first main set among the remaining ones, and repeat what we have
done to further increase $J$.
Assume that in the end, there are no remaining sets (if this is not the case,
then we have found our set $S$). Note that $J$ is an independent set by
construction, and since $s>k^{2}$, the size of it will be at least
$3k^{1/3}\cdot\frac{k^{2}}{1+3k^{1/3}\cdot(k-1)}>k.$
This gives a contradiction and concludes the proof since the independence
number of the graph is at most $k$. ∎
From now on we will only consider sets which have indices in $S$. Let a
semi-triangle be a sequence of three indices $(p,q,r)$ in $S$ which lie in
clockwise order on the cycle and satisfy one of the following two conditions
(see Figure 3).
1. (i)
Type A : there exists $x_{1},x_{2}\in I_{p}^{\prime},y_{1},y_{2}\in
I_{q}^{\prime},z_{1},z_{2}\in I_{r}^{\prime}$ such that
$x_{1}<x_{2},y_{1}<y_{2},z_{1}<z_{2}$ and
$\\{x_{1},z_{1}\\},\\{x_{2},y_{1}\\},\\{y_{2},z_{2}\\}\in E(G)$. Moreover,
there exists at least one set $I_{i}^{\prime}$ with $i\in S$ in the arc
starting at $p$ and ending at $q$ (traversing clockwise).
2. (ii)
Type B : there exists $x_{1},x_{2}\in I_{p}^{\prime},y_{1},y_{2}\in
I_{q}^{\prime},z_{1},z_{2}\in I_{r}^{\prime}$ such that
$x_{1}<x_{2},y_{1}<y_{2},z_{1}<z_{2}$ and
$\\{x_{1},y_{1}\\},\\{x_{2},z_{1}\\},\\{y_{2},z_{2}\\}\in E(G)$.
Note that $(p,q,r)$ being a semi-triangle does not necessarily imply that
$(q,r,p)$ is also a semi-triangle. Semi-triangles are constructed so that
‘chords’ intersect in a prescribed way. This arrangement of chords will
allow us to find contradicting cycles, once we are given certain semi-
triangles in our graph. As an instance, one can see that a semi-triangle of
Type B contains a cycle $x_{1}-y_{1}-x_{2}-z_{1}-y_{2}-z_{2}-x_{1}$, see
Figure 3. Recall that each set $I_{j}^{\prime}$ lies in a consecutive interval
of length at most $2k$, and thus the length of the cycle is at least $n-6k$.
Moreover, since each set $I_{j}^{\prime}$ is defined as the set of odd
vertices in $I_{j}$, the length of the cycle is at most $n-3$ (it must miss
vertices between $x_{1}$ and $x_{2}$, $y_{1}$ and $y_{2}$, and $z_{1}$ and
$z_{2}$). Finally, since all the intervals $I_{j}$ do not intersect $W$, the
cycle is a contradicting cycle. Therefore we may assume that no such semi-
triangle exists. We will later see that one can find a contradicting cycle
even if Type A semi-triangles intersect in a certain way.
The next lemma shows that the graph $G$ contains many semi-triangles of Type A.
Figure 3: Semi-triangles of Type A and Type B respectively.
###### Lemma 4.2.
Let $M_{p}$ be a fixed main set, and let $S^{\prime}\subset S$ be a set of
indices such that at least $k^{1/3}$ vertices in $M_{p}$ have at least $k/3$
neighboring main sets in $S^{\prime}$. Then there exists a semi-triangle
$(p,q_{1},q_{2})$ of Type A such that $q_{1},q_{2}\in S^{\prime}$.
###### Proof.
Let $M_{p}$ and $S^{\prime}$ be given as in the statement. Among the sets
$M_{x}$ with indices in $S^{\prime}$, let $M_{i}$ be the closest one to
$M_{p}$ in the clockwise direction. To make sure that we get a semi-triangle
of Type A, we will remove $i$ from $S^{\prime}$ and only consider the set
$S^{\prime\prime}=S^{\prime}\setminus\\{i\\}$. Thus each of the given
vertices has at least $k/3-1$ neighboring main sets in $S^{\prime\prime}$.
Arbitrarily select $k^{1/3}$ vertices in $M_{p}$ which have at least $k/3-1$
neighboring main sets in $S^{\prime\prime}$. Since for large $k$ we have
$k^{1/3}\cdot k^{1/3}\leq(k/3)-1$, we can assign $k^{1/3}$ neighboring main
sets to each selected vertex so that the assigned sets are distinct for
different vertices. Then for a selected vertex $v\in M_{p}$, let $J_{v}$ be
the union of the leftover sets $L_{x}$ corresponding to the $k^{1/3}$ main
sets $M_{x}$ assigned to $v$. Since each set $L_{x}$ has size at least
$6k^{1/3}$, by Lemma 3.4, $J_{v}$ contains an independent set of size at least
$k^{1/3}\cdot 6k^{1/3}-k^{2/3}/2\geq(11/2)k^{2/3}$. Denote this independent
set by $J_{v}^{\prime}$.
Figure 4: Constructing semi-triangles of Type A and Type B, respectively.
Since the sets $J_{v}^{\prime}$ are disjoint for different vertices, we have
$|\bigcup_{v\in M_{p}}J_{v}^{\prime}|\geq(11/2)k^{2/3}\cdot k^{1/3}\geq k+1$.
Therefore, by the restriction on the independence number there exists an edge
between $J_{v_{1}}^{\prime}$ and $J_{v_{2}}^{\prime}$ for two distinct
vertices $v_{1}$ and $v_{2}$ (the edge cannot be within one set
$J_{v}^{\prime}$ since $J_{v}^{\prime}$ is an independent set for all $v$).
Let $M_{q_{1}}$ be the main set in which the neighborhood of $v_{1}$ lies,
and similarly define $M_{q_{2}}$ so that there exists an edge between
$L_{q_{1}}$ and $L_{q_{2}}$. Depending on the relative position of
$M_{q_{1}},M_{q_{2}}$ and $M_{p}$ on the cycle, the edge $\\{v_{1},v_{2}\\}$
will give rise to a semi-triangle of Type A or B, see Figure 4 (note that the
additional condition for a semi-triangle of Type A is satisfied because we
removed the index $i$ in the beginning). Since we know that there is no
semi-triangle of Type B, it must be a semi-triangle of Type A. ∎
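The counting bounds used in this proof can be sanity-checked numerically; a minimal sketch (the sample values of $k$ are arbitrary):

```python
# Sanity check of the counting bounds in the proof of Lemma 4.2.
# The sample values of k are arbitrary; all bounds hold for large k.
for k in [10 ** 3, 10 ** 4, 10 ** 5]:
    # Assigning k^(1/3) distinct main sets to each of k^(1/3) selected
    # vertices requires k^(1/3) * k^(1/3) <= k/3 - 1.
    assert k ** (1 / 3) * k ** (1 / 3) <= k / 3 - 1
    # Lemma 3.4 bound: k^(1/3) * 6k^(1/3) - k^(2/3)/2 = (11/2) k^(2/3).
    lower = k ** (1 / 3) * 6 * k ** (1 / 3) - k ** (2 / 3) / 2
    assert abs(lower - 5.5 * k ** (2 / 3)) < 1e-9 * lower
    # Union over the k^(1/3) selected vertices: (11/2) k^(2/3) * k^(1/3) >= k + 1.
    assert 5.5 * k ** (2 / 3) * k ** (1 / 3) >= k + 1
print("all bounds hold")
```

The last bound is exactly where the restriction $\alpha(G)\leq k$ forces an edge between two of the independent sets $J_{v_{1}}^{\prime}$ and $J_{v_{2}}^{\prime}$.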
In particular, Lemma 4.2 implies the existence of a semi-triangle $(p,q,r)$ of
Type A. Let the length of a Type A semi-triangle be the number of sets
$I_{i}^{\prime}$ with $i\in S$ in the arc that starts at $p$ and ends at $q$
(traversed clockwise). Among all semi-triangles of Type A, consider one of
minimum length and let this semi-triangle be $(p,q,r)$. By definition, every
semi-triangle has length at least 1, and thus there exists an index in $S$ in
the arc starting at $p$ and ending at $q$ (traversed clockwise). Let $i\in S$
be such an index which is closest to $p$ (see Figure 5).
Figure 5: Two overlapping semi-triangles which give a contradicting cycle.
Now consider the sets of indices $S_{1},S_{2},S_{3}\subset S$ such that $S_{1}$
is the set of indices between $p$ and $q$, $S_{2}$ is the set of indices
between $q$ and $r$, and $S_{3}$ is the set of indices between $r$ and $p$
along the circle, all in clockwise order (see Figure 3). By the pigeonhole
principle and the construction of the indices $S$ in Lemma 4.1, there exists at
least one set among $S_{1},S_{2},S_{3}$ such that at least $k^{1/3}$ vertices
of $M_{i}$ have at least $k/3$ neighboring main sets inside it.
If this set is $S_{1}$, then by Lemma 4.2 there exists a Type A semi-triangle
which is completely contained in the arc between $p$ and $q$, and thus has
smaller length than the semi-triangle $(p,q,r)$. Since this is impossible, we
may assume that the set mentioned above is either $S_{2}$ or $S_{3}$. In
either case, by Lemma 4.2 we can find a Type A semi-triangle $(i,j,k)$
which together with $(p,q,r)$ gives a contradicting cycle, see Figure 5
(recall that each set $I_{i}^{\prime}$ lies in a consecutive interval of
length at most $2k$, and thus the length of this cycle is at least $n-12k$ and
at most $n-6$). This shows that our initial assumption, that $G$ does not
contain a cycle of length $n-1$, cannot hold. This completes the proof of
Theorem 2.1.
## 5 Concluding Remarks
In this paper we proved that there exists an absolute constant $c$ such that
if $G$ is a Hamiltonian graph with $n\geq ck^{7/3}$ vertices and
$\alpha(G)\leq k$, then $G$ is pancyclic. The main ingredient of the proof was
Theorem 1.2, which partially answers a question of Keevash and Sudakov, and
tells us that under the same condition as above, $G$ contains a cycle of
length $n-1$. It seems very likely that if one can answer Keevash and
Sudakov’s question, even for $n=\Omega(k^{2})$, then one can also resolve
Erdős’ question, by using a similar approach to that of Section 2 (see Theorem
2.1, which is a strengthened version of Theorem 1.2).
Acknowledgement. We would like to thank Peter Keevash for stimulating
discussions.
## References
* [1] D. Amar, I. Fournier, A. Germa, Pancyclism in Chvátal-Erdős graphs, Graphs Combin. 7 (1991), 101–112.
* [2] J.A. Bondy, Pancyclic graphs I, J. Combin. Theory Ser. B 11 (1971), 80–84.
* [3] J.A. Bondy, Pancyclic graphs: recent results, in: Infinite and Finite Sets, Colloq. Math. Soc. János Bolyai, Keszthely, Hungary, 1973, pp. 181–187.
* [4] V. Chvátal and P. Erdős, A note on Hamiltonian circuits, Discrete Math. 2 (1972), 111–113.
* [5] P. Erdős, Some problems in graph theory, in: Hypergraph Seminar, Ohio State Univ., Columbus, Ohio, 1972, Lecture Notes in Math., vol. 411, Springer, Berlin, 1974, pp. 187–190.
* [6] E. Flandrin, H. Li, A. Marczyk, and M. Wozniak, A note on pancyclism of highly connected graphs, Discrete Math. 286 (2004), 57–60.
* [7] B. Jackson, O. Ordaz, Chvátal-Erdős conditions for paths and cycles in graphs and digraphs: a survey, Discrete Math. 84 (1990), 241–254.
* [8] P. Keevash and B. Sudakov, Pancyclicity of Hamiltonian and highly connected graphs, J. Combin. Theory Ser. B 100 (2010), 456–467.
* [9] D. Lou, The Chvátal-Erdős condition for cycles in triangle-free graphs, Discrete Math. 152 (1996), 253–257.
---
[arXiv:1104.3334 — Choongbum Lee and Benny Sudakov; submitted 2011-04-17; license: CC BY 3.0]
---
arXiv:1104.3360
# Motor-driven Dynamics of Cytoskeletal Filaments in Motility Assays
Shiladitya Banerjee Physics Department, Syracuse University, Syracuse, NY
13244, USA M. Cristina Marchetti Physics Department & Syracuse Biomaterials
Institute, Syracuse University, Syracuse, NY 13244, USA Kristian Müller-
Nedebock Institute of Theoretical Physics/Department of Physics, Stellenbosch
University, Matieland 7602, South Africa
###### Abstract
We model analytically the dynamics of a cytoskeletal filament in a motility
assay. The filament is described as a rigid rod free to slide in two dimensions.
The motor proteins consist of polymeric tails tethered to the plane and
modeled as linear springs and motor heads that bind to the filament. As in
related models of rigid and soft two-state motors, the binding/unbinding
dynamics of the motor heads and the dependence of the transition rates on the
load exerted by the motor tails play a crucial role in controlling the
filament’s dynamics. Our work shows that the filament effectively behaves as a
self-propelled rod at long times, but with non-Markovian noise sources arising
from the coupling to the motor binding/unbinding dynamics. The effective
propulsion force of the filament and the active renormalization of the various
friction and diffusion constants are calculated in terms of microscopic motor
and filament parameters. These quantities could be probed by optical force
microscopy.
There has recently been renewed interest in motility assays where semiflexible
actin filaments are driven to slide over a “bed” of myosin molecular motors.
Recent experiments at high actin density have revealed that the collective
behavior of this simple active system is very rich, with propagating density
waves and large-scale swirling motion Schaller et al. (2010); Butt et al.
(2010), not unlike those observed in dense bacterial suspensions Copeland and
Weibel (2009). In an actin motility assay the polymeric tails of myosin motor
proteins are anchored to a surface, while their heads can bind to actin
filaments Riveline et al. (1998). Once bound, the motor head exerts forces and
drives the filament’s motion. This system provides possibly the simplest
realization of an active system that allows detailed semi-microscopic
modeling.
Stochastic models of the collective action of motor proteins on cytoskeletal
filaments in _one dimension_ have been considered before by several authors,
with emphasis on the acto-myosin system in muscles and on the mitotic spindle
Guérin et al. (2010a). When working against an elastic load, the motor
assemblies have been shown to drive periodic spontaneous activity in the form
of oscillatory instabilities, which in turn have been observed ubiquitously in
a variety of biological systems Jülicher and Prost (1997); Grill et al.
(2005); Günther and Kruse (2007); Vilfan and Frey (2005); Camalet and Jülicher
(2000). These instabilities arise in the model from the collective action of
the motors and the breaking of detailed balance in their dynamics and manifest
themselves as a negative effective friction of the filament. When free to
slide under the action of an external force, the filament can exhibit
bistability that manifests itself as hysteresis in the force velocity-curve
Jülicher and Prost (1995); Badoual et al. (2002). A large body of earlier work
has modeled the motors as rigid two-state systems attached to a backbone and
bound by the periodic potential exerted by the filament on the motor head
Jülicher and Prost (1995, 1997); Plaçais et al. (2009). In a second class of
models the motors have been modeled as flexible springs Gibbons et al. (2001);
Kraikivski et al. (2006). The motor heads bind to the filament and unbind at a
load-dependent rate. In this case the dynamic instability arises from the
dependence of the unbinding rate on the tension exerted by springs Brokaw
(1975); Vilfan et al. (1999); Hexner and Kafri (2009). Recent work by Guérin
et al. Guérin et al. (2010b) has generalized the two-state model by taking
into account the flexibility of the motors, showing that both models can be
obtained in a unified manner for different values of a parameter that compares
the stiffness of the motors to the stiffness of the periodic potential
provided by the filament.
In this paper we consider a model of a rigid filament free to slide in _two
dimensions_ under the driving action of motor proteins uniformly tethered to a
two-dimensional plane. The model considered is a modification of the
“crossbridge” model first introduced by Huxley in 1957 to describe motor-
induced contractile behavior of muscle fibers Huxley (1957). The motor
proteins’ polymeric tails are modeled as linear springs that pull back on the
bound motor heads. After attachment, the motor heads slide along the filament
at a velocity that depends on the load exerted by the flexible motor tails.
The sliding and subsequent detachment play the role of the motor’s power
stroke. The binding/unbinding dynamics of the motor heads and the dependence
of the transition rates on the load exerted by the motor tails play a crucial
role in controlling the dynamics of the filament, effectively yielding non-
Markovian noise sources on the filament. Related models have been studied
numerically Gibbons et al. (2001); Kraikivski et al. (2006); Vilfan (2009).
The results presented here are obtained by generalizing to two dimensions the
mean field approximation for the motor dynamics described for instance in Ref.
Grill et al. (2005). The mean-field theory neglects convective nonlinearities
in the equation for the probability of bound motors and correlations in the
motors’ on/off dynamics, but it is expected to be adequate on time scales large
compared to that of the motor on/off dynamics and for a large number of
motors. This is supported by the results of Jülicher and Prost (1995) for a
model of rigid two-state motors.
We begin by revisiting the one-dimensional problem. We discuss the steady-
state response of the filament to an external force and present new results on
the dynamics of fluctuations about the sliding steady state. The force-
velocity curve is evaluated analytically and exhibits bistability and
hysteresis, as obtained in Ref. Jülicher and Prost (1995) for a rigid two-
state motor model. A new result is an expression for the effective propulsion
force on the filament due to the motors in terms of physical parameters
characterizing the motor proteins. Next, we analyze the fluctuations about the
steady state by evaluating the mean-square displacement of the filament. We
show that the coupling to the motor binding/unbinding dynamics yields non-
Markovian noise sources with time correlations controlled by the duration of
the motors’ binding/unbinding cycle. Since the filament has a finite motor-
induced velocity even in the absence of applied force, the mean-square
displacement is ballistic at long times. The fluctuations of displacement about
this sliding state are, however, diffusive at long times with an enhanced
diffusion constant. This enhancement is controlled by the dependence of the
motors’ unbinding rate on the load exerted on the bound motors’ heads by the
tethered tails and vanishes for unloaded motors.
We then consider the case of a filament in two dimensions, to analyze the
effect of the coupling of translational and rotational degrees of freedom in
controlling the dynamics. At steady state, motors yield an effective
propulsion force along the long axis of the filament, as in one dimension, but
no effective torque. This is in contrast to phenomenological models in the
literature van Teeffelen and Löwen (2008) that have considered the
dynamics of active rod-like particles in the presence of both effective
internal forces and torques. As a result, in the steady-state the filament
slides along its long axis and the dynamics in this direction is essentially
one dimensional, with a motor-induced negative friction instability and
bistability and hysteresis in the response to an external force. Motors do
enhance both the transverse and the rotational friction coefficients of the
filament. The enhancement of rotational friction could be probed by measuring
the response to an external torque. Since the finite motor-induced propulsion
is along the filament axis, whose direction is in turn randomized by
rotational diffusion, the mean velocity of the filament is zero in the absence
of external force, unlike in the one-dimensional case. The mean square
displacement is therefore diffusive at long times, with behavior controlled by
the interplay of non-Markovian effects due to the coupling to motor dynamics
with coupled translational and rotational diffusions. The filament performs a
persistent random walk that consists of ballistic excursions at the motor-
induced propulsion speed, randomized by both rotational diffusion and the
motor binding/unbinding dynamics. The crossover to the long-time diffusive
behavior is controlled by the interplay of motor-renormalized diffusion rate
and duration of the motor binding/unbinding cycle. The effective diffusion
constant is calculated in terms of microscopic motor and filament parameters.
Its dependence on activity, as characterized by the rate of ATP consumption,
could be probed in actin assays.
Finally, our work provides a microscopic justification of a simple model used
in the literature Baskaran and Marchetti (2008) that describes a cytoskeletal
filament interacting with motor proteins tethered to a plane as a “self-
propelled” rod, although it also shows that the effective noise is rendered
non-Markovian by the coupling to the motors’ binding/unbinding dynamics. It also
provides microscopic expressions for the self-propulsion force and the various
friction coefficients in terms of motor and filament parameters and shows that
this effective model fails beyond a critical value of motor activity, where
the effective friction changes sign and the filament exhibits bistability and
hysteresis.
## I The Model
In our model the motor proteins are described as composed of polymeric tails
attached permanently to a two-dimensional fixed substrate and motor heads that
can bind reversibly to the filament. Once bound, a motor head moves along the
filament thereby stretching the tail. This gives rise to a load force on the
motor head and on the filament. Eventually excessive load leads to detachment
of the motor head.
### I.1 Filament dynamics
The actin filament is modeled as a rigid polar rod of length $L$ that can
slide in two dimensions. It is described by the position ${\bf r}$ of its
center of mass and a unit vector
${\bf\hat{u}}=\left(\cos(\theta),\sin(\theta)\right)$ directed along the rod’s
long axis _away_ from the polar direction of the rod, which is in turn defined
as the direction of motion of bound motors. In other words, bound motors move
along the rod in the direction $-{\bf\hat{u}}$. In contrast to most previous
work Jülicher and Prost (1997); Grill et al. (2005); Plaçais et al. (2009);
Guérin et al. (2010b), and given our interest in modeling actin motility
assays, we assume the substrate is fixed and consider the dynamics of the
filament. Our goal is to understand the role of the cooperative driving by
motors in controlling the coupled rotational and translational dynamics of the
rod.
The dynamics of the filament is described by coupled equations for the
translational and orientational degrees of freedom, given by
$\displaystyle{\bf\underline{\underline{\bm{\zeta}}}}\cdot\partial_{t}{\bf
r}={\bf F}_{\text{a}}+{\bf F}_{\text{ext}}+\bm{\eta}(t)\;,$ (1a)
$\displaystyle\zeta_{\theta}\partial_{t}\theta=T_{\text{a}}+T_{\text{ext}}+\eta_{\theta}(t)\;.$
(1b)
Here we have grouped the forces and torques into the effects due to the
motors, _i.e._ the activity, $\bf{F}_{\text{a}}$ and $T_{\text{a}}$, external
forces and torques $\bf{F}_{\text{ext}}$ and $T_{\text{ext}}$, and the
stochastic noise not due to motors. The friction tensor is given by
${\bf\underline{\underline{\bm{\zeta}}}}=\zeta_{\|}{\bf\hat{u}\hat{u}}+\zeta_{\perp}\left({\bf\underline{\underline{\bm{\delta}}}-\hat{u}\hat{u}}\right)$
with $\zeta_{\|}$ and $\zeta_{\perp}$ the friction coefficients for motion
longitudinal and transverse to the long direction of the rod, and
$\zeta_{\theta}$ is the rotational friction coefficient. For the case of a
long, thin rod of interest here, $\zeta_{\|}=\zeta_{\perp}/2$. The random
force $\bm{\eta}(t)$ and random torque $\eta_{\theta}(t)$ describe noise in
the system, including nonthermal noise sources. For simplicity we assume that
both $\bm{\eta}(t)$ and $\eta_{\theta}(t)$ describe Gaussian white noise, with
zero mean and correlations
$\langle\eta_{i}(t)\eta_{j}(t^{\prime})\rangle=2B_{ij}\delta(t-t^{\prime})$
and
$\langle\eta_{\theta}(t)\eta_{\theta}(t^{\prime})\rangle=2B_{\theta}\delta(t-t^{\prime})$,
where
$B_{ij}=B_{\|}\hat{u}_{i}\hat{u}_{j}+B_{\perp}\left(\delta_{ij}-\hat{u}_{i}\hat{u}_{j}\right)$.
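As a concrete illustration, the passive limit of Eqs. (1a)–(1b) (setting the active and external forces and torques to zero) can be integrated with a simple Euler–Maruyama step; a minimal sketch, with parameter values that are illustrative rather than taken from the text:

```python
import numpy as np

# Euler-Maruyama integration of Eqs. (1a)-(1b) in the passive limit
# (no active or external forces/torques); parameter values are illustrative.
rng = np.random.default_rng(0)
zeta_perp = 2.0
zeta_par = zeta_perp / 2          # slender-rod relation zeta_par = zeta_perp/2
zeta_theta = 0.5
B_par, B_perp, B_theta = 1.0, 1.0, 0.2
dt, nsteps = 1e-3, 10_000

r = np.zeros(2)                   # center-of-mass position
theta = 0.0                       # orientation angle
for _ in range(nsteps):
    u = np.array([np.cos(theta), np.sin(theta)])   # long axis u
    n = np.array([-u[1], u[0]])                    # in-plane normal n = z x u
    # White noise with <eta_i eta_j> = 2 B_ij delta(t-t') discretizes to
    # sqrt(2 B / dt) * N(0, 1) in each rod-frame direction.
    eta_par = np.sqrt(2 * B_par / dt) * rng.normal()
    eta_perp = np.sqrt(2 * B_perp / dt) * rng.normal()
    r += dt * (eta_par / zeta_par * u + eta_perp / zeta_perp * n)
    theta += dt * np.sqrt(2 * B_theta / dt) / zeta_theta * rng.normal()
print(r, theta)
```

The anisotropic friction enters through the projection onto the rod frame $({\bf\hat{u}},{\bf\hat{n}})$, which is how the tensor ${\bf\underline{\underline{\bm{\zeta}}}}$ acts in Eq. (1a).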
### I.2 Individual motor dynamics
We model the interaction cycle of an individual motor protein with the
filament as shown in Fig. 1 for a one-dimensional system.
Figure 1: The figure shows the four steps of a motor cycle. In (a) a filament
is sliding with velocity $v$ over a uniform density of unbound motors with
tails tethered to the substrate. In (b) a motor attaches to the filament at a
position $s_{0}$ from the filament’s mid-point. The stretch of the motor tails
at the time of attachment is neglected. In (c) the motor has walked towards
the polar head of the filament, stretching the tails by an amount $\Delta$.
Finally, in (d) the bound motor detaches and relaxes instantaneously to its
unstretched state. The filament has undergone a net displacement in the
direction opposite to that of motor motion.
The tail of a specific motor is fixed at position ${\bf x}_{t}$ in the plane.
At a time $t_{0}$ the head of this motor attaches to a point on the filament.
The position of the motor head at the time of attachment is ${\bf
x}_{h}(t_{0})={\bf r}(t_{0})+s_{0}{\bf\hat{u}}(t_{0})$, where ${\bf r}(t_{0})$
and ${\bf\hat{u}}(t_{0})$ denote the position of the center of the filament
and its orientation at $t=t_{0}$, and $s_{0}\in\left[-L/2,L/2\right]$ parametrizes
the distance of the point of attachment from the center of the filament (cf.
Fig. 1(b)). We assume that motor proteins will attach to parts of the filament
which are within a distance of the order of the size of the motor protein. The
stretch of the motor tail at the time of attachment is then of order of the
motor size and will be neglected, _i.e._ ${\bf x}_{h}(t_{0})-{\bf x}_{t}=0$,
or motors attach to the part of the filament directly overhead without any
initial stretch.
For $t>t_{0}$ the motor head remains attached to the filament and walks along
it towards the polar head ($-{\bf\hat{u}}$ direction) until detachment. The
tails, modeled as a linear spring of force constant $k$, exert a load ${\bf
f}=-k\bm{\Delta}(t,\tau;s_{0})$ on the head, where
$\bm{\Delta}(t,\tau;s_{0})={\bf x}_{h}(t)-{\bf x}_{t}$ is the stretch at time
$t$ of a motor protein that has been attached for a time $\tau$, _i.e._
$t=t_{0}+\tau$ (cf. Fig. 1(c)). Since we assume $\bm{\Delta}(t_{0})=0$, we can
also write
$\displaystyle\bm{\Delta}(t,\tau;s_{0})$ $\displaystyle=$ $\displaystyle{\bf
r}(t)-{\bf r}(t-\tau)+\sigma(t,\tau){\bf\hat{u}}(t)$ (2)
$\displaystyle+s_{0}\left[{\bf\hat{u}}(t)-{\bf\hat{u}}(t-\tau)\right]\;,$
where $\sigma(t,\tau)=s(t)-s(t-\tau)$ is the distance traveled along the
filament at time $t$ by a motor head that has been attached for a time $\tau$,
measured from the initial attachment position, $s_{0}$. The kinematic
constraint imposed by the condition of attachment requires
$\displaystyle\partial_{t}{\bm{\Delta}}(t,\tau;s_{0})$ $\displaystyle={\bf
v}(t)-{\bf v}(t-\tau)+{\bf\hat{u}}(t)\left[v_{m}(t)-v_{m}(t-\tau)\right]$ (3)
$\displaystyle+\bm{\Omega}(t)\sigma(t,\tau)+s_{0}\left[\bm{\Omega}(t)-\bm{\Omega}(t-\tau)\right]\;,$
where $\bm{\Omega}(t)=\partial_{t}{\bf\hat{u}}(t)=\dot{\theta}{\bf\hat{n}}(t)$
is the angular velocity of the rod and $v_{m}(t)=\partial_{t}s(t)$ the
velocity of the motor head along the filament. We have introduced a unit
vector ${\bf\hat{n}}={\bf\hat{z}}\times{\bf\hat{u}}$ normal to the long axis
of the filament. Then $({\bf\hat{z}},{\bf\hat{u}},{\bf\hat{n}})$ defines a
right-handed coordinate system with in-plane axes longitudinal and transverse
to the filament. We note that Eq. (3) can also be written as
$\displaystyle\partial_{t}{\bm{\Delta}}(t,\tau;s_{0})$ $\displaystyle+$
$\displaystyle\partial_{\tau}{\bm{\Delta}}(t,\tau;s_{0})={\bf
v}(t)+v_{m}(t){\bf\hat{u}}(t)$ (4)
$\displaystyle+\bm{\Omega}(t)\sigma(t,\tau)+s_{0}\bm{\Omega}(t)\;.$
While the motor remains bound, the dynamics of the motor head along the
filament is described by an overdamped equation of motion
$\zeta_{m}\dot{s}(t)=-f_{s}+{\bf\hat{u}}\cdot{\bf f}$ (5)
where $f_{s}>0$ is the stall force, defined as the force where the velocity
$v_{m}=\dot{s}$ of the loaded motor vanishes. Since motors move in the
$-{\bf\hat{u}}$ direction, generally $v_{m}=\dot{s}<0$. Letting
$f_{\|}={\bf\hat{u}}\cdot{\bf f}=-k\Delta_{\|}$, Eq. (5) can also be written
as
$v_{m}(t)=-v_{0}\left(1-\frac{f_{\|}(\Delta_{\|})}{f_{s}}\right)\;,$ (6)
where $v_{0}=f_{s}/\zeta_{m}\sim\Delta\mu>0$ is the load-free stepping
velocity, with $\Delta\mu$ the rate of ATP consumption. The motor velocity is
shown in Fig. (2) as a function of the load $f_{\|}$. The motor head velocity
also vanishes for $f_{\|}<-f_{d}$, when the motor detaches. The linear force-
velocity relation for an individual motor is consistent with experiments on
single kinesin molecules Svoboda and Block (1994).
Figure 2: The velocity $-v_{m}$ of a loaded motor head as a function of the
load $f_{\|}={\bf\hat{u}}\cdot{\bf f}$. The figure shows the stall force
$f_{s}$ where $v_{m}=0$ and the detachment force $-f_{d}$.
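This piecewise force–velocity relation can be sketched directly; the zero-velocity clamp outside the interval $[-f_{d},f_{s}]$ and all parameter values below are assumptions for illustration:

```python
def motor_velocity(f_par, f_s=1.0, f_d=0.5, v0=1.0):
    """Motor-head velocity v_m as a function of the tangential load f_par.

    Implements Eq. (6), v_m = -v0 (1 - f_par / f_s), with the cutoffs of
    Fig. 2: v_m = 0 for f_par < -f_d (detachment) and for f_par > f_s
    (stall). All parameter values are illustrative placeholders.
    """
    if f_par < -f_d or f_par > f_s:
        return 0.0
    return -v0 * (1.0 - f_par / f_s)

# An unloaded motor steps at -v0; a stalled motor does not move.
print(motor_velocity(0.0), motor_velocity(1.0))   # -1.0 0.0
```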
The active force and torque on the filament due to an individual bound motor
can then be expressed in terms of these quantities as
$\displaystyle{\bf f}_{a}(t,\tau;s_{0})=-k\bm{\Delta}(t,\tau;s_{0})\;,$ (7a)
$\displaystyle\tau_{a}(t,\tau;s_{0})=-{\bf\hat{z}}\cdot\left[(s_{0}+\sigma(t,\tau)){\bf\hat{u}}(t)\times
k\bm{\Delta}(t,\tau;s_{0})\right]\;.$ (7b)
Finally, after traveling along the filament for a time $\tau_{\text{detach}}$,
the motor head detaches and the head position relaxes instantaneously back to
the fixed position ${\bf x}_{t}$ of the tail.
We note that we shall not be considering the possibility of direct
interactions of motors with each other. We have also not considered stochastic
aspects of the motor motion along the filament (Eq. (5)).
### I.3 Motor binding and unbinding
Next we need to describe the stochastic binding/unbinding dynamics of the
motor heads. We assume the motor tails are attached to the substrate with a
homogeneous surface density $\rho_{m}$, such that for a rod of length $L$ and
width $b$ a maximum of $N=\rho_{m}Lb$ motors can be bound at any given time.
Following Guérin et al. Guérin et al. (2010b), we denote by ${\cal
P}_{b}(t,\tau;s_{0})$ the probability that a motor head that has attached at
$s_{0}$ at a time $t_{0}$, has remained attached for a duration $\tau$ at time
$t$. For simplicity in the following we assume that the probability that a
motor attaches at any point along the filament is uniform, i.e., ${\cal
P}_{b}(t,\tau;s_{0})=\frac{1}{L}P_{b}(t,\tau)$. We further assume that when
motors unbind they relax instantaneously to the unstretched state. The time
evolution of the binding probability is then given by
$\displaystyle\partial_{t}{P}_{b}(t,\tau)+\partial_{\tau}{P}_{b}(t,\tau)=$
$\displaystyle-\langle\omega_{u}(\bm{\Delta}(\tau))\rangle_{s_{0}}{P}_{b}(t,\tau)$
$\displaystyle+\omega_{b}\delta(\tau)p_{u}(t)\;,$ (8)
where $p_{u}(t)$ is the probability that a motor be unbound at time $t$. The
probability distribution is normalized according to
$\int_{0}^{\infty}d\tau\int_{-L/2}^{L/2}ds_{0}~{}{\cal
P}_{b}(t,\tau;s_{0})+p_{u}(t)=1\;.$ (9)
In Eq. (8), $\omega_{u}(\bm{\Delta}(\tau))$ and $\omega_{b}$ are the rates
at which a motor head with tails stretched by an amount $\bm{\Delta}(t,\tau)$
unbinds from and binds to the filament, respectively. The binding rate
$\omega_{b}$ will be assumed to be constant. In contrast, the unbinding rate
$\omega_{u}$ is a strong function of the stretch of the motor tails, which has
to be obtained by solving Eq. (4), with initial condition
$\bm{\Delta}(t,\tau=0)=0$. We will see below that the nonlinear dependence of the
unbinding rate plays an important role in controlling the filament dynamics.
In two dimensions the unbinding rate $\omega_{u}$ also depends on the initial
attachment point $s_{0}$ along the filament. To be consistent with our ansatz
that the probability that the motor attaches at any point along the filament
is uniform, we have replaced the rate in Eq. (I.3) with its mean value
$\langle\omega_{u}\rangle_{s_{0}}$, where
$\langle...\rangle_{s_{0}}=\int_{-L/2}^{L/2}\frac{ds}{L}...$ denotes an
average over the initial attachment points.
The unbinding rate is controlled by the work done by the force (load) acting
on the motor head, which in turn is a linear function of the stretch
$\bm{\Delta}$. A form that has been used extensively in the literature for
one-dimensional models is an exponential,
$\omega_{u}=\omega_{0}e^{\alpha|\Delta|}$, where $\omega_{0}$ is the unbinding
rate of an unloaded motor and $\alpha^{-1}$ is a characteristic length scale
that controls the maximum stretch of the tails above which the motor unbinds
($\alpha$ can be estimated as $ka/k_{B}T$, where $a$ is a microscopic length
scale of the order of a few nm; experiments are carried out at room
temperature, where $k_{B}T\sim\mathrm{pN\,nm}$).
represents an approximation for the result of a detailed calculation of the
average time that a motor moving along a polar filament spends attached to the
filament as a function of a tangentially applied load Parmeggiani et al.
(2001) and is consistent with experiments on kinesin Visscher et al. (1999).
This form can easily be generalized to the case of a filament sliding in
two dimensions, where the motor load has components both tangential and
transverse to the filament. It is, however, shown in the Appendix that within
the mean-field approximation used below the exponential form yields a steady-
state stretch $\Delta$ that saturates to a finite value at large velocity $v$
of the filament. This is unphysical as it does not incorporate the cutoff
described by the detachment force $f_{d}$ in Fig. 2. For this reason in the
mean-field treatment described below we use a parabolic form for the unbinding
rate as a function of stretch,
$\omega_{u}(\bm{\Delta})=\omega_{0}\left[1+\alpha^{2}|\bm{\Delta}|^{2}\right]\;,$
(10)
where for simplicity we have assumed an isotropic dependence on the magnitude
of the stretch in terms of a single length scale, $\alpha^{-1}$. An explicit
comparison of the two expressions for the unbinding rates is given in the
Appendix.
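The two unbinding-rate forms discussed above can be written side by side; a minimal sketch with illustrative parameter values (both reduce to the unloaded rate $\omega_{0}$ at zero stretch):

```python
import math

def omega_u_exponential(delta, omega0=1.0, alpha=1.0):
    # Exponential (Bell-type) rate widely used in one-dimensional models.
    return omega0 * math.exp(alpha * abs(delta))

def omega_u_parabolic(delta, omega0=1.0, alpha=1.0):
    # Parabolic rate of Eq. (10), used in the mean-field treatment here.
    return omega0 * (1.0 + alpha ** 2 * delta ** 2)

# Both reduce to the unloaded rate omega0 at zero stretch.
assert omega_u_exponential(0.0) == omega_u_parabolic(0.0) == 1.0
```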
The total active force and torque on the filament averaged over the original
positions and the times of attachment can be written as
$\displaystyle{\bf F}_{a}(t)=-Nk\int_{0}^{\infty}d\tau~{}\langle
P_{b}(t,\tau)~{}\bm{\Delta}(t,\tau;s_{0})\rangle_{s_{0}}\;,$ (11a)
$\displaystyle T_{a}(t)=-Nk\int_{0}^{\infty}d\tau~{}\langle
P_{b}(t,\tau)~{}{\bf\hat{z}}\cdot\left[(s_{0}+\sigma(t,\tau)){\bf\hat{u}}(t)\times\bm{\Delta}(t,\tau;s_{0})\right]\rangle_{s_{0}}\;.$
(11b)
## II Mean field approximation
To proceed, we introduce several approximations for the motor dynamics. First,
we restrict ourselves to the dynamics on time scales large compared to the
attachment time $\tau$ of individual motors. For $t\gg\tau$ we approximate
$\displaystyle\sigma(t,\tau)\simeq v_{m}(t)\tau\;,$ (12a)
$\displaystyle\bm{\Delta}(t,\tau;s_{0})\simeq\left[{\bf
v}(t)+v_{m}(t){\bf\hat{u}}(t)+s_{0}\bm{\Omega}(t)\right]\tau\;.$ (12b)
This approximation becomes exact for steady states where the filament and
motor velocities are independent of time. We also stress that in Eqs. (12a)
and (12b) $\sigma$ and $\bm{\Delta}$ are still nonlinear functions of $\tau$
due to the dependence of $v_{m}$ on the load force.
Secondly, we recall that we have assumed that the attachment positions $s_{0}$
are uniformly distributed along the filament and can be treated as independent
of the residence times $\tau$. Finally, we make a mean field assumption on the
probability distribution of attachment times, which is chosen of the form
$P_{b}(t,\tau)=\delta(\tau-\tau_{\text{MF}})p_{b}(t)$, with $p_{b}(t)$ the
probability that a motor be attached at time $t$ regardless of its
attachment time. The mean-field value of the attachment time is determined by
requiring
$\tau_{\text{MF}}=\left[\langle\omega_{u}\left(\Delta(\tau_{\text{MF}})\right)\rangle_{s_{0}}\right]^{-1}\;.$
(13)
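In one dimension this self-consistency condition can be solved by simple fixed-point iteration, combining Eq. (12b), the force–velocity relation Eq. (6), and the parabolic rate Eq. (10); a sketch under the steady-state approximation, with illustrative parameter values:

```python
def mf_attachment_time(v, k=1.0, v0=1.0, f_s=1.0, omega0=1.0, alpha=1.0,
                       n_iter=200):
    """Fixed-point solve of Eq. (13) in one dimension (illustrative units).

    Uses Delta = (v + v_m) tau from Eq. (12b), the force-velocity relation
    v_m = -v0 (1 + k Delta / f_s) from Eq. (6) with f_par = -k Delta, and
    the parabolic unbinding rate omega_u = omega0 (1 + alpha^2 Delta^2).
    """
    tau = 1.0 / omega0                      # unloaded attachment time
    delta = 0.0
    for _ in range(n_iter):
        # Delta = (v - v0) tau - (k v0 / f_s) Delta tau, linear in Delta.
        delta = (v - v0) * tau / (1.0 + k * v0 / f_s * tau)
        tau = 1.0 / (omega0 * (1.0 + alpha ** 2 * delta ** 2))
    return tau, delta

# A filament sliding at the free motor speed v0 carries unloaded motors.
tau, delta = mf_attachment_time(v=1.0)
print(tau, delta)   # 1.0 0.0
```

For $v<v_{0}$ the stretch is negative (motors pull the filament forward) and the attachment time is reduced below $1/\omega_{0}$, as expected from the load-dependent unbinding.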
In previous literature a similar mean field assumption has been stated in
terms of the stretch, $\bm{\Delta}$ Grill et al. (2005); Günther and Kruse
(2007). In the present problem, however, where filaments can slide in two
dimensions, it is necessary to restate the mean-field theory in terms of the
residence time $\tau$ as the active forces and torques depend on both the
stretch $\bm{\Delta}$ of the motor tails and the distance $\sigma$ traveled by
a bound motor head along the filament. These two quantities are in turn both
controlled by a single stochastic variable, identified with the residence time
$\tau$. The rate of change of the probability $p_{b}(t)$ that a motor be bound
at time $t$ is then described by the equation
$\partial_{t}{p}_{b}(t)=-\tau_{\text{MF}}^{-1}p_{b}(t)+\omega_{b}\left[1-p_{b}(t)\right]\;.$
(14)
The mean field active force and torque due to the motors are then given by
$\displaystyle{\bf
F}_{a}^{\text{MF}}=-kN\langle\bm{\Delta}(t,\tau_{\text{MF}};s_{0})p_{b}(t)\rangle_{s_{0}}\;,$
(15) $\displaystyle T_{a}^{\text{MF}}=-kN\langle
p_{b}(t)~{}{\bf\hat{z}}\cdot\left[(s_{0}+\sigma(t,\tau_{\text{MF}})){\bf\hat{u}}(t)\times\bm{\Delta}(t,\tau_{\text{MF}};s_{0})\right]\rangle_{s_{0}}\;.$
(16)
In the following we will work in the mean-field approximation and remove the
label MF from the various quantities.
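As a quick check on Eq. (14), the sketch below compares its closed-form solution for $p_{b}(0)=0$, namely $p_{b}(t)=p_{bs}(1-e^{-\Omega t})$ with $\Omega=\tau_{\text{MF}}^{-1}+\omega_{b}$ and $p_{bs}=\omega_{b}/\Omega$, against direct forward-Euler integration. The rate values are illustrative and not fitted to any particular motor.

```python
import math

def p_b_analytic(t, tau_mf, omega_b):
    """Bound probability from Eq. (14) with p_b(0) = 0."""
    Omega = 1.0 / tau_mf + omega_b   # relaxation rate
    p_bs = omega_b / Omega           # steady-state bound fraction (cf. Eq. (24))
    return p_bs * (1.0 - math.exp(-Omega * t))

def p_b_euler(t, tau_mf, omega_b, n=200000):
    """Forward-Euler integration of Eq. (14) as a cross-check."""
    dt = t / n
    p = 0.0
    for _ in range(n):
        p += dt * (-p / tau_mf + omega_b * (1.0 - p))
    return p

# illustrative rates, chosen so that p_bs ~ 0.06 (the duty ratio used below)
tau_mf, omega_b = 2.0e-3, 32.0   # s, 1/s
t = 5.0e-3
print(p_b_analytic(t, tau_mf, omega_b), p_b_euler(t, tau_mf, omega_b))
```

The two values agree to several decimal places, confirming the exponential relaxation toward the steady-state bound fraction.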
## III Active Filament Sliding in One Dimension
We first consider the simplest theoretical realization of a motility assay
experiment, where the actin filament is sliding over a one dimensional track
of tethered motor proteins. A closely related model, where the filament is
elastically coupled to a network, has been used extensively in the literature
to study the onset of spontaneous oscillations arising from the collective
action of the bound motors Jülicher and Prost (1995, 1997); Grill et al.
(2005). Previous studies of freely sliding filaments, as appropriate for the
modeling of motility assays, have also been carried out both analytically and
numerically Hexner and Kafri (2009). Our work contains some new results on the
response to an external force of a filament free to slide under the action of
active crosslinkers and also on the filament fluctuations.
The Langevin equation for the center of mass coordinate $x$ of the filament is
given by
$\zeta\dot{x}=F_{\text{a}}(t)+F_{\text{ext}}+\eta(t)\;,$ (17)
where $\dot{x}$ is the center-of-mass velocity of the filament and the mean-
field active force is given by
$F_{a}^{\text{MF}}(t)=-kNp_{b}(t)\Delta(\dot{x},\tau)\;.$ (18)
In one dimension the dependence on $s_{0}$ drops out and Eq. (12b) simply
gives $\Delta\simeq(\dot{x}+v_{m})\tau$. Substituting Eq. (6) for $v_{m}$, we
can solve for $\Delta$ as a function of $\dot{x}$ and $\tau$,
$\Delta(\dot{x},\tau)=\frac{(\dot{x}-v_{0})/\omega_{0}}{\tilde{\tau}^{-1}+\epsilon}\;,$
(19)
and Eq. (13) for the mean attachment time becomes
$\tilde{\tau}^{-1}(\dot{x})=1+\frac{(\dot{x}-v_{0})^{2}\alpha^{2}}{\left[\tilde{\tau}^{-1}(\dot{x})+\epsilon\right]^{2}\omega_{0}^{2}}\;,$
(20)
where $\tilde{\tau}=\omega_{0}\tau$ and $\epsilon=kv_{0}/f_{s}\omega_{0}$. The
parameter $\epsilon$ is the ratio of the length $\ell_{0}=v_{0}/\omega_{0}$
traveled by an unloaded motor that remains attached for a time
$\omega_{0}^{-1}$ to the stretch $\delta_{s}=f_{s}/k$ of the motor tails at
the stall force, $f_{s}$. Typical values for these length scales and the
parameter $\epsilon$ are given in Table 1.
Parameters | Myosin-II | Kinesin
---|---|---
$\ell_{0}$ | $\sim 2\text{ nm}$ | $\sim 8\text{ nm}$
$\delta_{s}$ | $\sim 1\text{ nm}$ | $\sim 25\text{ nm}$
$\epsilon$ | $\sim 2$ | $\sim 0.32$
Table 1: Typical values of the length scales $\ell_{0}=v_{0}/\omega_{0}$ and
$\delta_{s}=f_{s}/k$ introduced in the text and the ratio $\epsilon$ for
myosin II and kinesin. The parameters are taken from Refs. Howard (2001) and
Gibbons et al. (2001).
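The myosin-II column of Table 1 can be reproduced directly from the definitions $\ell_{0}=v_{0}/\omega_{0}$, $\delta_{s}=f_{s}/k$ and $\epsilon=\ell_{0}/\delta_{s}$. The sketch below uses the acto-myosin parameters quoted in the Fig. 3 caption together with an illustrative stiffness $k=4$ pN/nm (the text quotes $\sim 5$ pN/nm for myosin-II).

```python
# Length scales from Sec. III, using the acto-myosin parameters of the Fig. 3
# caption (v0 = 1000 nm/s, omega0 = 0.5/ms, f_s = 4 pN) and an illustrative
# myosin stiffness k = 4 pN/nm.
v0 = 1000.0     # unloaded motor speed, nm/s
omega0 = 500.0  # bare unbinding rate, 1/s (0.5 per ms)
f_s = 4.0       # stall force, pN
k = 4.0         # motor-tail stiffness, pN/nm

ell0 = v0 / omega0     # distance traveled in one unloaded cycle, nm
delta_s = f_s / k      # tail stretch at stall, nm
eps = ell0 / delta_s   # = k v0 / (f_s omega0), cf. text after Eq. (20)

print(ell0, delta_s, eps)   # 2.0 nm, 1.0 nm, 2.0 -- the myosin-II column
```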
It is convenient to rewrite the mean residence time $\tilde{\tau}$ as
$\tilde{\tau}^{-1}=1+\frac{(u-1)^{2}\nu^{2}}{\left[\tilde{\tau}^{-1}+\epsilon\right]^{2}}\;,$
(21)
where $u=\dot{x}/v_{0}$ and we have introduced a dimensionless parameter
$\nu=\ell\alpha$ that controls the dependence of the unbinding rate on the
load exerted on the bound heads by the stretched motor tails, with
$\frac{1}{\ell}=\frac{1}{\ell_{0}}+\frac{1}{\delta_{s}}$ (22)
the harmonic combination of the two length scales introduced earlier. For stiff
motors, with $\epsilon\gg 1$ or $\ell_{0}\gg\delta_{s}$, $\ell\sim\delta_{s}$,
while for floppy, easy to stretch motors, corresponding to $\epsilon\ll 1$ or
$\ell_{0}\ll\delta_{s}$, $\ell\sim\ell_{0}$. Setting $\nu=0$ corresponds to
neglecting the load dependence of the unbinding rate. The exact solution to
Eq. (21) for the mean residence time $\tilde{\tau}(\dot{x})$ as a function of
the filament velocity can be determined and is discussed in the Appendix.
Clearly $\tau$ has a maximum value at $\dot{x}=v_{0}$, where
$\tau=\omega_{0}^{-1}$, and it decays rapidly as $|\dot{x}-v_{0}|$ grows.
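The self-consistent residence time of Eq. (21) can also be obtained by simple fixed-point iteration; a minimal Python sketch follows (the iteration scheme and the parameter values are our own illustrative choices, not prescribed by the text).

```python
def tau_tilde_inv(u, nu, eps, tol=1e-12, max_iter=10000):
    """Fixed-point solve of Eq. (21): x = 1 + (u-1)^2 nu^2 / (x + eps)^2,
    with x = 1/tilde_tau. The iteration converges for the moderate
    loads used here."""
    x = 1.0  # start from the unloaded value, tilde_tau = 1
    for _ in range(max_iter):
        x_new = 1.0 + (u - 1.0) ** 2 * nu ** 2 / (x + eps) ** 2
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new

# tau is maximal (tilde_tau = 1) at u = xdot/v0 = 1 and decreases away from it
print(tau_tilde_inv(1.0, nu=0.5, eps=1.0))   # 1.0
print(tau_tilde_inv(3.0, nu=0.5, eps=1.0))   # > 1, i.e. a shorter residence time
```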
### III.1 Steady State and its Stability
We begin by characterizing the steady state dynamics of the filament in the
absence of noise. Incorporating for generality an external force
$F_{\text{ext}}$, the steady state velocity $v$ of the filament is obtained
from the solution of the nonlinear equation
$\zeta v=F_{\text{ext}}+F_{a}(v)$ (23)
where $F_{a}(v)=-kNp_{bs}(v)\Delta(v)$. The steady state stretch $\Delta(v)$
is given by Eq. (19) with $\dot{x}=v$ and
$p_{bs}(v)=\frac{\omega_{b}\tau(v)}{1+\tau(v)\omega_{b}}\;,$ (24)
with $\tau(v)$ given by Eq. (21) for $\dot{x}=v$. To gain some insight into the
behavior of the system, we expand the active force as $F_{a}(v)\simeq
F_{p}+\left(\frac{\partial F_{a}}{\partial v}\right)_{v=0}v+{\cal O}(v^{2})$,
with $F_{p}=F_{a}(v=0)$. Retaining only terms linear in $v$ this gives a
steady state force/velocity relation of the form
$\displaystyle(\zeta+\zeta_{a})v=F_{\text{ext}}+F_{p}$ (25)
with a filament “propulsion” force $F_{p}$
$F_{p}=\frac{Np_{bs0}k\ell_{0}}{\epsilon+\tilde{\tau}_{0}^{-1}}\;,$ (26)
where $p_{bs0}=r/[r+(1-r)\tilde{\tau}_{0}^{-1}]$, with
$r=\omega_{b}/(\omega_{0}+\omega_{b})$ the duty ratio, and
$\tilde{\tau}_{0}=\tilde{\tau}(v=0)$. The active contribution
$\zeta_{a}=-\left(\frac{\partial F_{a}}{\partial v}\right)_{v=0}$ to the
friction is given by
$\displaystyle\zeta_{a}=Np_{bs0}\frac{k|\Delta_{0}|}{v_{0}}\left[1-\left(\frac{|\Delta_{0}|}{\ell_{0}}+p_{bs0}\frac{1-r}{r}\right)\frac{2\alpha^{2}\Delta_{0}^{2}\ell_{0}}{\ell_{0}+2\alpha^{2}|\Delta_{0}|^{3}}\right]\;,$
(27)
where $\Delta_{0}=\Delta(v=0)=-\ell_{0}/(\tilde{\tau}_{0}^{-1}+\epsilon)$. In
the absence of external force, the filament will slide at a velocity
$v_{s}=F_{p}/(\zeta+\zeta_{a})$ (28)
due to the action of the motor proteins. This motion is in the polar direction
of the filament and opposite to the direction of motion of bound motors along
the filament. Phenomenological models of motility assays have described the
actin filaments as “self-propelled” Brownian rods. Our model yields a
microscopic calculation of such a “self-propulsion” force $F_{p}$ in terms of
microscopic parameters characterizing the motor proteins. We note that
$-F_{p}$ can also be interpreted as the “stall force” of the filament, _i.e._
the value of $F_{\text{ext}}$ required to yield $v=0$. This is a quantity that
may be experimentally accessible using optical force microscopy.
If we neglect the load dependence of the unbinding rate by letting $\nu=0$,
the mean number of bound motors is simply $Nr$ and $F_{p}^{0}=Nrk\ell$, with
$\ell$ given by Eq. (22). In this limit the sliding velocity $v_{s}^{0}$ in
the absence of external force can be written as
$v_{s}^{0}=\frac{v_{0}}{1+\zeta/\zeta_{a}^{0}}\;,$ (29)
where the active friction $\zeta_{a}^{0}=Nrk\ell/v_{0}>0$ is always positive.
The sliding velocity vanishes when $v_{0}\rightarrow 0$ and it saturates to
its largest value $v_{0}$ when the number $Nr$ of bound motors becomes very
large and $\zeta_{a}^{0}\gg\zeta$. The behavior is controlled by the parameter
$\epsilon$. If the motors are easy to stretch, i.e., $\epsilon\ll 1$, then the
propulsion force is determined entirely by the elastic forces exerted by these
weak bound motors, with $F_{p}^{0}\simeq Nrk\ell_{0}$. On the other hand stiff
motors, with $\epsilon\gg 1$, stall before detaching. The propulsion force is
then controlled by the motor stall force, with $F_{\text{p}}^{0}\simeq
Nrf_{s}$.
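In this $\nu=0$ limit, Eqs. (22) and (29) yield a quick numerical estimate of the sliding velocity. The sketch below uses the Fig. 3 parameters with the illustrative choice $k=2$ pN/nm.

```python
# Numerical estimate of Eqs. (22) and (29) in the nu = 0 limit, using the
# acto-myosin parameters quoted in the Fig. 3 caption and an illustrative
# motor stiffness k = 2 pN/nm.
v0 = 1000.0     # unloaded motor speed, nm/s
omega0 = 500.0  # bare unbinding rate, 1/s
f_s = 4.0       # stall force, pN
k = 2.0         # motor stiffness, pN/nm
N, r = 100, 0.06
zeta = 0.002    # filament friction, pN s/nm

ell0 = v0 / omega0                         # = 2 nm
delta_s = f_s / k                          # = 2 nm
ell = 1.0 / (1.0 / ell0 + 1.0 / delta_s)   # Eq. (22): 1 nm
zeta_a0 = N * r * k * ell / v0             # active friction (always > 0)
v_s0 = v0 / (1.0 + zeta / zeta_a0)         # Eq. (29)
print(zeta_a0, v_s0)   # 0.012 pN s/nm, ~857 nm/s
```

With these numbers the motors propel the filament at a sizable fraction of the unloaded motor speed $v_{0}$, consistent with the saturation of $v_{s}^{0}$ toward $v_{0}$ at large $Nr$ noted above.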
The load dependence of the unbinding rate qualitatively changes the behavior
of the system. In particular, the net friction $\zeta+\zeta_{a}$ can become
negative, rendering the steady state unstable. This instability was already
noted in Ref. Jülicher and Prost (1995) for a two-state model of active
linkers and in Ref. Guérin et al. (2010b) for a two state “soft” motor model.
The full nonlinear force-velocity curves are shown in Fig. 3 for various
values of the motor stiffness $k$, for parameters appropriate for acto-myosin
systems. In the steady state, as we increase the active parameter $k$ while
keeping the substrate friction $\zeta$ constant, the $F_{\text{ext}}-v$ curve
becomes non-monotonic, and two distinct regions of bistability emerge. To
understand the increase of the bistability region with motor stiffness, we
note that the active force is simply proportional to $k$, hence naively one
would indeed expect its effect to be more pronounced for stiff motors. The
detailed behavior is, however, determined by the interplay of the mean
residence time $\tau$ that motors spend bound to the filament and the stretch,
$\Delta$. Soft, floppy motors have large stretches, controlled mainly by the
length $\ell_{0}$ traveled by an unloaded motor advancing at speed $v_{0}$.
On the other hand, their residence time is small and the overall effect of the
active force remains small. In contrast, stiff motors have a small stretch, of
order of the stretch $\delta_{s}=f_{s}/k$ of a stalled motor, but long
residence times and are collectively capable of slowing down the filament and
even holding it in place against the action of the external force, driving the
negative friction instability. At even larger values of the external force
motors are effectively always unbound due to the fast sliding of the filament
and the velocity-force curve approaches the linear form obtained when no
motors are present. This behavior is best seen from Fig. 5.
Figure 3: (Color online) Force-velocity curves for $\zeta=0.002\
\text{pN}\text{nm}^{-1}\text{s}$ and various values of the motor stiffness
$k$, showing the transition to non-monotonicity as $k$ increases. The values
of the stiffness $k$ (in pN/nm) and the corresponding values for $\alpha^{-1}$
(in nm) and $\epsilon$ are as follows: $k=0$, $\alpha^{-1}=0$, $\epsilon=0$
(black dotted line); $k=1$ , $\alpha^{-1}=0.75$, $\epsilon=0.5$ (red dashed
line); $k=2$, $\alpha^{-1}=1.5$, $\epsilon=1$ (blue dashed-dotted line);
$k=8$, $\alpha^{-1}=6$, $\epsilon=4$ (black solid line). At high velocities
the curves merge into the linear curve $F_{\text{ext}}=\zeta v$ (black dotted
line), corresponding to the case where no motors are present. The remaining
parameters have the following values: $N=\rho_{m}Lb=100$, $v_{0}=1000\
\text{nm/s}$, $f_{s}=4\ \text{pN}$, $\omega_{0}=0.5\ (\text{ms})^{-1}$,
$r=0.06$.
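The curves of Fig. 3 can be regenerated pointwise from Eqs. (19), (20), (23) and (24) by evaluating $F_{\text{ext}}(v)=\zeta v-F_{a}(v)$. The sketch below uses the $k=8$ pN/nm parameter set from the caption; the bisection solver for Eq. (20) is our own numerical choice.

```python
# Steady-state force-velocity relation from Eqs. (19), (20), (23) and (24):
# F_ext(v) = zeta*v - F_a(v), with F_a(v) = -k N p_bs(v) Delta(v).
v0, omega0, f_s = 1000.0, 500.0, 4.0   # nm/s, 1/s, pN
k, alpha = 8.0, 1.0 / 6.0              # pN/nm, 1/nm (alpha^{-1} = 6 nm)
N, r, zeta = 100, 0.06, 0.002          # -, -, pN s/nm
eps = k * v0 / (f_s * omega0)          # = 4
omega_b = r * omega0 / (1.0 - r)       # from r = omega_b/(omega_0 + omega_b)

def tau_tilde_inv(v):
    """Solve Eq. (20), x = 1 + C/(x + eps)^2, by bisection (unique root x >= 1)."""
    C = (v - v0) ** 2 * alpha ** 2 / omega0 ** 2
    lo, hi = 1.0, 1.0 + C / (1.0 + eps) ** 2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid - 1.0 - C / (mid + eps) ** 2 > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def f_ext(v):
    x = tau_tilde_inv(v)
    delta = (v - v0) / (omega0 * (x + eps))        # Eq. (19)
    tau = 1.0 / (omega0 * x)
    p_bs = omega_b * tau / (1.0 + omega_b * tau)   # Eq. (24)
    return zeta * v + k * N * p_bs * delta

# At v = v0 the stretch vanishes, so F_ext = zeta*v0; at v = 0 a negative
# external force (the stall force -F_p) is needed to hold the filament.
print(f_ext(v0), f_ext(0.0))
```

Sweeping $v$ over a fine grid and checking the sign of successive differences of $F_{\text{ext}}(v)$ then locates the non-monotonic (bistable) regions discussed above.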
The region of non-monotonicity of the force-velocity curve and associated
bistability can also be displayed as a phase diagram, as shown in Fig. 4. The
stiffness of myosins is about $5$ pN/nm and the actin filament friction was
estimated to be of order $0.003$ pNs/nm in Ref Riveline et al. (1998). In
actomyosin systems the negative friction instability should therefore be
observable in a range of experimentally relevant parameters. Kinesin motors
have floppier tails and a smaller stiffness of about $0.5$ pN/nm. In this case
bistability effects should be prevalent only at very low filament friction,
$\zeta\ll 0.001$ pNs/nm. A proper estimate of the region of parameters where
the instability may be observable is rendered difficult by the fact that the
onset of negative friction is also a strong function of the density of motors
tethered to the substrate, which in turn affects the value of the friction
$\zeta$. In general, we expect that a high motor density will be needed for
the instability to occur. On the other hand, if the density of motors is too
high, the friction $\zeta$ will be enhanced and the instability suppressed.
Figure 4: (Color online) “Phase diagram” in the $k$-$\zeta$ plane showing the
region where the $F_{\text{ext}}$-$v$ curves exhibit non-monotonic behavior
(blue shaded region) for $N=\rho_{m}Lb=100$ and $v_{0}=1\ \mu\text{m s}^{-1}$,
$f_{s}=4\ \text{pN}$, $\alpha/k=1.33\ \text{pN}$, $\omega_{0}=0.5\
(\text{ms})^{-1}$, $r=0.06$.
We stress that the force-velocity curves displayed in Fig. 3 have been
obtained by calculating $F_{\text{ext}}$ as a function of $v$. In an
experiment one would tune the applied force and measure the resulting
velocity. The system would not access the unstable regions of negative
friction, but rather follow the hysteretic path sketched in Fig. 5. The
discontinuous jump may occur at the boundary of the stability region, as shown
in the figure, or before such a boundary is reached, corresponding to what is
known as “early switching”.
Figure 5: The figure sketches the hysteretic behavior that may be obtained in
an experiment where an external force $F_{\text{ext}}$ is applied to a
filament in a motility assay. The response of the filament will generally
display two regions of hysteresis, at positive and negative forces.
To summarize, motors have two important effects on the steady state dynamics
of the filament. First, they make the filament self-propelled, in the sense
that in the absence of an external force the filament will slide at a velocity
$v_{s}$ given by Eq. (28). The value of $v_{s}$ increases with increasing
motor stiffness and of course vanishes for $v_{0}=0$, corresponding to the
vanishing of the rate of ATP consumption $\Delta\mu$. The sliding velocity
$v_{s}$ is shown in Fig. 6 as a function of the parameter $\epsilon$ inversely
proportional to the motor stall force for a few values of the maximum number
of motors that can bind to the filament.
Figure 6: The motor-induced sliding velocity $v_{s}$ of an actin filament in
the absence of external force is shown as a function of
$\epsilon=\ell_{0}/\delta_{s}$ for $N=10$ (dotted line), $N=25$ (dashed line),
$N=100$ (dashed-dotted line) and $N=500$ (solid line). We observe that
$v_{s}\rightarrow v_{0}$ for stiff motors as $N$ is increased. Parameter
values: $\zeta=0.002\ \text{pN}\,\text{nm}^{-1}\,\text{s}$, $r=0.06$, $\alpha/k=1.33\ \text{pN}$.
A second important effect of motor activity is the discontinuous and
hysteretic response to an external force displayed in Fig. 5. When
$F_{\text{ext}}=0$ the filament slides at the motor-induced velocity $v_{s}$.
If a small force $F_{\text{ext}}>0$ is applied, the filament velocity remains
an approximately linear function of the applied force, but with an effective
friction greatly enhanced by motor binding/unbinding. This enhancement of
friction is also termed in the literature as protein friction Tawada and
Sekimoto (1991). At high velocity, only a few motors are attached to the
filament and the filament velocity approaches the value it would have in the
absence of motors as the applied force is increased beyond a characteristic
value. When the external force is ramped down the filament velocity jumps to
the lower branch corresponding to a lower value of the force, resulting in
hysteresis.
### III.2 Fluctuation Dynamics
We now examine the dynamics of noise-induced fluctuations about the steady
state by letting $\delta\dot{x}=\dot{x}-v$, where $v$ is the steady state
velocity, given by the solution of Eq. (23) discussed in the previous section.
The dynamics of the fluctuation $\delta\dot{x}$ is then described by the
equation
$\displaystyle\zeta\delta\dot{x}=-kN\Delta(v)\delta
p_{b}-kNp_{bs}\delta\Delta+\eta(t)\;,$ (30)
where both $\delta\Delta=[\partial_{v}\Delta(v)]\delta\dot{x}$ and $\delta
p_{b}(t)$ depend on noise only implicitly through the velocity $\dot{x}$, with
$\partial_{t}\delta p_{b}=-\left[\frac{1}{\tau(v)}+\omega_{b}\right]\delta
p_{b}-p_{bs}(v)\frac{\partial}{\partial
v}\left[\frac{1}{\tau(v)}\right]\delta\dot{x}$ (31)
The random force $\eta(t)$ in Eq. (30) describes noise on the filament, with
$\langle\eta(t)\rangle=0$ and
$\langle\eta(t)\eta(t^{\prime})\rangle=2B\delta(t-t^{\prime})$. Noise can
arise in the system from a variety of sources, including the fluid through
which the filament moves and the motor on/off dynamics. For simplicity we
assume the spectrum is white, albeit with a non-thermal strength $B$. By
solving Eq. (31) with initial condition $\delta p_{b}(t=0)=0$ and substituting
in Eq. (30), we obtain a single equation for $\delta\dot{x}$,
$\left[\zeta+\zeta_{a}(v)\right]\delta\dot{x}(t)+\omega_{0}\zeta^{\prime}_{a}(v)\int_{0}^{t}dt^{\prime}\
e^{-\Omega(t-t^{\prime})}\delta\dot{x}(t^{\prime})=\eta(t)$ (32)
where we have introduced an effective frequency
$\Omega(v)=\tau^{-1}(v)+\omega_{b}$ and active frictions
$\displaystyle\zeta_{a}(v)=kNp_{bs}(v)\partial_{v}\Delta(v)$ (33)
$\displaystyle\zeta^{\prime}_{a}(v)=kNp_{bs}(v)\Delta(v)\frac{\partial}{\partial
v}\left(\frac{1}{\tilde{\tau}}\right)\;.$ (34)
In all the parameters defined above $v$ has to be replaced by the steady state
solution obtained in the previous section. The time scale $\Omega^{-1}$
represents the duration of the cycle of a loaded motor. Note that
$\zeta_{a}(v=0)=\zeta_{a}$, with $\zeta_{a}$ given by Eq. (27). It is evident
from Eq. (32) that motor dynamics yields a non-Markovian contribution to the
friction.
If we neglect the load dependence of the unbinding rate by letting $\nu=0$,
hence $\tau^{-1}=\omega_{0}$, then $\zeta_{a}(v)=\zeta_{a0}=Nrk\ell/v_{0}$ and
$\zeta^{\prime}_{a}(v)=0$. In this limit $\langle[\delta x(t)-\delta
x(0)]^{2}\rangle=2D_{a0}t$ and is diffusive _at all times_ , with an effective
diffusion constant $D_{a0}=\frac{B}{(\zeta+\zeta_{a0})^{2}}$.
When $\nu$ is finite we obtain
$\langle[\delta x(t)-\delta
x(0)]^{2}\rangle=2D_{a}t+4D_{a}\left[\frac{\zeta^{\prime}_{a}(v)\omega_{0}}{[\zeta+\zeta_{a}(v)]\Omega_{a}}\right]^{2}\left(t-\frac{1-e^{-\Omega_{a}t}}{\Omega_{a}}\right)\;,$
(35)
where $D_{a}=B/[\zeta+\zeta_{a}(v)]^{2}$ and
$\Omega_{a}(v)=\Omega(v)+\omega_{0}\zeta^{\prime}_{a}(v)/[\zeta+\zeta_{a}(v)]$.
The characteristic time scale $\Omega_{a}^{-1}$ controls the crossover from
ballistic behavior for $t\ll\Omega_{a}^{-1}$ to diffusive behavior for
$t\gg\Omega_{a}^{-1}$. It is determined by the smaller of two time scales:
$\Omega^{-1}$, defined after Eq. (32), that represents the duration of the
cycle of a loaded motor, and the active time
$(\omega_{0}\zeta^{\prime}_{a}/[\zeta+\zeta_{a}])^{-1}$ that represents the
correlation time for the effect of motor on/off dynamics on the filament. At
long times the mean-square displacement is always diffusive, with an effective
diffusion constant
$D_{\text{eff}}=D_{a}\left[1+\left(\frac{\zeta^{\prime}_{a}\omega_{0}}{[\zeta+\zeta_{a}(v)]\Omega_{a}}\right)^{2}\right]$
(36)
This result only describes the behavior of the system in the stable region,
where the effective friction remains positive. At the onset of negative
friction instability $\zeta+\zeta_{a}(v)\rightarrow 0$ and the effective
diffusivity diverges. In other words the instability is also associated with
large fluctuations in the rod’s displacements due to the cooperative motor
dynamics.
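The crossover encoded in Eq. (35) can be made concrete by evaluating its memory term $f(t)=t-(1-e^{-\Omega_{a}t})/\Omega_{a}$, which grows as $\Omega_{a}t^{2}/2$ for $t\ll\Omega_{a}^{-1}$ (ballistic-like) and as $t$ for $t\gg\Omega_{a}^{-1}$ (diffusive). A short check, with $\Omega_{a}$ set to $\omega_{0}=0.5\ (\text{ms})^{-1}$ as appropriate for non-processive motors:

```python
import math

# The memory term of Eq. (35): f(t) = t - (1 - exp(-Omega_a t))/Omega_a.
# Omega_a ~ omega_0 = 0.5/ms for non-processive motors (see text).
Omega_a = 500.0   # 1/s

def f(t):
    return t - (1.0 - math.exp(-Omega_a * t)) / Omega_a

t_short, t_long = 1e-6, 1.0
print(f(t_short) / (0.5 * Omega_a * t_short ** 2))   # -> ~1 (quadratic regime)
print(f(t_long) / t_long)                            # -> ~1 (linear regime)
```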
To leading order in $\nu$ the frequency $\Omega_{a}$ that controls the
crossover to diffusive behavior is simply
$\Omega_{a}\simeq\omega_{0}+\omega_{b}+{\mathcal{O}}(\nu^{2})$. For non-processive
motors such as myosins $\omega_{0}\gg\omega_{b}$ and $\Omega_{a}\sim\omega_{0}$.
The effective diffusion constant is given by
$D_{\text{eff}}\simeq
D_{a}\left[1+\frac{2\zeta^{2}\zeta_{a0}}{(\zeta+\zeta_{a0})^{3}}\left(\frac{v_{0}\alpha}{\omega_{0}(1+\epsilon)}\right)^{2}+{\cal O}\left[(v_{0}\alpha/\omega_{0})^{4}\right]\right]\;.$
(37)
This expression indicates that the enhancement of the diffusion constant comes
from the competition of the ballistic motor-driven motion of the filament at
speed $\sim v_{0}\zeta_{a0}/(\zeta+\zeta_{a0})$ and the randomization of such
motion by the motor on/off dynamics on time scales $\sim\omega_{0}^{-1}$. The
result is that the filament dynamics is diffusive at long times, but with an
enhanced diffusion constant.
Finally, we stress that the correlation function $\langle[\delta x(t)-\delta
x(0)]^{2}\rangle$ describes the fluctuations about the steady state value
$vt$. If we write $x(t)=vt+\delta x(t)$, the actual mean square displacement of
the center of mass of the rod is given by
$\langle(x(t)-x(0))^{2}\rangle=v^{2}t^{2}+\langle[\delta x(t)-\delta
x(0)]^{2}\rangle$ and is ballistic at long times in one dimension due to the
mean motion of the rod. In addition, due to nonlinearity of the Langevin
equation (17) the mean value $\langle x\rangle$ in the presence of noise will
in general differ from the steady state solution $vt$ obtained in the absence
of noise due to renormalization by fluctuations $\langle
F_{a}(\dot{x},t)\rangle-F_{a}(v,t)$. These fluctuations are neglected in mean
field theory.
## IV Active Filament Dynamics in Two Dimensions
In two dimensions the coupled translational and rotational dynamics of the
filament is described by Eqs. (1a) and (1b). It is convenient to write the
instantaneous velocity of the center of the filament in terms of components
longitudinal and transverse to the long axis of the filament, $\dot{{\bf
r}}=V_{\|}{\bf\hat{u}}+V_{\perp}\bf{\hat{n}}$. Similarly the stretch is
written as $\bm{\Delta}=\Delta_{\|}{\bf\hat{u}}+\Delta_{\perp}{\bf\hat{n}}$,
where (see Eq. (12b))
$\displaystyle\Delta_{\|}={\bf\hat{u}}\cdot\bm{\Delta}=(V_{\|}+v_{m})\tau\;,$
(38a)
$\displaystyle\Delta_{\perp}={\bf\hat{n}}\cdot\bm{\Delta}=(V_{\perp}+s_{0}\dot{\theta})\tau\;.$
(38b)
It is then clear that $\Delta_{\|}$ has the same form as in one dimension
$\Delta_{\|}=\frac{(V_{\|}-v_{0})/\omega_{0}}{\tilde{\tau}^{-1}+\epsilon}\;,$
(39)
and the mean-field value of the attachment time $\tau$ is given by
$\tilde{\tau}^{-1}(V_{\|},V_{\perp},\dot{\theta})=1+\frac{(V_{\|}-v_{0})^{2}\alpha^{2}}{(\tilde{\tau}^{-1}+\epsilon)^{2}\omega_{0}^{2}}+\frac{V_{\perp}^{2}\tilde{\tau}^{2}\alpha^{2}}{\omega_{0}^{2}}+\frac{L^{2}\dot{\theta}^{2}\tilde{\tau}^{2}\alpha^{2}}{12\omega_{0}^{2}}\;,$
(40)
where we have carried out the average over $s_{0}$. Inserting these
expressions in Eqs. (15) and (16), the mean field active force and torque
exerted by bound motors on the filament can then be written as
$\displaystyle{\bf
F}_{a}=-kNp_{b}(t)\left[\frac{(V_{\|}-v_{0})/\omega_{0}}{\tilde{\tau}^{-1}+\epsilon}\hat{{\bf
u}}+V_{\perp}\tau\hat{{\bf n}}\right]\;,$ (41a) $\displaystyle
T_{a}=-kNp_{b}(t)\tau\left[\frac{L^{2}\dot{\theta}}{12}+V_{\perp}v_{m}\tau\right]\;.$
(41b)
### IV.1 Steady State and its Stability
The steady state of the motor-driven filament in two dimensions in the absence
of noise is characterized by the center of mass velocity ${\bf
v}=v_{\|}{\bf\hat{u}}+v_{\perp}{\bf\hat{n}}$ and angular velocity
$\dot{\vartheta}$. In the absence of any external force or torque,
$\dot{\vartheta}$ and $v_{\perp}$ are identically zero, whereas the
longitudinal dynamics described by $v_{\|}$ is identical to that obtained in
one-dimension: the filament will slide along its long axis at a steady
longitudinal velocity $v_{\|}=F_{p}/(\zeta+\zeta_{a})$, with $F_{p}$ and
$\zeta_{a}$ given by Eqs. (26) and (27), respectively.
To gain some insight into the stability of the system under application of
external forces or torques, we expand ${\bf F}_{\text{a}}$ and $T_{\text{a}}$
to linear order in velocities ${\bf v}$ and $\dot{\vartheta}$ as, ${\bf
F}_{\text{a}}({\bf v},\dot{\vartheta})\simeq{\bf
F}_{p}+\left(\frac{\partial{\bf F}_{a}}{\partial
v_{\|}}\right)_{0}v_{\|}+\left(\frac{\partial{\bf F}_{a}}{\partial
v_{\perp}}\right)_{0}v_{\perp}+\left(\frac{\partial{\bf
F}_{a}}{\partial\dot{\vartheta}}\right)_{0}\dot{\vartheta}$, and $T_{a}({\bf
v},\dot{\vartheta})\simeq\left(\frac{\partial T_{a}}{\partial
v_{\|}}\right)_{0}v_{\|}+\left(\frac{\partial T_{a}}{\partial
v_{\perp}}\right)_{0}v_{\perp}+\left(\frac{\partial
T_{a}}{\partial\dot{\vartheta}}\right)_{0}\dot{\vartheta}$, where ${\bf
F}_{p}={\bf F}_{\text{a},0}=F_{p}\hat{{\bf u}}$, is the tangential propulsion
force due to the motors. The subscript ‘0’ indicates that the expressions are
evaluated at ${\bf v}=0$ and $\dot{\vartheta}=0$. This leads to steady state
force/velocity and torque/velocity relations of the form
$\displaystyle\left(\underline{\underline{{\bm{\zeta}}}}+\underline{\underline{{\bm{\zeta}}}}_{a}\right)\cdot{\bf
v}={\bf F}_{\text{ext}}+F_{p}\hat{{\bf u}}\;,$ (42a)
$\displaystyle\left(\zeta_{\theta}+\zeta_{\theta
a}\right)\dot{\vartheta}=T_{\text{ext}}-g_{a}v_{\perp}\;,$ (42b)
where we have introduced an active “momentum” $g_{a}$ given by
$g_{a}=-\left(\frac{\partial T_{a}}{\partial v_{\perp}}\right)_{0}$. The
active contributions to the longitudinal, transverse and rotational friction
coefficients are defined as $\zeta_{\|a}=-\hat{{\bf
u}}\cdot\left(\frac{\partial{\bf F}_{a}}{\partial v_{\|}}\right)_{0}$,
$\zeta_{\perp a}=-\hat{{\bf n}}\cdot\left(\frac{\partial{\bf F}_{a}}{\partial
v_{\perp}}\right)_{0}$, and $\zeta_{\theta a}=-\left(\frac{\partial
T_{a}}{\partial\dot{\vartheta}}\right)_{0}$. The longitudinal friction
coefficient $\zeta_{\|a}$ is identical to the active friction $\zeta_{a}$
given in Eq. (27) for a rod in one dimension, with $\Delta\to\Delta_{\|}$. The
transverse and rotational friction coefficients are enhanced by motor
activity. Their active components are given by
$\displaystyle\zeta_{\perp
a}=\frac{kNr\tau_{0}}{r+(1-r)\tilde{\tau}_{0}^{-1}}$ (43a)
$\displaystyle\zeta_{\theta
a}=\frac{kNr\tau_{0}L^{2}/12}{r+(1-r)\tilde{\tau}_{0}^{-1}}\;.$ (43b)
Finally we have,
$g_{a}=\frac{kNr\tau_{0}v_{0}\left(\tau_{0}+\epsilon|\Delta_{\|}^{0}|\right)}{r+(1-r)\tilde{\tau}_{0}^{-1}}$.
When the load dependence of the unbinding rate is neglected ($\nu=0$), all
friction coefficients are enhanced by motor activity. When the force/velocity
and torque/angular velocity curves are calculated to nonlinear order, we find
that the only instability is the negative longitudinal friction instability
obtained in one dimension. No instabilities are obtained in the angular
dynamics. We expect this will change if we include the semiflexibility of the
filament Kikuchi et al. (2009); Brangwynne et al. (2008).
### IV.2 Fluctuations around the steady state
We now examine the dynamics of noise-induced fluctuations about the steady
state by letting $\delta\dot{{\bf r}}=\dot{{\bf r}}-{\bf v}$ and
$\delta\dot{\theta}=\dot{\theta}-\dot{\vartheta}$ where ${\bf v}$ and
$\dot{\vartheta}$ are the steady state velocity and angular frequency in the
absence of external force and torque. As noted in the previous section when
${\bf F}_{\text{ext}}=0$ and $T_{\text{ext}}=0$, $v_{\|}=v\neq 0$, with $v$
given by the solution of Eq. (23), and $v_{\perp}=\dot{\vartheta}=0$.
Projecting velocity fluctuations longitudinal and transverse to the filament,
$\delta\dot{{\bf r}}={\bf\hat{u}}\delta V_{\|}+{\bf\hat{n}}\delta V_{\perp}$,
the dynamics of fluctuations is described by the coupled equations,
$\displaystyle\left[\zeta_{\|}+\zeta_{\|a}(v)\right]\delta
V_{\|}=-kN\Delta_{\|}(v)\delta p_{b}(t)+\eta_{\|}\;,$ (44a)
$\displaystyle\left[\zeta_{\perp}+\zeta_{\perp a}(v)\right]\delta
V_{\perp}=\eta_{\perp}\;,$ (44b)
$\displaystyle\left[\zeta_{\theta}+\zeta_{\theta
a}(v)\right]\delta\dot{\theta}=-kNp_{bs}(v)\tau(v)v_{m}(v)\delta
V_{\perp}+\eta_{\theta}\;,$ (44c)
with
$\delta\dot{p}_{b}=-\Omega(v)\delta
p_{b}-p_{bs}(v)\frac{\partial}{\partial v}\left[\frac{1}{\tau(v)}\right]\delta
V_{\|}\;,$ (45)
where the effective frequency $\Omega(v)=\tau^{-1}(v)+\omega_{b}$ and the
longitudinal active friction $\zeta_{\|a}(v)$ are as in one dimension,
$\zeta_{\perp a}(v)=kNp_{bs}(v)\tau(v)$ and $\zeta_{\theta
a}(v)=kNp_{bs}(v)\tau(v)L^{2}/12$. In all the parameters, $v\equiv v_{\|}$ has
to be replaced by the steady state solution obtained in one dimension in the
absence of external force or torque.
The time-correlation function of orientational fluctuations,
$\Delta\theta(t)=\delta\theta(t)-\delta\theta(0)$, can be calculated from Eqs.
(44b) and (44c), with the result
$\langle\Delta\theta(t)\Delta\theta(t^{\prime})\rangle=2D_{\theta a}\
\text{min}(t,t^{\prime})\;.$ (46)
The effective rotational diffusion constant is enhanced by the transverse
diffusivity and is given by
$D_{\theta a}(v)=\frac{B_{\theta}}{\left[\zeta_{\theta}+\zeta_{\theta
a}(v)\right]^{2}}+\frac{B_{\perp}/\ell_{p}^{2}(v)}{\left[\zeta_{\perp}+\zeta_{\perp
a}(v)\right]^{2}}$ (47)
with $\ell_{p}(v)=\left[\zeta_{\theta}+\zeta_{\theta
a}(v)\right]/kNp_{bs}(v)\tau(v)v_{m}(v)$. Using Eq. (46), one immediately
obtains the angular time-correlation function as Han et al. (2006),
$\langle{\bf\hat{u}}(t^{\prime})\cdot{\bf\hat{u}}(t^{\prime\prime})\rangle=e^{-D_{\theta
a}\left|t^{\prime}-t^{\prime\prime}\right|}\;.$ (48)
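Eq. (48) follows from Eq. (46) through the Gaussian identity $\langle\cos X\rangle=e^{-\langle X^{2}\rangle/2}$ applied to the angular increment $X=\Delta\theta$ with $\langle X^{2}\rangle=2D_{\theta a}|t^{\prime}-t^{\prime\prime}|$. The sketch below checks this identity by direct quadrature; the value of $D_{\theta a}$ is of the order of the naive actomyosin estimate quoted in the text, and the lag time is illustrative.

```python
import math

# Check E[cos X] = exp(-var/2) for X ~ N(0, var), with var = 2 D dt, which
# underlies Eq. (48). D is of the order of the actomyosin estimate ~0.17/s.
D, dt = 0.17, 2.0   # 1/s, s
var = 2.0 * D * dt

# trapezoid quadrature of cos(x) against the N(0, var) density
n, L = 200000, 12.0 * math.sqrt(var)
h = 2.0 * L / n
total = 0.0
for i in range(n + 1):
    x = -L + i * h
    w = 1.0 if 0 < i < n else 0.5   # trapezoid endpoint weights
    total += w * math.cos(x) * math.exp(-x * x / (2.0 * var))
integral = total * h / math.sqrt(2.0 * math.pi * var)

print(integral, math.exp(-D * dt))   # the two agree
```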
The fluctuations in the probability of bound motors are driven by their
coupling to the stochastic longitudinal dynamics of the filament. Assuming
$\delta p_{b}(0)=0$, we obtain
$\langle\delta p_{b}(t)\delta
p_{b}(t^{\prime})\rangle=\left(\frac{\zeta_{a}^{\prime}\omega_{0}}{v_{p}}\right)^{2}\frac{B_{\|}}{\Omega_{a}}\left[e^{-\Omega_{a}\left|t-t^{\prime}\right|}-e^{-\Omega_{a}\left(t+t^{\prime}\right)}\right],$
(49)
where
$\Omega_{a}(v)=\Omega(v)+\omega_{0}\frac{\zeta^{\prime}_{a}(v)}{\zeta_{\|}+\zeta_{\|a}(v)}$,
$\zeta^{\prime}_{a}(v)=kNp_{bs}(v)\Delta_{\|}(v)\frac{\partial}{\partial
v}\left(\frac{1}{\tilde{\tau}}\right)$, and
$v_{p}(v)=Nk\Delta_{\|}(v)/\left[\zeta_{\|}+\zeta_{\|a}(v)\right]$ is a
longitudinal propulsion velocity. Notice that $v_{p}(v=0)=v_{s}/p_{bs0}$, with
$v_{s}$ given in Eq. (28). Finally, we can compute the correlation function of
the fluctuation $\delta\dot{{\bf r}}$ of the filament’s position. In the
laboratory frame the dynamics of $\delta\dot{{\bf r}}$ can be recast in the
form of a simple equation,
$\delta\dot{{\bf r}}=-v_{p}\delta p_{b}(t)\hat{{\bf
u}}+\left[\underline{\underline{{\bm{\zeta}}}}+\underline{\underline{{\bm{\zeta}}}}^{a}(v)\right]^{-1}\cdot{\bm{\eta}}$
(50)
Fluctuations in the probability of bound motors do not couple to orientational
fluctuations to linear order. It is then straightforward to calculate the
correlation function of displacement fluctuations, with the result
$\displaystyle\langle[\delta{\bf r}(t)-\delta{\bf
r}(0)]^{2}\rangle=2D_{\text{eff}}~{}t+\frac{D_{\|a}\zeta_{a}^{{}^{\prime}2}\omega_{0}^{2}/\Omega_{a}^{2}}{(D_{\theta
a}^{2}-\Omega_{a}^{2})(\zeta_{\|}+\zeta_{\|a})^{2}}\left[-(D_{\theta
a}+\Omega_{a})\left(1-e^{-2\Omega_{a}t}\right)+\frac{4\Omega_{a}^{2}}{D_{\theta
a}+\Omega_{a}}\left(1-e^{-(\Omega_{a}+D_{\theta a})t}\right)\right]$ (51)
where effective longitudinal and transverse diffusion constants have been
defined as
$\displaystyle D_{\|a}=B_{\|}/[\zeta_{\|}+\zeta_{\|a}(v)]^{2}\;,$ (52a)
$\displaystyle D_{\perp a}=B_{\perp}/[\zeta_{\perp}+\zeta_{\perp
a}(v)]^{2}\;.$ (52b)
Finally, using ${\bf r}(t)=\delta{\bf r}(t)+\int_{0}^{t}\
dt^{\prime}v\hat{{\bf u}}(t^{\prime})$, the mean square displacement (MSD) can
be written as,
$\langle[{\bf r}(t)-{\bf r}(0)]^{2}\rangle=\langle[\delta{\bf r}(t)-\delta{\bf
r}(0)]^{2}\rangle+\frac{v^{2}}{D_{\theta a}}\left[t-\frac{1-e^{-D_{\theta
a}t}}{D_{\theta a}}\right]\;.$ (53)
The MSD is controlled by the interplay of two time scales, the rotational
diffusion time, $D_{\theta a}^{-1}$, that is decreased by activity as compared
to its bare value, $D_{\theta}^{-1}$, and the time scale $\Omega_{a}^{-1}$,
which is in turn controlled by the duration of the motor binding/unbinding cycle.
If $D_{\theta a}^{-1}\gg\Omega_{a}^{-1}$, which is indeed the case for
actomyosin systems (a naive estimate for actin-myosin systems, neglecting
the load dependence of the unbinding rate, gives $\Omega_{a}^{0}\simeq 5\
\text{ms}^{-1}$ and $D_{\theta a}^{0}\simeq 0.17\ \text{s}^{-1}$ for $N=1$), then on times
$t\gg\Omega_{a}^{-1}$ the MSD is given by
$\langle[{\bf r}(t)-{\bf r}(0)]^{2}\rangle=\
2D_{\text{eff}}t+\frac{v^{2}}{D_{\theta a}}\left[t-\frac{1-e^{-D_{\theta
a}t}}{D_{\theta a}}\right]\;,$ (54)
with
$D_{\text{eff}}=D_{\|a}+D_{\perp a}+\frac{D_{\|a}\Omega_{a}}{D_{\theta
a}+\Omega_{a}}\left(\frac{\zeta^{\prime}_{a}\omega_{0}}{[\zeta_{\|}+\zeta_{\|a}(v)]\Omega_{a}}\right)^{2}\;.$
(55)
In other words the rod performs a persistent random walk consisting of
ballistic segments at speed $v$ randomized by rotational diffusion. The
behavior is diffusive both at short and long times, albeit with different
diffusion constants, $D_{\text{eff}}$ and $D_{\text{eff}}+v^{2}/(2D_{\theta
a})$, respectively. This is indeed the dynamics of a self-propelled rod. If
the noise strengths $B_{\|}$, $B_{\perp}$ and $B_{\theta}$ are negligible,
then Eq. (54) reduces to
$\langle[{\bf r}(t)-{\bf r}(0)]^{2}\rangle\simeq\ \frac{v^{2}}{D_{\theta
a}}\left[t-\frac{1-e^{-D_{\theta a}t}}{D_{\theta a}}\right]\;.$ (56)
The MSD then exhibits a crossover from ballistic behavior for $t\ll D_{\theta
a}^{-1}$ to diffusive behavior at long times.
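The two limits of Eq. (56) can be checked with a short numerical sketch (Python, not part of the original analysis; illustrative units with $v$ and $D_{\theta a}$ set to order one):

```python
import numpy as np

def msd_active(t, v, D_theta):
    """Active part of the MSD, Eq. (56)."""
    return (v**2 / D_theta) * (t - (1.0 - np.exp(-D_theta * t)) / D_theta)

v, D = 1.0, 2.0  # illustrative units

# t << 1/D: ballistic regime, expanding the exponential gives MSD ~ v^2 t^2 / 2
t = 1e-4
assert np.isclose(msd_active(t, v, D), 0.5 * v**2 * t**2, rtol=1e-3)

# t >> 1/D: diffusive regime, MSD ~ (v^2 / D) t, i.e. 2 * [v^2/(2D)] * t
t = 1e3
assert np.isclose(msd_active(t, v, D), v**2 * t / D, rtol=1e-2)
```

The long-time slope $v^{2}/D_{\theta a}$ is exactly the extra contribution $v^{2}/(2D_{\theta a})$ to the diffusion constant quoted above, times 2.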
It is worthwhile to note that if one neglects the load dependence of the
unbinding rate by setting $\nu=0$, the long-time effective diffusivity is enhanced to
$D_{\text{eff}}^{0}=D_{\|a}^{0}+D_{\perp a}^{0}+(v^{0})^{2}/(2D_{\theta
a}^{0})$, due to the interplay between ballistic motion driven by the tethered
motors and rotational diffusion, unlike the situation in one dimension.
## V Summary and Outlook
We have investigated the dynamics of a single cytoskeletal filament modeled as
a rigid rod interacting with tethered motor proteins in a motility assay in
two dimensions. Motor activity yields both an effective propulsion of the
filament along its long axis and a renormalization of all friction
coefficients. The longitudinal friction can change sign leading to an
instability in the filament’s response to external force, as demonstrated by
previous authors Jülicher and Prost (1995). The effective propulsion force and
filament velocity in the steady state are calculated in terms of microscopic
motor and filament parameters.
We also considered the fluctuations of the filament displacement about its
steady state value and demonstrated that the coupling to the binding/unbinding
dynamics of the motors yields non-Markovian fluctuations and enhanced
diffusion. Future work will include the stochasticity in the motor
displacements and the semiflexibility of filaments, which is expected to lead
to buckling instabilities Karpeev et al. (2007) and anomalous fluctuations
Liverpool (2003).
###### Acknowledgements.
This work was supported at Syracuse by the National Science Foundation under a
Materials World Network award DMR-0806511 and in Stellenbosch by the National
Research Foundation under grant number UID 67512. MCM was also partly
supported on NSF-DMR-1004789 and NSF-DMR-0705105. We thank Aparna Baskaran,
Lee Boonzaaier and Tannie Liverpool for illuminating discussions. Finally, SB
and MCM thank the University of Stellenbosch for hospitality during the
completion of part of this work.
## Appendix A Solution of Mean-Field Equation
Here we discuss the solution of the mean-field equation (13) for the
attachment time $\tau$. For simplicity, we consider the one-dimensional case
in detail. The discussion is then easily generalized to two dimensions. The
mean-field equation for the residence time $\tau$ is rewritten here for
clarity:
$\tau_{MF}=\omega_{u}^{-1}(\Delta(\tau_{MF}))\;.$ (57)
The solution clearly depends on the form chosen to describe the dependence of
the motor unbinding rate on the stretch $\Delta$, in turn given by
$\Delta(\tau_{MF})=(\dot{x}-v_{0})/(\tau_{MF}^{-1}+\epsilon\omega_{0})$. The
mean-field equation must be inverted to determine $\tau_{MF}$ as a function of
the filament velocity $\dot{x}=v$.
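The inversion can be sketched numerically. The following Python fragment (an illustrative sketch, not the authors' code; parameter values taken from the caption of Fig. 7, with units of s and nm assumed) solves Eq. (57) by damped fixed-point iteration for both the exponential form and the parabolic ansatz of Eq. (59):

```python
import numpy as np

# Parameter values from the caption of Fig. 7 (assumed units: s, nm)
omega0 = 0.5e3        # bare unbinding rate, omega_0 = 0.5 (ms)^-1 = 500 s^-1
v0 = 1000.0           # unloaded motor velocity, nm/s
alpha = 1.0 / 7.5     # inverse characteristic length, nm^-1
eps = 5.0             # epsilon

def stretch(tau, v):
    # Delta(tau) = (v - v0) / (1/tau + eps * omega0)
    return (v - v0) / (1.0 / tau + eps * omega0)

def tau_mf(v, omega_u, n_iter=400):
    """Solve tau = 1/omega_u(Delta(tau)), Eq. (57), by damped fixed-point iteration."""
    tau = 1.0 / omega0
    for _ in range(n_iter):
        tau = 0.5 * (tau + 1.0 / omega_u(stretch(tau, v)))   # damping aids convergence
    return tau

w_exp = lambda d: omega0 * np.exp(alpha * abs(d))        # exponential form
w_par = lambda d: omega0 * (1.0 + (alpha * d) ** 2)      # parabolic ansatz, Eq. (59)

# tau has its maximum, 1/omega0, at v = v0, where Delta = 0
print(tau_mf(v0, w_exp), tau_mf(2 * v0, w_exp), tau_mf(2 * v0, w_par))
```

Sweeping `v` and plotting `tau_mf` reproduces the qualitative shape of Fig. 7: a sharp maximum at $v=v_{0}$ and a decay on either side.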
Figure 7: Mean field attachment time $\tau_{\text{MF}}$ as a function of $v$
for parameter values appropriate for acto-myosin systems: $v_{0}=1000\
\text{nm s}^{-1}$, $k=10\ \text{pN nm}^{-1}$, $f_{s}=4\ \text{pN}$,
$\alpha^{-1}=7.5\ \text{nm}$, $\omega_{0}=0.5\ \text{(ms)}^{-1}$, $r=0.06$,
corresponding to $\epsilon=5$. The dashed line is the numerical solution of
Eq. (57) obtained using the exponential dependence of the unbinding rate on
the stretch. The solid line is obtained using the parabolic ansatz given in
Eq. (59).
For compactness we drop the label ‘MF’. It is clear that $\tau$ has a maximum
at $v=v_{0}$, where $\tau=\omega_{0}^{-1}$. This simply corresponds to the
fact that the time a motor protein spends attached to the actin filament is
largest when the motors’ tails are unstretched ($\Delta=0$) and the motors
advance at the unloaded motor velocity, $v_{0}$.
It is convenient to use the dimensionless variable and parameters introduced
in the text and write the stretch $\Delta$ as
$\Delta=\frac{(u-1)\ell_{0}}{\tilde{\omega}_{u}+1}\;,$ (58)
where $u=v/v_{0}$, $\tilde{\omega}_{u}=\omega_{u}/\omega_{0}$ and
$\ell_{0}=v_{0}/\omega_{0}$. A form commonly used in the literature is the
exponential form $\omega_{u}(\Delta)=\omega_{0}e^{\alpha|\Delta|}$, with
$\alpha^{-1}$ a characteristic length scale. The dimensionless combination
$\alpha\Delta$ can then be written in terms of the parameter
$\nu=\alpha\ell=\alpha\ell_{0}/(1+\epsilon)$ and setting $\nu=0$ corresponds
to neglecting the load dependence of the unbinding rate. The numerical
solution of Eq. (57) for the mean attachment time as a function of $v$ is
shown as a dashed line in Fig. 7 for parameter values appropriate for acto-
myosin systems. As expected it has a sharp maximum at $v=v_{0}$. At large $v$
the attachment time decays logarithmically with velocity. As a result, the
stretch is found to saturate at large velocity, as shown by the dashed curve
in Fig. 8. This behavior is unphysical as it does not incorporate the fact
that when the stretch exceeds a characteristic value of the order $f_{d}/k$,
the motor head simply detaches, as shown in Fig. 2. Instead of incorporating
this cutoff by hand, we have chosen to use a simple quadratic form for the
dependence of the unbinding rate on the stretch, given by
$\omega_{u}(\Delta)=\omega_{0}\left[1+\alpha^{2}\Delta^{2}\right]\;.$ (59)
With this form the mean field equation (57) can be solved analytically,
although the explicit solution is not terribly informative and will not be
given here. The resulting attachment time is shown as a solid line in Fig. 7.
The quadratic form reproduces the sharp maximum of $\tau$ at $v=v_{0}$ and
yields $\tau\sim v^{-3/2}$ at large $v$. The stretch then decays with
velocity, as shown in Fig. 8.
Figure 8: Stretch $\Delta$ as a function of velocity $v$ obtained using the
mean-field value of the attachment time displayed in Fig. 7. The parameter
values are the same as in Fig. 7. The dashed line is obtained using the
exponential dependence of the unbinding rate on the stretch. The solid line is
obtained using the parabolic ansatz given in Eq. (59).
## References
* Schaller et al. (2010) V. Schaller, C. Weber, C. Semmrich, E. Frey, and A. R. Bausch, Nature 467, 73 (2010).
* Butt et al. (2010) T. Butt, T. Mufti, A. Humayun, P. B. Rosenthal, S. Khan, S. Khan, and J. E. Molloy, J. Biol. Chem. 285, 4964 (2010).
* Copeland and Weibel (2009) M. F. Copeland and D. B. Weibel, Soft Matter 5, 1174 (2009).
* Riveline et al. (1998) D. Riveline, A. Ott, F. Jülicher, D. A. Winkelmann, O. Cardoso, J.-J. Lacapère, S. Magnúsdóttir, J. L. Viovy, L. Gorre-Talini, and J. Prost, Eur. Biophys. J. 27, 403 (1998).
* Guérin et al. (2010a) T. Guérin, J. Prost, P. Martin, and J.-F. Joanny, Curr. Op. Cell Biol. 22, 14 (2010a).
* Jülicher and Prost (1997) F. Jülicher and J. Prost, Phys. Rev. Lett. 78, 4510 (1997), URL http://link.aps.org/doi/10.1103/PhysRevLett.78.4510.
* Grill et al. (2005) S. W. Grill, K. Kruse, and F. Jülicher, Phys. Rev. Lett. 94, 108104 (2005), URL http://dx.doi.org/10.1103/PhysRevLett.94.108104.
* Günther and Kruse (2007) S. Günther and K. Kruse, New J. Phys. 9, 417 (2007), URL http://iopscience.iop.org/1367-2630/9/11/417/.
* Vilfan and Frey (2005) A. Vilfan and E. Frey, Journal of Physics: Condensed Matter 17, S3901 (2005), URL http://stacks.iop.org/0953-8984/17/i=47/a=018.
* Camalet and Jülicher (2000) S. Camalet and F. Jülicher, New Journal of Physics 2, 24 (2000), URL http://stacks.iop.org/1367-2630/2/i=1/a=324.
* Jülicher and Prost (1995) F. Jülicher and J. Prost, Phys. Rev. Lett. 75, 2618 (1995), URL http://link.aps.org/doi/10.1103/PhysRevLett.75.2618.
* Badoual et al. (2002) M. Badoual, F. Jülicher, and J. Prost, Proc. Natl. Acad. Sci. USA 99, 6696 (2002).
* Plaçais et al. (2009) P. Y. Plaçais, M. Balland, T. Guérin, J.-F. Joanny, and P. Martin, Phys. Rev. Lett. 103, 158102 (2009), URL http://link.aps.org/doi/10.1103/PhysRevLett.103.158102.
* Gibbons et al. (2001) F. Gibbons, J. F. Chauwin, M. Despósito, and J. V. José, Biophys. J. 80, 2515 (2001).
* Kraikivski et al. (2006) P. Kraikivski, R. Lipowsky, and J. Kierfeld, Phys. Rev. Lett. 96, 258103 (2006), URL http://link.aps.org/doi/10.1103/PhysRevLett.96.258103.
* Brokaw (1975) C. J. Brokaw, Proc. Natl. Acad. Sci. USA 72, 3102 (1975).
* Vilfan et al. (1999) A. Vilfan, E. Frey, and F. Schwabl, Europhys. Lett. 283, 45 (1999).
* Hexner and Kafri (2009) D. Hexner and Y. Kafri, Phys Biol 6, 036016 (2009), URL http://iopscience.iop.org/1478-3975/6/3/036016.
* Guérin et al. (2010b) T. Guérin, J. Prost, and J.-F. Joanny, Phys. Rev. Lett. 104, 248102 (2010b), URL http://prl.aps.org/abstract/PRL/v104/i24/e248102.
* Huxley (1957) A. F. Huxley, Prog. Biophys. Chem. 7, 255 (1957).
* Vilfan (2009) A. Vilfan, Biophys. J. 1130 1137, 2515 (2009).
* van Teeffelen and Löwen (2008) S. van Teeffelen and H. Löwen, Phys. Rev. E 78, 020101 (2008), URL http://pre.aps.org/abstract/PRE/v78/i2/e020101.
* Baskaran and Marchetti (2008) A. Baskaran and M. C. Marchetti, Phys. Rev. Lett. 101, 268101 (2008).
* Svoboda and Block (1994) K. Svoboda and S. M. Block, Cell 77, 773 (1994).
* Parmeggiani et al. (2001) A. Parmeggiani, F. Jülicher, L. Peliti, and J. Prost, Europhys. Lett. 56, 603 (2001), eprint cond-mat/0109187v1, URL http://arxiv.org/abs/cond-mat/0109187v1.
* Visscher et al. (1999) K. Visscher, M. J. Schnitzer, and S. M. Block, Nature 400, 184 (1999), URL http://www.nature.com/nature/journal/v400/n6740/abs/400184a0.html.
* Howard (2001) J. Howard, _Mechanics of Motor Proteins and the Cytoskeleton_ (Sinauer Associates, 2001), ISBN 0878933344, URL http://www.amazon.com/exec/obidos/redirect?tag=citeulike07-20&path=ASIN/0878933344.
* Tawada and Sekimoto (1991) K. Tawada and K. Sekimoto, Journal of Theoretical Biology 150, 193 (1991), ISSN 0022-5193, URL http://www.sciencedirect.com/science/article/B6WMD-4KDGR4D-5/2/906e9dbabbba82beff6bc6e5f982d9bd.
* Kikuchi et al. (2009) N. Kikuchi, A. Ehrlicher, D. Koch, J. A. Käs, S. Ramaswamy, and M. Rao, Proceedings of the National Academy of Sciences 106, 19776 (2009).
* Brangwynne et al. (2008) C. P. Brangwynne, G. H. Koenderink, F. C. MacKintosh, and D. A. Weitz, Phys. Rev. Lett. 100, 118104 (2008).
* Han et al. (2006) Y. Han, A. Alsayed, M. Nobili, J. Zhang, T. C. Lubensky, and A. G. Yodh, Science 314, 626 (2006), URL http://www.sciencemag.org/content/314/5799/626.short.
* Karpeev et al. (2007) D. Karpeev, I. S. Aranson, L. S. Tsimring, and H. G. Kaper, Phys. Rev. E 76, 051905 (2007).
* Liverpool (2003) T. B. Liverpool, Phys. Rev. E 67, 031909 (2003), URL http://pre.aps.org/abstract/PRE/v67/i3/e031909.
arxiv-papers | 2011-04-17T23:32:09 | 2024-09-04T02:49:18.304945
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Shiladitya Banerjee, M. Cristina Marchetti and Kristian Müller-Nedebock",
"submitter": "Shiladitya Banerjee",
"url": "https://arxiv.org/abs/1104.3360"
}
1104.3369
# Hole burning in a nanomechanical resonator coupled to a Cooper pair box
C. Valverde valverde@unip.br A.T. Avelar B. Baseia Universidade Paulista,
Rod. BR 153, km 7, 74845-090 Goiânia, GO, Brazil. Universidade Estadual de
Goiás, Rod. BR 153, 3105, 75132-903 Anápolis, GO, Brazil. Instituto de
Física, Universidade Federal de Goiás, 74001-970 Goiânia, GO, Brazil.
###### Abstract
We propose a scheme to create holes in the statistical distribution of
excitations of a nanomechanical resonator. It employs a controllable coupling
between this system and a Cooper pair box. The success probability and the
fidelity are calculated and compared with those obtained in the atom-field
system via distinct schemes. As an application we show how to use the hole-
burning scheme to prepare (low excited) Fock states.
###### keywords:
Quantum state engineering , Superconducting circuits , Nanomechanical
Resonator, Cooper Pair Box
###### PACS:
03.67.Lx, 85.85.+j, 85.25.C, 32. 80. Bx , 42.50.Dv
††journal: …
,
## 1 Introduction
Nanomechanical resonators (NR) have been studied in a diversity of situations,
such as weak force detection [1], precision measurements [2], quantum
information processing [3], etc. The demonstration of the quantum nature of
mechanical and micromechanical devices is a pursued target; for example,
purely nonclassical behavior in a linear resonator should manifest itself in
energy quantization, the appearance of Fock states, quantum-limited
position-momentum uncertainty, superposition and entangled states, etc. NR can
now be fabricated with fundamental vibrational mode frequencies in the range
MHz – GHz [4, 5, 6].
also raise the fundamental question of whether such systems that contain a
macroscopic number of atoms will exhibit quantum behavior. Due to their sizes,
quantum behavior in micromechanical systems will be strongly influenced by
interactions with the environment and the existence of an experimentally
accessible quantum regime will depend on the rate at which decoherence occurs
[7, 8]. One crucial step in the study of nanomechanical systems is the
engineering and detection of quantum effects of the mechanical modes. This can
be achieved by connecting the resonators with solid-state electronic devices
[9, 10, 11, 12, 13], such as a single-electron transistor. NR has also been
used to study quantum nondemolition measurement [13, 14, 15, 16], quantum
decoherence [12, 17], and macroscopic quantum coherence phenomena [18]. Rapid
advances in nanofabrication techniques have spurred great interest in the
study of NR systems in view of their potential modern applications as sensors,
widely used in domains such as biology, astronomy, and quantum computation
[19, 20], and more recently in quantum information [3, 21, 22, 23, 24, 25, 26]
to implement single qubits [22] and multiqubits [27], to explore cooling
mechanisms [28, 29, 30, 31, 32, 33] and transducer techniques [34, 35, 36],
and to generate nonclassical states such as Fock states [37],
Schrödinger-“cat” states [12, 38, 39], squeezed states [40, 41, 42, 43,
44], and intermediate and other superposition states [45, 46]. In
particular, NR coupled with superconducting charge qubits has been used to
generate entangled states [12, 38, 47, 48]. In a previous paper, Zhou and Mizel
[43] proposed a scheme to create squeezed states in a NR coupled to a Cooper
pair box (CPB) qubit, in which the NR-CPB coupling is controllable. Such
control comes from changing external parameters and plays an important role in
quantum computation, allowing the interaction between the systems to be
switched ON and OFF on demand.
Now, the storage of optical data and communications using basic processes
belonging to the domain of quantum physics has been a subject of growing
interest in recent years [49]. Motivated by this interest, we present here a
feasible experimental scheme to create holes in the statistical distribution
of excitations of a coherent state previously prepared in a NR. In this
proposal the coupling between the NR and the CPB can be controlled
continuously by tuning two external biasing fluxes. The motivation is inspired
by early investigations on the production of new materials possessing holes in
their fluorescent spectra [50] and also inspired by previous works of ours, in
which we have used alternative systems and schemes to attain this goal [51,
52, 53]. The desired goal in producing holes with controlled positions in the
number space is their possible application in quantum computation, quantum
cryptography, and quantum communication. As argued in [52], these states are
potential candidates for optical data storage, each hole being associated with
some signal (say YES, $\left|1\right\rangle$, or $\left|+\right\rangle$) and
its absence being associated with an opposite signal (NO,
$\left|0\right\rangle$, or $\left|-\right\rangle$). Generation of such holes
has been treated in the contexts of cavity-QED [53] and traveling waves [54].
## 2 Model hamiltonian for the CPB-NR system
There exist in the literature a large number of devices based on SQUIDs,
where the CPB charge qubit consists of two superconducting Josephson junctions
in a loop. In the present model a CPB is coupled to a NR as shown in Fig. (1);
the scheme is inspired by the works of Jie-Qiao Liao et al. [23] and Zhou et
al. [43], in which we have substituted each Josephson junction by two of them.
This creates a new configuration including a third loop. A superconducting CPB
charge qubit is adjusted via a voltage $V_{1}$ at the system input and a
capacitance $C_{1}$. We want the scheme to attain an efficient tunneling effect
for the Josephson energy. In Fig. (1) we observe three loops: one great loop
between two small ones. This makes it easier to control the external
parameters of the system, since the control mechanism includes the input
voltage $V_{1}$ plus three external fluxes $\Phi(\ell),$ $\Phi(r)$ and
$\Phi_{e}(t)$. In this way one can induce small neighboring loops. The
great loop contains the NR, and its effective area in the center of the
apparatus changes as the NR oscillates, which creates an external flux
$\Phi_{e}(t)$ that provides the CPB-NR coupling to the system.
Figure 1: Model for the CPB-NR coupling.
In this work we will assume the four Josephson junctions to be identical, with
the same Josephson energy $E_{J}^{0}$; the external
fluxes $\Phi(\ell)$ and $\Phi(r)$ are likewise assumed to have the same magnitude but opposite
sign: $\Phi(\ell)=-\Phi(r)=\Phi(x)$. In this way, we can write the Hamiltonian
describing the entire system as
$\hat{H}=\omega\hat{a}^{\dagger}\hat{a}+4E_{c}\left(N_{1}-\frac{1}{2}\right)\hat{\sigma}_{z}-4E_{J}^{0}\cos\left(\frac{\pi\Phi_{x}}{\Phi_{0}}\right)\cos\left(\frac{\pi\Phi_{e}}{\Phi_{0}}\right)\hat{\sigma}_{x},$
(1)
where $\hat{a}^{\dagger}$ ($\hat{a}$) is the creation (annihilation) operator for
the excitations in the NR, with frequency $\omega$ and mass
$m$; $E_{J}^{0}$ and $E_{c}$ are respectively the energy of each Josephson
junction and the charging energy of a single electron; $C_{1}$ and $C_{J}^{0}$
stand for the input capacitance and the capacitance of each Josephson tunnel junction,
respectively; $\Phi_{0}=h/2e$ is the flux quantum and $N_{1}=C_{1}V_{1}/2e$ is
the charge number at the input with input voltage $V_{1}$. We have used
the Pauli matrices to describe our system operators, where the states
$\left|g\right\rangle$ and $\left|e\right\rangle$ (or 0 and 1) represent the
number of extra Cooper pairs on the superconducting island. We have:
$\hat{\sigma}_{z}=\left|g\right\rangle\left\langle
g\right|-\left|e\right\rangle\left\langle e\right|$,
$\hat{\sigma}_{x}=\left|g\right\rangle\left\langle
e\right|-\left|e\right\rangle\left\langle g\right|$ and
$E_{C}=e^{2}/\left(C_{1}+4C_{J}^{0}\right).$
The magnetic flux can be written as the sum of two terms,
$\Phi_{e}=\Phi_{b}+B\ell\hat{x}\text{ },$ (2)
where the first term $\Phi_{b}$ is the induced flux, corresponding to the
equilibrium position of the NR and the second term describes the contribution
due to the vibration of the NR; $B$ represents the magnetic field created in
the loop. We have assumed the displacement $\hat{x}$ described as
$\hat{x}=x_{0}(\hat{a}^{\dagger}+\hat{a})$, where $x_{0}=\sqrt{m\omega/2}$ is
the amplitude of the oscillation.
Substituting Eq. (2) into Eq. (1) and controlling the flux $\Phi_{b}$ we can
adjust $\cos\left(\frac{\pi\Phi_{b}}{\Phi_{0}}\right)=0$ to obtain
$\hat{H}=\omega\hat{a}^{\dagger}\hat{a}+4E_{c}\left(N_{1}-\frac{1}{2}\right)\hat{\sigma}_{z}-4E_{J}^{0}\cos\left(\frac{\pi\Phi_{x}}{\Phi_{0}}\right)\sin\left(\frac{\pi
B\ell\hat{x}}{\Phi_{0}}\right)\hat{\sigma}_{x},$ (3)
and making the approximation $\pi B\ell\hat{x}/\Phi_{0}\ll 1$ we find
$\hat{H}=\omega\hat{a}^{\dagger}\hat{a}+\frac{1}{2}\omega_{0}\hat{\sigma}_{z}+\lambda_{0}(\hat{a}^{\dagger}+\hat{a})\hat{\sigma}_{x},$
(4)
where the coupling constant is
$\lambda_{0}=-4E_{J}^{0}\cos\left(\frac{\pi\Phi_{x}}{\Phi_{0}}\right)\left(\frac{\pi
B\ell x_{0}}{\Phi_{0}}\right)$ and the effective energy is
$\omega_{0}=8E_{c}\left(N_{1}-\frac{1}{2}\right)$. In the rotating wave
approximation the above Hamiltonian becomes
$\hat{H}=\omega\hat{a}^{\dagger}\hat{a}+\frac{1}{2}\omega_{0}\hat{\sigma}_{z}+\lambda_{0}(\hat{\sigma}_{+}\hat{a}+\hat{a}^{\dagger}\hat{\sigma}_{-}).$
(5)
Now, in the interaction picture the Hamiltonian is written as
$\hat{H}_{I}=\hat{U}_{0}^{\dagger}\hat{H}\hat{U}_{0}-i\hbar\hat{U}_{0}^{\dagger}\frac{\partial\hat{U}_{0}}{\partial
t},$ where
$\hat{U}_{0}=\exp\left[-i\left(\omega\hat{a}^{\dagger}\hat{a}+\frac{\omega_{0}\hat{\sigma}_{z}}{2}\right)t\right]$
is the evolution operator. Assuming the system operates under the resonant
condition, i.e., $\omega=\omega_{0}$, and setting
$\hat{\sigma}_{z}=\hat{\sigma}_{+}\hat{\sigma}_{-}-\hat{\sigma}_{-}\hat{\sigma}_{+}$
and $\hat{\sigma}_{\pm}=\left(\hat{\sigma}_{x}\pm
i\hat{\sigma}_{y}\right)/2,$ with
$\hat{\sigma}_{y}=(\left|e\right\rangle\left\langle
g\right|-\left|g\right\rangle\left\langle e\right|)/i$, the interaction
Hamiltonian reduces to the abbreviated form
$\hat{H}_{I}=\beta\left(\hat{a}^{\dagger}\hat{\sigma}_{-}+\hat{a}\hat{\sigma}_{+}\right),$
(6)
where $\beta=-\lambda_{0}$ and $\hat{\sigma}_{+}$ ($\hat{\sigma}_{-}$) is the
raising (lowering) operator for the CPB.
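The closed-form evolution operator of Eq. (8) below follows from this Jaynes-Cummings-type Hamiltonian and can be verified numerically. The sketch below (illustrative only, not from the paper; arbitrary units, Fock space truncated at 20 excitations) builds Eq. (6) with the text's convention $\hat{\sigma}_{+}=|g\rangle\langle e|$, $\hat{\sigma}_{-}=|e\rangle\langle g|$ and compares matrix elements of $\hat{U}(t)=e^{-it\hat{H}_{I}}$ with the closed form:

```python
import numpy as np

Nmax, beta, t = 20, 1.0, 0.7      # truncation, coupling, time (arbitrary units)

a = np.diag(np.sqrt(np.arange(1, Nmax)), k=1)       # annihilation operator
g, e = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # CPB states |g>, |e>
sm, sp = np.outer(e, g), np.outer(g, e)             # sigma_- = |e><g|, sigma_+ = |g><e|

# H_I = beta (a^dag sigma_- + a sigma_+), Eq. (6); ordering: CPB (x) Fock
H = beta * (np.kron(sm, a.conj().T) + np.kron(sp, a))
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T   # U(t) = exp(-i t H_I)

# Check Eq. (8): <g,n|U|g,n> = cos(beta t sqrt(n+1)),
#                <g,n|U|e,n+1> = -i sin(beta t sqrt(n+1))
n = 3
gn = np.kron(g, np.eye(Nmax)[n])
en1 = np.kron(e, np.eye(Nmax)[n + 1])
assert abs(gn @ U @ gn - np.cos(beta * t * np.sqrt(n + 1))) < 1e-10
assert abs(gn @ U @ en1 + 1j * np.sin(beta * t * np.sqrt(n + 1))) < 1e-10
```

Since $\hat{H}_{I}$ conserves the number of excitations, the truncation does not affect the checked matrix elements.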
We note that the coupling constant $\beta$ can be controlled through the flux
$\Phi_{x}$, which influences the small loops mentioned above, on the left and
on the right. Furthermore, we can control the gate charge $N_{1}$ via the gate
voltage $V_{1}$ tuned to the coupling. It should be mentioned that the
energy $\omega_{0}$ depends on the induced flux $\Phi_{x}$. So, when we
tune the induced flux $\Phi_{x}$ the energy $\omega_{0}$ changes. To
avoid unnecessary transitions during these changes, we assume the changes in
the flux to be slow enough to obey the adiabatic condition.
Next we show how to make holes in the statistical distribution of excitations
in the NR. We start from the CPB initially prepared in its ground state
$\left|CPB\right\rangle=\left|g\right\rangle,$ and the NR initially prepared
in the coherent state,
$\left|NR\right\rangle=\left|\alpha\right\rangle.$ Then the state
$\left|\Psi\right\rangle$ that describes the entire system (CPB plus NR)
evolves as follows
$\left|\Psi_{NC}(t)\right\rangle=\hat{U}(t)\left|g\right\rangle\left|\alpha\right\rangle,$
(7)
where $\hat{U}(t)=\exp(-it\hat{H}_{I})$ is the (unitary) evolution operator
and $\hat{H}_{I}$ is the interaction Hamiltonian, given in Eq. (6).
Setting $\hat{\sigma}_{+}=\left|g\right\rangle\left\langle e\right|$ and
$\hat{\sigma}_{-}=\left|e\right\rangle\left\langle g\right|$ we obtain after
some algebra,
$\displaystyle\hat{U}(t)$ $\displaystyle=$ $\displaystyle\cos(\beta
t\sqrt{\hat{a}^{\dagger}\hat{a}+1})\left|g\right\rangle\left\langle
g\right|\text{ }+\text{ }\cos(\beta
t\sqrt{\hat{a}^{\dagger}\hat{a}})\left|e\right\rangle\left\langle e\right|$
(8) $\displaystyle-i\frac{\sin(\beta
t\sqrt{\hat{a}^{\dagger}\hat{a}+1})}{\sqrt{\hat{a}^{\dagger}\hat{a}+1}}\hat{a}\left|g\right\rangle\left\langle
e\right|\text{ }-i\frac{\sin(\beta
t\sqrt{\hat{a}^{\dagger}\hat{a}})}{\sqrt{\hat{a}^{\dagger}\hat{a}}}\hat{a}^{\dagger}\left|e\right\rangle\left\langle
g\right|.$
In this way, the evolved state in Eq.(7) becomes
$\left|\Psi_{NC}(t)\right\rangle=e^{-\frac{\left|\alpha\right|^{2}}{2}}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}[\cos(\omega_{n}\tau)\left|g,n\right\rangle\text{
}-i\sin(\omega_{n}\tau)\left|e,n+1\right\rangle],$ (9)
where $\omega_{n}=\beta\sqrt{n+1}$. If we detect the CPB in the state
$\left|g\right\rangle$ after a convenient time interval $\tau_{1}$ then the
state $\left|\Psi_{NC}(t)\right\rangle$ reads
$\left|\Psi_{NC}(\tau_{1})\right\rangle=\eta_{1}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}\cos(\omega_{n}\tau_{1})\left|n\right\rangle,$
(10)
where $\eta_{1}$ is a normalization factor. If we choose $\tau_{1}$ such
that $\beta\sqrt{n_{1}+1}\tau_{1}=\pi/2$, the component
$\left|n_{1}\right\rangle$ in Eq. (10) is eliminated.
In a second step, suppose that this first CPB is rapidly substituted by another
one, also in the initial state $\left|g\right\rangle$, which interacts with the
NR after the above detection. For the second CPB the initial state of the NR
is the state given in Eq. (10), produced by the detection of the first CPB in
$\left|g\right\rangle$. As a result, the new CPB-NR system evolves to the state
$\left|\Psi_{NC}(\tau_{2})\right\rangle=\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}[\cos(\omega_{n}\tau_{2})\cos(\omega_{n}\tau_{1})\left|g,n\right\rangle-i\cos(\omega_{n}\tau_{1})\sin(\omega_{n}\tau_{2})\left|e,n+1\right\rangle].$
(11)
Next, the detection of the second CPB again in the state
$\left|g\right\rangle$ leads the entire system collapsing to the state
$\left|\Psi_{NC}(\tau_{2})\right\rangle=\eta_{2}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}[\cos(\omega_{n}\tau_{2})\cos(\omega_{n}\tau_{1})\left|n\right\rangle],$
(12)
where $\eta_{2}$ is a normalization factor. In this way, the choice
$\beta\sqrt{n_{2}+1}\tau_{2}=\pi/2$ makes a second hole, now in the component
$\left|n_{2}\right\rangle$.
By repeating this procedure $M$ times we obtain the generalized result for the
$M$-th CPB detection as
$\left|\Psi_{NC}(\tau_{M})\right\rangle=\eta_{M}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}\prod\limits_{j=1}^{M}\cos(\omega_{n}\tau_{j})\left|n\right\rangle,$
(13)
where $\tau_{j}$ is the $j$-th CPB-NR interaction time. According to Eq. (13)
the number of CPBs detected coincides with the number of holes
produced in the statistical distribution. In fact, Eq. (13) allows one to
find the expression for the statistical distribution, $P_{n}=\left|\langle
n|\Psi_{NC}(\tau_{M})\rangle\right|^{2}$; a little algebra furnishes
$P_{n}=\frac{(\alpha^{2n}/n!)\prod_{j=1}^{M}\cos^{2}(\omega_{n}\tau_{j})}{\sum_{m=0}^{\infty}(\alpha^{2m}/m!)\prod_{j=1}^{M}\cos^{2}(\omega_{m}\tau_{j})}.$
(14)
To illustrate the results we have plotted Fig. (2), showing the controlled
production of holes in the phonon number distribution.
Figure 2: Holes in the phonon number distribution, for $\alpha=2.0$, (a) at
$n_{1}=4$, for the $1^{st}$ step; (b) at $n_{1}=4$ and $n_{2}=1$, for the
$2^{nd}$ step; (c) at $n_{1}=4$, $n_{2}=1$ and $n_{3}=7$, for the $3^{rd}$
step.
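The recipe of Eqs. (13)-(15) is easy to check numerically. The sketch below (illustrative only, not part of the paper; $\beta=1$ in arbitrary units) reproduces the three-hole case of Fig. 2(c), choosing each $\tau_{j}$ from the condition $\beta\sqrt{n_{j}+1}\,\tau_{j}=\pi/2$:

```python
import numpy as np
from math import factorial

alpha, beta = 2.0, 1.0
holes = [4, 1, 7]                                  # n1, n2, n3 as in Fig. 2(c)
taus = [np.pi / (2 * beta * np.sqrt(nj + 1)) for nj in holes]

n = np.arange(40)
P = alpha ** (2 * n) / np.array([factorial(k) for k in n], dtype=float)
for tau in taus:                                   # one cos^2 factor per |g> detection
    P *= np.cos(beta * tau * np.sqrt(n + 1)) ** 2
Ps = np.exp(-alpha ** 2) * P.sum()                 # success probability, Eq. (15)
P /= P.sum()                                       # normalized distribution, Eq. (14)

print(f"success probability: {Ps:.4f}")
for nj in holes:
    assert P[nj] < 1e-20                           # the chosen components are burned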
The success probability to produce the desired state is given by
$P_{s}=e^{-\left|\alpha\right|^{2}}\sum_{m=0}^{\infty}(\alpha^{2m}/m!)\prod_{j=1}^{M}\cos^{2}(\omega_{m}\tau_{j}).$
(15)
Note that the holes exhibited in Figs. 2(a), 2(b), and 2(c) occur with success
probability of $9\%$, $4\%$, and $0.3\%$, respectively.
We can take advantage of this procedure by applying it to the engineering of
nonclassical states, e.g., to prepare Fock states [60] and their
superpositions [61]. To this end, we present two strategies: in the first we
eliminate the components on the left and right sides of a desired Fock state
$|N\rangle$, namely $|N-1\rangle,$ $|N-2\rangle,\ldots$ and
$|N+1\rangle,|N+2\rangle,\ldots$; in the second one, we eliminate only the
left-side components of the desired Fock state $|N\rangle$. In both cases, it is
convenient to consider the final state of the NR as
$\left|\Psi_{NC}(\tau_{M})\right\rangle^{\prime}=\eta_{M}^{\prime}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}(-i)^{M}\prod\limits_{j=1}^{M}\sin(\omega_{n+j}\tau_{j})\left|n+M\right\rangle,$
(16)
which is easily obtained by detecting the Cooper pair box in the state
$|e\rangle$. The success probability $P_{s}^{\prime}$ to produce a Fock state
$|N\rangle$ reads
$P_{s}^{\prime}=e^{-\left|\alpha\right|^{2}}\sum_{m=0}^{\infty}(\alpha^{2m}/m!)\prod_{j=1}^{M}\sin^{2}(\omega_{m+j}\tau_{j}).$
(17)
In the first strategy, we prepare Fock states $|N\rangle$ with $N=M$, i.e.,
the phonon-number $N$ coincides with the number of CPB detections $M$. The
fidelity of these states is given by the phonon number distribution at $P_{M}$
associated with the state $\left|\Psi_{NC}(\tau_{M})\right\rangle^{\prime},$
$P_{M}=\frac{\prod_{j=1}^{M}\sin^{2}(\sqrt{j}\,\beta\tau_{j})}{\sum_{n=0}^{\infty}(\alpha^{2n}/n!)\prod_{j=1}^{M}\sin^{2}(\sqrt{n+j}\,\beta\tau_{j})}.$
(18)
We note that in this case the fidelity coincides with the $N$-th component of
the statistical distribution $P_{n}$. Fig. (3) shows the phonon-number
distribution exhibiting the creation of the Fock states $|3\rangle$, $|4\rangle$,
and $|5\rangle$, all with fidelity of $99\%$, for an initial coherent state
with $\alpha=0.6$.
Figure 3: Phonon number distribution exhibiting the creation of Fock state:
(a) $|3\rangle$ ($P_{s}^{\prime}=17\%$), (b) $|4\rangle$
($P_{s}^{\prime}=11\%$), and (c) $|5\rangle$ ($P_{s}^{\prime}=7\%$); all with
fidelity of $99\%$ and initial coherent state with $\alpha=0.6$.
In the second strategy, we prepare Fock states $|N\rangle$ with $N=2M$ or
$N=2M-1$. The associated fidelity is also given by Eq. (18). Fig. (4)
shows the phonon-number distribution exhibiting the creation of the Fock states
$|3\rangle$, $|4\rangle$, and $|5\rangle$, all of them with the same fidelity of $99\%,$
for an initial coherent state with $\alpha=0.6$.
Figure 4: Phonon number distribution exhibiting the creation of Fock state:
(a) $|2\rangle$ ($P_{s}^{\prime}=17\%$), (b) $|3\rangle$
($P_{s}^{\prime}=1\%$), and (c) $|4\rangle$ ($P_{s}^{\prime}=0.3\%$); all with
fidelity of $98\%$ and initial coherent state with $\alpha=0.6$.
## 3 Conclusion
Concerning the feasibility of the scheme, it is worth mentioning some
experimental values of the parameters characterizing our system: the
maximum value of the coupling constant is $\beta_{\max}\approx 45\ \text{MHz}$,
with $B\approx 0.1\ \text{T}$, $\ell=30\ \mu\text{m}$, $x_{0}=500\ \text{fm}$,
$E_{J}^{0}=5\ \text{GHz}$, and $\omega_{0}=200\pi\ \text{MHz}$ [6, 23, 42, 43, 55, 56, 57, 58, 59].
The expression fixing the time spent to make a hole, $\beta\sqrt{n_{j}+1}\tau_{j}=\pi/2,$
furnishes $\tau_{j}\simeq 0.3\ \text{ns}$, assuming all the CPBs are prepared
beforehand at $t=0$. On the other hand, the decoherence times of the CPB and the
NR are respectively $500\ \text{ns}$ and $160\ \mu\text{s}$ [58]. Accordingly, one may
create about 1600 holes before the destructive action of decoherence. However,
when considering the success probability of detecting all CPBs in the state
$\left|g\right\rangle$, a more realistic estimate drastically reduces the
number of holes. A similar situation occurs in [51, 52, 53], which use the atom-
field system to make holes in the statistical distribution $P_{n}$ of a field
state; in that case, about $1\ \mu\text{s}$ is spent to create a hole whereas $1\ \text{ms}$ is
the decoherence time of a field state inside the cavity. So, comparing both
scenarios, the present system is about 60% more efficient than the one
using the atom-field system. Concerning the generation of a Fock
state $\left|N\right\rangle$, it is convenient to start with a weakly excited
initial (coherent) state, which involves a low number of Fock components to be
deleted via our hole-burning procedure. According to Eq. (18), when one
must delete many components of the initial state to achieve the state
$\left|N\right\rangle$, the success probability is drastically reduced. As a
consequence, this method will work only for small values of $N$ ($N\lesssim 5$).
## 4 Acknowledgements
The authors thank the FAPEG (CV) and the CNPq (ATA, BB) for partially
supporting this work.
## References
* [1] Bocko M F and Onofrio R, 1996 Rev. Mod. Phys. 68 755
* [2] Munro W J et al. 2002 Phys. Rev. A 66 023819
* [3] Cleland A N and Geller M R 2004 Phys. Rev. Lett. 93 070501
* [4] Cleland A N and Roukes M L 1996 Appl. Phys. Lett. 69 2653
* [5] Carr D W, Evoy S, Sekaric L, Craighead H G and Parpia J M 1999 Appl. Phys. Lett. 75 920
* [6] Huang X M H, Zorman C A, Mehregany M, and Roukes M L 2003 Nature 421 496
* [7] Bose S, Jacobs K and Knight P L 1999 Phys. Rev. A 59 3204
* [8] Midtvedt D, Tarakanov Y and Kinaret J 2011 Nano Lett. 11 1439
* [9] Knobel R G and Cleland A N 2003 Nature London 424 291
* [10] LaHaye M D , Buu O, Camarota B and Schwab K C 2004 Science 304 74
* [11] Blencowe M 2004 Phys. Rep. 395 159
* [12] Armour A D, Blencowe M P and Schwab K C 2002 Phys. Rev. Lett. 88 148301
* [13] Irish E K and Schwab K 2003 Phys. Rev. B 68 155311
* [14] Santamore D H, Goan H -S, Milburn G J and Roukes M L 2004 Phys. Rev. A 70 052105
* [15] Santamore D H, Doherty A C and Cross M C 2004 Phys. Rev. B 70 144301
* [16] Buks E et al. 2007 arXiv:quant-ph/0610158v4
* [17] Wang Y D, Gao Y B, Sun C P 2004 Eur. Phys. J. B 40 321
* [18] Peano V and Thorwart M 2004 Phys. Rev. B 70 235401
* [19] Takei S, Galitski V M and Osborn K D 2011 arXiv:1104.0029v1
* [20] Liao J Q, Kuang L M 2010 arXiv:1008.1713v1
* [21] Tian L and Zoller P 2004 Phys. Rev. Lett. 93 266403
* [22] Zou X B and Mathis W 2004 Phys. Lett. A 324 484
* [23] Liao J Q, Wu Q Q and Kuang L M 2008 arXiv:0803.4317v1
* [24] Xue F et al. 2007 New J. Phys. 9 35
* [25] Geller M R and Cleland A N 2005 Phys. Rev. A 71 032311
* [26] Tian L and Carr S M 2006 Phys. Rev. B 74 125314
* [27] Wang Y D, Chesi S, Loss D and Bruder C 2010 Phys. Rev. B 81 104524
* [28] Martin I, Shnirman A, Tian L and Zoller P 2004 Phys. Rev. B 69 125339
* [29] Zhang P, Wang Y D and Sun C P 2002 Phys. Rev. Lett. 95 097204
* [30] Wilson-Rae I, Zoller P and Imamoglu A 2004 Phys. Rev. Lett. 92 075507
* [31] Naik A et al. 2006 Nature 443 193
* [32] Hopkins A, Jacobs K, Habib S and Schwab K 2003 Phys. Rev. B 68 235328
* [33] Y.D. Wang et al. 2009 Phys. Rev. B 80 144508
* [34] Hensinger W K et al. 2005 Phys. Rev. A 72 041405
* [35] Sun CP, Wei L F, Liu Y and Nori F 2006 Phys. Rev. A 73 022318
* [36] Milburn G J, Holmes C A, Kettle L M and Goan H S 2007 arXiv:cond-mat/0702512v1
* [37] Siewert J, Brandes T and Falci G 2005 arXiv:cond-mat/0509735v1
* [38] Tian L 2005 Phys. Rev. B 72 195411
* [39] Valverde C, Avelar A T and Baseia B 2011 arXiv:1104.2106v1
* [40] Rabl P, Shnirman A and Zoller P 2004 Phys. Rev. B 70 205304
* [41] Ruskov R, Schwab K and Korotkov A N 2005 Phys. Rev. B 71 235407
* [42] Xue F, Liu Y, Sun C P, Nori F 2007 Phys. Rev. B 76 064305
* [43] Zhou X X and Mizel A 2006 Phys. Rev. Lett. 97 267201
* [44] Suh J et al. 2010 Nano Lett. 10 3990
* [45] Valverde C, Avelar A T, Baseia B and Malbouisson J M C 2003 Phys. Lett. A 315 213
* [46] Valverde C and Baseia B 2004 Int. J. Quantum Inf. 2 421
* [47] Eisert J, Plenio M B, Bose S and Hartley J 2004 Phys. Rev. Lett. 93 190402
* [48] Bose S and Agarwal G S 2005 New J. Phys. 8 34
* [49] Blais A et al. 2004 Phys. Rev. A 69 062320
* [50] Moerner W E, Lenth W and Bjorklund G C 1988 in: W.E. Moerner (Ed.), Persistent Spectral Hole-Burning: Science and Applications, Springer, Berlin, p. 251
* [51] Malbouisson J M C and Baseia B 2001 Phys. Lett. A 290 214
* [52] Avelar A T and Baseia B 2004 Opt. Comm. 239 281
* [53] Avelar A T and Baseia B 2005 Phys. Rev. A 72 025801
* [54] Escher B M, Avelar A T, Filho T M R and Baseia B 2004 Phys Rev. A 70 025801
* [55] Wallraff A et al. 2005 Phys. Rev. Lett. 95 060501
* [56] Liao J Q and Kuang L M 2007 J. Phys. B: At. Mol. Opt. Phys. 40 1845
* [57] Xue F, Zhong L, Li Y and Sun C P 2007 Phys. Rev. B 75 033407
* [58] Chen G, Chen Z, Yu L and Liang J 2007 Phys. Rev. A. 76 024301
* [59] Liao J Q and Kuang L M 2008 Eur. Phys. J. B 63 79
* [60] Escher B M, Avelar A T and Baseia B 2005 Phys. Rev. A 72 045803
* [61] Aragao A, Avelar A T, Malbouisson J M C and Baseia B 2004 Phys. Lett. A 329 284
# Mutual information rate and bounds for it
M. S. Baptista1, R. M. Rubinger2, E. R. V. Junior2, J. C. Sartorelli3, U.
Parlitz4, and C. Grebogi1,5 1 Institute for Complex Systems and Mathematical
Biology, SUPA, University of Aberdeen, AB24 3UE Aberdeen, United Kingdom
2 Federal University of Itajuba, Av. BPS 1303, Itajubá, Brazil
3 Institute of Physics, University of São Paulo, Rua do Matão, Travessa R,
187, 05508-090, São Paulo, Brazil
4 Biomedical Physics Group, Max Planck Institute for Dynamics and Self-
Organization, Am Fassberg 17, 37077 Göttingen, Germany
5 Freiburg Institute for Advanced Studies (FRIAS), University of Freiburg,
Albertstr. 19, 79104 Freiburg, Germany
###### Abstract
The amount of information exchanged per unit of time between two nodes in a
dynamical network or between two data sets is a powerful concept for analysing
complex systems. This quantity, known as the mutual information rate (MIR), is
calculated from the mutual information, which is rigorously defined only for
random systems. Moreover, the definition of mutual information is based on
probabilities of significant events. This work offers a simple alternative way
to calculate the MIR in dynamical (deterministic) networks or between two data
sets (not fully deterministic), and to calculate its upper and lower bounds
without having to calculate probabilities, but rather in terms of well known
and well defined quantities in dynamical systems. As possible applications of
our bounds, we study the relationship between synchronisation and the exchange
of information in a system of two coupled maps and in experimental networks of
coupled oscillators.
## I Introduction
Shannon’s entropy quantifies information shannon . It measures how much
uncertainty an observer has about an event being produced by a random system.
Another important concept in the theory of information is the mutual
information shannon . It measures how much uncertainty an observer has about
an event in a random system X after observing an event in a random system Y
(or vice-versa).
Mutual information is an important quantity because it quantifies not only
linear and non-linear interdependencies between two systems or data sets, but
also is a measure of how much information two systems exchange or two data
sets share. Due to these characteristics, it became a fundamental quantity to
understand the development and function of the brain sporns_TCS2004 ; roland ,
to characterise juergen_EPJ2009 ; palus and model complex systems
fraser_PRA1986 ; ulrich_kluwer1998 ; kantz_book or chaotic systems, and to
quantify the information capacity of a communication system haykin_book . When
constructing a model of a complex system, the first step is to understand
which variables are most relevant to describe its behaviour. Mutual
information provides a way to identify those variables rossi .
However, the calculation of mutual information in dynamical networks or data
sets faces three main difficulties paninski ; palus ; steuer ; papana . Mutual
information is rigorously defined only for random memoryless processes. In
addition, its calculation involves probabilities of significant events and a
suitable space where probability is calculated. The events need to be
significant in the sense that they contain as much information about the
system as possible. But defining significant events, for example the fact
that a variable has a value within some particular interval, is a difficult
task because the interval that provides significant events is not always
known. Finally, data sets have finite size. This prevents one from calculating
probabilities correctly. As a consequence, mutual information can often only
be calculated with a bias paninski ; palus ; steuer ; papana .
In this work, we show how to calculate the amount of information exchanged per
unit of time [Eq. (3)], the so called mutual information rate (MIR), between
two arbitrary nodes (or group of nodes) in a dynamical network or between two
data sets. Each node represents a $d$-dimensional dynamical system with $d$
state variables. The trajectory of the network considering all the nodes in
the full phase space is called “attractor” and represented by $\Sigma$. Then,
we propose an alternative method, similar to the ones proposed in Refs.
baptista_PRE2008 ; baptista_PLOSONE2008 , to calculate significant upper and
lower bounds for the MIR in dynamical networks or between two data sets, in
terms of Lyapunov exponents, expansion rates, and capacity dimension. These
quantities can be calculated without the use of probabilistic measures. As
possible applications of our bounds calculation, we describe the relationship
between synchronisation and the exchange of information in small experimental
networks of coupled Double-Scroll circuits.
In previous works, Refs. baptista_PRE2008 ; baptista_PLOSONE2008 , we
proposed an upper bound for the MIR in terms of the positive conditional
Lyapunov exponents of the synchronisation manifold. As a consequence, this
upper bound could only be calculated in special complex networks that allow
the existence of complete synchronisation. In the present work, the proposed
upper bound can be calculated for any system (complex networks and data sets)
that admits the calculation of Lyapunov exponents.
We assume that an observer can measure only one scalar time series for each
one of two chosen nodes. These two time series are denoted by $X$ and $Y$ and
they form a bidimensional set $\Sigma_{\Omega}=(X,Y)$, a projection of the
“attractor” into a bidimensional space denoted by $\Omega$. To calculate the
MIR in higher-dimensional projections $\Omega$, see Supplementary Information.
Assume that the space $\Omega$ is coarse-grained in a square grid of $N^{2}$
boxes with equal sides $\epsilon$, so $N=1/\epsilon$.
Mutual information is defined in the following way shannon . Given two random
variables X and Y, each producing events $i$ and $j$ with probabilities
$P_{X}(i)$ and $P_{Y}(j)$, respectively, and with joint probability
$P_{XY}(i,j)$, the mutual information is defined as
$I_{S}=H_{X}+H_{Y}-H_{XY}.$ (1)
$H_{X}$ = $-\sum_{i}P_{X}(i)\log{[P_{X}(i)]}$, $H_{Y}$ =
$-\sum_{j}P_{Y}(j)\log{[P_{Y}(j)]}$, and
$H_{XY}=-\sum_{i,j}P_{XY}(i,j)\log{[P_{XY}(i,j)]}$. For simplification in our
notation for the probabilities, we drop the subindexes X, Y, and XY, by making
$P_{X}(i)=P(i)$, $P_{Y}(j)=P(j)$, and $P_{XY}(i,j)=P(i,j)$. When using Eq. (1)
to calculate the mutual information between the dynamical variables $X$ and
$Y$, the probabilities appearing in Eq. (1) are defined such that $P(i)$ is
the probability of finding points in column $i$ of the grid, $P(j)$ that of
finding points in row $j$ of the grid, and $P(i,j)$ the probability of
finding points in the box where column $i$ meets row $j$ of the grid.
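A minimal numerical sketch of this grid-based estimator (not from the paper; the function name and the use of natural logarithms are our choices) can be written as:

```python
import numpy as np

def mutual_information(x, y, n_boxes):
    """Estimate I_S = H_X + H_Y - H_XY of Eq. (1) on an N x N grid
    over the unit square. Assumes x, y are samples scaled to [0, 1)."""
    joint, _, _ = np.histogram2d(x, y, bins=n_boxes, range=[[0, 1], [0, 1]])
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1)   # P(i): marginal over columns
    p_y = p_xy.sum(axis=0)   # P(j): marginal over rows

    def entropy(p):
        p = p[p > 0]         # skip empty boxes: 0 log 0 = 0
        return -np.sum(p * np.log(p))

    return entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())
```

With $y=x$ the estimate approaches $H_{X}\approx\log N$, while for independent uniform samples it approaches zero, up to the finite-sample bias discussed above.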
The MIR was first introduced by Shannon shannon as a “rate of actual
transmission” blanc , and was later more rigorously redefined in Refs.
dobrushin1959 ; gray1980 . It represents the mutual information exchanged
between two dynamical variables (correlated) per unit of time. To simplify the
calculation of the MIR, the two continuous dynamical variables are transformed
into two discrete symbolic sequences $X$ and $Y$. Then, the MIR is defined by
$MIR=\lim_{n\rightarrow\infty}\frac{I_{S}(n)}{n},$ (2)
where $I_{S}(n)$ represents the usual mutual information between the two
sequences $X$ and $Y$, calculated by considering words of length $n$.
The MIR is a fundamental quantity in science. Its maximal value gives the
information capacity between any two sources of information (with no need for
stationarity, statistical stability, or memorylessness) verdu . Therefore,
alternative approaches for its calculation or for the calculation of bounds of
it are of vital relevance. Due to the limit to infinity in Eq. (2) and because
it is defined from probabilities, the MIR is not easy to calculate,
especially if one wants to obtain it from (chaotic) trajectories of a large
complex network or data sets. The difficulties faced to estimate the MIR from
dynamical systems and networks are similar to the ones faced in the
calculation of the Kolmogorov-Sinai entropy, $H_{KS}$ kolmogorov , (Shannon’s
entropy per unit of time). Because of these difficulties, the upper bound for
$H_{KS}$ proposed by Ruelle ruelle in terms of the Lyapunov exponents and
valid for smooth dynamical systems ($H_{KS}\leq\sum\lambda^{+}_{i}$, where
$\lambda^{+}_{i}$ represent all the positive Lyapunov exponents) or
Pesin’s equality pesin ($H_{KS}=\sum\lambda^{+}_{i}$), proved in Ref.
ledrapier to be valid for the large class of systems that possess an SRB
measure, became so important in the theory of dynamical systems. Our upper
bound [Eq. (13)] is a result equivalent to the work of Ruelle.
## II Main results
One of the main results of this work (whose derivation can be seen in Sec.
III.2) is to show that, in dynamical networks or data sets with fast decay of
correlation, $I_{S}$ in Eq. (1) represents the amount of mutual information
between $X$ and $Y$ produced within a special time interval $T$, where $T$
represents the time for the dynamical network (or data sets) to lose its
memory from the initial state or the correlation to decay to zero. Correlation
in this work is not the usual linear correlation, but a non-linear correlation
defined in terms of the evolution of spatial probabilities, the quantity
$C(T)$ in Sec. III.1. Therefore, the mutual information rate (MIR), between
the dynamical variables $X$ and $Y$ (or two data sets) can be estimated by
$MIR=\frac{I_{S}}{T}$ (3)
In systems that present sensitivity to initial conditions, e.g. chaotic
systems, predictions are only possible for times smaller than this time $T$.
This time has other meanings. It is the expected time necessary for a set of
points belonging to an $\epsilon$-square box in $\Omega$ to spread over
$\Sigma_{\Omega}$ and it is of the order of the shortest Poincaré return time
for a point to leave a box and return to it gao ; baptista_PLA2010 . It can be
estimated by
$T\approx\frac{1}{\lambda_{1}}\log{\left[\frac{1}{\epsilon}\right]}.$ (4)
where $\lambda_{1}$ is the largest positive Lyapunov exponent measured in
$\Sigma_{\Omega}$. Chaotic systems present the mixing property (see Sec.
III.1), and as a consequence the correlation $C(t)$ always decays to zero,
surely after an infinitely long time. The correlation of chaotic systems can
also decay to zero for sufficiently large but finite $t=T$ (see Supplementary
Information). $T$ can be interpreted as the minimum time required for a
system to satisfy the conditions to be considered mixing. Some examples of
physical systems that are proved to be mixing and have exponentially fast
decay of correlation are nonequilibrium steady-states nonequilibrium , Lorentz
gases (models of diffusive transport of light particles in a network of
heavier particles) sinai_1970 , and billiards young_2001 . An example of a
“real world” physical complex system that presents exponentially fast decay of
correlation is plasma turbulence baptista_PHYSICAA2001 . We do not expect that
data coming from a “real world” complex system is rigorously mixing and has an
exponentially fast decay of correlation. But, we expect that the data has a
sufficiently fast decay of correlation (e.g. stretched exponential decay or
polynomially fast decays), implying that the system has sufficiently high
sensitivity to initial conditions and as a consequence $C(t)\cong 0$, for a
reasonably small and finite time $t=T$.
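As a small numerical illustration of Eq. (4) (the function name is ours):

```python
import math

def memory_time(lambda1, epsilon):
    """T ~ (1/lambda1) * log(1/epsilon), Eq. (4): the expected time for
    points starting in an epsilon-box to spread over the projected attractor."""
    return math.log(1.0 / epsilon) / lambda1
```

For example, with $\lambda_{1}=\log 2$ (a binary-shift-like map) and $\epsilon=1/512$ this gives $T=9$ iterations.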
The other two main results of our work are presented in Eqs. (5) and (7),
whose derivations are presented in Sec. III.3. The upper bound for the MIR is
given by
$I_{C}=\lambda_{1}-\lambda_{2}=\lambda_{1}(2-D),$ (5)
where $\lambda_{1}$ and $\lambda_{2}$ (defined to be non-negative) represent
the largest and the second largest Lyapunov exponents measured in
$\Sigma_{\Omega}$, if both exponents are positive. If the $i$-th largest
exponent is negative, then we set $\lambda_{i}=0$. If the set
$\Sigma_{\Omega}$ represents a periodic orbit,
$I_{C}=0$, and therefore there is no information being exchanged. The quantity
$D$ is defined as
$D=-\frac{\log{(N_{C}(t=T))}}{\log{(\epsilon)}},$ (6)
where $N_{C}(t=T)$ is the number of boxes that would be covered by fictitious
points at time $T$. At time $t=0$, these fictitious points are confined in an
$\epsilon$-square box. They expand not only exponentially fast in both
directions according to the two positive Lyapunov exponents, but expand
forming a compact set, a set with no “holes”. At $t=T$, they spread over
$\Sigma_{\Omega}$.
The lower bound for the MIR is given by
$I_{C}^{l}=\lambda_{1}(2-\tilde{D}_{0}),$ (7)
where $\tilde{D}_{0}$ represents the capacity dimension of the set
$\Sigma_{\Omega}$
$\tilde{D}_{0}={\lim_{\epsilon\rightarrow
0}}\left[-\frac{\log{(\tilde{N}_{C}(\epsilon))}}{\log{(\epsilon)}}\right],$
(8)
where $\tilde{N}_{C}$ represents the number of boxes in $\Omega$ that are
occupied by points of $\Sigma_{\Omega}$.
$D$ is defined in a way similar to the capacity dimension, though it is not
the capacity dimension. In fact, $D\leq\tilde{D}_{0}$, because $\tilde{D}_{0}$
measures the change in the number of occupied boxes in $\Omega$ as the space
resolution varies, whereas $D$ measures the relative number of boxes with a
certain fixed resolution $\epsilon$ that would be occupied by the fictitious
points (in $\Omega$) after being iterated for a time $T$. As a consequence,
the empty space in $\Omega$ that is not occupied by $\Sigma_{\Omega}$ does not
contribute to the calculation of $\tilde{D}_{0}$, whereas it contributes to
the calculation of the quantity $D$. In addition, $N_{C}\geq\tilde{N}_{C}$
(for any $\epsilon$), because while the fictitious points form a compact set
expanding at the same rate as the real points (rate provided by the
Lyapunov exponents), the real set of points
$\Sigma_{\Omega}$ might not occupy many boxes.
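A minimal sketch of the bounds in Eqs. (5) and (7) (function names are ours; negative exponents are clipped to zero as stated above):

```python
def mir_upper_bound(lambda1, lambda2):
    """I_C = lambda1 - lambda2, Eq. (5); a negative exponent is set to 0."""
    return max(lambda1, 0.0) - max(lambda2, 0.0)

def mir_lower_bound(lambda1, d0_tilde):
    """I_C^l = lambda1 * (2 - D0~), Eq. (7), with D0~ the capacity
    dimension of the projected set."""
    return max(lambda1, 0.0) * (2.0 - d0_tilde)
```

Since $D=1+\lambda_{2}/\lambda_{1}$, the two expressions in Eq. (5) coincide, and because $D\leq\tilde{D}_{0}$ the lower bound never exceeds $I_{C}$.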
## III Methods
### III.1 Mixing, correlation decay and invariant measures
Denote by $F^{T}(x)$ a mixing transformation that represents how a point
$x\in\Sigma_{\Omega}$ is mapped after a time $T$ into $\Sigma_{\Omega}$, and
let $\rho(x)$ represent the probability of finding a point of
$\Sigma_{\Omega}$ at $x$ (natural invariant density). Let $I^{\prime}_{1}$
represent a region in $\Omega$. Then, $\mu(I^{\prime}_{1})=\int\rho(x)dx$, for
$x\in I^{\prime}_{1}$ represents the probability measure of the region
$I^{\prime}_{1}$. Given two square boxes $I^{\prime}_{1}\in\Omega$ and
$I^{\prime}_{2}\in\Omega$, if $F^{T}$ is a mixing transformation, then for a
sufficiently large $T$, we have that the correlation
$C(T)=\mu[F^{-T}(I^{\prime}_{1})\cap
I^{\prime}_{2}]-\mu[I^{\prime}_{1}]\mu[I^{\prime}_{2}]$, decays to zero: the
probability of having a point in $I^{\prime}_{1}$ that is mapped to
$I^{\prime}_{2}$ becomes equal to the probability of being in $I^{\prime}_{1}$
times the probability of being in $I^{\prime}_{2}$. That is typically what
happens in random processes.
If the measure $\mu(\Sigma_{\Omega})$ is invariant, then
$\mu[F^{-T}(\Sigma_{\Omega})]=\mu(\Sigma_{\Omega})$. Mixing and ergodic
systems produce measures that are invariant.
### III.2 Derivation of the mutual information rate (MIR) in dynamical
networks and data sets
We consider that the dynamical networks or data sets to be analysed present
either the mixing property or have fast decay of correlations, and their
probability measure is time invariant. If a system that is mixing for a time
interval $T$ is observed (sampled) once every time interval $T$, then the
probabilities generated by these snapshot observations behave as if they were
independent, and the system behaves as if it were a random process. This is so
because if a system is mixing for a time interval $T$, then the correlation
$C(T)$ decays to zero for this time interval. For systems that have some decay
of correlation, surely the correlation decays to zero after an infinite time
interval. But, this time interval can also be finite, as shown in
Supplementary Information.
Consider now that we have experimental points and they are sampled once every
time interval $T$. The probability
$\tilde{P}_{XY}(i,j)\rightarrow\tilde{P}_{XY}(k,l)$ of the sampled trajectory
to follow a given itinerary, for example to fall in the box with coordinates
$(i,j)$ and then be iterated to the box $(k,l)$ depends exclusively on the
probabilities of being at the box $(i,j)$, represented by
$\tilde{P}_{XY}(i,j)$, and being at the box $(k,l)$, represented by
$\tilde{P}_{XY}(k,l)$. Therefore, for the sampled trajectory,
$\tilde{P}_{XY}(i,j)\rightarrow\tilde{P}_{XY}(k,l)=\tilde{P}_{XY}(i,j)\tilde{P}_{XY}(k,l)$.
Analogously, the probability $\tilde{P}_{X}(i)\rightarrow\tilde{P}_{Y}(j)$ of
the sampled trajectory to fall in the column (or line) $i$ of the grid and
then be iterated to the column (or line) $j$ is given by
$\tilde{P}_{X}(i)\rightarrow\tilde{P}_{Y}(j)=\tilde{P}_{X}(i)\tilde{P}_{Y}(j)$.
The MIR of the experimental non-sampled trajectory points can be calculated
from the mutual information $\tilde{I}_{S}$ of the sampled trajectory points
that follow itineraries of length $n$:
$MIR=\lim_{n\rightarrow\infty}\frac{\tilde{I}_{S}(n)}{nT}.$ (9)
Due to the absence of correlations of the sampled trajectory points, the
mutual information for these points following itineraries of length $n$ can be
written as
$\tilde{I}_{S}(n)=n[\tilde{H}_{X}(n=1)+\tilde{H}_{Y}(n=1)-\tilde{H}_{XY}(n=1)],$
(10)
where $\tilde{H}_{X}(n=1)$ =
$-\sum_{i}\tilde{P}_{X}(i)\log{[\tilde{P}_{X}(i)]}$, $\tilde{H}_{Y}(n=1)$ =
$-\sum_{j}\tilde{P}_{Y}(j)\log{[\tilde{P}_{Y}(j)]}$, and
$\tilde{H}_{XY}(n=1)=-\sum_{i,j}\tilde{P}_{XY}(i,j)\log{[\tilde{P}_{XY}(i,j)]}$,
and $\tilde{P}_{X}(i)$, $\tilde{P}_{Y}(j)$, and $\tilde{P}_{XY}(i,j)$
represent the probability of the sampled trajectory points to fall in the line
$i$ of the grid, in the column $j$ of the grid, and in the box $(i,j)$ of the
grid, respectively.
Due to the time invariance of the set $\Sigma_{\Omega}$ assumed to exist, the
probability measure of the non-sampled trajectory is equal to the probability
measure of the sampled trajectory. If a system that has a time invariant
measure is observed (sampled) once every time interval $T$, the observed set
has the same natural invariant density and probability measure of the original
set. As a consequence, if $\Sigma_{\Omega}$ has a time invariant measure, the
probabilities $P(i)$, $P(j)$, and $P(i,j)$ (used to calculate $I_{S}$) are
equal to $\tilde{P}_{X}(i)$, $\tilde{P}_{Y}(j)$, and $\tilde{P}_{XY}(i,j)$.
Consequently, $\tilde{H}_{X}(n=1)=H_{X}$, $\tilde{H}_{Y}(n=1)=H_{Y}$, and
$\tilde{H}_{XY}(n=1)=H_{XY}$, and therefore $\tilde{I}_{S}(n)=nI_{S}(n)$.
Substituting into Eq. (9), we finally arrive at
$MIR=\frac{I_{S}}{T}$ (11)
where $I_{S}$ between two nodes is calculated from Eq. (1).
Therefore, in order to calculate the MIR, we need to estimate the time $T$ for
which the correlation of the system approaches zero and the probabilities
$P(i)$, $P(j)$, and $P(i,j)$ of the non-sampled experimental points
to fall in the line $i$ of the grid, in the column $j$ of the grid, and in the
box $(i,j)$ of the grid, respectively.
### III.3 Derivation of upper ($I_{C}$) and lower ($I_{C}^{l}$) bounds for
the MIR
Consider that our attractor $\Sigma$ is generated by a 2d expanding system
that possesses 2 positive Lyapunov exponents $\lambda_{1}$ and $\lambda_{2}$,
with $\lambda_{1}\geq\lambda_{2}$, and $\Sigma\subset\Omega$. Imagine a box
whose sides are oriented along the orthogonal basis used to calculate the
Lyapunov exponents. Then, points inside the box spread out after a time
interval $t$ to $\epsilon\sqrt{2}e^{\lambda_{1}t}$ along the direction from
which $\lambda_{1}$ is calculated. At $t=T$,
$\epsilon\sqrt{2}e^{\lambda_{1}T}=L$, which provides $T$ in Eq. (4), since
$L=\sqrt{2}$. These points spread after a time interval $t$ to
$\epsilon\sqrt{2}e^{\lambda_{2}t}$ along the direction from which
$\lambda_{2}$ is calculated. After an interval of time $t=T$, these points
spread out over the set $\Sigma_{\Omega}$. We require that for $t\leq T$, the
distance between these points only increases: the system is expanding.
Imagine that at $t=T$, fictitious points initially in a square box occupy an
area of
$\epsilon\sqrt{2}e^{\lambda_{2}T}L=2\epsilon^{2}e^{(\lambda_{2}+\lambda_{1})T}$.
Then, the number of boxes of sides $\epsilon$ that contain fictitious points
can be calculated by
$N_{C}=2\epsilon^{2}e^{(\lambda_{1}+\lambda_{2})T}/2\epsilon^{2}=e^{(\lambda_{1}+\lambda_{2})T}$.
From Eq. (4), $N=e^{\lambda_{1}T}$, since $N=1/\epsilon$.
We denote in lower-case the probabilities $p(i)$, $p(j)$, and
$p(i,j)$ with which fictitious points occupy the grid in $\Omega$. If these
fictitious points spread uniformly, forming a compact set in which the
probability of finding points in each occupied box is equal, then $p(i)=1/N$
($=\frac{1}{N_{C}}\frac{N_{C}}{N}$), $p(j)=1/N$, and $p(i,j)=1/N_{C}$. Let us
denote the Shannon entropies of the probabilities $p(i)$, $p(j)$, and
$p(i,j)$ as $h_{X}$, $h_{Y}$, and $h_{XY}$. The mutual information of the
fictitious trajectories after evolving a time interval $T$ can be calculated
by $I_{S}^{u}=h_{X}+h_{Y}-h_{XY}$. Since $p(i)=p(j)=1/N$ and
$p(i,j)=1/N_{C}$, then $I_{S}^{u}=2\log{(N)}-\log{(N_{C})}$. At $t=T$, we
have $N=e^{\lambda_{1}T}$ and $N_{C}=e^{(\lambda_{1}+\lambda_{2})T}$, leading
to $I_{S}^{u}=(\lambda_{1}-\lambda_{2})T$. Therefore, defining
$I_{C}=I_{S}^{u}/T$, we arrive at $I_{C}=\lambda_{1}-\lambda_{2}$.
We define $D$ as
$D=-\frac{\log{(N_{C}(t=T))}}{\log{(\epsilon)}},$ (12)
where $N_{C}(t=T)$ is the number of boxes that would be covered by
fictitious points at time $T$. At time $t=0$, these fictitious points are
confined in an $\epsilon$-square box. They expand not only exponentially fast
in both directions according to the two positive Lyapunov exponents, but
expand forming a compact set, a set with no “holes”. At $t=T$, they spread
over $\Sigma_{\Omega}$.
Using $\epsilon=e^{-\lambda_{1}T}$ and
$N_{C}=e^{(\lambda_{1}+\lambda_{2})T}$ in Eq. (12), we arrive at
$D=1+\frac{\lambda_{2}}{\lambda_{1}}$, and therefore, we can write that
$I_{C}=\lambda_{1}-\lambda_{2}=\lambda_{1}(2-D).$ (13)
To calculate the maximal possible MIR of a random independent process, we
assume that the expansion of points is uniform only along the columns and
lines of the grid defined in the space $\Omega$, i.e. $P(i)=P(j)=1/N$ (which
maximises $H_{X}$ and $H_{Y}$), and we allow $P(i,j)$ to be non-uniform
(minimising $H_{XY}$) for all $i$ and $j$. Then
$I_{S}(\epsilon)=-2\log{(\epsilon)}+\sum_{i,j}P(i,j)\log{[P(i,j)]}.$ (14)
Since $T(\epsilon)=-1/\lambda_{1}\log{(\epsilon)}$, dividing $I_{S}(\epsilon)$
by $T(\epsilon)$, taking the limit $\epsilon\rightarrow 0$, and recalling
that the information dimension of the set $\Sigma_{\Omega}$ in the space
$\Omega$ is defined as $\tilde{D}_{1}$=$\lim_{\epsilon\rightarrow
0}\frac{\sum_{i,j}P(i,j)\log{[P(i,j)]}}{\log{(\epsilon)}}$, we obtain that the
MIR is given by
$I_{S}/T=\lambda_{1}(2-\tilde{D}_{1}).$ (15)
Since $\tilde{D}_{1}\leq\tilde{D}_{0}$ (for any value of $\epsilon$), then
$\lambda_{1}(2-\tilde{D}_{1})\geq\lambda_{1}(2-\tilde{D}_{0})$, which means
that a lower bound for the maximal MIR [provided by Eq. (15)] is given by
$I_{C}^{l}=\lambda_{1}(2-\tilde{D}_{0}).$ (16)
But $D\leq\tilde{D}_{0}$ (for any value of $\epsilon$), and therefore $I_{C}$
is an upper bound for $I_{C}^{l}$.
To show why $I_{C}$ is an upper bound for the maximal possible MIR, assume
that the real points $\Sigma_{\Omega}$ occupy the space $\Omega$ uniformly. If
$\tilde{N}_{C}>N$, there are many boxes being occupied. It is to be expected
that the probability of finding a point in a line or column of the grid is
$P(i)=P(j)\cong 1/N$, and $P(i,j)\cong 1/\tilde{N}_{C}$. In such a case,
$MIR\cong I_{C}^{l}$, which implies that $I_{C}\geq MIR$. If
$\tilde{N}_{C}<N$, there are only few boxes being sparsely occupied. The
probability of finding a point in a line or column of the grid is
$P(i)=P(j)\cong 1/\tilde{N}_{C}$, and $P(i,j)\cong 1/\tilde{N}_{C}$. There are
$\tilde{N}_{C}$ lines and columns being occupied by points in the grid. In
such a case, $I_{S}\cong
2\log{(\tilde{N}_{C})}-\log{(\tilde{N}_{C})}\cong\log{(\tilde{N}_{C})}$.
Comparing with $I_{S}^{u}=2\log{(N)}-\log{(N_{C})}$, and since
$\tilde{N}_{C}<N$ and $N_{C}\geq\tilde{N}_{C}$, then we conclude that
$I_{S}^{u}\geq I_{S}$, which implies that $I_{C}\geq MIR$.
Notice that if $P(i,j)=p(i,j)=1/N_{C}$ and $\tilde{D}_{1}=\tilde{D}_{0}$, then
$I_{S}/T=I_{C}^{l}=I_{C}$.
### III.4 Expansion rates
In order to extend our approach for the treatment of data sets coming from
networks whose equations of motion are unknown, or for higher-dimensional
networks and complex systems which might be neither rigorously chaotic nor
fully deterministic, or for experimental data that contains noise and few
sampling points, we write our bounds in terms of expansion rates defined in
this work by
$e_{k}(t)=\frac{1}{\tilde{N}_{C}}\sum_{i=1}^{\tilde{N}_{C}}\frac{1}{t}\log{[L_{k}^{i}(t)]},$
(17)
where we consider $k=1,2$. $L^{i}_{1}(t)$ measures the largest growth rate of
nearby points. In practice, it is calculated by
$L^{i}_{1}(t)=\frac{\Delta}{\delta}$, with $\delta$ representing the largest
distance between pairs of points in an $\epsilon$-square box $i$ and $\Delta$
representing the largest distance between pairs of the points that were
initially in the $\epsilon$-square box but have spread out for an interval of
time $t$. $L^{i}_{2}(t)$ measures how an area enclosing points grows. In
practice, it is calculated by $L^{i}_{2}(t)=\frac{A}{\epsilon^{2}}$, with
$\epsilon^{2}$ representing the area occupied by points in an
$\epsilon$-square box, and $A$ the area occupied by these points after
spreading out for a time interval $t$. There are $\tilde{N}_{C}$ boxes
occupied by points which are taken into consideration in the calculation of
$L_{k}^{i}(t)$. An order-$k$ expansion rate, $e_{k}(t)$, measures on average
how a hypercube of dimension $k$ exponentially grows after an interval of time
$t$. So, $e_{1}$ measures the largest growth rate of nearby points, a quantity
closely related to the largest finite-time Lyapunov exponent celso1994 . And
$e_{2}$ measures how an area enclosing points grows, a quantity closely
related to the sum of the two largest positive Lyapunov exponents. In terms of
expansion rates, Eqs. (4) and (13) read
$T=\frac{1}{e_{1}}\log{\left[\frac{1}{\epsilon}\right]}$ and
$I_{C}={e_{1}}(2-D)$, respectively, and Eqs. (12) and (16) read
$D(t)=\frac{e_{2}(t)}{e_{1}(t)}$ and $I_{C}^{l}=e_{1}(2-\tilde{D}_{0})$,
respectively.
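Assuming one has already measured, for each occupied $\epsilon$-box, the initial spread $\delta$, the final spread $\Delta$, and the final area $A$, Eq. (17) can be sketched as follows (the function name and bookkeeping are our choices):

```python
import numpy as np

def expansion_rates(deltas, areas, delta0, eps, t):
    """Order-1 and order-2 expansion rates of Eq. (17), averaged over
    the occupied epsilon-boxes.
    deltas[i]: largest pairwise distance Delta of points from box i after time t
    areas[i]:  area A occupied by those points after time t
    delta0:    initial largest pairwise distance delta inside a box
    eps:       box side (initial occupied area is eps**2)."""
    L1 = np.asarray(deltas, dtype=float) / delta0   # L_1^i = Delta / delta
    L2 = np.asarray(areas, dtype=float) / eps**2    # L_2^i = A / eps^2
    e1 = np.mean(np.log(L1)) / t
    e2 = np.mean(np.log(L2)) / t
    return e1, e2
```

For a hyperbolic toy case where every box stretches as $e^{\lambda_{1}t}$ and every area grows as $e^{(\lambda_{1}+\lambda_{2})t}$, this recovers $e_{1}=\lambda_{1}$ and $e_{2}=\lambda_{1}+\lambda_{2}$.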
From the way we have defined expansion rates, we expect that
$e_{k}\leq\sum_{i=1}^{k}\lambda_{i}$. Because of the finite time interval and
the finite size of the regions of points considered, regions of points that
present large derivatives, contributing largely to the Lyapunov exponents,
contribute less to the expansion rates. If a system has constant derivative
(hyperbolic) and has constant natural measure, then
$e_{k}=\sum_{i=1}^{k}\lambda_{i}$.
There are many reasons for using expansion rates in the way we have defined
them in order to calculate bounds for the MIR. Firstly, because they can be
easily experimentally estimated whereas Lyapunov exponents demand huge
computational efforts. Secondly, because of the macroscopic nature of the
expansion rates, they might be more appropriate to treat data coming from
complex systems that contain large amounts of noise, data that have points
that are not (arbitrarily) close, as formally required for a proper calculation
of the Lyapunov exponents. Thirdly, expansion rates can be well defined for
data sets containing very few data points: the fewer points a data set
contains, the larger the regions of size $\epsilon$ need to be and the shorter
the time $T$ is. Finally, expansion rates are defined in a similar way to
finite-time Lyapunov exponents and thus some algorithms used to calculate
Lyapunov exponents can be used to calculate our defined expansion rates.
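As a concrete illustration of an order-1 expansion rate (this sketch is ours; the shift map and the parameters are chosen only for illustration), the following Python snippet evolves points that start inside a single $\epsilon$-box of the map $x_{n+1}=2x_{n}\bmod 1$ and recovers $e_{1}\approx\lambda_{1}=\ln 2$:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
eps, t = 1e-3, 5                    # box size and (short) spreading time
x = 0.2 + eps * rng.random(200)     # points inside one eps-box
L0 = x.max() - x.min()              # initial largest distance
for _ in range(t):
    x = (2 * x) % 1.0               # shift map
L1 = x.max() - x.min()              # largest distance after spreading
e1 = math.log(L1 / L0) / t          # order-1 expansion rate
print(e1, math.log(2))              # e1 matches ln(2) for this linear map
```

Because the map is linear on the (non-wrapping) interval, the largest distance doubles exactly at each step, so the estimate coincides with the Lyapunov exponent.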
## IV Applications
### IV.1 MIR and its bounds in two coupled chaotic maps
To illustrate the use of our bounds, we consider the following two
bidirectionally coupled maps
$\displaystyle X^{(1)}_{n+1}$ $\displaystyle=$ $\displaystyle
2X^{(1)}_{n}+\rho X^{(1)^{2}}_{n}+\sigma(X^{(2)}_{n}-X^{(1)}_{n}),\mbox{mod
1}$ $\displaystyle X^{(2)}_{n+1}$ $\displaystyle=$ $\displaystyle
2X^{(2)}_{n}+\rho X^{(2)^{2}}_{n}+\sigma(X^{(1)}_{n}-X^{(2)}_{n}),\mbox{mod
1}$ (18)
where $X_{n}^{(i)}\in[0,1]$. If $\rho=0$, the map is piecewise-linear;
otherwise it is quadratic. We are interested in measuring the exchange of
information between $X^{(1)}$ and $X^{(2)}$. The space $\Omega$ is a square of
sides 1. The Lyapunov exponents measured in the space $\Omega$ are the
Lyapunov exponents of the set $\Sigma_{\Omega}$ that is the chaotic attractor
generated by Eqs. (18).
The quantities $I_{S}/T$, $I_{C}$, and $I_{C}^{l}$ are shown in Fig. 1 as we
vary $\sigma$ for $\rho=0$ (A) and $\rho=0.1$ (B). We calculate $I_{S}$ from
Eq. (1) using the probabilities $P(i,j)$ that points from a trajectory
composed of $2{,}000{,}000$ samples fall in boxes of sides $\epsilon=1/500$,
and the probabilities $P(i)$ and $P(j)$ that the points visit the intervals
$[(i-1)\epsilon,i\epsilon[$ of the variable $X_{n}^{(1)}$ or
$[(j-1)\epsilon,j\epsilon[$ of the variable $X_{n}^{(2)}$, respectively, for
$i,j=1,\ldots,N$. When computing $I_{S}/T$, the quantity $T$ was estimated by
Eq. (4). Indeed, for most values of $\sigma$, $I_{C}\geq I_{S}/T$ and
$I_{C}^{l}\leq I_{S}/T$.
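For readers who want to reproduce this estimate, here is a self-contained sketch of the box-counting estimator for $I_{S}$ (our illustration, in natural logarithms; the sample sizes are arbitrary, not those of the paper):

```python
import numpy as np

def I_S(x, y, n_boxes):
    """Box-counting mutual information (in nats) on the unit square."""
    i = np.minimum((x * n_boxes).astype(int), n_boxes - 1)
    j = np.minimum((y * n_boxes).astype(int), n_boxes - 1)
    P = np.zeros((n_boxes, n_boxes))
    np.add.at(P, (i, j), 1.0)
    P /= P.sum()                             # joint probabilities P(i,j)
    Pi, Pj = P.sum(axis=1), P.sum(axis=0)    # marginals P(i), P(j)
    nz = P > 0
    return float((P[nz] * np.log(P[nz] / np.outer(Pi, Pj)[nz])).sum())

rng = np.random.default_rng(2)
x = rng.random(100_000)
print(I_S(x, x, 10))                   # identical signals: close to ln(10)
print(I_S(x, rng.random(100_000), 10)) # independent signals: close to 0
```

The small positive value obtained for independent signals illustrates the finite-sample bias discussed below.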
Figure 1: [Color online] Results for two coupled maps. $I_{S}/T$ [Eq. (11)] as
(green online) filled circles, $I_{C}$ [Eq. (13)] as the (red online) thick
line, and $I_{C}^{l}$ [Eq. (16)] as the (brown online) crosses. In (A)
$\rho=0$ and in (B) $\rho=0.1$. The units of $I_{S}/T$, $I_{C}$, and
$I_{C}^{l}$ are [bits/iteration].
For $\sigma=0$ there is no coupling, and therefore the two maps are
independent from each other. There is no information being exchanged. In fact,
$I_{C}=0$ and $I_{C}^{l}\cong 0$ in both figures, since $D=\tilde{D}_{0}=2$,
meaning that the attractor $\Sigma_{\Omega}$ fully occupies the space
$\Omega$. This is a remarkable property of our bounds: they identify that no
information is being exchanged when the two maps are independent. Complete
synchronisation is achieved, and $I_{C}$ is maximal, for $\sigma>0.5$ (A) and
for $\sigma\geq 0.55$ (B). This is a consequence of the fact that
$D=\tilde{D}_{0}=1$, and therefore $I_{C}=I_{C}^{l}=\lambda_{1}$. The reason
is that, in this situation, the coupled system is simply the shift map, a map
with constant natural measure; therefore $P(i)=P(j)$ and $P(i,j)$ are constant
for all $i$ and $j$. As usually happens when one estimates the mutual information by
partitioning the phase space with a grid having a finite resolution and data
sets possessing a finite number of points, $I_{S}$ is typically larger than
zero, even when there is no information being exchanged ($\sigma=0$). Even
when there is complete synchronisation, we find non-zero off-diagonal terms in
the matrix of joint probabilities, causing $I_{S}$ to be smaller than it
should be. Due to numerical errors, $X^{(1)}\cong X^{(2)}$, and points that
should be occupying boxes with two corners exactly along the diagonal line of
the subspace $\Omega$ end up occupying off-diagonal boxes, boxes that have at
least three corners off the diagonal. The estimation of the lower bound
$I_{C}^{l}$ suffers from the same problem.
Our upper bound $I_{C}$ is calculated assuming that there is a fictitious
dynamics expanding points (and producing probabilities) not only exponentially
fast but also uniformly. The “experimental” numerical points from Eqs. (18)
expand exponentially fast, but not uniformly. Most of the time the trajectory
remains in 4 points: (0,0), (1,1), (1,0), (0,1). That is the main reason why
$I_{C}$ is much larger than the estimated real value of the MIR for some
coupling strengths. If two nodes in a dynamical network, such as two neurons
in a brain, behaved in the same way as the fictitious dynamics does, these
nodes would be able to exchange the largest possible amount of information.
We would like to point out that one of the main advantages of calculating the
upper bound for the MIR ($I_{S}/T$) using Eq. (13), instead of actually
calculating $I_{S}/T$, is that we can reproduce the curves for $I_{C}$ using
far fewer points ($1{,}000$) than the $2{,}000{,}000$ used to calculate the
curve for $I_{S}/T$. If $\rho=0$, $I_{C}=-\ln{(1-\sigma)}$ can be calculated
analytically, since $\lambda_{1}=\ln{(2)}$ and $\lambda_{2}=\ln{(2-2\sigma)}$.
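The closed form quoted above follows directly from $I_{C}=\lambda_{1}(2-D)$ with $D=1+\lambda_{2}/\lambda_{1}$, so that $I_{C}=\lambda_{1}-\lambda_{2}$. A short check (the value of $\sigma$ is an arbitrary illustration):

```python
import math

sigma = 0.3
lam1, lam2 = math.log(2), math.log(2 - 2 * sigma)  # exponents for rho = 0
D = 1 + lam2 / lam1          # capacity dimension in terms of the exponents
I_C = lam1 * (2 - D)         # upper bound, Eq. (13)
print(I_C, -math.log(1 - sigma))   # the two expressions agree
```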
### IV.2 MIR and its bounds in experimental networks of Double-Scroll
circuits
We illustrate our approach for the treatment of data sets using a network
formed by an inductorless version of the Double-Scroll circuit
inductorless_chua . We consider four networks of bidirectionally diffusively
coupled circuits. Topology I represents two bidirectionally coupled circuits,
Topology II, three circuits coupled in an open-ended array, Topology III, four
circuits coupled in an open-ended array, and Topology IV, coupled in an closed
array. We choose two circuits in the different networks (one connection apart)
and collect from each circuit a time-series of 79980 points, with a sampling
rate of $\delta=80.000$ samples/s. The measured variable is the voltage across
one of the circuit capacitors, which is normalised in order to make the space
$\Omega$ to be a square of sides 1. Such normalisation does not alter the
quantities that we calculate. The following results provide the exchange of
information between these two chosen circuits. The values of $\epsilon$ and
$t$ used to coarse-grain the space $\Omega$ and to calculate $e_{2}$ in Eq.
(17) are the ones that minimise $|N_{C}(T,e_{2})-\tilde{N}_{C}(\epsilon)|$
and at the same time satisfy $N_{C}(T,e_{2})\geq\tilde{N}_{C}(\epsilon)$,
where $N_{C}(T,e_{2})=\exp^{Te_{2}(t)}$ represents the number of fictitious
boxes covering the set $\Sigma_{\Omega}$ in a compact fashion when $t=T$.
This optimisation excludes some non-significant points that would make the
expansion rate of fictitious points much larger than it should be. In other
words, we require that $e_{2}$ describes well the way most of the points
spread. We take the $t$ used to calculate $e_{k}$ in Eq. (17) to be the time
for points initially in an $\epsilon$-side box to spread to $0.8L$. This
guarantees that nearby points in $\Sigma_{\Omega}$ are expanding in both
directions within the time interval $[0,T]$. Using spreading distances
between $0.4L$ and $0.8L$ already produces similar results. If the spreading
distance exceeds $0.8L$, the set $\Sigma_{\Omega}$ might not be only
expanding, and $T$ might be overestimated.
Figure 2: [Color online] Results for experimental networks of Double-Scroll
circuits. On the left-side upper corner pictograms represent how the circuits
(filled circles) are bidirectionally coupled. $I_{S}/T_{k}$ as (green online)
filled circles, $I_{C}$ as the (red online) thick line, and $I_{C}^{l}$ as the
(brown online) squares, for a varying coupling resistance $R$. The unit of
these quantities shown in these figures is (kbits/s). (A) Topology I, (B)
Topology II, (C) topology III, and (D) Topology IV. In all figures,
$\tilde{D}_{0}$ increases smoothly from 1.25 to 1.95 as $R$ varies from
0.1k$\Omega$ to 5k$\Omega$. The line on the top of the figure represents the
interval of resistance values responsible to induce almost synchronisation
(AS) and phase synchronisation (PS).
$I_{S}$ has been estimated by the method in Ref. kraskov . Since we assume
that the space $\Omega$ where mutual information is being measured is 2D, we
compare our results by considering, in the method of Ref. kraskov , a 2D
space formed by the two collected scalar signals. In the method of Ref.
kraskov the phase space is partitioned into regions that contain 30 points of
the continuous trajectory. Since these regions do not have equal areas
(unlike the equal-area boxes used to calculate $I_{C}$ and $I_{C}^{l}$), in
order to estimate $T$ we need to imagine a box of sides $\epsilon_{k}$ whose
area $\epsilon_{k}^{2}$ contains on average 30 points. The area occupied by the set
$\Sigma_{\Omega}$ is approximately given by $\epsilon^{2}\tilde{N}_{C}$, where
$\tilde{N}_{C}$ is the number of occupied boxes. Assuming that the 79980
experimental data points occupy the space $\Omega$ uniformly, then on average
30 points would occupy an area of $\frac{30}{79980}\epsilon^{2}\tilde{N}_{C}$.
The square root of this area is the side of the imaginary box that would
contain 30 points. So,
$\epsilon_{k}=\sqrt{\frac{30}{79980}\tilde{N}_{C}}\epsilon$. Then, in the
following, the “exact” value of the MIR will be considered to be given by
$I_{S}/T_{k}$, where $T_{k}$ is estimated by
$T_{k}=-\frac{1}{e_{1}}\log{(\epsilon_{k})}$.
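The conversion from the 30-point regions of Ref. kraskov to an effective box size and time scale can be scripted as follows (a sketch; the values of the occupied-box count and of $e_{1}$ below are hypothetical placeholders, not measured ones):

```python
import math

eps = 1 / 500                  # grid resolution used for I_C and I_C^l
N_occupied = 120_000           # hypothetical number of occupied eps-boxes
n_points, pts_per_region = 79980, 30
# side of an imaginary box containing, on average, 30 points
eps_k = math.sqrt(pts_per_region / n_points * N_occupied) * eps
e1 = 1.2                       # hypothetical order-1 expansion rate
T_k = -math.log(eps_k) / e1    # effective spreading time
print(eps_k, T_k)
```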
The three main characteristics of the curves for the quantities $I_{S}/T_{k}$,
$I_{C}$, and $I_{C}^{l}$ (appearing in Fig. 2) with respect to the coupling
strength are: (i) as the coupling resistance becomes smaller, the coupling
strength connecting the circuits becomes larger, and the level of
synchronisation increases, followed by an increase in $I_{S}/T_{k}$, $I_{C}$,
and $I_{C}^{l}$; (ii) all curves are close; (iii) as expected, for most of
the resistance values, $I_{C}>I_{S}/T_{k}$ and $I_{C}^{l}\leq I_{S}/T_{k}$.
The two main synchronous phenomena appearing in these networks are almost
synchronisation (AS) femat_PLA1999 , when the circuits are almost completely
synchronous, and phase synchronisation (PS) juergen_book . For the circuits
considered in Fig. 2, AS appears for the interval $R\in[0,3]$ and PS appears
for the interval $R\in[3,3.5]$. Within this region of resistance values the
exchange of information between the circuits becomes large. PS was detected by
using the technique from Refs. baptista_PHYSICAD2006 ; pereira_PRE2007 .
### IV.3 MIR and its upper bound in stochastic systems
To analytically demonstrate that the quantities $I_{C}$ and $I_{S}/T$ can be
well calculated in stochastic systems, we consider the following stochastic
dynamical toy model illustrated in Fig. 3. In it, points within a small box of
sides $\epsilon$ (represented by the filled square in Fig. 3(A)) located in
the centre of the subspace $\Omega$ are mapped after one iteration of the
dynamics to 12 other neighbouring boxes. Some points remain in the initial
box. The points that leave the initial box go to 4 boxes along the diagonal
line and 8 boxes off-diagonal along the transverse direction. Boxes along the
diagonal are represented by the filled squares in Fig. 3(B) and off-diagonal
boxes by filled circles. At the second iteration, the points occupy other
neighbouring boxes, as illustrated in Fig. 3(C), and at the time $n=T$ the
points do not spread any longer, but are somehow reinjected inside the region
of the attractor. We consider that this system is completely stochastic, in
the sense that no one can precisely determine the location of where an initial
condition will be mapped. The only information is that points inside a smaller
region are mapped to a larger region.
At the iteration $n$, there will be $N_{d}=2^{1+n}+1$ boxes occupied along the
diagonal (filled squares in Fig. 3) and $N_{t}=2nN_{d}-C(\tilde{n})$ (filled
circles in Fig. 3) boxes occupied off-diagonal (along the transverse
direction), where $C(\tilde{n})=0$ for $\tilde{n}$=0, and $C(\tilde{n})>0$ for
$\tilde{n}\geq 1$ and $\tilde{n}=n-T-\alpha$. $\alpha$ is a small number of
iterations representing the time difference between the time $T$ for the
points in the diagonal to reach the boundary of the space $\Omega$ and the
time for the points in the off-diagonal to reach this boundary. The border
effect can be ignored when the expansion along the diagonal direction is much
faster than along the transverse direction.
Figure 3: (A) A small box representing a set of initial conditions. After one
iteration of the system, the points that leave the initial box in (A) go to 4
boxes along the diagonal line [filled squares in (B)] and 8 boxes off-diagonal
(along the transverse direction) [filled circles in (B)]. At the second
iteration, the points occupy other neighbouring boxes as illustrated in (C)
and after an interval of time $n=T$ the points do not spread any longer (D).
At the iteration $n$, there will be
$N_{C}=2^{1+n}+1+(2^{1+n}+1)2n-C(\tilde{n})$ boxes occupied by points. In the
following calculations we consider that $N_{C}\cong 2^{1+n}(1+2n)$. We assume
that the subspace $\Omega$ is a square whose sides have length 1, and that
$\Sigma\subset\Omega$, so $L=\sqrt{2}$. For $n>T$, the attractor does not grow any
longer along the off-diagonal direction. The time $n=T$, for the points to
spread over the attractor $\Sigma$, can be calculated by the time it takes for
points to visit all the boxes along the diagonal. Thus, we need to satisfy
$N_{d}\epsilon\sqrt{2}=\sqrt{2}$. Ignoring the 1 appearing in the expression
for $N_{d}$ due to the initial box in the estimation for the value of $T$, we
arrive at $T>\frac{\log{(1/\epsilon)}}{\log{(2)}}-1$. This stochastic system
is discrete. In order to take the initial box into consideration in the
calculation of $T$, we pick the first integer that is larger than
$\frac{\log{(1/\epsilon)}}{\log{(2)}}-1$, leading $T$ to be the largest
integer that satisfies
$T<-\frac{\log{(\epsilon)}}{\log{(2)}}.$ (19)
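A small helper (ours) that returns the integer spreading time of Eq. (19), i.e. the largest integer $T$ strictly below $-\log_{2}(\epsilon)$:

```python
import math

def spreading_time(eps):
    """Largest integer T with T < -log2(eps), as in Eq. (19)."""
    bound = -math.log2(eps)
    # strict inequality, also correct when bound is itself an integer
    return math.ceil(bound) - 1

print(spreading_time(2 ** -10))   # bound = 10 exactly, so T = 9
print(spreading_time(1 / 1000))   # bound ~ 9.97, so T = 9
```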
The largest Lyapunov exponent, or the order-1 expansion rate, of this
stochastic toy model can be calculated from
$N_{d}(n)\exp^{\lambda_{1}}=N_{d}(n+1)$, which takes us to
$\lambda_{1}=\log{(2)}.$ (20)
Therefore, Eq. (19) can be rewritten as
$T=-\frac{\log{(\epsilon)}}{\lambda_{1}}$.
The quantity $D$ can be calculated by $D=\frac{\log{(N_{C})}}{\log{(N)}}$,
with $n=T$. Neglecting $C(\tilde{n})$ and the 1 appearing in $N_{C}$ due to
the initial box, we have that $N_{C}\cong 2^{1+T}[1+2T]$. Substituting in
the definition of $D$, we obtain
$D=\frac{(1+T)\log{(2)}+\log{(1+2T)}}{-\log{(\epsilon)}}$. Using $T$ from
Eq. (19), we arrive at
$D=1+r,$ (21)
where
$r=-\frac{\log{(2)}}{\log{(\epsilon)}}-\frac{\log{(1+2T)}}{\log{(\epsilon)}}$
(22)
Placing $D$ and $\lambda_{1}$ in $I_{C}=\lambda_{1}(2-D)$ gives us
$I_{C}=\log{(2)}(1-r).$ (23)
Let us now calculate $I_{S}/T$. Ignoring the border effect, and assuming that
the expansion of points is uniform, then $P(i,j)=1/N_{C}$ and
$P(i)=P(j)=1/N=\epsilon$. At the iteration $n=T$, we have that
$I_{S}=-2\log{(\epsilon)}-\log{(N_{C})}$. Since $N_{C}\cong 2^{1+T}[1+2T]$,
we can write that $I_{S}=-2\log{(\epsilon)}-(1+T)\log{(2)}-\log{(1+2T)}$.
Placing $T$ from Eq. (19) into $I_{S}$ takes us to
$I_{S}=-\log{(2)}-\log{(\epsilon)}-\log{(1+2T)}$. Finally, dividing $I_{S}$
by $T$, we arrive at
$\displaystyle\frac{I_{S}}{T}$ $\displaystyle=$
$\displaystyle\log{(2)}\left[1+\frac{\log{(2)}}{\log{(\epsilon)}}+\frac{\log{(1+2T)}}{\log{(\epsilon)}}\right]$
(24) $\displaystyle=$ $\displaystyle\log{(2)}(1-r).$
As expected from the way we have constructed this model, Eqs. (24) and (23)
are equal, and $I_{C}=\frac{I_{S}}{T}$.
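The equality $I_{C}=I_{S}/T$ can also be checked numerically without going through $r$ at all, directly from the box count $N_{C}\cong 2^{1+n}(1+2n)$ of the toy model (our sketch; $\epsilon$ is arbitrary, and $T$ is taken as the continuous estimate $-\log\epsilon/\log 2$):

```python
import math

log2 = math.log(2)
eps = 2 ** -12
T = -math.log(eps) / log2                 # continuous spreading-time estimate
N_C = 2 ** (1 + T) * (1 + 2 * T)          # occupied boxes at n = T
D = math.log(N_C) / math.log(1 / eps)     # capacity dimension
I_C = log2 * (2 - D)                      # upper bound
I_S = -2 * math.log(eps) - math.log(N_C)  # mutual information at n = T
print(I_C, I_S / T)                       # the two quantities agree
```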
Had we included the border effect in the calculation of $I_{C}$ (denote the
resulting value by $I_{C}^{b}$), we would typically have obtained
$I_{C}^{b}\geq I_{C}$, since $\lambda_{2}$ calculated considering a finite
space $\Omega$ would be smaller than or equal to the value obtained by
neglecting the border effect. Had we included the border effect in the
calculation of $I_{S}/T$ (denote the resulting value by $I_{S}^{b}/T$), we
would typically expect the probabilities $P(i,j)$ not to be constant. That is
because the points that leave the subspace $\Omega$ would be randomly
reinjected back into $\Omega$. We would conclude that $I_{S}^{b}/T\leq
I_{S}/T$. Therefore, had we included the border effect, we would have
obtained $I_{C}^{b}\geq I_{S}^{b}/T$.
The way we have constructed this stochastic toy model results in $D\cong 1$.
This is because the spreading of points along the diagonal direction is much
faster than the spreading of points along the off-diagonal transverse
direction. In other words, the second largest Lyapunov exponent,
$\lambda_{2}$, is close to zero. To construct stochastic toy models that
produce a larger $\lambda_{2}$, one could consider that the spreading along
the transverse direction is given by $N_{t}=N_{d}2^{\alpha n}-C(\tilde{n})$,
with $\alpha\in[0,1]$.
### IV.4 Expansion rates for noisy data with few sampling points
In terms of the order-1 expansion rate, $e_{1}$, our quantities read
$I_{C}={e_{1}}(2-D)$,
$T=\frac{1}{e_{1}}\log{\left[\frac{1}{\epsilon}\right]}$, and
$I_{C}^{l}=e_{1}(2-\tilde{D}_{0})$. In order to show that our expansion rate
can be used to calculate these quantities, we consider an experimental system
that is one-dimensional and has a constant probability measure. Additive
noise is assumed to be bounded, with maximal amplitude $\eta$, and to have
constant density.
Our order-1 expansion rate is defined as
$e_{1}(t)=1/\tilde{N}_{C}\sum_{i=1}^{\tilde{N}_{C}}\frac{1}{t}\log{[L_{1}^{i}(t)]}.$
(25)
where $L_{1}^{i}(t)$ measures the largest growth rate of nearby points. Since
all that matters is the largest distance between points, it can be estimated
even when the experimental data set has very few data points. Since, in this
example, we consider that the experimental noisy points have a constant
uniform probability distribution, $e_{1}(t)$ can be calculated by
$e_{1}(t)=\frac{1}{t}\log{\left[\frac{\Delta+2\eta}{\delta+2\eta}\right]}.$
(26)
where $\delta+2\eta$ represents the largest distance between pairs of
experimental noisy points in an $\epsilon$-square box and $\Delta+2\eta$
represents the largest distance between pairs of points that were initially
in the $\epsilon$-square box but have spread out for an interval of time $t$.
The experimental system (without noise) is responsible for making points that
are at most $\delta$ apart from each other spread to at most $\Delta$ apart
from each other. These points spread out exponentially fast according to the
largest positive Lyapunov exponent $\lambda_{1}$ by
$\Delta=\delta\exp^{\lambda_{1}t}.$ (27)
Substituting Eq. (27) in (26) and expanding the $\log$ to first order, we
obtain $e_{1}=\lambda_{1}$; therefore, our expansion rate can be used to
estimate Lyapunov exponents.
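A numerical check of this argument (our sketch; $\lambda_{1}$, $t$, $\delta$, and $\eta$ are illustrative values): when the deterministic spread dominates the noise amplitude, Eq. (26) recovers $\lambda_{1}$.

```python
import math

lam1, t = math.log(2), 8       # assumed Lyapunov exponent and spreading time
delta, eta = 1e-4, 1e-7        # initial spread and noise amplitude
Delta = delta * math.exp(lam1 * t)   # Eq. (27): deterministic spreading
e1 = math.log((Delta + 2 * eta) / (delta + 2 * eta)) / t   # Eq. (26)
print(e1, lam1)                # e1 is close to lambda_1
```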
## V Supplementary Information
### V.1 Decay of correlation and First Poincaré Returns
As rigorously shown in young , the decay with time of the correlation, $C(t)$,
is proportional to the decay with time of the density of the first Poincaré
recurrences, $\rho(t,\epsilon)$, which measures the probability with which a
trajectory returns to an $\epsilon$-interval after $t$ iterations. Therefore,
if $\rho(t,\epsilon)$ decays with $t$, for example exponentially fast, $C(t)$
will decay with $t$ exponentially fast, as well. The relationship between
$C(t)$ and $\rho(t)$ can be simply understood in chaotic systems with one
expanding direction (one positive Lyapunov exponent). As shown in
baptista_CHAOS2009 , the “local” decay of correlation (measured in the
$\epsilon$-interval) is given by
$C(t,\epsilon)\leq\mu(\epsilon)\rho(t,\epsilon)-\mu(\epsilon)^{2}$, where
$\mu(\epsilon)$ is the probability measure of a chaotic trajectory to visit
the $\epsilon$-interval. Consider the shift map $x_{n+1}=2x_{n},\mbox{mod 1}$.
For this map, $\mu(\epsilon)=\epsilon$ and there are an infinite number of
possible intervals that make $C(t,\epsilon)=0$ for a finite $t$. These
intervals are the cells of a Markov partition. As recently demonstrated by [P.
Pinto, I. Labouriau, M. S. Baptista], in piecewise-linear systems such as the
shift map, if $\epsilon$ is a cell in an order-$t$ Markov partition and
$\rho(t,\epsilon)>0$, then $\rho(t,\epsilon)=2^{-t}$, and by the way a Markov
partition is constructed we have that $\epsilon=2^{-t}$. Since
$\epsilon=\mu(\epsilon)=2^{-t}$, we arrive at $C(t,\epsilon)\leq 0$ for
a special finite time $t$. Notice that $\epsilon=2^{-t}$ can be rewritten as
$-\ln{(\epsilon)}=t\ln{(2)}$. Since for this map the largest Lyapunov
exponent is $\lambda_{1}=\ln{(2)}$, then
$t=-\frac{1}{\lambda_{1}}\ln{(\epsilon)}$, which is exactly the
quantity $T$, the time interval responsible for making the system lose the
memory of its initial condition; it can be calculated as the time it takes
for points inside an initial $\epsilon$-interval to spread over the whole
phase space, in this case $[0,1]$.
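The identity between the Markov-cell time and $T$ can be illustrated directly (our check): for the shift map, an $\epsilon$-interval of width $2^{-t}$ needs exactly $t=-\ln\epsilon/\ln 2$ doublings to cover $[0,1]$.

```python
import math

lam1 = math.log(2)          # Lyapunov exponent of the shift map
eps = 2 ** -10              # a cell of an order-10 Markov partition
T = -math.log(eps) / lam1   # memory-loss time, T = -(1/lam1) ln(eps)
print(T, eps * 2 ** T)      # 10 doublings: the interval covers [0,1]
```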
### V.2 $I_{C}$, and $I_{C}^{l}$ in larger networks and higher-dimensional
subspaces $\Sigma_{\Omega}$
Imagine a network formed by $K$ coupled oscillators. Uncoupled, each
oscillator possesses a certain number of positive Lyapunov exponents, one
zero exponent, and negative exponents for the rest. Each oscillator has
dimension $d$. Assume that the only information available from the network is
two $Q$-dimensional measurements, or a scalar signal that is reconstructed in
a $Q$-dimensional embedding space. So, the subspace $\Sigma_{\Omega}$ has
dimension $2Q$, and each subspace of a node (or group of nodes) has dimension
$Q$. To be
consistent with our previous equations, we assume that we measure
$M_{\Omega}=2Q$ positive Lyapunov exponents on the projection
$\Sigma_{\Omega}$. If $M_{\Omega}\neq 2Q$, then in the following equations
$2Q$ should be replaced by $M_{\Omega}$, naturally assuming that
$M_{\Omega}\leq 2Q$.
In analogy with the derivation of $I_{C}$ and $I_{C}^{l}$ in a bidimensional
projection, we assume that the spreading of initial conditions is uniform
in the subspace $\Omega$. Then, $P(i)=\frac{1}{N^{Q}}$ represents the
probability of finding trajectory points in the $Q$-dimensional space of one
node (or a group of nodes) and $P(i,j)=\frac{1}{N_{C}}$ represents the
probability of finding trajectory points in the $2Q$-dimensional composed
subspace constructed from two nodes (or two groups of nodes) in the subspace
$\Omega$. Additionally, we consider that the hypothetical number of occupied
boxes $N_{C}$ is given by
$N_{C}(T)=\exp^{T(\sum_{i=1}^{2Q}\lambda_{i})}$. Then, we have that
$T=1/\lambda_{1}\log{(N)}$, which leads us to
$I_{C}=\lambda_{1}(2Q-D).$ (28)
Similarly to the way we have derived $I_{C}^{l}$ in a bidimensional
projection, if $\Sigma_{\Omega}$ has more than 2 positive Lyapunov exponents,
then
$I_{C}^{l}=\lambda_{1}(2Q-\tilde{D}_{0}).$ (29)
To write Eq. (28) in terms of the positive Lyapunov exponents, we first extend
the calculation of the quantity $D$ to higher-dimensional subspaces that have
dimensionality 2Q,
$D=1+\sum_{i=2}^{2Q}\frac{\lambda_{i}}{\lambda_{1}},$ (30)
where $\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\ldots\geq\lambda_{2Q}$ are
the Lyapunov exponents measured on the subspace $\Omega$. To derive this
equation we only consider that the hypothetical number of occupied boxes
$N_{C}$ is given by $N_{C}(T)=\exp^{T(\sum_{i=1}^{2Q}\lambda_{i})}$.
We then substitute $D$ as a function of these exponents (Eq. (30)) into Eq.
(28) and arrive at
$I_{C}=(2Q-1)\lambda_{1}-\sum_{i=2}^{2Q}\lambda_{i}.$ (31)
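Eq. (31) is easy to evaluate once the subspace exponents are known. A sketch (ours), checked against the two-map case of Sec. IV.1, where $Q=1$, $\lambda_{1}=\ln 2$, and $\lambda_{2}=\ln(2-2\sigma)$ give back $I_{C}=-\ln(1-\sigma)$:

```python
import math

def I_C_bound(lyap, Q):
    """Eq. (31): I_C = (2Q-1)*lam_1 - sum(lam_2..lam_2Q)."""
    lam = sorted(lyap, reverse=True)[:2 * Q]
    return (2 * Q - 1) * lam[0] - sum(lam[1:])

sigma = 0.25
val = I_C_bound([math.log(2), math.log(2 - 2 * sigma)], Q=1)
print(val, -math.log(1 - sigma))   # the bound reduces to -ln(1 - sigma)
```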
### V.3 $I_{C}$ as a function of the positive Lyapunov exponents of the
network
Consider a network whose attractor $\Sigma$ possesses $M$ positive Lyapunov
exponents, denoted by $\tilde{\lambda}_{i}$, $i=1,\ldots,M$. For a typical
subspace $\Omega$, $\lambda_{1}$ measured on $\Omega$ is equal to the largest
Lyapunov exponent of the network. Just for the sake of simplicity, assume that
the nodes in the network are sufficiently well connected so that in a typical
measurement with a finite number of observations this property holds, i.e.,
$\tilde{\lambda}_{1}=\lambda_{1}$. But if measurements provide that
$\tilde{\lambda}_{1}\gg\lambda_{1}$, the following arguments apply as well,
provided one replaces $\tilde{\lambda}_{1}$ in the further calculations by
the smallest Lyapunov exponent of the network, say $\tilde{\lambda}_{k}$,
that is still larger than $\lambda_{1}$, then substitutes
$\tilde{\lambda}_{2}$ by $\tilde{\lambda}_{k+1}$, and so on. As before,
consider that $M_{\Omega}=2Q$.
Then, for an arbitrary subspace $\Omega$,
$\sum_{i=2}^{2Q}\lambda_{i}\leq\sum_{i=2}^{2Q}\tilde{\lambda}_{i}$, since a
projection cannot make the Lyapunov exponents larger, but only smaller or
equal.
Define
$\tilde{I}_{C}=(2Q-1)\lambda_{1}-\sum_{i=2}^{2Q}\tilde{\lambda}_{i}.$ (32)
Since $\sum_{i=2}^{2Q}\lambda_{i}\leq\sum_{i=2}^{2Q}\tilde{\lambda}_{i}$, it
is easy to see that
$\tilde{I}_{C}\leq I_{C}.$ (33)
So, $I_{C}$, measured on the subspace $\Sigma_{\Omega}$ and a function of the
$2Q$ largest positive Lyapunov exponents measured in $\Sigma_{\Omega}$, is an
upper bound for $\tilde{I}_{C}$, a quantity defined by the $2Q$ largest
positive Lyapunov exponents of the attractor $\Sigma$ of the network.
Therefore, if the Lyapunov exponents of a network are known, the quantity
$\tilde{I}_{C}$ can be used to estimate the MIR between two measurements of
this network, measurements that form the subspace $\Omega$.
Notice that $I_{C}$ depends on the projection chosen (the subspace $\Omega$)
and on its dimension, whereas $\tilde{I}_{C}$ depends on the dimension of the
subspace $\Sigma_{\Omega}$ (the number 2Q of positive Lyapunov exponents). The
same happens for the mutual information between random variables that depend
on the projection considered.
Equation (32) is important because it allows us to obtain an estimation for
the value of $I_{C}$ analytically. As an example, imagine the following
network of coupled maps with a constant Jacobian
$X^{(i)}_{n+1}=2X^{(i)}_{n}+\sigma\sum_{j=1}^{K}{\mathbf{A}}_{ij}(X^{(j)}_{n}-X^{(i)}_{n}),\mbox{mod
1},$ (34)
where $X\in[0,1]$ and ${\mathbf{A}}$ represents the connecting adjacent
matrix. If node $j$ connects to node $i$, then ${\mathbf{A}}_{ij}=1$, and 0
otherwise.
Assume that the nodes are connected all-to-all. Then the $K$ positive
Lyapunov exponents of this network are $\tilde{\lambda}_{1}=\log{(2)}$ and
$\tilde{\lambda}_{i}=\log{\left[\frac{2}{1+\sigma}\right]}$, with
$i=2,\ldots,K$. Assume also that the subspace $\Omega$ has dimension $2Q$,
that $2Q$ positive Lyapunov exponents are observed in this space, and that
$\tilde{\lambda}_{1}=\lambda_{1}$.
Substituting these Lyapunov exponents in Eq. (32), we arrive at
$\tilde{I}_{C}=(2Q-1)\log{(1+\sigma)}.$ (35)
We conclude that there are two ways for $\tilde{I}_{C}$ to increase: either
one considers larger measurable subspaces $\Omega$, or one increases the
coupling between the nodes. This suggests that the larger the coupling
strength is, the more information is exchanged between groups of nodes.
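The substitution into Eq. (32) can be scripted. In the sketch below (ours), the transverse exponents of the all-to-all network are taken to be $\tilde\lambda_{i}=\log[2/(1+\sigma)]$, the form consistent with Eq. (35); this is an assumption of the example, stated here explicitly:

```python
import math

def I_tilde_C(sigma, Q):
    """Eq. (32) for the all-to-all network of Eq. (34)."""
    lam1 = math.log(2)
    lam_t = math.log(2 / (1 + sigma))   # assumed transverse exponents
    return (2 * Q - 1) * lam1 - (2 * Q - 1) * lam_t

sigma, Q = 0.5, 2
print(I_tilde_C(sigma, Q), (2 * Q - 1) * math.log(1 + sigma))  # matches Eq. (35)
```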
For arbitrary topologies, one can also derive analytical formulas for
$\tilde{I}_{C}$ in this network, since $\tilde{\lambda}_{i}$ for $i>2$ can be
calculated from $\tilde{\lambda}_{2}$ baptista_PLA2010c . One arrives at
$\tilde{\lambda}_{i}(\omega_{i}\sigma/2)=\tilde{\lambda}_{2}(\sigma),$ (36)
where $\omega_{i}$ is the $i$th largest eigenvalue (in absolute value) of the
Laplacian matrix
${\mathbf{L}}_{ij}={\mathbf{A}}_{ij}+\mathbb{I}\sum_{j}{\mathbf{A}}_{ij}$.
## VI Conclusions
In conclusion, we have presented a procedure to calculate the mutual
information rate (MIR) between two nodes (or groups of nodes) in dynamical
networks and data sets that are either mixing, or present fast decay of
correlations, or have sensitivity to initial conditions, and we have proposed
significant upper ($I_{C}$) and lower ($I_{C}^{l}$) bounds for it in terms of
the Lyapunov exponents, the expansion rates, and the capacity dimension.
Since our upper bound is calculated from Lyapunov exponents or expansion
rates, it can be used to estimate the MIR between data sets that have
different sampling rates or experimental resolutions (e.g. the rise of the
ocean level and the average temperature of the Earth), or between systems
possessing a different number of events. Additionally, Lyapunov exponents can
be accurately calculated even when data sets are corrupted by noise of large
amplitude (observational additive noise) mera ; gao2006 or when the system
generating the data suffers from parameter alterations (“experimental
drift”) stefanski . Our bounds link information (the MIR) and the dynamical
behaviour of the system being observed with synchronisation, since the more
synchronous two nodes are, the smaller $\lambda_{2}$ and $D_{0}$ will be.
This link can be of great help in establishing whether two nodes in a
dynamical network or in a complex system not only exchange information but
also have linear or non-linear interdependences, since the approaches to
measure the level of synchronisation between two systems are reasonably well
known and are being widely used. If variables are synchronous in a time-lag
fashion juergen_book , it was shown in Ref. blanc that the MIR is independent
of the delay between the two processes. The upper bound for the MIR could
then be calculated by measuring the Lyapunov exponents of the network (see
Supplementary Information), which are also invariant to time delays between
the variables.
Acknowledgments M. S. Baptista was partially supported by the Northern
Research Partnership (NRP) and Alexander von Humboldt foundation. M. S.
Baptista would like to thank A. Politi for discussions concerning Lyapunov
exponents. R. M. Rubinger, E. R. V. Junior, and J. C. Sartorelli thank the
Brazilian agencies CAPES, CNPq, FAPEMIG, and FAPESP.
## References
* (1) Shannon CE (1948) Bell System Technical Journal 27: 379-423.
* (2) Strong SP, Koberle R, de Ruyter van Steveninck RR, Bialek W (1998) Phys. Rev. Lett. 80: 197-200.
* (3) Sporns O, Chialvo DR, Kaiser M, Hilgetag CC (2004) Trends in Cognitive Sciences 8: 418-425.
* (4) Palus M, Komárek V, Procházka T, et al. (2001) IEEE Engineering in Medicine and Biology Sep/Oct: 65-71.
* (5) Donges JF, Zou Y, Marwan N, and Kurths J (2009) Eur. Phys. J. 174: 157-179.
* (6) Fraser AM and Swinney HL (1986) Phys. Rev. A 33: 1134-1140.
* (7) Kantz H and Schreiber T (2004) Nonlinear Time Series Analysis, Cambridge University Press.
* (8) Parlitz U (1998) Nonlinear Time-Series Analysis, in Nonlinear Modelling - Advanced Black-Box techniques, Kluwer Academic Publishers.
* (9) Haykin S (2001) Communication Systems, John Wiley $\&$ Sons.
* (10) Rossi F, Lendasse A, François D, Wertz V, and Verleysen M (2006) Chemometrics and Intelligent Laboratory Systems 80: 215-226.
* (11) Paninski L (2003) Neural Computation 15: 1191-1253.
* (12) Steuer R, Kurths J, Daub CO, et al. (2002) Bioinformatics 18: S231-S240.
* (13) Papana A, Kugiumtzis D, and Larsson PG (2009) Int. J. Bifurcation and Chaos 19: 4197-4215.
* (14) Baptista MS and Kurths J (2008) Phys. Rev. E 77: 026205-1-026205-13.
* (15) Baptista MS, de Carvalho JX, Hussein MS (2008) PloS ONE 3: e3479.
* (16) Blanc JL, Pezard L, and Lesne A (2011) Phys. Rev. E 84: 036214-1-036214-9.
* (17) Dobrushin RL (1959) Usp. Mat. Nauk. 14: 3-104; transl: Amer. Math. Soc. Translations, series 2 33: 323-438.
* (18) Gray RM and Kieffer JC (1980) IEEE Transactions on Information Theory IT-26: 412-421.
* (19) Verdú S (1994) IEEE Trans. Information Theory, 40, 1147-1157.
* (20) Kolmogorov AN (1959) Dokl. Akad. Nauk SSSR 119: 861-864; 124: 754-755.
* (21) Ruelle D (1978) Bol. Soc. Bras. Mat. 9: 83-87.
* (22) Pesin YaB (1977) Russ. Math. Surveys 32: 55-114.
* (23) Ledrappier F and Strelcyn JM (1982) Ergod. Theory Dyn. Syst. 2: 203-219.
* (24) Gao JB (1999) Phys. Rev. Lett. 83: 3178-3181.
* (25) Baptista MS, Eulalie N, Pinto PRF, et al. (2010) Phys. Lett. A 374: 1135-1140.
* (26) Eckmann JP (2003) arXiv:304043.
* (27) Sinai YaG (1970) Russ. Math. Surv. 25: 137-189.
* (28) Chernov N and Young LS (2001) Encycl. of Math. Sc., Math. Phys. II, 101: 89-120.
* (29) Baptista MS, Caldas IL, Heller MVAP, Ferreira AA 301: 150-162.
* (30) Dawson S, Grebogi C, Sauer T, and Yorke JA (1994) Phys. Rev. Lett. 73: 1927-1930.
* (31) Albuquerque HA, Rubinger RM, Rech PC, (2007) Physics D 233: 66-72.
* (32) Kraskov A, Stogbauer H, and Grassberger P (2004) Phys. Rev. E 69: 066138-1-066138-16.
* (33) Femat R and Solís-Perales G (1999) Phys. Lett. A 262: 50-60.
* (34) Pikovsky A, Rosenblum M, and Kurths J (2001) Synchronization: A Universal Concept in Nonlinear Sciences, Cambridge University Press.
* (35) Baptista MS, Pereira T, and Kurths J (2006) Physica D 216: 260-268.
* (36) Pereira T, Baptista MS, and Kurths J, Phys. Rev. E (2007) 75: 026216-1-026216-12.
* (37) Mera ME and Morán M (2009) Phys. Rev E 80: 016207-1-016207-8.
* (38) Gao JB, Hu J, Tung WW, and Cao YH, Phys. Rev. E 74: 066204-1-066204-9.
* (39) Stefański A (2008) Journal of Theoretical and Applied Mechanics 46: 665-678.
* (40) Young LS (1999) Israel Journal of Mathematics 110: 153-188.
* (41) Baptista MS, Maranhão DM, Sartorelli JC (2009) Chaos 19: 043115-1-043115-10.
* (42) Baptista MS, Kakmeni FM, Magno GL, Hussein MS (2011) Phys. Lett. A 375: 1309-1318.
|
arxiv-papers
| 2011-04-18T14:38:03 |
2024-09-04T02:49:18.319491
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "M. S. Baptista, R. M. Rubinger, E. R. V. Junior, J. C. Sartorelli, U.\n Parlitz, and C. Grebogi",
"submitter": "Murilo Baptista S.",
"url": "https://arxiv.org/abs/1104.3498"
}
|
1104.3639
|
# Variance Control in Weak Value Measurement Pointers
A. D. Parks and J. E. Gray Electromagnetic and Sensor Systems Department,
Naval Surface Warfare Center, Dahlgren, VA 22448, USA
(April 11, 2011)
###### Abstract
The variance of an arbitrary pointer observable is considered for the general
case that a complex weak value is measured using a complex valued pointer
state. For the typical cases where the pointer observable is either its
position or momentum, the associated expressions for the pointer’s variance
after the measurement contain a term proportional to the product of the weak
value’s imaginary part with the rate of change of the third central moment of
position relative to the initial pointer state just prior to the time of the
measurement interaction when position is the observable - or with the initial
pointer state’s third central moment of momentum when momentum is the
observable. These terms provide a means for controlling pointer position and
momentum variance and identify control conditions which - when satisfied - can
yield variances that are smaller after the measurement than they were before
the measurement. Measurement sensitivities which are useful for estimating
weak value measurement accuracies are also briefly discussed.
###### pacs:
03.65.-w, 03.65.Ca, 03.65.Ta, 06.20.Dk
## I Introduction
The weak value $A_{w}$ of a quantum mechanical observable $A$ was introduced
by Aharonov et al A1 ; A2 ; A3 a quarter century ago. This quantity is the
statistical result of a standard measurement procedure performed upon a pre-
and post-selected (PPS) ensemble of quantum systems when the interaction
between the measurement apparatus and each system is sufficiently weak, i.e.
when it is a weak measurement. Unlike a standard strong measurement of $A$
which significantly disturbs the measured system (i.e. it "collapses" the
wavefunction), a weak measurement of $A$ for a PPS system does not appreciably
disturb the quantum system and yields $A_{w}$ as the observable’s measured
value. The peculiar nature of the virtually undisturbed quantum reality that
exists between the boundaries defined by the PPS states is revealed in the
eccentric characteristics of $A_{w}$, namely that $A_{w}$ can be complex
valued and that the real part $\operatorname{Re}A_{w}$ of $A_{w}$ can lie far
outside the eigenvalue spectral limits of $\widehat{A}$. While the
interpretation of weak values remains somewhat controversial, experiments have
verified several of the interesting and unusual properties predicted by weak value
theory R ; P ; RL ; W ; H ; Y ; D .
The pointer of a measurement apparatus is fundamental to the theory of quantum
measurement because the values of measured observables are determined from the
pointer’s properties (e.g. from the pointer’s mean position). Understanding
these properties has become more important in recent years - in large part due
to the increased interest in the theory of weak measurements and weak value
theory. The properties of pointers associated with weak value measurements
have been studied - for example - by Johansen J , Aharonov and Botero AB ,
Jozsa Jo , Di Lorenzo and Egues DE , and Cho et al C .
The purpose of this paper is to extend Jozsa’s work Jo to obtain the general
expression for the variance associated with an arbitrary pointer observable
when a complex valued pointer state is used to measure a complex weak value
$A_{w}$. For the typical cases where position or momentum are the pointer
observables, the associated expressions each contain a variance control term.
This term is proportional to the product of the imaginary part
$\operatorname{Im}A_{w}$ of $A_{w}$ with the rate of change of the third
central moment of position relative to the initial pointer state just prior to
measurement when the observable is position - or with the initial pointer
state’s third central moment of momentum when momentum is the observable.
Control conditions associated with these terms are identified which - if
satisfied - can yield pointer position and momentum variances after a
measurement that are smaller than they were prior to the measurement. These
results are used to briefly discuss sensitivities associated with weak value
measurements.
## II Weak Measurements and Weak Values
For the reader’s convenience, this section provides a brief review of weak
measurement and weak value theory. For additional details the reader is
invited to consult references A1 ; A2 ; A3 ; A4 .
Weak measurements arise in the von Neumann description of a quantum
measurement at time $t_{0}$ of a time-independent observable $A$ that
describes a quantum system in an initial fixed pre-selected state
$\left|\psi_{i}\right\rangle={\displaystyle\sum\nolimits_{J}}c_{j}\left|a_{j}\right\rangle$
at $t_{0}$, where the set $J$ indexes the eigenstates
$\left|a_{j}\right\rangle$ of $\widehat{A}$. In this description, the
Hamiltonian for the interaction between the measurement apparatus and the
quantum system is
$\widehat{H}=\gamma(t)\widehat{A}\widehat{p}.$
Here $\gamma(t)=\gamma\delta(t-t_{0})$ defines the strength of the
measurement’s impulsive coupling interaction at $t_{0}$ and $\widehat{p}$ is
the momentum operator for the pointer of the measurement apparatus which is in
the initial normalized state $\left|\phi\right\rangle$. Let $\widehat{q}$ be
the pointer’s position operator that is conjugate to $\widehat{p}$.
Prior to the measurement the pre-selected system and the pointer are in the
tensor product state $\left|\psi_{i}\right\rangle\left|\phi\right\rangle$.
Immediately following the interaction the combined system is in the state
$\left|\Phi\right\rangle=e^{-\frac{i}{\hbar}\int\widehat{H}dt}\left|\psi_{i}\right\rangle\left|\phi\right\rangle={\displaystyle\sum\nolimits_{J}}c_{j}e^{-\frac{i}{\hbar}\gamma
a_{j}\widehat{p}}\left|a_{j}\right\rangle\left|\phi\right\rangle,$
where use has been made of the fact that
$\int\widehat{H}dt=\gamma\widehat{A}\widehat{p}$. The exponential factor in
this equation is the translation operator $\widehat{S}\left(\gamma
a_{j}\right)$ for $\left|\phi\right\rangle$ in its $q$-representation. It is
defined by the action $\left\langle q\right|\widehat{S}\left(\gamma
a_{j}\right)\left|\phi\right\rangle$ which translates the pointer’s
wavefunction over a distance $\gamma a_{j}$ parallel to the $q$-axis. The
$q$-representation of the combined system and pointer state is
$\left\langle
q\right|\left.\Phi\right\rangle={\displaystyle\sum\nolimits_{J}}c_{j}\left\langle
q\right|\widehat{S}\left(\gamma
a_{j}\right)\left|\phi\right\rangle\left|a_{j}\right\rangle.$
When the measurement interaction is strong, the quantum system is appreciably
disturbed and its state "collapses" with probability $\left|c_{n}\right|^{2}$
to an eigenstate $\left|a_{n}\right\rangle$ leaving the pointer in the state
$\left\langle q\right|\widehat{S}\left(\gamma
a_{n}\right)\left|\phi\right\rangle$. Strong measurements of an ensemble of
identically prepared systems yield $\gamma\left\langle
A\right\rangle\equiv\gamma\left\langle\psi_{i}\right|\widehat{A}\left|\psi_{i}\right\rangle$
as the centroid of the associated pointer probability distribution with
$\left\langle A\right\rangle$ as the measured value of $\widehat{A}$.
A weak measurement of $\widehat{A}$ occurs when the interaction strength
$\gamma$ is sufficiently small so that the system is essentially undisturbed
and the uncertainty $\Delta q$ is much larger than $\widehat{A}$’s eigenvalue
separation. In this case, the pointer distribution is the superposition of
broad overlapping $\left|\left\langle q\right|\widehat{S}\left(\gamma
a_{j}\right)\left|\phi\right\rangle\right|^{2}$ terms. Although a single
measurement provides little information about $\widehat{A}$, many repetitions
allow the centroid of the distribution to be determined to any desired
accuracy.
If a system is post-selected after a weak measurement is performed, then the
resulting pointer state is
$\left|\Psi\right\rangle\equiv\left\langle\psi_{f}\right|\left.\Phi\right\rangle={\displaystyle\sum\nolimits_{J}}c_{j}^{\prime\ast}c_{j}\widehat{S}\left(\gamma
a_{j}\right)\left|\phi\right\rangle,$
where
$\left|\psi_{f}\right\rangle={\displaystyle\sum\nolimits_{J}}c_{j}^{\prime}\left|a_{j}\right\rangle$,
$\left\langle\psi_{f}\right|\left.\psi_{i}\right\rangle\neq 0$, is the post-
selected state at $t_{0}$. Since
$\widehat{S}\left(\gamma
a_{j}\right)={\displaystyle\sum\limits_{m=0}^{\infty}}\frac{\left(-i\gamma
a_{j}\widehat{p}/\hbar\right)^{m}}{m!},$
then
$\left|\Psi\right\rangle={\displaystyle\sum\nolimits_{J}}c_{j}^{\prime\ast}c_{j}\left\\{1-\frac{i}{\hbar}\gamma
A_{w}\widehat{p}+{\displaystyle\sum\limits_{m=2}^{\infty}}\frac{\left(-i\gamma\widehat{p}/\hbar\right)^{m}}{m!}\left(A^{m}\right)_{w}\right\\}\left|\phi\right\rangle\approx\left\\{{\displaystyle\sum\nolimits_{J}}c_{j}^{\prime\ast}c_{j}\right\\}e^{-\frac{i}{\hbar}\gamma
A_{w}\widehat{p}}\left|\phi\right\rangle$
in which case
$\left|\Psi\right\rangle\approx\left\langle\psi_{f}\right|\left.\psi_{i}\right\rangle\widehat{S}\left(\gamma
A_{w}\right)\left|\phi\right\rangle.$
Here
$\left(A^{m}\right)_{w}=\frac{{\displaystyle\sum\nolimits_{J}}c_{j}^{\prime\ast}c_{j}a_{j}^{m}}{{\displaystyle\sum\nolimits_{J}}c_{j}^{\prime\ast}c_{j}}=\frac{\left\langle\psi_{f}\right|\widehat{A}^{m}\left|\psi_{i}\right\rangle}{\left\langle\psi_{f}\right|\left.\psi_{i}\right\rangle},$
with the weak value $A_{w}$ of $\widehat{A}$ defined by
$A_{w}\equiv\left(A^{1}\right)_{w}=\frac{\left\langle\psi_{f}\right|\widehat{A}\left|\psi_{i}\right\rangle}{\left\langle\psi_{f}\right|\left.\psi_{i}\right\rangle}.$
(1)
From this expression it is obvious that $A_{w}$ is - in general - a complex
valued quantity that can be calculated directly from theory and that when the
PPS states are nearly orthogonal $\operatorname{Re}A_{w}$ can lie far outside
$\widehat{A}$’s eigenvalue spectral limits.
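This behavior can be seen in a minimal numerical sketch of eq.(1). The spin-1/2 observable, state angles, and phase below are our own illustrative choices, not taken from the paper:

```python
import numpy as np

# Illustrative sketch of eq.(1): for nearly orthogonal pre- and
# post-selected states, the weak value is complex and Re(A_w) lands far
# outside the eigenvalue range [-1, +1] of the measured observable.
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

def weak_value(A, psi_i, psi_f):
    """A_w = <psi_f|A|psi_i> / <psi_f|psi_i>."""
    return (psi_f.conj() @ A @ psi_i) / (psi_f.conj() @ psi_i)

eps = 0.02  # small angle -> nearly orthogonal PPS states
psi_i = np.array([np.sin(eps / 2), np.cos(eps / 2)])
psi_f = np.array([np.cos(eps / 2), np.exp(0.3j) * np.sin(eps / 2)])

A_w = weak_value(sigma_x, psi_i, psi_f)
print(A_w)  # complex, with |Re A_w| of order 1/eps
```

Shrinking `eps` (or the relative phase) moves the PPS states closer to orthogonality and inflates both parts of the weak value.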
For the general case where both $A_{w}$ and $\phi\left(q\right)$ are complex
valued, the mean pointer position and momentum after a measurement are given
by Jo
$\left\langle\Psi\right|\widehat{q}\left|\Psi\right\rangle=\left\langle\phi\right|\widehat{q}\left|\phi\right\rangle+\gamma\operatorname{Re}A_{w}+\left(\frac{\gamma}{\hbar}\right)\operatorname{Im}A_{w}\left(m\frac{d\Delta_{\phi}^{2}q}{dt}\right)$
(2)
and
$\left\langle\Psi\right|\widehat{p}\left|\Psi\right\rangle=\left\langle\phi\right|\widehat{p}\left|\phi\right\rangle+2\left(\frac{\gamma}{\hbar}\right)\operatorname{Im}A_{w}\left(\Delta_{\phi}^{2}p\right),$
(3)
respectively. Here $m$ is the mass of the pointer, $\Delta_{\phi}^{2}p$ is the
pointer’s initial momentum variance, and the time derivative of
$\Delta_{\phi}^{2}q$ is the rate of change of the initial pointer position
variance just prior to $t_{0}$.
## III Pointer Variance
The mean value of an arbitrary pointer observable $M$ after a measurement of
$A_{w}$ is Jo
$\displaystyle\left\langle\Psi\right|\widehat{M}\left|\Psi\right\rangle$
$\displaystyle=\left\langle\phi\right|\widehat{M}\left|\phi\right\rangle-i\left(\frac{\gamma}{\hbar}\right)\operatorname{Re}A_{w}\left\langle\phi\right|\left[\widehat{M},\widehat{p}\right]\left|\phi\right\rangle+$
(4)
$\displaystyle\left(\frac{\gamma}{\hbar}\right)\operatorname{Im}A_{w}\left(\left\langle\phi\right|\left\\{\widehat{M},\widehat{p}\right\\}\left|\phi\right\rangle-2\left\langle\phi\right|\widehat{M}\left|\phi\right\rangle\left\langle\phi\right|\widehat{p}\left|\phi\right\rangle\right),$
where
$\left\\{\widehat{M},\widehat{p}\right\\}=\widehat{M}\widehat{p}+\widehat{p}\widehat{M}$.
Note that eq.(4) reduces to eq.(3) when $\widehat{M}=\widehat{p}$ and that it
is also in complete agreement with eq.(2) when $\widehat{M}=\widehat{q}$ since
$\left[\widehat{q},\widehat{p}\right]=i\hbar$ and the equations of motion for
$\left\langle\phi\right|\widehat{q}\left|\phi\right\rangle$ and
$\left\langle\phi\right|\widehat{q}^{2}\left|\phi\right\rangle$ yield
$\left\langle\phi\right|\left\\{\widehat{q},\widehat{p}\right\\}\left|\phi\right\rangle=m\frac{d\left\langle\phi\right|\widehat{q}^{2}\left|\phi\right\rangle}{dt}$
(5)
and
$2\left\langle\phi\right|\widehat{q}\left|\phi\right\rangle\left\langle\phi\right|\widehat{p}\left|\phi\right\rangle=m\frac{d\left\langle\phi\right|\widehat{q}\left|\phi\right\rangle^{2}}{dt}.$
(6)
Here the time derivatives are rates of change of the corresponding quantities
just prior to the interaction time $t_{0}$.
The pointer variance for $M$ is easily determined from eq.(4) by subtracting
its square from the expression obtained from eq.(4) when $\widehat{M}$ is
replaced by $\widehat{M}^{2}$. Retaining terms through first order in
$\left(\frac{\gamma}{\hbar}\right)$ yields the following result:
$\Delta_{\Psi}^{2}M=\Delta_{\phi}^{2}M-i\left(\frac{\gamma}{\hbar}\right)\operatorname{Re}A_{w}\mathcal{F}\left(\widehat{M}\right)+\left(\frac{\gamma}{\hbar}\right)\operatorname{Im}A_{w}\mathcal{G}\left(\widehat{M}\right).$
(7)
Here $\Delta_{\phi}^{2}M$ and $\Delta_{\Psi}^{2}M$ are the initial and final
variances, respectively,
$\mathcal{F}\left(\widehat{M}\right)\equiv\left\langle\phi\right|\left[\widehat{M}^{2},\widehat{p}\right]\left|\phi\right\rangle-2\left\langle\phi\right|\widehat{M}\left|\phi\right\rangle\left\langle\phi\right|\left[\widehat{M},\widehat{p}\right]\left|\phi\right\rangle,$
and
$\mathcal{G}\left(\widehat{M}\right)\equiv\left\langle\phi\right|\left\\{\widehat{M}^{2},\widehat{p}\right\\}\left|\phi\right\rangle-2\left\langle\phi\right|\widehat{M}\left|\phi\right\rangle\left\langle\phi\right|\left\\{\widehat{M},\widehat{p}\right\\}\left|\phi\right\rangle-2\left\langle\phi\right|\widehat{p}\left|\phi\right\rangle\left(\Delta_{\phi}^{2}M-\left\langle\phi\right|\widehat{M}\left|\phi\right\rangle^{2}\right).$
(8)
As anticipated from eq.(4), eq.(7) clearly shows that for such a measurement
the pointer variance associated with an arbitrary pointer observable is also
generally affected by both the real and imaginary parts of the weak value.
However, for the typical cases of interest where $\widehat{M}=\widehat{q}$ or
$\widehat{M}=\widehat{p}$, the pointer’s variance is independent of
$\operatorname{Re}A_{w}$ because
$\mathcal{F}\left(\widehat{q}\right)=0=\mathcal{F}\left(\widehat{p}\right).$
Here use has been made of the facts that
$\left[\widehat{q},\widehat{p}\right]=i\hbar$ and
$\left[\widehat{q}^{2},\widehat{p}\right]=2i\hbar\widehat{q}$. Consequently,
for these cases eq.(7) can be written as
$\Delta_{\Psi}^{2}M=\Delta_{\phi}^{2}M+\left(\frac{\gamma}{\hbar}\right)\operatorname{Im}A_{w}\mathcal{G}\left(\widehat{M}\right),\text{
}M=q,p.$ (9)
Now consider $\mathcal{G}\left(\widehat{M}\right)$ in more detail. When
$\widehat{M}=\widehat{q}$, then eq.(8) becomes
$\mathcal{G}\left(\widehat{q}\right)=\left\langle\phi\right|\left\\{\widehat{q}^{2},\widehat{p}\right\\}\left|\phi\right\rangle-2\left\langle\phi\right|\widehat{q}\left|\phi\right\rangle\left\langle\phi\right|\left\\{\widehat{q},\widehat{p}\right\\}\left|\phi\right\rangle-2\left\langle\phi\right|\widehat{p}\left|\phi\right\rangle(\Delta_{\phi}^{2}q-\left\langle\phi\right|\widehat{q}\left|\phi\right\rangle^{2}).$
(10)
From the equation of motion for
$\left\langle\phi\right|\widehat{q}^{3}\left|\phi\right\rangle$ it is found
that
$\frac{d\left\langle\phi\right|\widehat{q}^{3}\left|\phi\right\rangle}{dt}=-\frac{i}{\hbar}\left\langle\phi\right|\left[\widehat{q}^{3},\widehat{H}\right]\left|\phi\right\rangle=-\frac{i}{2m\hbar}\left\langle\phi\right|\left[\widehat{q}^{3},\widehat{p}^{2}\right]\left|\phi\right\rangle=\frac{3}{2m}\left\langle\phi\right|\left\\{\widehat{q}^{2},\widehat{p}\right\\}\left|\phi\right\rangle,$
where $\widehat{H}=\frac{\widehat{p}^{2}}{2m}+V(\widehat{q})$ is the pointer’s
Hamiltonian operator. Applying this result - along with eqs.(5) and (6) - to
eq.(10) yields
$\mathcal{G}\left(\widehat{q}\right)=\frac{2m}{3}\frac{dq_{3}}{dt},$
so that eq.(9) can be compactly written as
$\Delta_{\Psi}^{2}q=\Delta_{\phi}^{2}q+\frac{2\gamma
m}{3\hbar}\operatorname{Im}A_{w}\left(\frac{dq_{3}}{dt}\right).$
Here
$q_{3}\equiv\left\langle\phi\right|\left(\widehat{q}-\left\langle\phi\right|\widehat{q}\left|\phi\right\rangle\right)^{3}\left|\phi\right\rangle$
is the third central moment of $\widehat{q}$ relative to the initial pointer
state and its time derivative is the rate of change of $q_{3}$ just prior to
$t_{0}$.
When $\widehat{M}=\widehat{p}$, then eq.(8) becomes
$\mathcal{G}\left(\widehat{p}\right)=2\left[\left\langle\phi\right|\widehat{p}^{3}\left|\phi\right\rangle-3\left\langle\phi\right|\widehat{p}\left|\phi\right\rangle\left\langle\phi\right|\widehat{p}^{2}\left|\phi\right\rangle+2\left\langle\phi\right|\widehat{p}\left|\phi\right\rangle^{3}\right]=2p_{3},$
where
$p_{3}\equiv\left\langle\phi\right|\left(\widehat{p}-\left\langle\phi\right|\widehat{p}\left|\phi\right\rangle\right)^{3}\left|\phi\right\rangle$
is the third central moment of $\widehat{p}$ relative to the pointer’s initial
state, and eq.(9) assumes the form
$\Delta_{\Psi}^{2}p=\Delta_{\phi}^{2}p+2\left(\frac{\gamma}{\hbar}\right)\operatorname{Im}A_{w}\left(p_{3}\right).$
The quantities $q_{3}$ and $p_{3}$ provide measures of the skewness of the
pointer position and momentum probability distribution profiles. If the
pointer position profile’s skewness is fixed, then $\frac{dq_{3}}{dt}=0$ and
$\Delta_{\Psi}^{2}q=\Delta_{\phi}^{2}q$. Otherwise, $\Delta_{\Psi}^{2}q$ can
be manipulated through the judicious selection of the control term
$\operatorname{Im}A_{w}\left(\frac{dq_{3}}{dt}\right)$. In particular, observe
that $0<\Delta_{\Psi}^{2}q\leq\Delta_{\phi}^{2}q$ when this control term
satisfies the inequality
$-\left(\frac{3\hbar}{2\gamma
m}\right)\Delta_{\phi}^{2}q<\operatorname{Im}A_{w}\left(\frac{dq_{3}}{dt}\right)\leq
0.$ (11)
Similarly, the control term $\operatorname{Im}A_{w}\left(p_{3}\right)$ can be
used to manipulate $\Delta_{\Psi}^{2}p$ when it satisfies the inequality
$-\left(\frac{\hbar}{2\gamma}\right)\Delta_{\phi}^{2}p<\operatorname{Im}A_{w}\left(p_{3}\right)\leq
0.$ (12)
Thus, when measuring complex weak values the final pointer position (momentum)
variance can be made smaller than its initial value by choosing
$\operatorname{Im}A_{w}$ or $\frac{dq_{3}}{dt}$ ($p_{3}$) so that condition
(11) ((12)) is satisfied.
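As a purely numerical illustration of the momentum-variance result and condition (12) (all parameter values below are our own illustrative choices, not from the paper), one can compute $p_{3}$ for a right-skewed pointer momentum profile and verify that a suitably chosen $\operatorname{Im}A_{w}$ shrinks the variance:

```python
import numpy as np

# Numerical sketch: a right-skewed momentum profile |phi(p)|^2 has
# p3 > 0, so a negative Im(A_w) inside condition (12) makes the final
# variance smaller than the initial one, via
#   Delta^2_Psi p = Delta^2_phi p + 2 (gamma/hbar) Im(A_w) p3.
hbar, gamma, Im_Aw = 1.0, 0.1, -0.5

p = np.linspace(0.0, 25.0, 5001)
dp = p[1] - p[0]
prob = p**2 * np.exp(-p)     # Gamma(3,1)-shaped profile (skewed right)
prob /= prob.sum() * dp      # normalize on the grid

mean_p = (p * prob).sum() * dp
var_phi = ((p - mean_p)**2 * prob).sum() * dp   # initial variance (~3)
p3 = ((p - mean_p)**3 * prob).sum() * dp        # third central moment (~6)

var_psi = var_phi + 2.0 * (gamma / hbar) * Im_Aw * p3

# Condition (12): -(hbar/2gamma) var_phi < Im(A_w) p3 <= 0
in_window = -(hbar / (2.0 * gamma)) * var_phi < Im_Aw * p3 <= 0.0
print(var_phi, var_psi, in_window)
```

With these values the control term $\operatorname{Im}A_{w}\,p_{3}\approx-3$ sits well inside the window $(-15,0]$, and the variance drops from about 3 to about 2.4.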
## IV Closing Remarks
Because of the growing interest in the practical application of weak values,
estimating their measurement sensitivities has also become important from both
the experimental and device engineering perspectives. Applying the calculus of
error propagation to the above results defines the measurement sensitivities
$\delta_{q}\operatorname{Re}A_{w}$ and $\delta_{q}\operatorname{Im}A_{w}$ for
determining $\operatorname{Re}A_{w}$ and $\operatorname{Im}A_{w}$ from the
mean pointer position. These sensitivities are the positive square roots of
the following expressions:
$\delta_{q}^{2}\operatorname{Re}A_{w}\equiv\frac{\Delta_{\Psi}^{2}q}{\left|\frac{\partial\left\langle\Psi\right|\widehat{q}\left|\Psi\right\rangle}{\partial\operatorname{Re}A_{w}}\right|^{2}}=\frac{\Delta_{\phi}^{2}q}{\gamma^{2}}+\frac{2m}{3\gamma\hbar}\operatorname{Im}A_{w}\left(\frac{dq_{3}}{dt}\right)$
(13)
(this quantity is obviously undefined when $A_{w}$ is purely imaginary) and
$\delta_{q}^{2}\operatorname{Im}A_{w}\equiv\frac{\Delta_{\Psi}^{2}q}{\left|\frac{\partial\left\langle\Psi\right|\widehat{q}\left|\Psi\right\rangle}{\partial\operatorname{Im}A_{w}}\right|^{2}}=\left(\frac{\hbar}{\gamma
m}\right)^{2}\left(\frac{\Delta_{\phi}^{2}q}{\left|\frac{d\Delta_{\phi}^{2}q}{dt}\right|^{2}}\right)+\frac{2}{3}\left(\frac{\hbar}{\gamma
m}\right)\operatorname{Im}A_{w}\left(\frac{\frac{dq_{3}}{dt}}{\left|\frac{d\Delta_{\phi}^{2}q}{dt}\right|^{2}}\right),\frac{d\Delta_{\phi}^{2}q}{dt}\neq
0$ (14)
(this quantity is obviously undefined when $A_{w}$ is real valued or when
$\frac{d\Delta_{\phi}^{2}q}{dt}=0$ - in which case the mean position does not
depend upon $\operatorname{Im}A_{w}$). It is clear from eqs.(13) and (14)
that: (i) these measurement sensitivities depend upon the variance control
term $\operatorname{Im}A_{w}\left(\frac{dq_{3}}{dt}\right)$ and that this
dependence vanishes when $q_{3}$ is fixed (or if $A_{w}$ is real valued); (ii)
these measurement accuracies decrease (increase) as the measurement gets
weaker (stronger) - i.e. as $\gamma$ gets smaller (larger); (iii) in principle
- the accuracies associated with measuring both $\operatorname{Re}A_{w}$ and
$\operatorname{Im}A_{w}$ can be arbitrarily increased (for a fixed $\gamma>0$
and $m$) by invoking condition (11) and choosing
$\operatorname{Im}A_{w}\left(\frac{dq_{3}}{dt}\right)=-\left(\frac{3\hbar}{2\gamma
m}\right)\Delta_{\phi}^{2}q+\epsilon$, where $\epsilon$ is a small positive
real number; and (iv) surprisingly, the measurement accuracy for
$\operatorname{Re}A_{w}$ decreases with increasing pointer mass, whereas that
for $\operatorname{Im}A_{w}$ increases.
The sensitivity $\delta_{p}\operatorname{Im}A_{w}$ for determining
$\operatorname{Im}A_{w}$ from the mean pointer momentum is the positive square
root of
$\delta_{p}^{2}\operatorname{Im}A_{w}\equiv\frac{\Delta_{\Psi}^{2}p}{\left|\frac{\partial\left\langle\Psi\right|\widehat{p}\left|\Psi\right\rangle}{\partial\operatorname{Im}A_{w}}\right|^{2}}=\left(\frac{\hbar}{2\gamma}\right)^{2}\left(\frac{1}{\Delta_{\phi}^{2}p}\right)+\left(\frac{\hbar}{2\gamma}\right)\operatorname{Im}A_{w}\left(\frac{p_{3}}{\left(\Delta_{\phi}^{2}p\right)^{2}}\right),\Delta_{\phi}^{2}p\neq
0$ (15)
(this quantity is undefined when $A_{w}$ is real valued). Inspection of
eq.(15) reveals that for such measurements: (i) the sensitivity depends upon
the variance control term $\operatorname{Im}A_{w}\left(p_{3}\right)$; (ii) the
accuracy decreases (increases) as the measurement gets weaker (stronger) -
i.e. as $\gamma$ gets smaller (larger); and (iii) the accuracy can be
arbitrarily increased (for a fixed $\gamma>0$) via eq.(12) by choosing
$\operatorname{Im}A_{w}\left(p_{3}\right)=-\left(\frac{\hbar}{2\gamma}\right)\Delta_{\phi}^{2}p+\epsilon$,
where $\epsilon$ is again a small positive real number.
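Point (iii) can be made concrete with a small numerical sketch of eq.(15) (the parameter values are illustrative, not from the paper): pushing the control term $\operatorname{Im}A_{w}\,p_{3}$ toward the lower bound allowed by condition (12) drives the squared sensitivity toward zero.

```python
# Sketch of eq.(15) with illustrative parameters: the squared sensitivity
# delta_p^2 Im(A_w) written as a function of the control term
# c = Im(A_w) * p3.
hbar, gamma = 1.0, 0.1
var_p = 3.0                      # initial momentum variance Delta^2_phi p

def delta_p2_Im_Aw(c):
    """Eq.(15) in terms of the control term c = Im(A_w) p3."""
    k = hbar / (2.0 * gamma)
    return k**2 / var_p + k * c / var_p**2

eps = 1e-3
c_opt = -(hbar / (2.0 * gamma)) * var_p + eps   # just inside condition (12)
print(delta_p2_Im_Aw(0.0), delta_p2_Im_Aw(c_opt))
```

At $c=0$ the sensitivity reduces to the usual $(\hbar/2\gamma)^{2}/\Delta_{\phi}^{2}p$ term; at $c_{\text{opt}}$ it is smaller by several orders of magnitude while remaining positive.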
In closing, it is important to note that the results discussed and developed
above apply when the measurement interaction is instantaneous and the
measurement is read from the pointer immediately after the interaction DE .
## V Acknowledgement
This work was supported in part by a grant from the Naval Surface Warfare
Center Dahlgren Division’s In-house Laboratory Independent Research Program
sponsored by the Office of Naval Research.
## VI References
* (1) Aharonov Y, Albert D, Casher D and Vaidman L 1986 New Techniques and Ideas in Quantum Measurement Theory ed D Greenberger (New York: New York Academy of Sciences) p 417
* (2) Aharonov Y, Albert D and Vaidman L 1988 Phys. Rev. Lett. 60 1351
* (3) Aharonov Y and Vaidman L 1990 Phys. Rev. A 41 11
* (4) Ritchie N, Storey J and Hulet R 1991 Phys. Rev. Lett. 66 1107
* (5) Parks A, Cullin D and Stoudt D 1998 Proc. R. Soc. A 454 2997
* (6) Resch K, Lundeen J and Steinberg A 2004 Phys. Lett. A 324 125
* (7) Wang Q, Sun F, Zhang Y, Li J, Huang Y and Guo G 2006 Phys. Rev. A 73 023814
* (8) Hosten O and Kwiat P 2008 Science 319 787
* (9) Yokota K, Yamamoto T, Koashi M and Imoto N 2009 N. J. Phys. 11 033011
* (10) Dixon P, Starling D, Jordan A and Howell J 2009 Phys. Rev. Lett. 102 173601
* (11) Johansen L 2004 Phys. Rev. Lett. 93 120402
* (12) Aharonov Y and Botero A 2005 Phys. Rev. A 72 052111
* (13) Jozsa R 2007 Phys. Rev. A 76 044103
* (14) Di Lorenzo A and Egues J 2008 Phys. Rev. A 77 042108
* (15) Cho Y, Lim H, Ra Y and Kim Y 2010 N. J. Phys. 12 023036
* (16) Aharonov Y and Rohrlich D 2005 Quantum Paradoxes: Quantum Theory for the Perplexed (Wiley-VCH, Weinheim) p.225
|
arxiv-papers
| 2011-04-19T04:54:34 |
2024-09-04T02:49:18.330065
|
{
"license": "Public Domain",
"authors": "A. D. Parks and J. E. Gray",
"submitter": "John E Gray Mr",
"url": "https://arxiv.org/abs/1104.3639"
}
|
1104.3758
|
# Galaxy Zoo Morphology and Photometric Redshifts in the Sloan Digital Sky
Survey
M. J. Way (NASA Ames Research Center, Space Sciences Division, MS 245-6,
Moffett Field, CA 94035, USA; Department of Astronomy and Space Physics,
Uppsala, Sweden; NASA Goddard Institute for Space Studies, 2880 Broadway,
New York, NY 10029, USA)
###### Abstract
It has recently been demonstrated that one can accurately derive galaxy
morphology from particular primary and secondary isophotal shape estimates in
the Sloan Digital Sky Survey imaging catalog. This was accomplished by
applying Machine Learning techniques to the Galaxy Zoo morphology catalog.
Using the broad bandpass photometry of the Sloan Digital Sky Survey in
combination with precise knowledge of galaxy morphology should help in
estimating more accurate photometric redshifts for galaxies. Using the Galaxy
Zoo separation for spirals and ellipticals in combination with Sloan Digital
Sky Survey photometry we attempt to calculate photometric redshifts. In the
best case we find that the root mean square error for Luminous Red Galaxies
classified as ellipticals is as low as 0.0118. Given these promising results
we believe better photometric redshift estimates for all galaxies in the Sloan
Digital Sky Survey ($\sim$350 million) will be feasible if researchers can
also leverage their derived morphologies via Machine Learning. These initial
results look promising for those interested in Weak-Lensing, Baryonic
Acoustic Oscillation, and other measurements dependent upon accurate
photometric redshifts.
###### Subject headings:
galaxies: distances and redshifts — methods: statistical
## 1. Introduction
It is commonly believed that adding information about the morphology of
galaxies may help in the estimation of Photometric Redshifts (Photo-Zs) when
using training set methods. Most of this work in recent years has utilized The
Sloan Digital Sky Survey (SDSS, York et al., 2000). For example, as discussed
in Way et al. (2009, hereafter Paper II) many groups have attempted to use a
number of derived primary and secondary isophotal shape estimates in the Sloan
Digital Sky Survey imaging catalog to help in estimating Photo-Zs. Some
examples include; using the radius containing 50% and/or 90% of the Petrosian
(1976) flux in the SDSS r band (denoted as petroR50_r petroR90_r in the SDSS
catalog), concentration index (CI=petroR90_r/petroR50_r), surface brightness,
axial ratios and radial profile (e.g. Collister & Lahav, 2004; Ball et al.,
2004; Wadadekar, 2005; Kurtz et al., 2007; Wray & Gunn, 2008).
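The concentration index mentioned above is a simple morphology proxy computable directly from the two Petrosian radii. A minimal sketch (the radii below are made-up example values, not real catalog entries):

```python
# SDSS-style concentration index: CI = petroR90_r / petroR50_r.
# Early-type galaxies are more centrally concentrated (CI near 3)
# than disk galaxies (CI near 2.3).
def concentration_index(petroR90_r, petroR50_r):
    return petroR90_r / petroR50_r

print(concentration_index(9.0, 3.0))   # elliptical-like value
print(concentration_index(6.9, 3.0))   # spiral-like value
```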
More recently Singal et al. (2011) have attempted to use Galaxy Shape
parameters derived from Hubble Space Telescope/Advanced Camera for Surveys
imaging data using a principal components approach and then feeding this
information into their Neural Network code to predict Photo-Zs, but for
samples much deeper than the SDSS. Unfortunately they find marginal
improvement when using their morphology estimators.
Another promising approach focuses on the reddening and inclination of
galaxies. Yip et al. (2011) have attempted to quantify these effects on a
galaxy’s spectral energy distribution (SED). The idea is to use this
information to correct the over-estimation of Photo-Zs of disk galaxies.
On the other hand, attempts to morphologically classify large numbers of
galaxies have gained in accuracy over the past 15 years as better and larger
training samples from eye classification have become available. For
example, Lahav et al. (1995) was one of the first to use an Artificial Neural
Network trained on 830 galaxies classified by the eyes of six different
professional astronomers. In more recent years Ball et al. (2004) has
attempted to classify galaxies by morphological type using a Neural Network
approach based on a sample of 1399 galaxies (from the catalog of Nakamura et
al. (2011)). Cheng et al. (2011) has used a sample of 984 non-star forming
SDSS early-type galaxies to distinguish between E, S0 and Sa galaxies. In the
past year two new attempts at morphological classification using Machine
Learning techniques on a Galaxy Zoo (Lintott et al., 2008, 2011) training
sample have been published (Banerji et al., 2010; Huertas-Company et al.,
2011). The Banerji et al. (2010) results were impressive in that they claim to
obtain classification to better than 90% for three different morphological
classes (spiral, elliptical and point-sources/artifacts).
These works are in contrast to previous work like that of Bernardi et al.
(2003) who used a classification scheme based on SDSS spectra. However, this
classification certainly missed some early-type galaxies from their desired
sample due to the presence of star formation.
In this paper we will continue our use of Gaussian Process Regression to
calculate Photo-Zs, using a variety of inputs. This method has been discussed
extensively in two previous papers (Way & Srivastava, 2006; Way et al., 2009).
We utilize the SDSS Main Galaxy Sample (MGS, Strauss et al., 2002) and the
Luminous Red Galaxy Sample (LRG, Eisenstein et al., 2001) from the SDSS Data
Release Seven (DR7, Abazajian et al., 2009). We also utilize the Galaxy Zoo 1
survey results (GZ1, Lintott et al., 2011). The Galaxy Zoo
project (http://www.galaxyzoo.org; Lintott et al., 2008) contains a total of
900,000 SDSS galaxies with morphological classifications (Lintott et al.,
2011).
While this study does not focus exclusively on the LRG sample, it should be
noted that if it is possible to improve the Photo-Z estimates for these
objects as shown herein it could also improve the estimation of cosmological
parameters (e.g. Blake & Bridle, 2005; Padmanabhan et al., 2007; Percival et
al., 2010; Reid et al., 2010; Zunckel, Gott & Lunnan, 2011) using the SDSS as
well as upcoming surveys such as BOSS (the Baryon Oscillation Spectroscopic
Survey; Cuesta-Vazquez et al., 2011; Eisenstein et al., 2011), BigBOSS
(Schlegel et al., 2009), and possibly Euclid (Sorba & Sawicki, 2011), not to
mention LSST (the Large Synoptic Survey Telescope; Ivezic et al., 2008). It could
also contribute to more reliable Photo-Z errors, as required for weak-lensing
surveys (Bernstein & Huterer, 2010; Kitching, Heavens & Miller, 2011) and
Baryonic Acoustic Oscillation measurements, which are also dependent upon
accurate Photo-Z estimation of LRGs (Roig et al., 2008).
## 2. Data
All of the data used herein have been obtained via the SDSS casjobs
server (http://casjobs.sdss.org). In order to obtain results consistent with
Paper II for both the MGS and LRG samples we use the same photometric quality
flags (!BRIGHT and !BLENDED and !SATURATED) and redshift quality (zConf$>$0.95
and zWarning=0) but using the SDSS DR7 instead of earlier SDSS releases. These
data are cross-matched in casjobs with columns 14–16 in Table 2 of Lintott et
al. (2011) extracting the galaxies flagged as ‘spiral’, ‘elliptical’ or
‘uncertain’. The galaxies “flagged as ‘elliptical’ or ‘spiral’ require 80 per
cent of the vote in that category after the debiasing procedure has been
applied; all other galaxies are flagged ‘uncertain’” (Lintott et al., 2011).
Debiasing is the process of correcting for small biases in spin direction
and color; see Section 3.1 of Lintott et al. (2011) for more details.
Note that the GZ1 sample is based upon the MGS, but the MGS contains LRGs as
well. This is why we can analyze both of these samples. However, the actual
LRG survey goes fainter than the MGS and so we do not find LRG galaxies
fainter than the MGS limit of r${}_{petrosian}\lesssim$17.77. See Strauss et
al. (2002) and Eisenstein et al. (2001) for details on the MGS and LRG
samples.
Figure 1.— Redshift and r-band dereddened model magnitudes for the Main Galaxy
Sample (top two panels) and Luminous Red Galaxies (bottom two panels).
A number of points from both the LRG and MGS were eliminated because of either
bad values (e.g. -9999) or because they were considered outliers from the main
distribution of points. The former offenders included: petroR90_i (13 points
in the MGS sample, 1 point in the LRG), mE1_i (43 points, 5 points),
petroR90Err_i (7177 points, 1262 points), mRrCcErr_i (22 points, 12 points).
The reason for eliminating bad mE1_i points is that we use it for calculating
aE_i from Table 2 of Banerji et al. (2010). A small number of outliers were
also removed from the MGS sample, but totalled only 27 points. No such outlier
points were removed in the LRG sample. This leaves us with a total of 437,273
MGS and 68,996 LRG objects. Using the GZ1 classifications in the MGS there are
45,249 ellipticals, 119,369 spirals and 272,655 uncertain ($\sim$ 62%). For
the LRG sample there are 27,227 ellipticals and 13,495 spirals leaving 28,274
uncertain ($\sim$41%).
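The sentinel-value cuts described above can be sketched generically; the arrays and bad-value counts here are synthetic stand-ins for the catalog columns (e.g. petroR90_i, mE1_i), not the actual SDSS data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Synthetic catalog columns; -9999 marks bad photometry, as in the text.
petroR90_i = rng.uniform(1.0, 10.0, n)
mE1_i = rng.uniform(-1.0, 1.0, n)
petroR90_i[rng.choice(n, 13, replace=False)] = -9999.0
mE1_i[rng.choice(n, 43, replace=False)] = -9999.0

# Keep only rows where every column carries a valid value.
good = (petroR90_i != -9999.0) & (mE1_i != -9999.0)
print(f"kept {good.sum()} of {n} objects")
```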
## 3\. Discussion
Using the morphological classifications from the Galaxy Zoo project first data
release (Lintott et al., 2011), we attempt to calculate Photo-Zs for four
different samples and four combinations of primary and secondary isophotal
shape estimates from the SDSS as seen in Table 1. A larger variety of input
combinations were tried including those in Table 1 of Banerji et al. (2010).
However, we only report those found with the lowest root mean square error
(rmse) in Table 1 of this paper.
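The bootstrapped rmse confidence levels quoted in Table 1 (50%, 10% and 90%) can be sketched as follows; the spectroscopic and photometric redshifts here are synthetic, and the percentile convention is an assumption patterned on Paper II.

```python
import numpy as np

def bootstrap_rmse_levels(z_spec, z_phot, n_boot=1000, seed=0):
    """Return the 50%, 10% and 90% bootstrap percentiles of the rmse."""
    rng = np.random.default_rng(seed)
    n = len(z_spec)
    rmses = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)        # resample pairs with replacement
        d = z_spec[idx] - z_phot[idx]
        rmses[b] = np.sqrt(np.mean(d * d))
    return np.percentile(rmses, [50, 10, 90])

rng = np.random.default_rng(2)
z_spec = rng.uniform(0.0, 0.25, 5000)
z_phot = z_spec + rng.normal(0.0, 0.015, 5000)  # ~0.015 scatter, as in Table 1
p50, p10, p90 = bootstrap_rmse_levels(z_spec, z_phot)
print(f"rmse levels: {p50:.5f} {p10:.5f} {p90:.5f}")
```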
Table 1: Results. ${}^{a}$MGS=Main Galaxy Sample (Strauss et al., 2002); LRG=Luminous Red Galaxies (Eisenstein et al., 2001); SP=classified as spiral by Galaxy Zoo; ELL=classified as elliptical by Galaxy Zoo. ${}^{b}$u-g-r-i-z=5 SDSS dereddened magnitudes; P50=Petrosian 50% light radius in SDSS i band; CI=Concentration Index (P90/P50); Q=Stokes Q value in i band; U=Stokes U value in i band; B=inputs from Table 2 of Banerji et al. (2010): CI, mRrCc_i, aE_i, mCr4_i, texture_i. ${}^{c}$We quote the bootstrapped 50%, 10% and 90% confidence levels, as in Paper II, for the root mean square error (rmse).

Data${}^{a}$ | Inputs${}^{b}$ | $\sigma_{rmse}$${}^{c}$
---|---|---
MGS-ELL | ugriz+Q+U | 0.01561 0.01532 0.01620
- | ugriz+P50+CI | 0.01407 0.01400 0.01475
- | ugriz+P50+CI+Q+U | 0.01641 0.01560 0.01801
- | ugriz+B | 0.01679 0.01668 0.01683
MGS-SP | ugriz+Q+U | 0.01889 0.01864 0.01913
- | ugriz+P50+CI | 0.01938 0.01927 0.01947
- | ugriz+P50+CI+Q+U | 0.01751 0.01747 0.01777
- | ugriz+B | 0.02092 0.02089 0.02101
LRG-ELL | ugriz+Q+U | 0.01345 0.01291 0.01420
- | ugriz+P50+CI | 0.01334 0.01278 0.01426
- | ugriz+P50+CI+Q+U | 0.01584 0.01439 0.01693
- | ugriz+B | 0.01180 0.01175 0.01184
LRG-SP | ugriz+Q+U | 0.01520 0.01404 0.01910
- | ugriz+P50+CI | 0.01514 0.01474 0.01679
- | ugriz+P50+CI+Q+U | 0.01957 0.01870 0.02285
- | ugriz+B | 0.01737 0.01728 0.01765
Figure 2.— Plots of root mean square error for a given number of galaxies per
50% bootstrap level with representative errors (10% and 90%). Main Galaxy
Sample (top two panels: elliptical and spiral) and Luminous Red Galaxies
(bottom two panels: elliptical and spiral).

Figure 3.— Plots of spectroscopic redshift versus predicted photometric
redshift for the input with the lowest rmse for each of the four given data
sets shown in Table 1.
The results using the Banerji et al. (2010) suggested isophotal shape
estimates as well as others tested in Paper II are found in Figure 2 and Table
1. In Figure 3 we also show plots of the spectroscopic redshift versus the
predicted photometric redshift for the inputs that predict the lowest rmse for
each of the four data sets listed in Table 1. These results are more
impressive than one might initially guess. In Paper II we showed how adding
bandpasses in the ultraviolet from the Galaxy Evolution
Explorer555http://www.galex.caltech.edu (GALEX, Martin et al., 2005) appeared
to improve Photo-Z estimation, and the same was true when adding infrared
bandpasses from the Two Micron All Sky
Survey666http://www.ipac.caltech.edu/2mass (2MASS, Skrutskie et al., 2006).
However, those results were biased because neither GALEX nor 2MASS reaches
the same magnitude or redshift depth as the full SDSS MGS or LRG samples, and
it is easier to obtain low rmse Photo-Z estimates when fitting a smaller range
of lower redshifts. For the MGS it is clear from the top two panels in Figure 1
that the Galaxy Zoo objects span a similar range of redshifts and r-band
magnitudes. On the other hand, the situation for the Luminous Red Galaxies is
not as straightforward. In the bottom two panels of Figure 1, the large second
bump at redshift z$\sim$0.35 and r$\sim$18 seen in the full LRG sample is
absent. This is expected because the Galaxy Zoo catalog was drawn from the
MGS, whose selection criteria (Strauss et al., 2002) admit no galaxies fainter
than r${}_{petrosian}=$17.77 (see Petrosian (1976) for details on Petrosian
magnitudes).
Our lowest rmse values come from galaxies categorized as ellipticals in the
Luminous Red Galaxy Sample using the SDSS u-g-r-i-z bandpass filters and the
isophotal shape estimates from Table 2 of Banerji et al. (2010): CI, mRrCc_i,
aE_i, mCr4_i, texture_i. These yield an rmse of only 0.01180, which we believe
is the lowest calculated to date for such a large sample of galaxies measured
in the bandpasses of the SDSS while also retaining a fairly large range of
redshifts (0 $\lesssim z\lesssim$ 0.25) and dereddened magnitudes (12
$\lesssim r_{petrosian}\lesssim$ 17.77).
Taking a closer look at the kinds of inputs that improve the results by galaxy
type can be interesting. It is clear from Table 1 that the Stokes parameters
appear to work better for spiral than elliptical galaxies. The Stokes
parameters measure the axis ratio and position angle of galaxies as projected
on the sky. In detail they are flux-weighted second moments of a particular
isophote.
$M_{xx}\equiv\langle\frac{x^{2}}{r^{2}}\rangle,\ \ \
M_{yy}\equiv\langle\frac{y^{2}}{r^{2}}\rangle,\ \ \
M_{xy}\equiv\langle\frac{xy}{r^{2}}\rangle$ (1)
When the isophotes are self-similar ellipses one finds (Stoughton et al.,
2002):
$Q\equiv M_{xx}-M_{yy}=\frac{a-b}{a+b}\cos(2\phi),\ \ \ U\equiv
M_{xy}=\frac{a-b}{a+b}\sin(2\phi),$ (2)
where the semi-major and semi-minor axes are $a$ and $b$, and $\phi$ is the
position angle. Masters et al. (2010) demonstrate the efficacy of using SDSS
derived axis ratios in characterizing the inclinations of spiral galaxies.
This is
seen in Table 1 where they offer the second best set of inputs when
determining photometric redshift for spirals. Both Stokes Q & U parameters
also display a larger range of values in the spirals than in the ellipticals.
The standard deviations in Stokes Q & U for spirals are 0.1877 & 0.1500 while
for ellipticals they are 0.0596 & 0.0459. Hence they clearly offer more room
for possible improvement in the former than in the latter.
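A minimal numeric illustration of Eq. (2): the axis ratio $b/a$ and position angle set Q and U, and a flattened (inclined) system spans a wider range of |Q|, |U| than a nearly round one. The specific axis ratios and angle here are arbitrary examples.

```python
import numpy as np

def stokes_qu(a, b, phi):
    """Stokes Q and U for self-similar elliptical isophotes, Eq. (2):
    semi-major axis a, semi-minor axis b, position angle phi (radians)."""
    e = (a - b) / (a + b)
    return e * np.cos(2.0 * phi), e * np.sin(2.0 * phi)

# An inclined spiral-like shape (b/a = 0.3) versus a round elliptical-like
# shape (b/a = 0.8), both at position angle 30 degrees.
q_sp, u_sp = stokes_qu(1.0, 0.3, np.radians(30.0))
q_el, u_el = stokes_qu(1.0, 0.8, np.radians(30.0))
print(f"spiral-like:     Q={q_sp:.3f} U={u_sp:.3f}")
print(f"elliptical-like: Q={q_el:.3f} U={u_el:.3f}")
```

The modulus $\sqrt{Q^{2}+U^{2}}$ recovers $(a-b)/(a+b)$ regardless of $\phi$, which is why flatter systems show the larger spread in Q and U quoted above.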
One of the more surprising results is the difference in using the B inputs for
the MGS versus LRG ellipticals. In the latter case these inputs give the
lowest rmse results, while in the MGS elliptical case they give the worst.
This could be due to the fact that the surface brightness of the LRG galaxies
is more easily modeled by the B inputs than that of the MGS. The MGS ellipticals may
still have clumps of star formation that can make the surface brightness more
difficult to model than the more passive LRG ellipticals.
When comparing the MGS and LRG spirals one stark difference is clear when
utilizing the P50 (Petrosian 50% light radius in SDSS i band) and CI
(Concentration Index=P90/P50) inputs shown in Table 1. In the MGS spiral case
these additional inputs yield worse fits, whereas they are among the most
useful in the LRG spiral case. This may indicate that MGS spirals are more
diverse morphologically than LRG spirals. The P50 and CI inputs are incapable
of helping to model the MGS spiral diversity and simply add noise rather than
signal to the fits. Masters et al. (2010) points out that red spirals (read
LRG type) will “be dominated by inclined dust reddened spirals, and spirals
with large bulges.” Note that this does not mean that LRG bulge dominated
spirals are necessarily S0 galaxies (which would add to their diversity both
morphologically and spectroscopically). Lintott et al. (2008); Bamford et al.
(2009) have both shown that contamination of S0s into spirals is only about 3%
in the best case scenario. So again, perhaps P50 and CI can do a better job of
modeling LRG spirals because they are less diverse than MGS spirals.
There are several outstanding issues with using this approach for studies that
may utilize large samples of SDSS LRG derived Photo-Zs (e.g. Baryonic Acoustic
Oscillations). The first is that the GZ1 catalog has only been able to
classify $\sim$59% of the LRG galaxies as spiral or elliptical. This means
that the remaining $\sim$41% of our sample cannot benefit from morphology knowledge when
estimating Photo-Zs. Secondly, the LRGs used herein do not go to the same
depth (in redshift or magnitude) as the full LRG (r$\lesssim$19) catalog since
the GZ1 is based on the MGS (r$\lesssim$17.77). Note also that the GZ1
morphology estimates get worse as one reaches the fainter end of the sample
(Lintott et al., 2008). Thirdly, the Machine Learning derived morphologies of
Banerji et al. (2010) can only classify up to 90% as accurately as their ‘by
eye’ GZ1 training set. These constraints will have to be taken into account
for any studies that attempt to utilize morphology in Photo-Z calculations.
The Photo-Z code used to generate the results in this paper is available on
the NASA Ames Dashlink web site
https://dashlink.arc.nasa.gov/algorithm/stablegp and is described in Foster et
al. (2009). Thanks to Jim Gray, Ani Thakar, Maria SanSebastien, and Alex
Szalay for their help with the SDSS casjobs server and Jeffrey Scargle for
reading an early draft. Thanks go to the Galaxy Group in the Astronomy
Department at Uppsala University in Sweden for their generous hospitality
where part of this work was discussed and completed. We acknowledge funding
received from the NASA Applied Information Systems Research Program and from
the NASA Ames Research Center Director’s Discretionary Fund. This publication
has been made possible by the participation of more than 160,000 volunteers in
the GZ project. Their contributions are individually acknowledged at
http://www.galaxyzoo.org/Volunteers.aspx. Funding for the SDSS has been
provided by the Alfred P. Sloan Foundation, the Participating Institutions,
the National Aeronautics and Space Administration, the National Science
Foundation, the U.S. Department of Energy, the Japanese Monbukagakusho, and
the Max Planck Society. The SDSS Web site is http://www.sdss.org/. The SDSS is
managed by the Astrophysical Research Consortium for the Participating
Institutions. The Participating Institutions are The University of Chicago,
Fermilab, the Institute for Advanced Study, the Japan Participation Group, The
Johns Hopkins University, Los Alamos National Laboratory, the Max-Planck-
Institute for Astronomy, the Max-Planck-Institute for Astrophysics, New Mexico
State University, University of Pittsburgh, Princeton University, the United
States Naval Observatory, and the University of Washington. This research has
made use of NASA’s Astrophysics Data System Bibliographic Services. This
research has also utilized the viewpoints (Gazis, Levit, & Way, 2010) software
package.
## References
* Abazajian et al. (2009) Abazajian, K.N. et al. 2009, ApJS, 182, 543
* Ball et al. (2004) Ball, N.M., Loveday, J., Fukugita, M., Nakamura, O., Brinkmann, J. & Brunner, R.J. 2004, MNRAS, 348, 1038
* Bamford et al. (2009) Bamford S. P., Nichol, B., Baldry, I.K. et al., 2009, MNRAS, 393, 1324
* Banerji et al. (2010) Banerji, M. et al. 2010, MNRAS, 406, 342
* Bernardi et al. (2003) Bernardi, M., Sheth, R.K., Annis, J., et al. 2003, AJ, 125, 1882
* Bernstein & Huterer (2010) Bernstein, G. & Huterer, D. 2010, MNRAS, 401, 1399
* Blake & Bridle (2005) Blake, C. & Bridle S. 2005, MNRAS, 363, 1329
* Cheng et al. (2011) Cheng, J.Y., Faber, S.M., Simard, L., Graves, G.J., Lopez, E.D., Yan, R. & Cooper M.C. 2011, MNRAS, 412, 727
* Collister & Lahav (2004) Collister, A. A. & Lahav, O. 2004, PASP, 116, 345
* Cuesta-Vazquez et al. (2011) Cuesta-Vazquez et al. 2011, AAS, 21715005C
* Eisenstein et al. (2001) Eisenstein et al. 2001, AJ, 122, 2267
* Eisenstein et al. (2011) Eisenstein et al. 2011, arXiv:1101.1529
* Foster et al. (2009) Foster, L., Waagen, A., Aijaz, N. et al. 2009, Journal of Machine Learning Research, 10, 857
* Gazis, Levit, & Way (2010) Gazis, P.R., Levit, C. & Way, M.J. 2010, PASP, 122, 1518
* Huertas-Company et al. (2011) Huertas-Company, M., Aguerri, J. A. L., Bernardi, M., Mei, S., & Sánchez Almeida, J. 2011, A&A, 525, A157
* Ivezic et al. (2008) Ivezic, Z., Tyson, J.A., Allsman, et al. 2008, arXiv:0805.2366v1
* Kitching, Heavens & Miller (2011) Kitching, T.D., Heavens, A.F. & Miller, L. 2011, MNRAS, in press
* Kurtz et al. (2007) Kurtz, M.J., Geller, M.J, Fabricant, D.G., Wyatt, W.F. & Dell’Antonio 2007, AJ, 134, 1360
* Lahav et al. (1995) Lahav, O. et al. 1995, Science, 267, 859
* Lintott et al. (2008) Lintott, C., Schawinski, K., Slosar, A. et al. 2008, MNRAS, 389, 1179
* Lintott et al. (2011) Lintott, C., Schawinski, K., Bamford, S. et al. 2011, MNRAS, 410, 166
* Martin et al. (2005) Martin, D. C., et al. 2005, ApJ, 619, L1
* Masters et al. (2010) Masters, K.L., Nichol, R., Bamford, S. et al. 2010, MNRAS, 404, 792
* Nakamura et al. (2003) Nakamura, O., Fukugita, M., Yasuda, N., Loveday, J., Brinkmann, J., Schneider, D. P., Shimasaku, K. & SubbaRao, M. 2003, AJ, 125, 1682
* Padmanabhan et al. (2007) Padmanabhan, N. et al. 2007, MNRAS, 378, 852
* Percival et al. (2010) Percival, W.J. et al. 2010, MNRAS, 401, 2148
* Petrosian (1976) Petrosian, V. 1976, ApJ, 209, L1
* Reid et al. (2010) Reid, B.A. et al. 2010, MNRAS, 404, 60
* Roig et al. (2008) Roig, D., Verde, L., Miralda-Escude, J., Jimenez, R. & Pena-Garay, C. 2008, arXiv:0812.3414v2
* Singal et al. (2011) Singal, J., Shmakova, M., Gerke, B., Griffith, R.L. & Lotz, J. 2011, arXiv:1011.4011
* Schlegel et al. (2009) Schlegel, D.J. et al. 2009, arXiv:0904.0468
* Skrutskie et al. (2006) Skrutskie, M.F. et al. 2006, AJ, 131, 1163
* Sorba & Sawicki (2011) Sorba, R. & Sawicki, M. 2011, arXiv:1101.4635
* Stoughton et al. (2002) Stoughton et al. 2002, AJ, 123, 485
* Strauss et al. (2002) Strauss, M.A., et al. 2002, AJ, 124, 1810
* Wadadekar (2005) Wadadekar, Y. 2005, PASP, 117, 79
* Way & Srivastava (2006) Way, M.J. & Srivastava, A.N. 2006, ApJ, 647, 102
* Way et al. (2009) Way, M.J., Foster, L.V., Gazis, P.R. & Srivastava, A.N. 2009, ApJ, 706, 623
* Wray & Gunn (2008) Wray, J.J. & Gunn, J.E. 2008, ApJ, 678, 144
* Yip et al. (2011) Yip, C., Szalay, A.S., Carliles, S. & Budavari, T. 2011, arXiv:1011.5651
* York et al. (2000) York, D.G., et al. 2000, AJ, 120, 1579
* Zunckel, Gott & Lunnan (2011) Zunckel, C., Gott, J.R. & Lunnan, R. 2011, MNRAS, 412, 1401
# SIR epidemics in monogamous populations with recombination

Damián H. Zanette[inst1] E-mail: zanette@cab.cnea.gov.ar

inst1 Consejo Nacional de Investigaciones Científicas y Técnicas, Centro
Atómico Bariloche e Instituto Balseiro, 8400 Bariloche, Río Negro, Argentina.

(12 August 2010; 18 February 2011)

Volume 3, 030001 (2011). Editors: G. C. Barker; B. Blasius, ICBM, University of Oldenburg, Germany.

###### Abstract

We study the propagation of an SIR (susceptible–infectious–recovered) disease
over an agent population which, at any instant, is fully divided into couples
of agents. Couples are occasionally allowed to exchange their members. This
process of couple recombination can compensate the instantaneous disconnection
of the interaction pattern and thus allow for the propagation of the
infection. We study the incidence of the disease as a function of its
infectivity and of the recombination rate of couples, thus characterizing the
interplay between the epidemic dynamics and the evolution of the population’s
interaction pattern.
## 1 Introduction
Models of disease propagation are widely used to provide a stylized picture of
the basic mechanisms at work during epidemic outbreaks and infection spreading
[1]. Within interdisciplinary physics, they have the additional interest of
being closely related to the mathematical representation of such diverse
phenomena as fire propagation, signal transmission in neuronal axons, and
oscillatory chemical reactions [2]. Because this kind of model describes the
joint dynamics of large populations of interacting active elements or agents,
its most interesting outcome is the emergence of self-organization. The
appearance of endemic states, with a stable finite portion of the population
actively transmitting an infection, is a typical form of self-organization in
epidemiological models [3].
Self-organized collective behavior can occur, however, only under the sine qua
non condition that information about the individual states of agents is
exchanged among them. In turn, this requires that the interaction pattern
between agents not be disconnected. Fulfilment of this requirement is
usually taken for granted. However, it is not difficult to think of simple
scenarios where it is not guaranteed. In the specific context of epidemics,
for instance, a sexually transmitted infection never propagates in a
population where sexual partnership is confined within stable couples or small
groups [4].
In this paper, we consider an SIR (susceptible–infectious–recovered)
epidemiological model [3] in a monogamous population where, at any instant,
each agent has exactly one partner or neighbor [4, 5]. The population is thus
divided into couples, and is therefore highly disconnected. However, couples
can occasionally break up and their members can then be exchanged with those
of other broken couples. As was recently demonstrated for SIS models [6, 7],
this process of couple recombination can compensate to a certain extent the
instantaneous lack of connectivity of the population’s interaction pattern,
and possibly allow for the propagation of the otherwise confined disease. Our
main aim here is to characterize this interplay between recombination and
propagation for SIR epidemics.
In the next section, we review the SIR model and its mean field dynamics.
Analytical results are then provided for recombining monogamous populations in
the limits of zero and infinitely large recombination rate, while the case of
intermediate rates is studied numerically. Attention is focused on the disease
incidence –namely, the portion of the population that has been infectious
sometime during the epidemic process– and its dependence on the disease
infectivity and the recombination rates, as well as on the initial number of
infectious agents. Our results are inscribed in the broader context of
epidemics propagation on populations with evolving interaction patterns [4, 5,
8, 9, 10, 11].
## 2 SIR dynamics and mean field description
In the SIR model, a disease propagates over a population each of whose members
can be, at any given time, in one of three epidemiological states: susceptible
(S), infectious (I), or recovered (R). Susceptible agents become infectious by
contagion from infectious neighbors, with probability $\lambda$ per neighbor
per time unit. Infectious agents, in turn, become recovered spontaneously,
with probability $\gamma$ per time unit. The disease process S $\to$ I $\to$ R
ends there, since recovered agents cannot be infected again [3].
With a given initial fraction of S and I–agents, the disease first propagates
by contagion but later declines due to recovery. The population ends in an
absorbing state where the infection has disappeared, and each agent is either
recovered or still susceptible. In this respect, SIR epidemics differs from
the SIS and SIRS models, where, due to the cyclic nature of the disease, the
infection can asymptotically reach an endemic state, with a constant fraction
of infectious agents permanently present in the population.
Another distinctive equilibrium property of SIR epidemics is that the final
state depends on the initial condition. In other words, the SIR model
possesses infinitely many equilibria parameterized by the initial states.
In a mean field description, it is assumed that each agent is exposed to the
average epidemiological state of the whole population. Calling $x$ and $y$ the
respective fractions of S and I–agents, the mean field evolution of the
disease is governed by the equations
$\begin{array}[]{lll}\dot{x}&=&-k\lambda xy,\\\ \dot{y}&=&k\lambda
xy-y,\end{array}$ (1)
where $k$ is the average number of neighbors per agent. Since the population
is assumed to remain constant in size, the fraction of R–agents is $z=1-x-y$.
In the second equation of Eqs. (1), we have assigned the recovery frequency
the value $\gamma=1$, thus fixing the time unit equal to $\gamma^{-1}$, the
average duration of the infectious state. The contagion frequency $\lambda$ is
accordingly normalized: $\lambda/\gamma\to\lambda$. This choice for $\gamma$
will be maintained throughout the remainder of the paper.
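As a check on the mean field description, Eqs. (1) can be integrated numerically and the final state compared with the final-size relation of Eq. (2); a minimal sketch with illustrative parameters ($k=1$, $\lambda=2$, $y_{0}=0.2$):

```python
import numpy as np
from scipy.integrate import solve_ivp

k, lam = 1.0, 2.0        # gamma = 1, as fixed in the text
x0, y0 = 0.8, 0.2        # initial S and I fractions, no R-agents

def rhs(t, s):
    """Mean field SIR, Eqs. (1): xdot = -k*lam*x*y, ydot = k*lam*x*y - y."""
    x, y = s
    return [-k * lam * x * y, k * lam * x * y - y]

sol = solve_ivp(rhs, (0.0, 200.0), [x0, y0], rtol=1e-10, atol=1e-12)
x_star = sol.y[0, -1]

# Final-size relation, Eq. (2): x* = 1 - (k*lam)^{-1} * log[(1 - y0)/x*]
residual = x_star - (1.0 - np.log((1.0 - y0) / x_star) / (k * lam))
print(f"x* = {x_star:.5f}, relation residual = {residual:.2e}")
```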
Figure 1: SIR epidemics incidence (measured by the final fraction of recovered
agents $z^{*}$) as a function of the infectivity (measured by the product of
the mean number of neighbors times the infection probability per time unit per
infected neighbor, $k\lambda$), for different initial fractions of infectious
agents, $y_{0}$. Upper panel: For the mean field equations (1). Lower panel:
For a static (non-recombining) monogamous population, described by Eqs. (3)
with $r=0$.
The solution to Eqs. (1) implies that, from an initial condition without
R–agents, the final fraction of S–agents, $x^{*}$, is related to the initial
fraction of I–agents, $y_{0}$, as [1]
$x^{*}=1-(k\lambda)^{-1}\log[(1-y_{0})/x^{*}].$ (2)
Note that the final fraction of R–agents, $z^{*}=1-x^{*}$, gives the total
fraction of agents who have been infectious sometime during the epidemic
process. Thus, $z^{*}$ directly measures the total incidence of the disease.
The incidence $z^{*}$ as a function of the infectivity $k\lambda$, obtained
from Eq. (2) through the standard Newton–Raphson method for several values
$y_{0}$ of the initial fraction of I–agents, is shown in the upper panel of
Fig. 1. As expected, the disease incidence grows both with the infectivity and
with $y_{0}$. Note that, on the one hand, this growth is smooth for finite
positive $y_{0}$. On the other hand, for $y_{0}\to 0$ (but $y_{0}\neq 0$)
there is a transcritical bifurcation at $k\lambda=1$. For lower infectivities,
the disease is not able to propagate and, consequently, its incidence is
identically equal to zero. For larger infectivities, even when the initial
fraction of I–agents is vanishingly small, the disease propagates and the
incidence turns out to be positive. Finally, for $y_{0}=0$ no agents are
initially infectious, no infection spreads, and the incidence thus vanishes
all over parameter space.
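The Newton–Raphson solution of Eq. (2) mentioned above can be sketched as follows; the starting point $x=0.1$ is a convenient assumption for supercritical infectivities.

```python
import numpy as np

def incidence(k_lambda, y0, x=0.1, tol=1e-12, max_iter=100):
    """Solve Eq. (2) for x* by Newton-Raphson and return z* = 1 - x*."""
    for _ in range(max_iter):
        f = x - 1.0 + np.log((1.0 - y0) / x) / k_lambda
        fp = 1.0 - 1.0 / (k_lambda * x)
        step = f / fp
        x -= step
        if abs(step) < tol:
            break
    return 1.0 - x

print(f"z*(k*lam=2, y0=0.2) = {incidence(2.0, 0.2):.4f}")
```

For $k\lambda=2$ and $y_{0}=0.2$ this gives $z^{*}\approx 0.855$, consistent with the upper panel of Fig. 1.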
## 3 Monogamous populations with couple recombination
Suppose now that, at any given time, each agent in the population has exactly
one neighbor or, in other words, that the whole population is always
divided into couples. In reference to sexually transmitted diseases, this
pattern of contacts between agents defines a monogamous population [5]. If
each couple is everlasting, so that neighbors do not change with time, the
disease incidence should be heavily limited by the impossibility of
propagating too far from the initially infectious agents. At most, some of the
initially susceptible agents with infectious neighbors will become themselves
infectious, but spontaneous recovery will soon prevail and the disease will
disappear.
If, on the other hand, the population remains monogamous but neighbors are
occasionally allowed to change, any I–agent may transmit the disease several
times before recovering. If such changes are frequent enough, the disease
could perhaps reach an incidence similar to that predicted by the mean field
description, Eq. (1) (for $k=1$, i.e. with an average of one neighbor per
agent).
We model neighbor changes by a process of couple recombination where, at each
event, two couples $(i,j)$ and $(m,n)$ are chosen at random and their partners
are exchanged [6, 7]. The two possible outcomes of recombination, either
$(i,m)$ and $(j,n)$ or $(i,n)$ and $(j,m)$, occur with equal probability. To
quantify recombination, we define $r$ as the probability per unit time that
any given couple becomes involved in such an event.
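The recombination event just described can be sketched as an operation on a list of couples; the agent labels are arbitrary integers, and the 50/50 choice between the two pairings follows the text.

```python
import random

def recombine(couples, rng):
    """One recombination event: pick two couples at random and exchange
    partners; both possible outcomes occur with equal probability."""
    a, b = rng.sample(range(len(couples)), 2)
    (i, j), (m, n) = couples[a], couples[b]
    if rng.random() < 0.5:
        couples[a], couples[b] = (i, m), (j, n)
    else:
        couples[a], couples[b] = (i, n), (j, m)

rng = random.Random(0)
couples = [(2 * c, 2 * c + 1) for c in range(4)]   # 8 agents in 4 couples
recombine(couples, rng)
print(couples)
```

Every agent still belongs to exactly one couple after the event, so the population stays monogamous.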
A suitable description of SIR epidemics in monogamous populations with
recombination is achieved in terms of the fractions of couples of different
kinds, $m_{\rm SS}$, $m_{\rm SI}$, $m_{\rm II}$, $m_{\rm IR}$, $m_{\rm RR}$,
and $m_{\rm SR}=1-m_{\rm SI}-m_{\rm II}-m_{\rm IR}-m_{\rm RR}$. Evolution
equations for these fractions are obtained by considering the possible
transitions between kinds of couples due to recombination and epidemic events
[7]. For instance, partner exchange between two couples (S,S) and (I,R) which
gives rise to (S,I) and (S,R), contributes positive terms to the time
derivative of $m_{\rm SI}$ and $m_{\rm SR}$, and negative terms to those of
$m_{\rm SS}$ and $m_{\rm IR}$, all of them proportional to the product $m_{\rm
SS}m_{\rm IR}$. Meanwhile, for example, contagion can transform an
(S,I)–couple into an (I,I)–couple, with negative and positive contributions to
the variations of the respective fractions, both proportional to $m_{\rm SI}$.
The equations resulting from these arguments read
$\begin{array}[]{lll}\dot{m}_{\rm SS}&=&rA_{\rm SIR},\\\ \dot{m}_{\rm
SI}&=&rB_{\rm SIR}-(1+\lambda)m_{\rm SI},\\\ \dot{m}_{\rm II}&=&rA_{\rm
IRS}+\lambda m_{\rm SI}-2m_{\rm II},\\\ \dot{m}_{\rm IR}&=&rB_{\rm
IRS}+2m_{\rm II}-m_{\rm IR},\\\ \dot{m}_{\rm RR}&=&rA_{\rm RSI}+m_{\rm IR},\\\
\dot{m}_{\rm SR}&=&rB_{\rm RSI}+m_{\rm SI}.\end{array}$ (3)
For brevity, we have here denoted the contribution of recombination by means
of the symbols
$A_{ijh}\equiv(m_{ij}+m_{ih})^{2}/4-m_{ii}(m_{jj}+m_{jh}+m_{hh}),$ (4)
and
$\displaystyle B_{ijh}\equiv(2m_{ii}+m_{ih})(2m_{jj}+m_{jh})/2$
$\displaystyle\ \ \ \ \ -m_{ij}(m_{ij}+m_{ih}+m_{jh}+2m_{hh})/2,$ (5)
with $i$, $j$, $h$ $\in\\{{\rm S,I,R}\\}$. The remaining terms stand for the
epidemic events. In terms of the couple fractions, the fractions of S, I and
R–agents are expressed as
$\begin{array}[]{lll}x&=&m_{\rm SS}+(m_{\rm SI}+m_{\rm SR})/2,\\\ y&=&m_{\rm
II}+(m_{\rm SI}+m_{\rm IR})/2,\\\ z&=&m_{\rm RR}+(m_{\rm SR}+m_{\rm IR})/2.\\\
\end{array}$ (6)
Assuming that the agents with different epidemiological states are initially
distributed at random over the pattern of couples, the initial fraction of
each kind of couple is $m_{\rm SS}(0)=x_{0}^{2}$, $m_{\rm SI}(0)=2x_{0}y_{0}$,
$m_{\rm II}(0)=y_{0}^{2}$, $m_{\rm IR}(0)=2y_{0}z_{0}$, $m_{\rm
RR}(0)=z_{0}^{2}$, and $m_{\rm SR}(0)=2x_{0}z_{0}$, where $x_{0}$, $y_{0}$ and
$z_{0}$ are the initial fractions of each kind of agent.
It is important to realize that the mean field–like Eqs. (3) to (6) are exact
for infinitely large populations. In fact, first, pairs of couples are
selected at random for recombination. Second, any epidemic event that changes
the state of an agent modifies the kind of the corresponding couple, but does
not affect any other couple. Therefore, no correlations are created by either
process.
In the limit without recombination, $r=0$, the pattern of couples is static.
Equations (3) become linear and can be analytically solved. For asymptotically
long times, the solution provides –from the third of Eqs. (6)– the disease
incidence as a function of the initial condition. If no R–agents are present
in the initial state, the incidence is
$z^{*}=(1+\lambda)^{-1}[1+\lambda(2-y_{0})]y_{0}.$ (7)
This is plotted in the lower panel of Fig. 1 as a function of the infectivity
$k\lambda\equiv\lambda$, for various values of the initial fraction of
I–agents, $y_{0}$. When recombination is suppressed, as expected, the
incidence is limited even for large infectivities, since disease propagation
can only occur to susceptible agents initially connected to infectious
neighbors. Comparison with the upper panel makes apparent substantial
quantitative differences with the mean field description, especially for small
initial fractions of I–agents.
Another situation that can be treated analytically is the limit of infinitely
frequent recombination, $r\to\infty$. In this limit, over a sufficiently short
time interval, the epidemiological state of all agents is virtually “frozen”
while the pattern of couples tests all possible combinations of agent pairs.
Consequently, at each moment, the fraction of couples of each kind is
completely determined by the instantaneous fraction of each kind of agent,
namely,
$\begin{array}[]{ll}&m_{\rm SS}=x^{2},\ \ m_{\rm SI}=2xy,\ \ m_{\rm
II}=y^{2},\\\ &m_{\rm IR}=2yz,\ \ m_{\rm RR}=z^{2},\ \ m_{\rm
SR}=2xz.\end{array}$ (8)
These relations are, of course, the same as quoted above for uncorrelated
initial conditions.
Replacing Eqs. (8) into (3) we verify, first, that the operators $A_{ijh}$ and
$B_{ijh}$ vanish identically. The remainder of the equations, corresponding to
the contribution of epidemic events, becomes equivalent to the mean field
equations (1). Therefore, if the distributions of couples and epidemiological
states are initially uncorrelated, the evolution of the fraction of couples of
each kind is exactly determined by the mean field description for the fraction
of each kind of agent, through the relations given in Eqs. (8).
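This reduction can be verified symbolically. The sketch below substitutes Eqs. (8) into the recombination operators, writing $B_{ijh}$ in the form carrying a factor 2 on $m_{hh}$, which (given the normalization of the couple fractions) equals $2n_{i}n_{j}-m_{ij}$ and makes the vanishing explicit.

```python
import sympy as sp

x, y = sp.symbols('x y')
z = 1 - x - y

# Uncorrelated couple fractions, Eq. (8)
m = {('S', 'S'): x**2, ('I', 'I'): y**2, ('R', 'R'): z**2,
     ('S', 'I'): 2*x*y, ('I', 'R'): 2*y*z, ('S', 'R'): 2*x*z}

def M(a, b):
    return m[(a, b)] if (a, b) in m else m[(b, a)]

def A(i, j, h):                      # Eq. (4)
    return (M(i, j) + M(i, h))**2 / 4 - M(i, i) * (M(j, j) + M(j, h) + M(h, h))

def B(i, j, h):                      # Eq. (5), with the factor 2 on m_hh
    return ((2*M(i, i) + M(i, h)) * (2*M(j, j) + M(j, h)) / 2
            - M(i, j) * (M(i, j) + M(i, h) + M(j, h) + 2*M(h, h)) / 2)

for i, j, h in [('S', 'I', 'R'), ('I', 'R', 'S'), ('R', 'S', 'I')]:
    assert sp.expand(A(i, j, h)) == 0
    assert sp.expand(B(i, j, h)) == 0
print("A_ijh and B_ijh vanish under Eq. (8)")
```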
For intermediate values of the recombination rate, $0<r<\infty$, we expect to
obtain incidence levels that interpolate between the results presented in the
two panels of Fig. 1. However, these cannot be obtained analytically. We thus
resort to the numerical solution of Eqs. (3).
Figure 2: SIR epidemics incidence as a function of the infectivity for three
initial fractions of infectious agents, $y_{0}$, and several recombination
rates, $r$. Mean field (m. f.) results are also shown. The inset in the upper
panel displays the boundary between the phases of no incidence and positive
incidence for $y_{0}\to 0$, in the parameter plane of infectivity vs.
recombination rate.
## 4 Numerical results for recombining couples
We solve Eqs. (3) by means of a standard fourth-order Runge-Kutta algorithm.
The initial conditions are as in the preceding section, representing no
R–agents and a fraction $y_{0}$ of I–agents. The disease incidence $z^{*}$ is
estimated from the third equation of Eqs. (6), using the long-time numerical
solutions for $m_{\rm RR}$, $m_{\rm SR}$, and $m_{\rm IR}$. In the range of
parameters considered here, numerical integration up to time $t=1000$ was
enough to get a satisfactory approach to asymptotic values.
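Since Eqs. (3) are not reproduced in this excerpt, the integration scheme can be illustrated on the mean field limit instead; the sketch below assumes the standard mean field SIR form $\dot{x}=-\lambda xy$, $\dot{y}=\lambda xy-y$, $\dot{z}=y$ (unit recovery rate), while the integrator and the choice $t=1000$ follow the text:

```python
import numpy as np

# Sketch: classical fourth-order Runge-Kutta integration, as described in
# the text, applied to the assumed mean-field SIR equations
# dx/dt = -lam*x*y, dy/dt = lam*x*y - y, dz/dt = y (unit recovery rate).

def rk4_step(f, u, dt):
    """One step of the standard fourth-order Runge-Kutta algorithm."""
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def sir_incidence(lam, y0, t_max=1000.0, dt=0.05):
    """Integrate up to t_max (as in the text) and return the incidence z*."""
    f = lambda u: np.array([-lam * u[0] * u[1],
                            lam * u[0] * u[1] - u[1],
                            u[1]])
    u = np.array([1.0 - y0, y0, 0.0])
    for _ in range(int(t_max / dt)):
        u = rk4_step(f, u, dt)
    return u[2]
```

Below the mean field threshold $\lambda=1$ the incidence remains of the order of $y_{0}$; above it, a finite fraction of the population is eventually infected.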
Figure 2 shows the incidence as a function of infectivity for three values of
the initial fraction of I–agents, $y_{0}\to 0$, $y_{0}=0.2$ and $0.6$, and
several values of the recombination rate $r$. Numerically, the limit $y_{0}\to
0$ has been represented by taking $y_{0}=10^{-9}$. Within the plot resolution,
smaller values of $y_{0}$ give identical results. Mean field (m. f.) results
are also shown. As expected from the analytical results presented in the
preceding section, positive values of $r$ give rise to incidences between
those obtained for a static couple pattern ($r=0$) and for the mean field
description. Note that a substantial departure from the limit of static couples
is obtained only for relatively large recombination rates, $r>1$, when at least one
recombination per couple occurs in the typical time of recovery from the
infection.
Among these results, the most interesting situation is that of a vanishingly
small initial fraction of I–agents, $y_{0}\to 0$. Figure 3 shows, in this
case, the epidemics incidence as a function of the recombination rate for
several fixed infectivities. We recall that, for $y_{0}\to 0$, the mean field
description predicts a transcritical bifurcation between zero and positive
incidence at a critical infectivity $\lambda=1$, while in the absence of
recombination the incidence is identically zero for all infectivities. Our
numerical calculations show that, for sufficiently large values of $r$, the
transition is still present, but the critical point depends on the
recombination rate. As $r$ grows to infinity, the critical infectivity
decreases, approaching unity.
Figure 3: SIR epidemics incidence as a function of the recombination rate $r$
for a vanishingly small fraction of infectious agents, $y_{0}\to 0$, and
several infectivities $\lambda$.
Straightforward linearization analysis of Eqs. (3) shows that the state of
zero incidence becomes unstable above the critical infectivity
$\lambda_{c}=\frac{r+1}{r-1}.$ (9)
This value is in excellent agreement with the numerical determination of the
transition point. Note also that Eq. (9) predicts a divergent critical
infectivity for a recombination rate $r=1$. This implies that, for $0\leq
r\leq 1$, the transition is absent and the disease has no incidence
irrespective of the infectivity level. Thus, for $y_{0}\to 0$, the
recombination rate must exceed the critical value $r_{c}=1$ for positive
incidence to appear at sufficiently large infectivity. The critical line between zero
and positive incidence in the parameter plane of infectivity vs. recombination
rate, given by Eq. (9), is plotted in the inset of the upper panel of Fig. 2.
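For reference, Eq. (9) and its two limits can be tabulated directly (a trivial transcription of the formula in the text, with the convention that the critical infectivity is infinite for $r\leq 1$):

```python
# Critical infectivity of Eq. (9): lambda_c = (r + 1)/(r - 1).

def critical_infectivity(r):
    if r <= 1.0:
        return float("inf")   # no transition for 0 <= r <= 1
    return (r + 1.0) / (r - 1.0)

# lambda_c diverges as r -> 1+ and approaches the mean-field value 1
# as r -> infinity.
vals = [critical_infectivity(r) for r in (1.0, 1.5, 3.0, 1e6)]
```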
## 5 Conclusions
We have studied the dynamics of SIR epidemics in a population where, at any
time, each agent forms a couple with exactly one neighbor, but neighbors are
randomly exchanged at a fixed rate. As had already been shown for the SIS
epidemiological model [6, 7], this recombination of couples can, to some
degree, compensate the high disconnection of the instantaneous interaction
pattern, and thus allow for the propagation of the disease over a finite
portion of the population. The interest of a separate study of SIR epidemics
lies in its peculiar dynamical features: in contrast with SIS epidemics,
it admits infinitely many absorbing equilibrium states. As a consequence, the
disease incidence depends not only on the infectivity and the recombination
rate, but also on the initial fraction of infectious agents in the population.
Due to the random nature of recombination, mean field–like arguments provide
exact equations for the evolution of couples formed by agents in every
possible epidemiological state. These equations can be analytically studied in
the limits of zero and infinitely large recombination rates. The latter case,
in particular, coincides with the standard mean field description of SIR
epidemics.
Numerical solutions for intermediate recombination rates smoothly interpolate
between the two limits, except when the initial fraction of infectious agents
is vanishingly small. For this special situation, if the recombination rate is
below one recombination event per couple per time unit (which equals the mean
recovery time), the disease does not propagate and its incidence is thus equal
to zero. Above that critical value, a transition appears as the disease
infectivity changes: for small infectivities the incidence is still zero,
while it becomes positive for large infectivities. The critical transition
point shifts to lower infectivities as the recombination rate grows.
It is worth mentioning that a similar transition between a state with no
disease and an endemic state with a permanent infection level occurs in SIS
epidemics with a vanishingly small fraction of infectious agents [6, 7]. For
this latter model, however, the transition is present for any positive
recombination rate. For SIR epidemics, on the other hand, the recombination
rate must overcome a critical value for the disease to spread, even at very
large infectivities.
While both the (monogamous) structure and the (recombination) dynamics of the
interaction pattern considered here are too artificial to play a role in the
description of real systems, they correspond to significant limits of more
realistic situations. First, the monogamous population represents the highest
possible lack of connectivity in the interaction pattern (if isolated agents
are excluded). Second, random couple recombination preserves the instantaneous
structure of interactions and does not introduce correlations between the
individual epidemiological state of agents. As was already demonstrated for
SIS epidemics and chaotic synchronization [7], these assumptions have the
additional advantage of being analytically tractable to a large extent. Therefore, this
kind of assumption promises to become a useful tool in the study of dynamical
processes on evolving networks.
###### Acknowledgements.
Financial support from SECTyP–UNCuyo and ANPCyT, Argentina, is gratefully
acknowledged.
## References
* [1] R M Anderson, R M May, Infectious Diseases in Humans, Oxford University Press, Oxford (1991).
* [2] A S Mikhailov, Foundations of Synergetics I. Distributed active systems, Springer, Berlin (1990).
* [3] J D Murray, Mathematical Biology, Springer, Berlin (2003).
* [4] K T D Eames, M J Keeling, Modeling dynamic and network heterogeneities in the spread of sexually transmitted diseases, Proc. Nat. Acad. Sci. 99, 13330 (2002).
* [5] K T D Eames, M J Keeling, Monogamous networks and the spread of sexually transmitted diseases, Math. Biosc. 189, 115 (2004).
* [6] S Bouzat, D H Zanette, Sexually transmitted infections and the marriage problem, Eur. Phys. J. B 70, 557 (2009).
* [7] F Vazquez, D H Zanette, Epidemics and chaotic synchronization in recombining monogamous populations, Physica D 239, 1922 (2010).
* [8] T Gross, C J Dommar D’Lima, B Blasius, Epidemic dynamics in an adaptive network, Phys. Rev. Lett. 96, 208 (2006).
* [9] T Gross, B Blasius, Adaptive coevolutionary networks: a review, J. R. Soc. Interface 5, 259 (2008).
* [10] D H Zanette, S Risau–Gusman, Infection spreading in a population with evolving contacts, J. Biol. Phys. 34, 135 (2008).
* [11] S Risau–Gusman, D H Zanette, Contact switching as a control strategy for epidemic outbreaks, J. Theor. Biol. 257, 52 (2009).
|
arxiv-papers
| 2011-04-20T19:07:57 |
2024-09-04T02:49:18.349545
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Dami\\'an H. Zanette",
"submitter": "Luis Ariel Pugnaloni",
"url": "https://arxiv.org/abs/1104.4102"
}
|
1104.4257
|
# Possible Deuteron-like Molecular States Composed of Heavy Baryons
Ning Lee leening@pku.edu.cn Zhi-Gang Luo cglow@pku.edu.cn Xiao-Lin Chen
chenxl@pku.edu.cn Shi-Lin Zhu zhusl@pku.edu.cn Department of Physics and
State Key Laboratory of Nuclear Physics and Technology
Peking University, Beijing 100871, China
###### Abstract
We perform a systematic study of the possible loosely bound states composed of
two charmed baryons or a charmed baryon and an anti-charmed baryon within the
framework of the one boson exchange (OBE) model. We consider not only the
$\pi$ exchange but also the $\eta$, $\rho$, $\omega$, $\phi$ and $\sigma$
exchanges. The $S-D$ mixing effects for the spin-triplets are also taken into
account. With the derived effective potentials, we calculate the binding
energies and root-mean-square (RMS) radii for the systems
$\Lambda_{c}\Lambda_{c}(\bar{\Lambda}_{c})$, $\Xi_{c}\Xi_{c}(\bar{\Xi}_{c})$,
$\Sigma_{c}\Sigma_{c}(\bar{\Sigma}_{c})$,
$\Xi_{c}^{\prime}\Xi_{c}^{\prime}(\bar{\Xi}_{c}^{\prime})$ and
$\Omega_{c}\Omega_{c}(\bar{\Omega}_{c})$. Our numerical results indicate that:
(1) the H-dibaryon-like state $\Lambda_{c}\Lambda_{c}$ does not exist; (2)
there may exist four loosely bound deuteron-like states $\Xi_{c}\Xi_{c}$ and
$\Xi_{c}^{\prime}\Xi_{c}^{\prime}$ with small binding energies and large RMS
radii.
###### pacs:
12.39.Pn, 14.20.-c, 12.40.Yx
## I Introduction
Many so-called “XYZ” charmonium-like states such as $X(3872)$, $X(4350)$ and
$Y(3940)$ have been observed by the Belle, CDF, D0 and BaBar collaborations Belle
; BaBar ; CDF ; D0 during the past few years. Despite the similar production
mechanism, some of these structures do not easily fit into the conventional
charmonium spectrum, which implies other interpretations such as hybrid
mesons, heavy meson molecular states etc. might be responsible for these new
states Brambilla:2010cs Swanson2006 .
A natural idea is that some of the “XYZ” states near two heavy meson threshold
may be bound states of a pair of heavy meson and anti-heavy meson. Actually,
Rujula et al. applied this idea to explain $\psi(4040)$ as a P-wave
$D^{*}\bar{D}^{*}$ bound resonance in the 1970s Rujula77 . Tornqvist performed
an intensive study of the possible deuteron-like two-charm-meson bound states
with the one-pion-exchange (OPE) potential model in Ref. Torq . Recently,
motivated by the controversy over the nature of $X(3872)$ and $Z(4430)$, some
authors proposed $X(3872)$ might be a $D\bar{D}^{*}$ bound state Swan04 ;
Wong04 ; Close2004 ; Voloshin2004 ; Thomas2008 . Our group has studied the
possible molecular structures composed of a pair of heavy mesons in the
framework of the One-Boson-Exchange (OBE) model systematically LiuXLiuYR ;
ZhugrpDD . There are also many interesting investigations of other hadron
clusters Ding ; LiuX ; Liu2009 ; Ping:2000dx ; Liu:2011xc ; qiao .
The boson exchange models are very successful in describing the nuclear force Mach87
; Mach01 ; Rijken . In particular, the deuteron is a loosely bound state of a proton
and a neutron, which may be regarded as a hadronic molecular state. One may
wonder whether a pair of heavy baryons can form a deuteron-like bound state
through the light meson exchange mechanism. Moreover, the large
masses of the heavy baryons reduce the kinetic energy of the systems, which makes it
easier to form bound states. Such a system is approximately non-relativistic.
Therefore, it is very interesting to study whether the OBE interactions are
strong enough to bind the two heavy baryons (dibaryon) or a heavy baryon and
an anti-baryon (baryonium).
A heavy charmed baryon contains a charm quark and two light quarks. The two
light quarks form a diquark. Heavy charmed baryons can be categorized by the
flavor wave function of the diquark, which forms a symmetric $6$ or an
antisymmetric $\bar{3}$ representation. For the ground heavy baryon, the spin
of the diquark is either $0$ or $1$, and the spin of the baryon is either
$1/2$ or $3/2$. The product of the flavor and spin wave functions of the diquark
in a ground-state charmed baryon must be symmetric, so the two are correlated:
the spin of the sextet diquark is $1$ while the spin of the anti-triplet
diquark is $0$.
The ground charmed baryons are grouped into one antitriplet with spin-1/2 and
two sextets with spin-1/2 and spin-3/2 respectively. These multiplets are
usually denoted as $B_{\bar{3}}$, $B_{6}$ and $B_{6}^{*}$ in the literature Yan .
In the present work, we study the charmed dibaryon and baryonium systems, i.e.
$\Lambda_{c}\Lambda_{c}(\bar{\Lambda}_{c})$, $\Xi_{c}\Xi_{c}(\bar{\Xi}_{c})$,
$\Sigma_{c}\Sigma_{c}(\bar{\Sigma}_{c})$,
$\Xi^{\prime}_{c}\Xi^{\prime}_{c}(\bar{\Xi}_{c}^{\prime})$ and
$\Omega_{c}\Omega_{c}(\bar{\Omega}_{c})$. Other configurations will be
explored in a future work. We first derive the effective potentials of these
systems. Then we calculate the binding energies and root-mean-square (RMS)
radii to determine which system might be a loosely bound molecular state.
This work is organized as follows. We present the formalism in section II. In
section III, we discuss the extraction of the coupling constants between the
heavy baryons and light mesons and give the numerical results in Section IV.
The last section is a brief summary. Some useful formulae and figures are
listed in the appendix.
## II Formalism
In this section we will construct the wave functions and derive the effective
potentials.
### II.1 Wave Functions
As illustrated in Fig. 1, the states $\Lambda_{c}^{+}$, $\Xi_{c}^{+}$ and
$\Xi_{c}^{0}$ belong to the antitriplet $B_{\bar{3}}$ while $\Sigma_{c}^{++}$,
$\Sigma_{c}^{+}$, $\Sigma_{c}^{0}$, $\Xi_{c}^{\prime+}$, $\Xi_{c}^{\prime 0}$
and $\Omega_{c}^{0}$ are in sextet $B_{6}$. Among them, $\Lambda_{c}^{+}$ and
$\Omega_{c}^{0}$ are isoscalars; $\\{\Xi_{c}^{+},\Xi_{c}^{0}\\}$ and
$\\{\Xi_{c}^{\prime+},\Xi_{c}^{\prime 0}\\}$ are isospin spinors;
$\\{\Sigma_{c}^{++},\Sigma_{c}^{+},\Sigma_{c}^{0}\\}$ is an isovector. We
denote these states $\Lambda_{c}$, $\Xi_{c}$, $\Sigma_{c}$, $\Xi_{c}^{\prime}$
and $\Omega_{c}$.
(a) antitriplet
(b) sextet
Figure 1: The antitriplet and sextet. Here the brackets and parentheses
represent antisymmetrization and symmetrization of the light quarks
respectively.
The wave function of a dibaryon is the product of its isospin, spatial and
spin wave functions,
$\displaystyle\Psi_{hh}^{[I,2S+1]}\sim\Psi_{hh}^{I}\otimes\Psi_{hh}^{L}\otimes\Psi_{hh}^{S}.$
(1)
We consider the isospin function $\Psi_{hh}^{I}$ first. The isospin of
$\Lambda_{c}$ is $0$, so $\Lambda_{c}\Lambda_{c}$ has isospin $I=0$ and
$\Psi_{\Lambda_{c}\Lambda_{c}}^{I=0}=\Lambda_{c}^{+}\Lambda_{c}^{+}$, which is
symmetric. For $\Xi_{c}\Xi_{c}$, the isospin is $I=0$ or $1$, and their
corresponding wave functions are antisymmetric and symmetric respectively.
$\Sigma_{c}\Sigma_{c}$ has isospin $0$, $1$ or $2$. Their flavor wave
functions can be constructed using Clebsch-Gordan coefficients.
$\Xi_{c}^{\prime}\Xi_{c}^{\prime}$ is the same as $\Xi_{c}\Xi_{c}$. The
isospin of the $\Omega_{c}\Omega_{c}$ is $0$. Because strong interactions
conserve isospin symmetry, the effective potentials do not depend on the third
components of the isospin. For example, it is adequate to take the isospin
function $\Xi_{c}^{+}\Xi_{c}^{+}$ with $I_{3}=1$ when we derive the effective
potential for $\Psi_{\Xi_{c}\Xi_{c}}^{I=1}$, though the wave function
$\frac{1}{\sqrt{2}}(\Xi_{c}^{+}\Xi_{c}^{0}+\Xi_{c}^{0}\Xi_{c}^{+})$ indeed
gives the same result. In the following, we show the relevant isospin
functions used in our calculation,
$\displaystyle\Psi_{\Lambda_{c}\Lambda_{c}}^{I=0}$ $\displaystyle=$
$\displaystyle\Lambda_{c}^{+}\Lambda_{c}^{+}$ (2)
$\displaystyle\Psi_{\Xi_{c}\Xi_{c}}^{I=0}$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\left(\Xi_{c}^{+}\Xi_{c}^{0}-\Xi_{c}^{0}\Xi_{c}^{+}\right)$
$\displaystyle\Psi_{\Xi_{c}\Xi_{c}}^{I=1}$ $\displaystyle=$
$\displaystyle\Xi_{c}^{+}\Xi_{c}^{+}$ (3)
$\displaystyle\Psi_{\Sigma_{c}\Sigma_{c}}^{I=0}$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{3}}\left(\Sigma_{c}^{++}\Sigma_{c}^{0}-\Sigma_{c}^{+}\Sigma_{c}^{+}+\Sigma_{c}^{0}\Sigma_{c}^{++}\right)$
$\displaystyle\Psi_{\Sigma_{c}\Sigma_{c}}^{I=1}$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\left(\Sigma_{c}^{++}\Sigma_{c}^{+}-\Sigma_{c}^{+}\Sigma_{c}^{++}\right)$
$\displaystyle\Psi_{\Sigma_{c}\Sigma_{c}}^{I=2}$ $\displaystyle=$
$\displaystyle\Sigma_{c}^{++}\Sigma_{c}^{++}$ (4)
$\displaystyle\Psi_{\Xi_{c}^{\prime}\Xi_{c}^{\prime}}^{I=0}$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\left(\Xi_{c}^{\prime+}\Xi_{c}^{\prime
0}-\Xi_{c}^{\prime 0}\Xi_{c}^{\prime+}\right)$
$\displaystyle\Psi_{\Xi_{c}^{\prime}\Xi_{c}^{\prime}}^{I=1}$ $\displaystyle=$
$\displaystyle\Xi_{c}^{\prime+}\Xi_{c}^{\prime+}$ (5)
$\displaystyle\Psi_{\Omega_{c}\Omega_{c}}^{I=0}$ $\displaystyle=$
$\displaystyle\Omega_{c}^{0}\Omega_{c}^{0}.$ (6)
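These combinations can be checked against standard Clebsch-Gordan coefficients. A sketch using SymPy (assumed available), where the third components $+1$, $0$, $-1$ label $\Sigma_{c}^{++}$, $\Sigma_{c}^{+}$, $\Sigma_{c}^{0}$ respectively:

```python
from sympy.physics.quantum.cg import CG

# Coefficients for coupling two isovectors (I1 = I2 = 1) to total I = 0,
# reproducing Psi^{I=0}_{Sigma_c Sigma_c} in Eq. (4).
c_pp_0 = CG(1, 1, 1, -1, 0, 0).doit()   # Sigma_c^{++} Sigma_c^{0}
c_p_p  = CG(1, 0, 1, 0, 0, 0).doit()    # Sigma_c^{+}  Sigma_c^{+}
c_0_pp = CG(1, -1, 1, 1, 0, 0).doit()   # Sigma_c^{0}  Sigma_c^{++}
# These come out to 1/sqrt(3), -1/sqrt(3), 1/sqrt(3) respectively,
# matching the signs in Eq. (4).
```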
We are mainly interested in the ground states of dibaryons and baryonia where
the spatial wave functions of these states are symmetric. The tensor force in
the effective potentials mixes the $S$ and $D$ waves. Thus a physical ground
state is actually a superposition of the $S$ and $D$ waves. This mixture
fortunately does not affect the symmetries of the spatial wave functions. As a
matter of fact, for a dibaryon with a specific total spin $\bm{J}$, we must add
the spins of its components to form $\bm{S}$ first and then couple $\bm{S}$
and the relative orbit angular momentum $\bm{L}$ together to get
$\bm{J}=\bm{L}+\bm{S}$. This $L-S$ coupling scheme leads to six $S$ and $D$
wave states: ${}^{1}S_{0}$, ${}^{3}S_{1}$, ${}^{1}D_{2}$, ${}^{3}D_{1}$,
${}^{3}D_{2}$ and ${}^{3}D_{3}$. But the tensor force only mixes states with
the same $S$ and $J$. In our case we must deal with the
${}^{3}S_{1}$-${}^{3}D_{1}$ mixing. After stripping off the isospin function,
the mixed wave function is
$\displaystyle|\psi\rangle=R_{S}(r)|^{3}S_{1}\rangle+R_{D}(r)|^{3}D_{1}\rangle,$
(7)
which will lead to coupled channel Schrödinger equations for the radial
functions $R_{S}(r)$ and $R_{D}(r)$. In short, for the spatial wave functions,
we will discuss the ground states in ${}^{1}S_{0}$ and ${}^{3}S_{1}$, and the
latter mixes with ${}^{3}D_{1}$.
Finally, we point out that the $I$ and $S$ of the states in Eq. (1) cannot be
combined arbitrarily, because the generalized identity principle constrains the
wave functions to be antisymmetric. It turns out that the surviving
combinations are $\Psi_{\Lambda_{c}\Lambda_{c}}^{[0,1]}$,
$\Psi_{\Xi_{c}\Xi_{c}}^{[0,3]}$, $\Psi_{\Xi_{c}\Xi_{c}}^{[1,1]}$,
$\Psi_{\Sigma_{c}\Sigma_{c}}^{[0,1]}$, $\Psi_{\Sigma_{c}\Sigma_{c}}^{[1,3]}$,
$\Psi_{\Sigma_{c}\Sigma_{c}}^{[2,1]}$, $\Psi_{\Xi_{c}^{\prime}\Xi_{c}^{\prime}}^{[0,3]}$,
$\Psi_{\Xi_{c}^{\prime}\Xi_{c}^{\prime}}^{[1,1]}$, and
$\Psi_{\Omega_{c}\Omega_{c}}^{[0,1]}$. For baryonia, there is no such constraint on
the wave functions, so more states must be taken into account. The wave
functions of baryonia can be constructed in a similar way. However, we can use
the so-called “G-Parity rule” to derive the effective potentials for baryonia
directly from the corresponding potentials for dibaryons, so there is no need
to discuss them here.
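The surviving combinations can be recovered by a direct parity count (a sketch under standard exchange-symmetry rules, not code from the paper): the isospin part of an $I_{1}\otimes I_{1}\to I$ coupling has exchange symmetry $(-1)^{2I_{1}-I}$, the spin part of $\tfrac{1}{2}\otimes\tfrac{1}{2}\to S$ has $(-1)^{1-S}$, and their product must be $-1$ for a symmetric (S-wave) spatial part:

```python
# Enumerate the [I, 2S+1] labels allowed by the generalized Pauli
# principle for identical spin-1/2 baryons in a relative S wave.
# two_i1 is twice the baryon isospin: 0 (Lambda_c, Omega_c),
# 1 (Xi_c, Xi_c'), 2 (Sigma_c).

def allowed_states(two_i1):
    out = []
    for I in range(two_i1 + 1):                 # I = 0, 1, ..., 2*I1
        iso_sym = (-1) ** (two_i1 - I)          # isospin exchange symmetry
        for S in (0, 1):
            spin_sym = (-1) ** (1 - S)          # spin exchange symmetry
            if iso_sym * spin_sym == -1:        # total antisymmetry
                out.append((I, 2 * S + 1))
    return out
```

This reproduces the list above: for instance, `allowed_states(2)` gives the three $\Sigma_{c}\Sigma_{c}$ labels $[0,1]$, $[1,3]$ and $[2,1]$.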
### II.2 Lagrangians
We introduce the notation
$\displaystyle\Lambda_{c}=\Lambda_{c}^{+},\quad\Xi_{c}=\left(\begin{array}[]{c}\Xi_{c}^{+}\\\
\Xi_{c}^{0}\end{array}\right),\quad\bm{\Sigma}_{c}=\left\\{\frac{1}{\sqrt{2}}(-\Sigma_{c}^{++}+\Sigma_{c}^{0}),\frac{i}{\sqrt{2}}(-\Sigma_{c}^{++}-\Sigma_{c}^{0}),\Sigma_{c}^{+}\right\\},\quad\Xi_{c}^{\prime}=\left(\begin{array}[]{c}\Xi_{c}^{\prime+}\\\
\Xi_{c}^{\prime 0}\end{array}\right),\quad\Omega_{c}=\Omega_{c}^{0}$ (12)
to represent the corresponding baryon fields. The long range interactions are
provided by the $\pi$ and $\eta$ meson exchanges:
$\displaystyle\mathcal{L}_{\pi}$ $\displaystyle=$ $\displaystyle
g_{\pi\Xi_{c}\Xi_{c}}\bar{\Xi}_{c}i\gamma_{5}\bm{\tau}\Xi_{c}\cdot\bm{\pi}+g_{\pi\Sigma_{c}\Sigma_{c}}(-i)\bar{\bm{\Sigma}}_{c}i\gamma_{5}\times\bm{\Sigma}_{c}\cdot\bm{\pi}+g_{\pi\Xi_{c}^{\prime}\Xi_{c}^{\prime}}\bar{\Xi}^{\prime}_{c}i\gamma_{5}\bm{\tau}\Xi^{\prime}_{c}\cdot\bm{\pi}$
(13) $\displaystyle\mathcal{L}_{\eta}$ $\displaystyle=$ $\displaystyle
g_{\eta\Lambda_{c}\Lambda_{c}}\bar{\Lambda}_{c}i\gamma_{5}\Lambda_{c}\eta+g_{\eta\Xi_{c}\Xi_{c}}\bar{\Xi}_{c}i\gamma_{5}\Xi_{c}\eta$
(14) $\displaystyle+g_{\eta\Sigma_{c}\Sigma_{c}}\bar{\bm{\Sigma}}_{c}\cdot
i\gamma_{5}\bm{\Sigma}_{c}\eta+g_{\eta\Xi_{c}^{\prime}\Xi_{c}^{\prime}}\bar{\Xi}_{c}^{\prime}i\gamma_{5}\Xi_{c}^{\prime}\eta+g_{\eta\Omega_{c}\Omega_{c}}\bar{\Omega}_{c}i\gamma_{5}\Omega_{c}\eta,$
where $g_{\pi\Xi_{c}\Xi_{c}}$, $g_{\pi\Sigma_{c}\Sigma_{c}}$,
$g_{\eta\Omega_{c}\Omega_{c}}$ etc. are the coupling constants.
$\bm{\tau}=\\{\tau_{1},\tau_{2},\tau_{3}\\}$ are the Pauli matrices, and
$\bm{\pi}=\\{\frac{1}{\sqrt{2}}(\pi^{+}+\pi^{-}),\frac{i}{\sqrt{2}}(\pi^{+}-\pi^{-}),\pi^{0}\\}$
are the $\pi$ fields. The vector meson exchange Lagrangians read
$\displaystyle\mathcal{L}_{\rho}$ $\displaystyle=$ $\displaystyle
g_{\rho\Xi_{c}\Xi_{c}}\bar{\Xi}_{c}\gamma_{\mu}\bm{\tau}\Xi_{c}\cdot\bm{\rho}^{\mu}+\frac{f_{\rho\Xi_{c}\Xi_{c}}}{2m_{\Xi_{c}}}\bar{\Xi}_{c}\sigma_{\mu\nu}\bm{\tau}\Xi_{c}\cdot\partial^{\mu}\bm{\rho}^{\nu}$
(15)
$\displaystyle+g_{\rho\Sigma_{c}\Sigma_{c}}(-i)\bar{\bm{\Sigma}}_{c}\gamma_{\mu}\times\bm{\Sigma}_{c}\cdot\bm{\rho}^{\mu}+\frac{f_{\rho\Sigma_{c}\Sigma_{c}}}{2m_{\Sigma_{c}}}(-i)\bar{\bm{\Sigma}}_{c}\sigma_{\mu\nu}\times\bm{\Sigma}_{c}\cdot\partial^{\mu}\bm{\rho}^{\nu}$
$\displaystyle+g_{\rho\Xi_{c}^{\prime}\Xi_{c}^{\prime}}\bar{\Xi}_{c}^{\prime}\gamma_{\mu}\bm{\tau}\Xi_{c}^{\prime}\cdot\bm{\rho}^{\mu}+\frac{f_{\rho\Xi_{c}^{\prime}\Xi_{c}^{\prime}}}{2m_{\Xi^{\prime}_{c}}}\bar{\Xi}_{c}^{\prime}\sigma_{\mu\nu}\bm{\tau}\Xi_{c}^{\prime}\cdot\partial^{\mu}\bm{\rho}^{\nu}$
$\displaystyle\mathcal{L}_{\omega}$ $\displaystyle=$ $\displaystyle
g_{\omega\Lambda_{c}\Lambda_{c}}\bar{\Lambda}_{c}\gamma_{\mu}\Lambda_{c}\omega^{\mu}+\frac{f_{\omega\Lambda_{c}\Lambda_{c}}}{2m_{\Lambda_{c}}}\bar{\Lambda}_{c}\sigma_{\mu\nu}\Lambda_{c}\partial^{\mu}\omega^{\nu}$
(16)
$\displaystyle+g_{\omega\Xi_{c}\Xi_{c}}\bar{\Xi}_{c}\gamma_{\mu}\Xi_{c}\omega^{\mu}+\frac{f_{\omega\Xi_{c}\Xi_{c}}}{2m_{\Xi_{c}}}\bar{\Xi}_{c}\sigma_{\mu\nu}\Xi_{c}\partial^{\mu}\omega^{\nu}$
$\displaystyle+g_{\omega\Sigma_{c}\Sigma_{c}}\bar{\bm{\Sigma}}_{c}\gamma_{\mu}\cdot\bm{\Sigma}_{c}\omega^{\mu}+\frac{f_{\omega\Sigma_{c}\Sigma_{c}}}{2m_{\Sigma_{c}}}\bar{\bm{\Sigma}}_{c}\sigma_{\mu\nu}\cdot\bm{\Sigma}_{c}\partial^{\mu}\omega^{\nu}$
$\displaystyle+g_{\omega\Xi_{c}^{\prime}\Xi_{c}^{\prime}}\bar{\Xi}_{c}^{\prime}\gamma_{\mu}\Xi_{c}^{\prime}\omega^{\mu}+\frac{f_{\omega\Xi_{c}^{\prime}\Xi_{c}^{\prime}}}{2m_{\Xi_{c}^{\prime}}}\bar{\Xi}_{c}^{\prime}\sigma_{\mu\nu}\Xi_{c}^{\prime}\partial^{\mu}\omega^{\nu}$
$\displaystyle\mathcal{L}_{\phi}$ $\displaystyle=$ $\displaystyle
g_{\phi\Xi_{c}\Xi_{c}}\bar{\Xi}_{c}\gamma_{\mu}\Xi_{c}\phi^{\mu}+\frac{f_{\phi\Xi_{c}\Xi_{c}}}{2m_{\Xi_{c}}}\bar{\Xi}_{c}\sigma_{\mu\nu}\Xi_{c}\partial^{\mu}\phi^{\nu}$
(17)
$\displaystyle+g_{\phi\Xi_{c}^{\prime}\Xi_{c}^{\prime}}\bar{\Xi}_{c}^{\prime}\gamma_{\mu}\Xi_{c}^{\prime}\phi^{\mu}+\frac{f_{\phi\Xi_{c}^{\prime}\Xi_{c}^{\prime}}}{2m_{\Xi_{c}^{\prime}}}\bar{\Xi}_{c}^{\prime}\sigma_{\mu\nu}\Xi_{c}^{\prime}\partial^{\mu}\phi^{\nu}$
$\displaystyle+g_{\phi\Omega_{c}\Omega_{c}}\bar{\Omega}_{c}\gamma_{\mu}\Omega_{c}\phi^{\mu}+\frac{f_{\phi\Omega_{c}\Omega_{c}}}{2m_{\Omega_{c}}}\bar{\Omega}_{c}\sigma_{\mu\nu}\Omega_{c}\partial^{\mu}\phi^{\nu},$
with
$\bm{\rho}=\\{\frac{1}{\sqrt{2}}(\rho^{+}+\rho^{-}),\frac{i}{\sqrt{2}}(\rho^{+}-\rho^{-}),\rho^{0}\\}$.
The $\sigma$ exchange Lagrangian is
$\displaystyle\mathcal{L}_{\sigma}$ $\displaystyle=$ $\displaystyle
g_{\sigma\Lambda_{c}\Lambda_{c}}\bar{\Lambda}_{c}\Lambda_{c}\sigma+g_{\sigma\Xi_{c}\Xi_{c}}\bar{\Xi}_{c}\Xi_{c}\sigma+g_{\sigma\Sigma_{c}\Sigma_{c}}\bar{\bm{\Sigma}}_{c}\cdot\bm{\Sigma}_{c}\sigma$
(18)
$\displaystyle+g_{\sigma\Xi_{c}^{\prime}\Xi_{c}^{\prime}}\bar{\Xi}^{\prime}_{c}\Xi^{\prime}_{c}\sigma+g_{\sigma\Omega_{c}\Omega_{c}}\bar{\Omega}_{c}\Omega_{c}\sigma.$
There are thirty-three unknown coupling constants in the above Lagrangians,
which will be determined in Sec. III.
### II.3 Effective Potentials
To obtain the effective potentials, we calculate the $T$ matrices of the
scattering processes such as Fig. 2 in momentum space. Expanding the $T$
matrices to leading order in the external momenta, one gets Barnes:1999hs
$\displaystyle V(\bm{r})=\frac{1}{(2\pi)^{3}}\int
d^{3}{q}e^{-i\bm{Q}\cdot\bm{r}}T(\bm{Q})\mathcal{F}(\bm{Q})^{2},$ (19)
where $\mathcal{F}(\bm{Q})$ is the form factor, which controls the divergence
of the above integral and roughly accounts for the non-point-like hadronic
structure attached to each vertex. Here we choose the
monopole form factor
$\displaystyle\mathcal{F}(\bm{Q})=\frac{\Lambda^{2}-m^{2}}{\Lambda^{2}-Q^{2}}$
(20)
with $Q=\\{Q_{0},\bm{Q}\\}$ and the cutoff $\Lambda$.
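For the simplest case of a static scalar exchange, $T(\bm{Q})\propto 1/(\bm{Q}^{2}+m^{2})$ with $Q_{0}=0$, the integral (19) with this monopole form factor has a closed form (by partial fractions) that can be checked numerically. This is an illustrative sketch with arbitrary parameter values; the $H_{i}$ functions of the appendix are not reproduced here.

```python
import numpy as np

# Closed form of Eq. (19) for T(Q) = 1/(q^2 + m^2) and the monopole
# form factor F(q) = (Lam^2 - m^2)/(Lam^2 + q^2):
#   V(r) = [exp(-m r) - exp(-Lam r)]/(4 pi r)
#          - (Lam^2 - m^2) exp(-Lam r)/(8 pi Lam).
# We verify it against a direct numerical Fourier transform.

m, Lam = 0.548, 1.5     # illustrative mass and cutoff values

def v_closed(r):
    return (np.exp(-m * r) - np.exp(-Lam * r)) / (4 * np.pi * r) \
        - (Lam**2 - m**2) * np.exp(-Lam * r) / (8 * np.pi * Lam)

def v_numeric(r, q_max=400.0, n=400001):
    # Spherical Fourier transform: V(r) = (1/2pi^2 r) Int q sin(qr) f(q) dq
    q = np.linspace(1e-6, q_max, n)
    f = (q * np.sin(q * r) / (q**2 + m**2)
         * ((Lam**2 - m**2) / (Lam**2 + q**2))**2)
    dq = q[1] - q[0]
    return np.sum((f[1:] + f[:-1]) * 0.5) * dq / (2 * np.pi**2 * r)
```

The form factor makes the integrand fall off as $q^{-4}$, so the transform converges without further regularization.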
Figure 2: Scattering processes of
$\Lambda_{c}\Lambda_{c}\to\Lambda_{c}\Lambda_{c}$ and
$\Lambda_{c}\bar{\Lambda}_{c}\to\Lambda_{c}\bar{\Lambda}_{c}$; $Q$ denotes the
transferred four-momentum.
Generally speaking, a potential derived from the scattering $T$ matrix
consists of the central term, spin-spin interaction term, orbit-spin
interaction term and tensor force term, i.e.,
$\displaystyle
V(\bm{r})=V_{C}(r)+V_{SS}(r)\bm{\sigma}_{1}\cdot\bm{\sigma}_{2}+V_{LS}(r)\bm{L}\cdot\bm{S}+V_{T}(r)S_{12}(\hat{\bm{r}}),$
(21)
where $S_{12}(\hat{\bm{r}})$ is the tensor force operator,
$S_{12}(\hat{\bm{r}})=3(\bm{\sigma}_{1}\cdot\hat{\bm{r}})(\bm{\sigma}_{2}\cdot\hat{\bm{r}})-\bm{\sigma}_{1}\cdot\bm{\sigma}_{2}$.
The effective potential of a specific channel, for example
$\Lambda_{c}\Lambda_{c}\to\Lambda_{c}\Lambda_{c}$ shown in Fig. 2, may contain
contributions from the pseudoscalar, vector and scalar meson exchanges. We
need to work them out one by one and add them up. The potentials with the stripped
isospin factors from the pseudoscalar, vector and scalar ($\sigma$ here) meson
exchange are
$\displaystyle V^{a}(\bm{r};\alpha,h)$ $\displaystyle=$ $\displaystyle
V^{a}_{SS}(r;\alpha,h)\bm{\sigma}_{1}\cdot\bm{\sigma}_{2}+V^{a}_{T}(r;\alpha,h)S_{12}(\hat{\bm{r}}),$
$\displaystyle V^{b}(\bm{r};\beta,h)$ $\displaystyle=$ $\displaystyle
V^{b}_{C}(r;\beta,h)+V^{b}_{SS}(r;\beta,h)\bm{\sigma}_{1}\cdot\bm{\sigma}_{2}+V^{b}_{LS}(r;\beta,h)\bm{L}\cdot\bm{S}+V^{b}_{T}(r;\beta,h)S_{12}(\hat{\bm{r}}),$
$\displaystyle V^{c}(\bm{r};\sigma,h)$ $\displaystyle=$ $\displaystyle
V^{c}_{C}(r;\sigma,h)+V^{c}_{LS}(r;\sigma,h)\bm{L}\cdot\bm{S},$ (22)
where $\alpha=\pi,\eta$, $\beta=\rho,\omega,\phi$ and
$\displaystyle V^{a}_{SS}(r;\alpha,h)$ $\displaystyle=$
$\displaystyle-\frac{g_{\alpha
hh}^{2}}{4\pi}\frac{m_{\alpha}^{3}}{12m_{h}^{2}}H_{1}(\Lambda,m_{\alpha},r),$
$\displaystyle V^{a}_{T}(r;\alpha,h)$ $\displaystyle=$
$\displaystyle\frac{g_{\alpha
hh}^{2}}{4\pi}\frac{m_{\alpha}^{3}}{12m_{h}^{2}}H_{3}(\Lambda,m_{\alpha},r),$
$\displaystyle V^{b}_{C}(r;\beta,h)$ $\displaystyle=$
$\displaystyle\frac{m_{\beta}}{4\pi}\left[g_{\beta
hh}^{2}H_{0}(\Lambda,m_{\beta},r)-(g_{\beta hh}^{2}+4g_{\beta hh}f_{\beta
hh})\frac{m_{\beta}^{2}}{8m_{h}^{2}}H_{1}(\Lambda,m_{\beta},r)\right],$
$\displaystyle V^{b}_{SS}(r;\beta,h)$ $\displaystyle=$
$\displaystyle-\frac{1}{4\pi}(g_{\beta hh}+f_{\beta
hh})^{2}\frac{m_{\beta}^{3}}{6m_{h}^{2}}H_{1}(\Lambda,m_{\beta},r),$
$\displaystyle V^{b}_{LS}(r;\beta,h)$ $\displaystyle=$
$\displaystyle-\frac{1}{4\pi}(3g_{\beta hh}^{2}+4g_{\beta hh}f_{\beta
hh})\frac{m_{\beta}^{3}}{2m_{h}^{2}}H_{2}(\Lambda,m_{\beta},r),$
$\displaystyle V^{b}_{T}(r;\beta,h)$ $\displaystyle=$
$\displaystyle-\frac{1}{4\pi}(g_{\beta hh}+f_{\beta
hh})^{2}\frac{m_{\beta}^{3}}{12m_{h}^{2}}H_{3}(\Lambda,m_{\beta},r),$
$\displaystyle V^{c}_{C}(r;\sigma,h)$ $\displaystyle=$ $\displaystyle-
m_{\sigma}\frac{g_{\sigma
hh}^{2}}{4\pi}\left[H_{0}(\Lambda,m_{\sigma},r)+\frac{m_{\sigma}^{2}}{8m_{h}^{2}}H_{1}(\Lambda,m_{\sigma},r)\right],$
$\displaystyle V^{c}_{LS}(r;\sigma,h)$ $\displaystyle=$ $\displaystyle-
m_{\sigma}\frac{g_{\sigma
hh}^{2}}{4\pi}\frac{m_{\sigma}^{2}}{2m_{h}^{2}}H_{2}(\Lambda,m_{\sigma},r).$
(23)
The definitions of functions $H_{0}$, $H_{1}$, $H_{2}$ and $H_{3}$ are given
in the appendix. From Eq. (22), one can see the tensor force terms and spin-
spin terms are from the pseudoscalar and vector meson exchanges while the
central and orbit-spin terms are from the vector and scalar meson exchanges.
Finally the effective potential of the state $hh$ is
$\displaystyle V_{hh}(\bm{r})$ $\displaystyle=$
$\displaystyle\sum_{\alpha}\mathcal{C}^{a}_{\alpha}V^{a}(\bm{r};\alpha,h)+\sum_{\beta}\mathcal{C}^{b}_{\beta}V^{b}(\bm{r};\beta,h)+\mathcal{C}^{c}_{\sigma}V^{c}(\bm{r};\sigma,h)$
$\displaystyle=$
$\displaystyle\left\\{\sum_{\beta}\mathcal{C}^{b}_{\beta}V^{b}_{C}(r;\beta,h)+\mathcal{C}^{c}_{\sigma}V^{c}_{C}(r;\sigma,h)\right\\}+\left\\{\sum_{\alpha}\mathcal{C}^{a}_{\alpha}V^{a}_{SS}(r;\alpha,h)+\sum_{\beta}\mathcal{C}^{b}_{\beta}V^{b}_{SS}(r;\beta,h)\right\\}\bm{\sigma}_{1}\cdot\bm{\sigma}_{2}$
$\displaystyle+\left\\{\sum_{\beta}\mathcal{C}^{b}_{\beta}V^{b}_{LS}(r;\beta,h)+\mathcal{C}^{c}_{\sigma}V^{c}_{LS}(r;\sigma,h)\right\\}\bm{L}\cdot\bm{S}+\left\\{\sum_{\alpha}\mathcal{C}^{a}_{\alpha}V^{a}_{T}(r;\alpha,h)+\sum_{\beta}\mathcal{C}^{b}_{\beta}V^{b}_{T}(r;\beta,h)\right\\}S_{12}(\hat{\bm{r}}),$
where $\mathcal{C}^{a}_{\alpha}$, $\mathcal{C}^{b}_{\beta}$ and
$\mathcal{C}^{c}_{\sigma}$ are the isospin factors, which are listed in Table
1.
| $\Lambda_{c}\Lambda_{c}[\bar{\Lambda}_{c}]$ | $\Xi_{c}\Xi_{c}[\bar{\Xi}_{c}]$ | $\Sigma_{c}\Sigma_{c}[\bar{\Sigma}_{c}]$ | $\Xi_{c}^{{}^{\prime}}\Xi_{c}^{{}^{\prime}}[\bar{\Xi}_{c}^{{}^{\prime}}]$ | $\Omega_{c}\Omega_{c}[\bar{\Omega}_{c}]$
---|---|---|---|---|---
I | 0 | 0 | 1 | 0 | 1 | 2 | 0 | 1 | 0
$\mathcal{C}_{\pi}^{a}$ | | | | -2[2] | -1[1] | 1[-1] | -3[3] | 1[-1] |
$\mathcal{C}_{\eta}^{a}$ | | | | 1[1] | 1[1] | 1[1] | 1[1] | 1[1] | 1[1]
$\mathcal{C}_{\rho}^{b}$ | | -3[-3] | 1[1] | -2[-2] | -1[-1] | 1[1] | -3[-3] | 1[1] | 1[1]
$\mathcal{C}_{\omega}^{b}$ | 1[-1] | 1[-1] | 1[-1] | 1[-1] | 1[-1] | 1[-1] | 1[-1] | 1[-1] |
$\mathcal{C}_{\phi}^{b}$ | | 1[-1] | 1[-1] | | | | 1[-1] | 1[-1] | 1[-1]
$\mathcal{C}_{\sigma}^{c}$ | 1[1] | 1[1] | 1[1] | 1[1] | 1[1] | 1[1] | 1[1] | 1[1] |
Table 1: Isospin factors. The values in brackets for baryonia are derived by
the “G-Parity rule”.
Given the effective potential $V_{hh}$, the potential for $h\bar{h}$,
$V_{h\bar{h}}$, can be obtained using the “G-Parity rule”, which states that
the amplitude (or the effective potential) of the process $A\bar{A}\to
A\bar{A}$ with one light meson exchange is related to that of the process
$AA\to AA$ by multiplying the latter by a factor $(-)^{I_{G}}$, where
$(-)^{I_{G}}$ is the G-Parity of the exchanged light meson IGrule . The
expression of $V_{h\bar{h}}$ is the same as Eq. (II.3) but with
$V^{a}(\bm{r};\alpha,h)$, $V^{b}(\bm{r};\beta,h)$ and $V^{c}(\bm{r};\sigma,h)$
replaced by $V^{a}(\bm{r};\alpha,\bar{h})$, $V^{b}(\bm{r};\beta,\bar{h})$ and
$V^{c}(\bm{r};\sigma,\bar{h})$ respectively.
$\displaystyle V^{a}(\bm{r};\alpha,\bar{h})$ $\displaystyle=$
$\displaystyle(-)^{I_{G}[\alpha]}V^{a}(\bm{r};\alpha,h),$ $\displaystyle
V^{b}(\bm{r};\beta,\bar{h})$ $\displaystyle=$
$\displaystyle(-)^{I_{G}[\beta]}V^{b}(\bm{r};\beta,h),$ $\displaystyle
V^{c}(\bm{r};\sigma,\bar{h})$ $\displaystyle=$
$\displaystyle(-)^{I_{G}[\sigma]}V^{c}(\bm{r};\sigma,h).$ (25)
For example,
$\displaystyle V^{a}(\bm{r};\omega,\bar{\Lambda}_{c})$ $\displaystyle=$
$\displaystyle(-1)V^{a}(\bm{r};\omega,\Lambda_{c}),$ (26)
since the G-Parity of $\omega$ is negative. In other words, we can still use
the right hand side of Eq. (II.3) to calculate $V_{h\bar{h}}$ but with the
redefined isospin factors
$\displaystyle\mathcal{C}^{a}_{\alpha}\to(-)^{I_{G}[\alpha]}\mathcal{C}^{a}_{\alpha},\;\;\mathcal{C}^{b}_{\beta}\to(-)^{I_{G}[\beta]}\mathcal{C}^{b}_{\beta},\;\;\mathcal{C}^{c}_{\sigma}\to(-)^{I_{G}[\sigma]}\mathcal{C}^{c}_{\sigma},$
(27)
which are listed in Table 1 too.
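The sign pattern in the brackets of Table 1 follows mechanically from the standard G-parities $G=C\,(-1)^{I}$ of the exchanged mesons. A sketch (the dibaryon factors used in the example are read off Table 1):

```python
# "G-Parity rule": the baryonium isospin factor equals the dibaryon
# factor multiplied by the G-parity of the exchanged meson.
# Standard G-parities, G = C * (-1)^I for the neutral member:
G_PARITY = {"pi": -1, "eta": +1, "rho": +1, "omega": -1, "phi": -1, "sigma": +1}

def baryonium_factor(meson, dibaryon_factor):
    return G_PARITY[meson] * dibaryon_factor

# Example: the Sigma_c Sigma_c (I = 0) pi factor -2 becomes +2 for
# Sigma_c anti-Sigma_c, while the sigma factor is unchanged (cf. Table 1).
```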
The treatments of operators $\bm{\sigma}_{1}\cdot\bm{\sigma}_{2}$,
$\bm{L}\cdot\bm{S}$ and $S_{12}(\hat{\bm{r}})$ are straightforward. For
${}^{1}S_{0}$,
$\displaystyle\bm{\sigma}_{1}\cdot\bm{\sigma}_{2}=-3,\;\;\bm{L}\cdot\bm{S}=0,\;\;S_{12}(\hat{\bm{r}})=0,$
(28)
which lead to single-channel Schrödinger equations. But for ${}^{3}S_{1}$,
because of mixing with ${}^{3}D_{1}$, the above operators should be
represented in the $\left\\{|^{3}S_{1}\rangle,|^{3}D_{1}\rangle\right\\}$
space, i.e.,
$\displaystyle\bm{\sigma}_{1}\cdot\bm{\sigma}_{2}=\left(\begin{array}[]{cc}1&0\\\
0&1\end{array}\right),\;\;\bm{L}\cdot\bm{S}=\left(\begin{array}[]{cc}0&0\\\
0&-3\end{array}\right),\;\;S_{12}(\hat{\bm{r}})=\left(\begin{array}[]{cc}0&\sqrt{8}\\\
\sqrt{8}&-2\end{array}\right).$ (35)
These representations lead to coupled-channel Schrödinger equations.
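The diagonal entries of Eqs. (28) and (35) follow from standard angular-momentum algebra; a minimal sketch (illustrative only, with function names of our own choosing):

```python
def spin_spin(S):
    # <sigma_1 . sigma_2> = 2 S(S+1) - 3 for total spin S
    return 2 * S * (S + 1) - 3

def spin_orbit(J, L, S):
    # <L . S> = [J(J+1) - L(L+1) - S(S+1)] / 2
    return (J * (J + 1) - L * (L + 1) - S * (S + 1)) // 2

# 1S0 channel: S = 0, L = 0, J = 0  ->  sigma1.sigma2 = -3, L.S = 0
# 3S1-3D1 channels: S = 1, J = 1, L = 0 or 2  ->  L.S = diag(0, -3)
# The off-diagonal sqrt(8) of S12 in Eq. (35) is the tensor-operator
# matrix element connecting the S and D waves; it is not computed here.
```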
## III Coupling Constants
It is difficult to extract the coupling constants in the Lagrangians from
experiments. Instead, we estimate them within the quark model, using the
well-known nucleon-meson coupling constants as inputs. The details of
this method are provided in Ref. Riska2001 . The one-boson exchange Lagrangian
at the quark level is
$\displaystyle\mathcal{L}_{q}$ $\displaystyle=$ $\displaystyle g_{\pi
qq}\left(\bar{u}i\gamma_{5}u\pi^{0}-\bar{d}i\gamma_{5}d\pi^{0}\right)$ (36)
$\displaystyle+g_{\eta
qq}\left(\bar{u}i\gamma_{5}u\eta+\bar{d}i\gamma_{5}d\eta-2\bar{s}i\gamma_{5}s\eta\right)$
$\displaystyle+g_{\rho
qq}\left(\bar{u}\gamma_{\mu}u\rho^{0\mu}-\bar{d}\gamma_{\mu}d\rho^{0\mu}\right)$
$\displaystyle+g_{\omega
qq}\left(\bar{u}\gamma_{\mu}u\omega^{\mu}+\bar{d}\gamma_{\mu}d\omega^{\mu}\right)+g_{\phi
qq}\bar{s}\gamma_{\mu}s\phi^{\mu}$ $\displaystyle+g_{\sigma
qq}\left(\bar{u}u\sigma+\bar{d}d\sigma+\bar{s}s\sigma\right)+\cdots,$
where $g_{\pi qq}$, $g_{\eta qq}$, $\ldots$, $g_{\sigma qq}$ are the coupling
constants of the light mesons and quarks. The vector meson terms in this
Lagrangian do not contain the anomalous magnetic moment part because the
constituent quarks are treated as point-like particles. At the hadronic level,
for instance, the nucleon-nucleon-meson interaction Lagrangian reads
$\displaystyle\mathcal{L}_{NN}$ $\displaystyle=$ $\displaystyle g_{\pi
NN}\bar{N}i\gamma_{5}\bm{\tau}N\cdot\bm{\pi}+g_{\eta
NN}\bar{N}i\gamma_{5}N\eta$ (37) $\displaystyle+g_{\rho
NN}\bar{N}\gamma_{\mu}\bm{\tau}N\cdot\bm{\rho}^{\mu}+\frac{f_{\rho
NN}}{2m_{N}}\bar{N}\sigma_{\mu\nu}\bm{\tau}N\cdot\partial^{\mu}\bm{\rho}^{\nu}$
$\displaystyle+g_{\omega NN}\bar{N}\gamma_{\mu}N\omega^{\mu}+\frac{f_{\omega
NN}}{2m_{N}}\bar{N}\sigma_{\mu\nu}N\partial^{\mu}\omega^{\nu}$
$\displaystyle+g_{\sigma NN}\bar{N}N\sigma,$
where $g_{\pi NN}$, $g_{\eta NN}$, $\ldots$, $g_{\sigma NN}$ are the coupling
constants. We calculate the matrix elements of a specific process at both the
quark and hadronic levels and then match them. In this way, we obtain relations
between the two sets of coupling constants,
$\displaystyle g_{\pi NN}=\frac{5}{3}g_{\pi qq}\frac{m_{N}}{m_{q}},\;\;g_{\eta
NN}=g_{\eta qq}\frac{m_{N}}{m_{q}},$ $\displaystyle g_{\omega NN}=3g_{\omega
qq},\;\;\frac{g_{\omega NN}+f_{\omega NN}}{m_{N}}=\frac{g_{\omega
qq}}{m_{q}},$ $\displaystyle g_{\rho NN}=g_{\rho qq},\;\;\frac{g_{\rho
NN}+f_{\rho NN}}{m_{N}}=\frac{5}{3}\frac{g_{\rho qq}}{m_{q}},$ $\displaystyle
g_{\sigma NN}=3g_{\sigma qq}.$ (38)
From these relations, we can see that $g_{\omega NN}$ and $f_{\omega NN}$ are
not independent; neither are $g_{\rho NN}$ and $f_{\rho NN}$. The constituent
quark mass is about one third of the nucleon mass, $m_{N}\approx 3m_{q}$.
Substituting this into Eq. (38) gives $f_{\omega NN}\approx 0$ and $f_{\rho
NN}\approx 4g_{\rho NN}$.
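The two approximate relations can be checked directly from the $\omega$ and $\rho$ lines of Eq. (38); a hedged sketch with arbitrary quark-level inputs, not the values used in this work:

```python
def hadron_couplings(g_omega_qq, g_rho_qq, m_N, m_q):
    """Solve the omega and rho relations of Eq. (38) for the NN couplings."""
    g_omega_NN = 3.0 * g_omega_qq
    f_omega_NN = g_omega_qq * m_N / m_q - g_omega_NN
    g_rho_NN = g_rho_qq
    f_rho_NN = (5.0 / 3.0) * g_rho_qq * m_N / m_q - g_rho_NN
    return g_omega_NN, f_omega_NN, g_rho_NN, f_rho_NN

# With m_N = 3 m_q, f_omega_NN vanishes and f_rho_NN = 4 g_rho_NN.
```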
With the same prescription, we can obtain similar relations for heavy charmed
baryons which are collected in the appendix. Substituting the coupling
constants at the quark level with those from Eq. (38), we have
$\displaystyle
g_{\pi\Xi_{c}\Xi_{c}}=0,\;\;g_{\pi\Sigma_{c}\Sigma_{c}}=\frac{4}{5}g_{\pi
NN}\frac{m_{\Sigma_{c}}}{m_{N}},\;\;g_{\pi\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=\frac{2}{5}g_{\pi
NN}\frac{m_{\Xi_{c}^{\prime}}}{m_{N}},$ (39) $\displaystyle
g_{\eta\Lambda_{c}\Lambda_{c}}=0,\;\;g_{\eta\Xi_{c}\Xi_{c}}=0,\;\;g_{\eta\Sigma_{c}\Sigma_{c}}=\frac{4}{3}g_{\eta
NN}\frac{m_{\Sigma_{c}}}{m_{N}},$ $\displaystyle
g_{\eta\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=-\frac{2}{3}g_{\eta
NN}\frac{m_{\Xi_{c}^{\prime}}}{m_{N}},\;\;g_{\eta\Omega_{c}\Omega_{c}}=-\frac{8}{3}g_{\eta
NN}\frac{m_{\Omega_{c}}}{m_{N}},$ (40) $\displaystyle
g_{\sigma\Lambda_{c}\Lambda_{c}}=\frac{2}{3}g_{\sigma
NN},\;\;g_{\sigma\Xi_{c}\Xi_{c}}=\frac{2}{3}g_{\sigma
NN},\;\;g_{\sigma\Sigma_{c}\Sigma_{c}}=\frac{2}{3}g_{\sigma NN},$
$\displaystyle g_{\sigma\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=\frac{2}{3}g_{\sigma
NN},\;\;g_{\sigma\Omega_{c}\Omega_{c}}=\frac{2}{3}g_{\sigma NN},$ (41)
$\displaystyle g_{\omega\Lambda_{c}\Lambda_{c}}=\frac{2}{3}g_{\omega
NN},\;\;f_{\omega\Lambda_{c}\Lambda_{c}}=-\frac{2}{3}g_{\omega NN},$
$\displaystyle g_{\omega\Xi_{c}\Xi_{c}}=\frac{1}{3}g_{\omega
NN},\;\;f_{\omega\Xi_{c}\Xi_{c}}=-\frac{1}{3}g_{\omega NN},$ $\displaystyle
g_{\omega\Sigma_{c}\Sigma_{c}}=\frac{2}{3}g_{\omega
NN},\;\;f_{\omega\Sigma_{c}\Sigma_{c}}=\frac{2}{3}g_{\omega
NN}\left(2\frac{m_{\Sigma_{c}}}{m_{N}}-1\right),$ $\displaystyle
g_{\omega\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=\frac{1}{3}g_{\omega
NN},\;\;f_{\omega\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=\frac{1}{3}g_{\omega
NN}\left(2\frac{m_{\Xi_{c}^{\prime}}}{m_{N}}-1\right),$ (42) $\displaystyle
g_{\rho\Xi_{c}\Xi_{c}}=g_{\rho
NN},\;\;f_{\rho\Xi_{c}\Xi_{c}}=-\frac{1}{5}\left(g_{\rho NN}+f_{\rho
NN}\right),$ $\displaystyle g_{\rho\Sigma_{c}\Sigma_{c}}=2g_{\rho
NN},\;\;f_{\rho\Sigma_{c}\Sigma_{c}}=\frac{2}{5}\left(g_{\rho NN}+f_{\rho
NN}\right)\left(2\frac{m_{\Sigma_{c}}}{m_{N}}-1\right),$ $\displaystyle
g_{\rho\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=g_{\rho
NN},\;\;f_{\rho\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=\frac{1}{5}\left(g_{\rho
NN}+f_{\rho NN}\right)\left(2\frac{m_{\Xi_{c}^{\prime}}}{m_{N}}-1\right),$
(43) $\displaystyle g_{\phi\Xi_{c}\Xi_{c}}=\sqrt{2}g_{\rho
NN},\;\;f_{\phi\Xi_{c}\Xi_{c}}=-\frac{\sqrt{2}}{5}\left(g_{\rho NN}+f_{\rho
NN}\right),$ $\displaystyle
g_{\phi\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=\sqrt{2}g_{\rho
NN},\;\;f_{\phi\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=\frac{\sqrt{2}}{5}\left(g_{\rho
NN}+f_{\rho NN}\right)\left(2\frac{m_{\Xi_{c}^{\prime}}}{m_{N}}-1\right),$
$\displaystyle g_{\phi\Omega_{c}\Omega_{c}}=2\sqrt{2}g_{\rho
NN},\;\;f_{\phi\Omega_{c}\Omega_{c}}=\frac{2\sqrt{2}}{5}(g_{\rho NN}+f_{\rho
NN})\left(2\frac{m_{\Omega_{c}}}{m_{N}}-1\right),$ (44)
where we have used $m_{N}\approx 3m_{q}$. The couplings of the $\phi$ to the
heavy charmed baryons cannot be derived directly from the nucleon results,
since the $\phi$ couples to the strange quark, which the nucleon does not
contain. Therefore, on the right-hand side of Eq. (44) we express them in
terms of the couplings of the $\rho$ and nucleons.
The above formulas relate the unknown coupling constants of the heavy charmed
baryons to $g_{\pi NN}$, $g_{\eta NN}$, etc., which can be determined by
fitting experimental data. We choose the values $g_{\pi NN}=13.07$,
$g_{\eta NN}=2.242$, $g_{\sigma NN}=8.46$, $g_{\omega NN}=15.85$, $f_{\omega
NN}/g_{\omega NN}=0$, $g_{\rho NN}=3.25$ and $f_{\rho NN}/g_{\rho NN}=6.1$
from Refs. Mach87 ; Mach01 ; Cao2010 as inputs. In Table 2, we list the
numerical results of the coupling constants of the heavy charmed baryons and
light mesons. One notices that the vector meson couplings for $\Xi_{c}\Xi_{c}$
and $\Lambda_{c}\Lambda_{c}$ have opposite signs. They almost cancel out and
do not contribute to the tensor terms for spin-triplets. Thus in the following
numerical analysis, we omit the tensor forces of the spin-triplets in the
$\Xi_{c}\Xi_{c}$ and $\Lambda_{c}\Lambda_{c}$ systems.
| $\Lambda_{c}\Lambda_{c}$ | $\Xi_{c}\Xi_{c}$ | $\Sigma_{c}\Sigma_{c}$ | $\Xi_{c}^{\prime}\Xi_{c}^{\prime}$ | $\Omega_{c}\Omega_{c}$
---|---|---|---|---|---
$\alpha$ | $g_{\alpha\Lambda_{c}\Lambda_{c}}$ | $f_{\alpha\Lambda_{c}\Lambda_{c}}$ | $g_{\alpha\Xi_{c}\Xi_{c}}$ | $f_{\alpha\Xi_{c}\Xi_{c}}$ | $g_{\alpha\Sigma_{c}\Sigma_{c}}$ | $f_{\alpha\Sigma_{c}\Sigma_{c}}$ | $g_{\alpha\Xi_{c}^{\prime}\Xi_{c}^{\prime}}$ | $f_{\alpha\Xi_{c}^{\prime}\Xi_{c}^{\prime}}$ | $g_{\alpha\Omega_{c}\Omega_{c}}$ | $f_{\alpha\Omega_{c}\Omega_{c}}$
$\pi$ | | | 0 | | $27.36$ | | $14.36$ | | |
$\eta$ | $0$ | | $0$ | | $7.82$ | | $-4.10$ | | $-17.19$ |
$\sigma$ | $5.64$ | | $5.64$ | | $5.64$ | | $5.64$ | | $5.64$ |
$\omega$ | $10.57$ | $-10.57$ | $5.28$ | $-5.28$ | $10.57$ | $44.67$ | $5.28$ | $23.72$ | |
$\rho$ | | | $3.25$ | $-4.62$ | $6.50$ | $39.01$ | $3.25$ | $20.72$ | |
$\phi$ | | | $4.60$ | $-6.53$ | | | $4.60$ | $29.30$ | $9.19$ | $61.94$
Table 2: Numerical results of the coupling constants. The coupling constants
with the $\phi$ exchange are deduced from $g_{\rho NN}$.
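A few entries of Table 2 can be reproduced directly from Eqs. (39)-(41) and the quoted inputs. The sketch below assumes $m_{N}\approx 938.3\mathrm{~{}MeV}$, which is not stated explicitly in the text:

```python
# Reproduce a few Table 2 entries from Eqs. (39)-(41).
g_piNN, g_etaNN, g_sigmaNN = 13.07, 2.242, 8.46
m_N = 938.3          # assumed nucleon mass in MeV (not given in the text)
m_Sigma_c = 2455.0   # from Table 3

g_pi_ScSc = (4.0 / 5.0) * g_piNN * m_Sigma_c / m_N      # Table 2 quotes 27.36
g_eta_ScSc = (4.0 / 3.0) * g_etaNN * m_Sigma_c / m_N    # Table 2 quotes 7.82
g_sigma_LcLc = (2.0 / 3.0) * g_sigmaNN                  # Table 2 quotes 5.64
```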
## IV Numerical Results
With the effective potentials and the coupling constants derived in the
previous sections, one can calculate the binding energies and root-mean-square
(RMS) radii for every possible molecular state numerically. Here we adopt the
FORTRAN program FESSDE, which solves systems of multi-channel coupled ordinary
differential equations fessde . Besides the coupling constants in Table 2, we
also need the heavy charmed baryon masses listed in Table 3 as inputs. The
typical value of the cutoff parameter for the deuteron is
$1.2\sim 1.5\mathrm{~{}GeV}$ Mach87 . In our case, the cutoff parameter
$\Lambda$ is taken in the region $0.80\sim 2.00\mathrm{~{}GeV}$. Such a region
is broad enough to give a clear picture of whether heavy baryon molecules can
exist.
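For a single uncoupled channel, the kind of bound-state problem FESSDE handles can be illustrated by a simple shooting calculation. The sketch below is purely illustrative (natural units $\hbar=\mu=1$ and an attractive Yukawa-type potential with made-up parameters), not the potentials or the code of this work:

```python
import math

def potential(r, alpha=1.5, m_exch=1.0):
    # Attractive Yukawa-type (one-boson-exchange-like) potential.
    return -alpha * math.exp(-m_exch * r) / r

def shoot(E, mu=1.0, h=2e-3, rmax=20.0):
    # Integrate u''(r) = 2 mu (V(r) - E) u(r) outward; u ~ r near r = 0 (S wave).
    u_prev, u = h, 2.0 * h
    for i in range(2, int(rmax / h)):
        r = i * h
        u_prev, u = u, 2.0 * u - u_prev + h * h * 2.0 * mu * (potential(r) - E) * u
    return u

def ground_state_energy(elo=-1.12, ehi=-1e-6, steps=40):
    # u(rmax) changes sign when E crosses the bound-state energy: bisect on it.
    flo = shoot(elo)
    for _ in range(steps):
        emid = 0.5 * (elo + ehi)
        fmid = shoot(emid)
        if flo * fmid > 0.0:
            elo, flo = emid, fmid
        else:
            ehi = emid
    return 0.5 * (elo + ehi)
```

A realistic calculation must in addition handle the coupled $^{3}S_{1}$-$^{3}D_{1}$ channels, which is what FESSDE provides.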
baryon | mass(MeV) | baryon | mass(MeV) | meson | mass(MeV) | meson | mass(MeV)
---|---|---|---|---|---|---|---
$\Lambda_{c}^{+}$ | $2286.5$ | $\Sigma_{c}$ | $2455$ | $\pi^{\pm}$ | $139.6$ | $\rho$ | $775.5$
$\Xi_{c}^{+}$ | $2467.8$ | $\Xi_{c}^{\prime+}$ | $2575.6$ | $\pi^{0}$ | $135.0$ | $\omega$ | $782.7$
$\Xi_{c}^{0}$ | $2470.9$ | $\Xi_{c}^{\prime 0}$ | $2577.9$ | $\eta$ | $547.9$ | $\phi$ | $1019.5$
$\Omega_{c}^{0}$ | $2695.2$ | | | $\sigma$ | $600$ | |
Table 3: Masses of heavy baryons and light mesons pdg2010 . We use the
isospin-averaged values $m_{\Xi_{c}}=2469.3\mathrm{~{}MeV}$,
$m_{\Xi_{c}^{\prime}}=2576.7\mathrm{~{}MeV}$ and
$m_{\pi}=138.1\mathrm{~{}MeV}$ as inputs in the numerical analysis.
### IV.1 $\Lambda_{c}\Lambda_{c}$ and $\Xi_{c}\Xi_{c}$ systems
The total effective potential of $\Lambda_{c}\Lambda_{c}$ arises from the
$\sigma$ and $\omega$ exchanges. We plot it with $\Lambda=0.9\mathrm{~{}GeV}$
in Fig. 3 (a), from which we can see that the $\omega$ exchange is repulsive
while the $\sigma$ exchange is attractive. Because of the cancellation, the
total potential is too shallow to bind two $\Lambda_{c}$s. In fact, we fail to
find any bound solutions of $\Psi_{\Lambda_{c}\Lambda_{c}}^{[0,1]}$ even if we
take the deepest potential, with $\Lambda=0.9\mathrm{~{}GeV}$. In other words,
the loosely bound $\Lambda_{c}\Lambda_{c}$ molecular state, which would be to
some extent the heavy analogue of the famous H dibaryon Aerts:1983hy ;
Iwasaki:1987db ; Stotzer:1997vr ; Ahn:1996hw , does not exist.
For the $\Lambda_{c}\bar{\Lambda}_{c}$ system as shown in Fig. 3 (b), both
$\sigma$ and $\omega$ exchanges are attractive. They enhance each other and
lead to a very strong total interaction. From our results listed in Table 4,
the binding energies of the $\Lambda_{c}\bar{\Lambda}_{c}$ system could be
rather large. For example, when we increase the cutoff to
$\Lambda=1.10\mathrm{~{}GeV}$, the corresponding binding energy is
$142.19\mathrm{~{}MeV}$. The binding energies and RMS radii of this system are
very sensitive to the cutoff, which seems to be a general feature of systems
composed of a hadron and an antihadron.
(a) $V_{\Lambda_{c}\Lambda_{c}}^{[0,1]}$ with $\Lambda=0.9\mathrm{~{}GeV}$.
(b) $V_{\Lambda_{c}\bar{\Lambda}_{c}}^{[0,1]}$ with $\Lambda=0.9\mathrm{~{}GeV}$.
(c) $V_{\Xi_{c}\Xi_{c}}^{[1,1]}$ with $\Lambda=1.1\mathrm{~{}GeV}$.
(d) $V_{\Xi_{c}\Xi_{c}}^{[0,3]}$ with $\Lambda=1.0\mathrm{~{}GeV}$.
(e) $V_{\Xi_{c}\bar{\Xi}_{c}}^{[0,1]}$ with $\Lambda=1.0\mathrm{~{}GeV}$.
(f) $V_{\Xi_{c}\bar{\Xi}_{c}}^{[1,3]}$ with $\Lambda=1.0\mathrm{~{}GeV}$.
Figure 3: The potentials of $\Psi_{\Lambda_{c}\Lambda_{c}}$, $\Psi_{\Lambda_{c}\bar{\Lambda}_{c}}$, $\Psi_{\Xi_{c}\Xi_{c}}$ and $\Psi_{\Xi_{c}\bar{\Xi}_{c}}$. The spin-triplets have no $S-D$ mixing because of the cancellations of the coupling constants.
| $\Lambda$ (GeV) | E (MeV) | $r_{rms}$(fm) | | $\Lambda$ (GeV) | E (MeV) | $r_{rms}$(fm)
---|---|---|---|---|---|---|---
| | | | | 0.89 | 2.80 | 2.15
$\Psi_{\Lambda_{c}\Lambda_{c}}^{[0,1]}$ | | $-$ | | $\Psi_{\Lambda_{c}\bar{\Lambda}_{c}}^{[0,1(3)]}$ | 0.90 | 4.61 | 1.76
| | | | | 1.00 | 49.72 | 0.74
| | | | | 1.10 | 142.19 | 0.52
| 0.95 | 2.53 | 2.17 | | 1.01 | 0.14 | 5.58
$\Psi_{\Xi_{c}\Xi_{c}}^{[0,3]}$ | 1.00 | 7.41 | 1.41 | $\Psi_{\Xi_{c}\Xi_{c}}^{[1,1]}$ | 1.05 | 0.29 | 4.48
| 1.10 | 20.92 | 0.96 | | 1.10 | 0.35 | 4.62
| 1.20 | 36.59 | 0.78 | | 1.20 | 0.18 | 5.40
| 0.87 | 1.48 | 2.72 | | 0.90 | 1.24 | 2.92
$\Psi_{\Xi_{c}\bar{\Xi}_{c}}^{[0,1(3)]}$ | 0.90 | 4.12 | 1.78 | $\Psi_{\Xi_{c}\bar{\Xi}_{c}}^{[1,1(3)]}$ | 1.00 | 10.33 | 1.25
| 1.00 | 28.94 | 0.86 | | 1.10 | 31.80 | 0.83
| 1.10 | 82.86 | 0.60 | | 1.20 | 66.19 | 0.64
Table 4: Numerical results of the systems $\Lambda_{c}\Lambda_{c}$,
$\Lambda_{c}\bar{\Lambda}_{c}$, $\Xi_{c}\Xi_{c}$ and $\Xi_{c}\bar{\Xi}_{c}$,
where “$-$” means no bound state solutions. After neglecting the tensor force
terms, the results of spin-triplets are the same as those of spin-singlets.
$\Xi_{c}^{0}$ and $\Xi_{c}^{+}$ contain the $s$ quark and their isospin is
$I=1/2$. Besides the $\sigma$ and $\omega$ meson exchanges, the $\phi$ and
$\rho$ exchanges also contribute to the potentials for the
$\Xi_{c}\Xi_{c}(\bar{\Xi}_{c})$ systems. Figs. 3 (c) and (d) illustrate the
total potentials and the contributions from the light meson exchanges for
$\Psi_{\Xi_{c}\Xi_{c}}^{[1,1]}$ and $\Psi_{\Xi_{c}\Xi_{c}}^{[0,3]}$. For
$\Psi_{\Xi_{c}\Xi_{c}}^{[1,1]}$, the attraction arises from the $\sigma$
exchange. Because of the short-range repulsion provided by the $\phi$, $\rho$
and $\omega$ exchanges, the total potential has a shallow well at $r\approx
0.2\mathrm{~{}fm}$. For $\Psi_{\Xi_{c}\Xi_{c}}^{[0,3]}$, however, the $\phi$
exchange contributes almost nothing to the potential, and the $\rho$ exchange
is attractive, which cancels the repulsion of the $\omega$ exchange. The total
potential is about twice as deep as that of $\Psi_{\Xi_{c}\Xi_{c}}^{[1,1]}$.
In Table 4, one notices that the binding energy of
$\Psi_{\Xi_{c}\Xi_{c}}^{[1,1]}$ is only a few hundred $\mathrm{keV}$ when the
cutoff varies from $1.01\mathrm{~{}GeV}$ to $1.20\mathrm{~{}GeV}$. Moreover,
the RMS radius of this bound state is very large. So the state
$\Psi_{\Xi_{c}\Xi_{c}}^{[1,1]}$ is very loosely bound if it exists at all. The
$\Psi_{\Xi_{c}\Xi_{c}}^{[0,3]}$ bound state may also exist. Its binding energy
and RMS radius are $2.53\sim 36.59\mathrm{~{}MeV}$ and $2.17\sim
0.78\mathrm{~{}fm}$ respectively with $\Lambda=0.95\sim 1.20\mathrm{~{}GeV}$.
As for the $\Xi_{c}\bar{\Xi}_{c}$ systems, the potentials are very deep. The
contribution from the $\phi$ exchange is again negligible, as shown in Figs. 3
(e) and (f). We find four bound state solutions for these systems:
$\Psi_{\Xi_{c}\bar{\Xi}_{c}}^{[0,1]}$, $\Psi_{\Xi_{c}\bar{\Xi}_{c}}^{[0,3]}$,
$\Psi_{\Xi_{c}\bar{\Xi}_{c}}^{[1,1]}$ and
$\Psi_{\Xi_{c}\bar{\Xi}_{c}}^{[1,3]}$. Among them, the numerical results of
$\Psi_{\Xi_{c}\bar{\Xi}_{c}}^{[0,3]}$ and $\Psi_{\Xi_{c}\bar{\Xi}_{c}}^{[1,1]}$
are almost the same as those of $\Psi_{\Xi_{c}\bar{\Xi}_{c}}^{[0,1]}$ and
$\Psi_{\Xi_{c}\bar{\Xi}_{c}}^{[1,3]}$ respectively. The binding energies and
the RMS radii of these states are shown in Table 4. We can see that the
binding energy of $\Psi_{\Xi_{c}\bar{\Xi}_{c}}^{[0,1]}$ varies from
$1.48\mathrm{~{}MeV}$ to $82.86\mathrm{~{}MeV}$ whereas the RMS radius shrinks
from $2.72\mathrm{~{}fm}$ to $0.60\mathrm{~{}fm}$ as the cutoff increases from
$0.87\mathrm{~{}GeV}$ to $1.10\mathrm{~{}GeV}$. The situation of
$\Psi_{\Xi_{c}\bar{\Xi}_{c}}^{[1,3]}$ is qualitatively similar. Both states
may exist, but the binding energies appear a little large and the RMS radii
too small when one takes $\Lambda$ above $1.10\mathrm{~{}GeV}$.
### IV.2 $\Sigma_{c}\Sigma_{c}$, $\Xi^{\prime}_{c}\Xi^{\prime}_{c}$ and
$\Omega_{c}\Omega_{c}$ systems
For the $\Sigma_{c}\Sigma_{c}$ system, all the $\pi$, $\eta$, $\sigma$,
$\omega$ and $\rho$ exchanges contribute to the total potential. We give the
variation of the potentials with $r$ in Figs. 4 (a) and (b). For
$\Psi_{\Sigma_{c}\Sigma_{c}}^{[0,1]}$, the $\omega$- and $\rho$-exchange
potentials almost cancel, and the $\eta$ exchange gives a very small
contribution. So the total potential of this state mainly comes from the
$\pi$ and $\sigma$ exchanges, which account for the long- and medium-range
attraction respectively. There may exist a bound state
$\Psi_{\Sigma_{c}\Sigma_{c}}^{[0,1]}$; see Table 5.
But for the other spin-singlet, $\Psi_{\Sigma_{c}\Sigma_{c}}^{[2,1]}$, the
$\sigma$ exchange provides an attraction of only about $0.2\mathrm{~{}GeV}$
while the $\omega$ and $\rho$ exchanges give strong repulsion at short range,
$r<0.6\mathrm{~{}fm}$. We have not found any bound solutions for
$\Psi_{\Sigma_{c}\Sigma_{c}}^{[2,1]}$, as shown in Table 5. For the spin-
triplet state $\Psi_{\Sigma_{c}\Sigma_{c}}^{[1,3]}$, there exist bound state
solutions with binding energies between $0.11\mathrm{~{}MeV}$ and
$31.35\mathrm{~{}MeV}$ when the cutoff lies between $1.05\mathrm{~{}GeV}$ and
$1.80\mathrm{~{}GeV}$. This state is a mixture of ${}^{3}S_{1}$ and
${}^{3}D_{1}$ due to the tensor force in the potential. From Table 5, one can
see the $S$-wave percentage is more than $90\%$.
(a) $V_{\Sigma_{c}\Sigma_{c}}^{[0,1]}$ with $\Lambda=1.10\mathrm{~{}GeV}$.
(b) $V_{\Sigma_{c}\Sigma_{c}}^{[2,1]}$ with $\Lambda=1.0\mathrm{~{}GeV}$.
(c) $V_{\Sigma_{c}\bar{\Sigma}_{c}}^{[0,1]}$ with $\Lambda=1.0\mathrm{~{}GeV}$.
(d) $V_{\Sigma_{c}\bar{\Sigma}_{c}}^{[1,1]}$ with $\Lambda=1.0\mathrm{~{}GeV}$.
(e) $V_{\Sigma_{c}\bar{\Sigma}_{c}}^{[2,1]}$ with $\Lambda=1.0\mathrm{~{}GeV}$.
(f) $V_{\Xi_{c}^{\prime}\Xi_{c}^{\prime}}^{[1,1]}$ with $\Lambda=1.70\mathrm{~{}GeV}$.
(g) $V_{\Xi_{c}^{\prime}\bar{\Xi}_{c}^{\prime}}^{[0,1]}$ with $\Lambda=1.0\mathrm{~{}GeV}$.
(h) $V_{\Xi_{c}^{\prime}\bar{\Xi}_{c}^{\prime}}^{[1,1]}$ with $\Lambda=1.0\mathrm{~{}GeV}$.
(i) $V_{\Omega_{c}\Omega_{c}}^{[0,1]}$ with $\Lambda=1.0\mathrm{~{}GeV}$.
(j) $V_{\Omega_{c}\bar{\Omega}_{c}}^{[0,1]}$ with $\Lambda=1.0\mathrm{~{}GeV}$.
Figure 4: Potentials of $\Psi_{\Sigma_{c}\Sigma_{c}}$, $\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}$, $\Psi_{\Xi_{c}^{\prime}\Xi_{c}^{\prime}}$, $\Psi_{\Xi_{c}^{\prime}\bar{\Xi}_{c}^{\prime}}$, $\Psi_{\Omega_{c}\Omega_{c}}$ and $\Psi_{\Omega_{c}\bar{\Omega}_{c}}$.
| $\Lambda$ (GeV) | E (MeV) | $r_{rms}$ (fm) | | $\Lambda$ (GeV) | E (MeV) | $r_{rms}$ (fm) | $P_{S}$ : | $P_{D}$ (%)
---|---|---|---|---|---|---|---|---|---
| 1.07 | 1.80 | 2.32 | | | | | |
| 1.08 | 3.10 | 1.88 | | | | | |
$\Psi_{\Sigma_{c}\Sigma_{c}}^{[0,1]}$ | 1.10 | 6.55 | 1.44 | $\Psi_{\Sigma_{c}\Sigma_{c}}^{[0,3]}$ | | $\times$ | | |
| 1.20 | 42.95 | 0.78 | | | | | |
| 1.25 | 75.75 | 0.65 | | | | | |
| | | | | 1.05 | 0.11 | 5.94 | 98.11 | 1.89
$\Psi_{\Sigma_{c}\Sigma_{c}}^{[1,1]}$ | | | | $\Psi_{\Sigma_{c}\Sigma_{c}}^{[1,3]}$ | 1.47 | 2.03 | 2.48 | 94.21 | 5.79
| | $\times$ | | | 1.50 | 2.52 | 2.27 | 93.79 | 6.21
| | | | | 1.80 | 31.35 | 0.76 | 91.41 | 8.59
$\Psi_{\Sigma_{c}\Sigma_{c}}^{[2,1]}$ | | $-$ | | $\Psi_{\Sigma_{c}\Sigma_{c}}^{[2,3]}$ | | $\times$ | | |
Table 5: Numerical results of the $\Sigma_{c}\Sigma_{c}$ system, where the symbol “$\times$” means this state is forbidden and “$-$” means no solutions.
| One Boson Exchange | One Pion Exchange
---|---|---
| $\Lambda$(GeV) | E (MeV) | $r_{rms}$(fm) | $P_{S}$ : | $P_{D}$(%) | $\Lambda$(GeV) | E(MeV) | $r_{rms}$(fm) | $P_{S}$ : | $P_{D}$(%)
| 0.97 | 0.86 | 3.76 | | | | | | |
$\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}^{[0,1]}$ | 0.98 | 3.03 | 2.21 | | | | | | |
| 1.00 | 18.43 | 1.01 | | | | | $-$ | |
| 1.05 | 175.56 | 0.41 | | | | | | |
| 0.93 | 1.04 | 3.50 | 81.20 | 18.80 | 0.80 | 17.54 | 1.20 | 82.93 | 17.07
$\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}^{[0,3]}$ | 0.94 | 2.55 | 2.57 | 75.27 | 24.73 | 0.85 | 26.33 | 1.04 | 81.66 | 18.34
| 1.00 | 28.16 | 1.29 | 58.07 | 41.93 | 0.90 | 37.48 | 0.92 | 80.57 | 19.42
| 1.05 | 78.48 | 0.99 | 50.56 | 49.44 | 1.05 | 87.94 | 0.68 | 78.03 | 21.97
| 0.93 | 0.75 | 3.77 | | | | | | |
$\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}^{[1,1]}$ | 0.94 | 2.54 | 2.27 | | | | | $-$ | |
| 0.98 | 32.28 | 0.80 | | | | | | |
| 1.00 | 66.97 | 0.60 | | | | | | |
| 0.80 | 3.71 | 1.91 | 94.73 | 5.27 | 0.97 | 1.04 | 3.14 | 93.68 | 6.32
$\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}^{[1,3]}$ | 0.81 | 5.18 | 1.69 | 94.38 | 5.62 | 1.02 | 2.51 | 2.18 | 91.58 | 8.42
| 0.90 | 40.35 | 0.86 | 90.12 | 9.88 | 1.10 | 6.44 | 1.51 | 89.04 | 10.96
| 1.00 | 143.46 | 0.62 | 76.86 | 23.14 | 1.30 | 27.27 | 0.88 | 84.89 | 15.11
| 0.80 | 24.87 | 0.85 | | | 0.75 | 2.49 | 1.98 | |
$\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}^{[2,1]}$ | 0.85 | 49.30 | 0.67 | | | 0.80 | 5.95 | 1.38 | |
| 0.90 | 90.04 | 0.55 | | | 0.90 | 18.30 | 0.88 | |
| 0.95 | 149.66 | 0.46 | | | 1.10 | 72.23 | 0.51 | |
| 0.90 | 1.44 | 2.93 | 96.92 | 3.08 | | | | |
$\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}^{[2,3]}$ | 1.00 | 14.99 | 1.21 | 95.43 | 4.57 | | | $-$ | |
| 1.10 | 41.81 | 0.86 | 95.11 | 4.89 | | | | |
| 1.20 | 77.28 | 0.71 | 94.72 | 5.28 | | | | |
Table 6: Numerical results of the $\Sigma_{c}\bar{\Sigma}_{c}$ system.
Results from the full OBE potential and from the OPE alone are compared.
There exist bound state solutions for all six states of the
$\Sigma_{c}\bar{\Sigma}_{c}$ system. The potentials of the three spin-singlets
are plotted in Figs. 4 (c)-(e). The attraction that binds the baryonium mainly
comes from the $\rho$ and $\omega$ exchanges. These contributions are of
relatively short range, $r<0.6\mathrm{~{}fm}$. One may wonder whether the
annihilation of the heavy baryon and anti-baryon might play a role here. Thus
the numerical results for $\Sigma_{c}\bar{\Sigma}_{c}$ with strong short-range
attractions should be taken with caution. This feature differs greatly from
the dibaryon systems.
In Table 6, for comparison, we also present the numerical results with the
$\pi$ exchange only. It is very interesting to investigate whether the long-
range one-pion-exchange (OPE) potential alone is strong enough to bind the
baryonia and form loosely bound molecular states. There do not exist bound
state solutions for $\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}^{[0,1]}$ and
$\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}^{[1,1]}$ since the $\pi$ exchange is
repulsive. In contrast, the attractions from the $\pi$ exchange are strong
enough to form baryonium bound states for
$\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}^{[0,3]}$,
$\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}^{[1,3]}$ and
$\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}^{[2,1]}$. We notice that the $S-D$ mixing
effect for the spin-triplets mentioned above is stronger than that for the
$\Sigma_{c}\Sigma_{c}$ system.
| $\Lambda$ (GeV) | E (MeV) | $r_{rms}$ (fm) | | $\Lambda$ (GeV) | E (MeV) | $r_{rms}$ (fm) | $P_{S}$ : | $P_{D}$ (%)
---|---|---|---|---|---|---|---|---|---
| | | | | 0.95 | 1.22 | 3.03 | 97.88 | 2.12
| | | | | 0.98 | 2.44 | 2.29 | 97.45 | 2.55
$\Psi_{\Xi_{c}^{{}^{\prime}}\Xi_{c}^{{}^{\prime}}}^{[0,1]}$ | | $\times$ | | $\Psi_{\Xi_{c}^{{}^{\prime}}\Xi_{c}^{{}^{\prime}}}^{[0,3]}$ | 1.00 | 3.41 | 2.01 | 97.26 | 2.74
| | | | | 1.20 | 15.43 | 1.16 | 96.74 | 3.26
| | | | | 1.30 | 21.50 | 1.03 | 96.83 | 3.17
| 1.50 | 0.18 | 5.52 | | | | | |
| 1.65 | 1.24 | 3.08 | | | | | |
$\Psi_{\Xi_{c}^{{}^{\prime}}\Xi_{c}^{{}^{\prime}}}^{[1,1]}$ | 1.70 | 1.83 | 2.64 | $\Psi_{\Xi_{c}^{{}^{\prime}}\Xi_{c}^{{}^{\prime}}}^{[1,3]}$ | | | $\times$ | |
| 1.80 | 3.42 | 2.08 | | | | | |
| 1.90 | 5.58 | 1.74 | | | | | |
Table 7: Numerical results of the $\Xi_{c}^{\prime}\Xi_{c}^{\prime}$ system.
| One Boson Exchange | One Pion Exchange
---|---|---
| $\Lambda$ (GeV) | E (MeV) | $r_{rms}$ (fm) | $P_{S}$ : | $P_{D}$(%) | $\Lambda$ (GeV) | E (MeV) | $r_{rms}$ (fm) | $P_{S}$ : | $P_{D}$ (%)
| 0.96 | 0.40 | 4.57 | | | | | | |
| 0.99 | 3.22 | 2.00 | | | | | $-$ | |
$\Psi_{\Xi_{c}^{{}^{\prime}}\bar{\Xi}_{c}^{{}^{\prime}}}^{[0,1]}$ | 1.00 | 5.13 | 1.65 | | | | | | |
| 1.10 | 83.53 | 0.58 | | | | | | |
| 0.80 | 3.82 | 1.86 | 96.33 | 3.67 | 1.15 | 0.77 | 3.42 | 94.89 | 5.11
| 0.90 | 19.40 | 1.04 | 94.34 | 5.66 | 1.20 | 1.89 | 2.35 | 93.01 | 6.99
$\Psi_{\Xi_{c}^{{}^{\prime}}\bar{\Xi}_{c}^{{}^{\prime}}}^{[0,3]}$ | 1.00 | 59.74 | 0.74 | 90.03 | 9.97 | 1.40 | 12.69 | 1.10 | 88.10 | 11.90
| 1.05 | 90.87 | 0.66 | 86.20 | 13.80 | 1.50 | 22.91 | 0.88 | 86.44 | 13.56
| 0.80 | 14.13 | 1.01 | | | | | | |
$\Psi_{\Xi_{c}^{{}^{\prime}}\bar{\Xi}_{c}^{{}^{\prime}}}^{[1,1]}$ | 0.90 | 13.58 | 1.07 | | | | | $-$ | |
| 1.00 | 34.00 | 0.77 | | | | | | |
| 1.10 | 83.78 | 0.56 | | | | | | |
| 0.90 | 0.56 | 3.99 | 99.76 | 0.24 | | | | |
$\Psi_{\Xi_{c}^{{}^{\prime}}\bar{\Xi}_{c}^{{}^{\prime}}}^{[1,3]}$ | 1.00 | 7.53 | 1.41 | 99.59 | 0.41 | | | $-$ | |
| 1.10 | 22.97 | 0.94 | 99.58 | 0.42 | | | | |
| 1.20 | 43.80 | 0.76 | 99.58 | 0.42 | | | | |
Table 8: Comparison of the numerical results of the
$\Xi_{c}^{\prime}\bar{\Xi}_{c}^{\prime}$ system in the OBE model and the OPE
model.
The $\Xi_{c}^{\prime}\Xi_{c}^{\prime}(\bar{\Xi}_{c}^{\prime})$ systems are
similar to $\Xi_{c}\Xi_{c}(\bar{\Xi}_{c})$; the potentials are shown in Figs.
4 (f)-(h) and the results are listed in Tables 7 and 8. Among the six bound
states,
$\Psi_{\Xi_{c}^{\prime}\Xi_{c}^{\prime}}^{[1,1]}$ is the most interesting one.
As shown in Fig. 4 (f), the $\eta$ exchange does not contribute to the total
potential. The $\pi$ exchange is repulsive. So the dominant contributions are
from the $\sigma$, $\omega$, $\rho$ and $\phi$ exchanges, which lead to a deep
well around $r=0.6\mathrm{~{}fm}$ and a loosely bound state. When we increase
the cutoff from $1.50\mathrm{~{}GeV}$ to $1.90\mathrm{~{}GeV}$, the binding
energy of $\Psi_{\Xi_{c}^{\prime}{\Xi}_{c}^{\prime}}^{[1,1]}$ varies from
$0.18\mathrm{~{}MeV}$ to $5.58\mathrm{~{}MeV}$, and the RMS radius varies from
$5.52\mathrm{~{}fm}$ to $1.74\mathrm{~{}fm}$. These results support the
existence of such a loosely bound state. If we consider the $\pi$ exchange
alone, only the
$\Psi_{\Xi_{c}^{\prime}\bar{\Xi}_{c}^{\prime}}^{[0,3]}$ state is bound. The
percentage of the ${}^{3}S_{1}$ component is more than $86\%$ when
$1.15\mathrm{~{}GeV}<\Lambda<1.50\mathrm{~{}GeV}$ as shown in Table 8.
| $\Lambda$ (GeV) | E (MeV) | $r_{rms}$ (fm) | | $\Lambda$ (GeV) | E (MeV) | $r_{rms}$ (fm) | $P_{S}$ : | $P_{D}$ (%)
---|---|---|---|---|---|---|---|---|---
| 0.96 | 1.07 | 3.04 | | | | | |
| 0.98 | 2.67 | 2.08 | | | | | |
$\Psi_{\Omega_{c}\Omega_{c}}^{[0,1]}$ | 1.00 | 4.51 | 1.69 | $\Psi_{\Omega_{c}\Omega_{c}}^{[0,3]}$ | | $\times$ | | |
| 1.20 | 5.92 | 1.59 | | | | | |
| 1.70 | 19.88 | 1.15 | | | | | |
| 0.90 | 13.12 | 1.06 | | 0.80 | 6.92 | 1.53 | 99.64 | 0.06
| 0.97 | 4.34 | 1.70 | | 0.88 | 3.05 | 1.98 | 99.96 | 0.04
$\Psi_{\Omega_{c}\bar{\Omega}_{c}}^{[0,1]}$ | 1.00 | 5.01 | 1.62 | $\Psi_{\Omega_{c}\bar{\Omega}_{c}}^{[0,3]}$ | 1.00 | 9.77 | 1.23 | 99.90 | 0.10
| 1.10 | 20.96 | 0.94 | | 1.10 | 26.22 | 0.86 | 99.79 | 0.21
| 1.20 | 108.50 | 0.48 | | 1.20 | 47.23 | 0.72 | 99.53 | 0.47
Table 9: Numerical results of the $\Omega_{c}\Omega_{c}$ and
$\Omega_{c}\bar{\Omega}_{c}$ systems.
The $\Omega_{c}\Omega_{c}(\bar{\Omega}_{c})$ case is quite simple. Only the
$\eta$, $\sigma$ and $\phi$ exchanges contribute to the total potentials. The
shape of the potential of $\Psi_{\Omega_{c}\Omega_{c}}^{[0,1]}$ is similar to
that of $\Psi_{\Xi_{c}^{\prime}\Xi_{c}^{\prime}}^{[1,1]}$, and the binding
energy of this state is very small. For the spin-triplet
$\Omega_{c}\bar{\Omega}_{c}$ system, the $S$-wave percentage is more than
$99\%$; in other words, the $S-D$ mixing effect is tiny for this system.
We give a brief comparison of our results with those of Refs. Froemel:2004ea ;
JuliaDiaz:2004rf in Table 10. In Ref. Froemel:2004ea , Froemel et al. deduced
nucleon-hyperon and hyperon-hyperon potentials by scaling nucleon-nucleon
potentials. With nucleon-nucleon potentials from different models, they
discussed possible molecular states such as $\Xi_{cc}N$, $\Xi_{c}\Xi_{cc}$,
$\Sigma_{c}\Sigma_{c}$ etc. The second to eleventh columns of Table 10 show
the binding energies corresponding to the different models, while the last
column lists the relevant results of this work. One can see that the results
of Ref. Froemel:2004ea depend on the models while our results are sensitive
to the cutoff $\Lambda$.
models | Nijm93 | NijmI | NijmII | AV18 | AV$8^{\prime}$ | AV$6^{\prime}$ | AV$4^{\prime}$ | AV$X^{\prime}$ | AV$2^{\prime}$ | AV$1^{\prime}$ | This work
---|---|---|---|---|---|---|---|---|---|---|---
$[\Xi_{c}^{{}^{\prime}}\Xi_{c}^{{}^{\prime}}]_{I=0}$ | - | * | $71.0$ | $457.0$ | - | $0.7$ | $24.5$ | $9.5$ | $12.8$ | - | $1.22\sim 21.50$
$[\Sigma_{c}\Sigma_{c}]_{I=2}$ | $66.6$ | - | - | $41.1$ | - | - | - | - | - | $0.7$ | -
$[\Sigma_{c}\Sigma_{c}]_{I=1}$ | - | * | $53.7$ | - | - | - | $7.3$ | $2.8$ | $8.3$ | $0.7$ | $0.11\sim 31.35$
$[\Sigma_{c}\Sigma_{c}]_{I=0}$ | * | * | $285.8$ | * | $16.1$ | $10.8$ | $87.4$ | $53.3$ | $58.5$ | $0.7$ | $1.80\sim 75.75$
Table 10: Comparison of the binding energies of the
$\Xi_{c}^{\prime}\Xi_{c}^{\prime}$ and $\Sigma_{c}\Sigma_{c}$ systems in this
work with those in Ref. Froemel:2004ea . The unit is $\mathrm{MeV}$. “-” means
there is no bound state and “*” indicates an unrealistically deep binding
($1\sim 10\mathrm{~{}GeV}$).
## V Conclusions
The one boson exchange model is very successful in the description of the
deuteron, which may be regarded as a loosely bound molecular system of a
neutron and a proton. It is very interesting to extend the same framework to
investigate possible molecular states composed of a pair of heavy baryons.
With heavier masses and reduced kinetic energies, such systems are non-
relativistic. We expect the OBE framework to work also in the study of heavy
dibaryon systems.
On the other hand, one should be cautious when extending the OBE framework to
study the heavy baryonium system. The difficulty lies in the lack of reliable
knowledge of the short-range interaction due to the heavy baryon and anti-
baryon annihilation. However, there may exist a loosely bound heavy baryonium
state when one turns off the short-range interaction and considers only the
long-range one-pion-exchange potential. Such a case is particularly
interesting. This long-range OPE attraction may lead to a bump, cusp or some
enhancement structure in the heavy baryon and anti-baryon invariant mass
spectrum when they are produced in $e^{+}e^{-}$ annihilation or $B$ decay
processes.
In this work, we have discussed the possible existence of the
$\Lambda_{c}\Lambda_{c}(\bar{\Lambda}_{c})$, $\Xi_{c}\Xi_{c}(\bar{\Xi}_{c})$,
$\Sigma_{c}\Sigma_{c}(\bar{\Sigma}_{c})$,
$\Xi^{\prime}_{c}\Xi^{\prime}_{c}(\bar{\Xi}_{c}^{\prime})$ and
$\Omega_{c}\Omega_{c}(\bar{\Omega}_{c})$ molecular states. We consider both
the long range contributions from the pseudo-scalar meson exchanges and the
short and medium range contributions from the vector and scalar meson
exchanges.
Within our formalism, the heavy analogue of the H dibaryon
$\Psi_{\Lambda_{c}\Lambda_{c}}^{[0,1]}$ does not exist though its potential is
attractive. However, the $\Psi_{\Lambda_{c}\bar{\Lambda}_{c}}^{[0,1]}$ and
$\Psi_{\Lambda_{c}\bar{\Lambda}_{c}}^{[0,3]}$ bound states might exist. For
the $\Xi_{c}\Xi_{c}$ system, there exists a loosely bound state
$\Psi_{\Xi_{c}\Xi_{c}}^{[1,1]}$ with a very small binding energy and a very
large RMS radius around $5\mathrm{~{}fm}$. The spin-triplet state
$\Psi_{\Xi_{c}\Xi_{c}}^{[0,3]}$ may also exist. Its binding energy and RMS
radius vary rapidly with increasing cutoff $\Lambda$. The qualitative
properties of $\Psi_{\Xi_{c}\bar{\Xi}_{c}}^{[0,1]}$ and
$\Psi_{\Xi_{c}\bar{\Xi}_{c}}^{[1,3]}$ are similar to those of
$\Psi_{\Lambda_{c}\bar{\Lambda}_{c}}^{[0,1]}$. They could exist but the
binding energies and RMS radii are unfortunately very sensitive to the values
of the cutoff parameter.
For the $\Sigma_{c}\Sigma_{c}$, $\Sigma_{c}\bar{\Sigma}_{c}$,
$\Xi_{c}^{\prime}\Xi_{c}^{\prime}$, $\Xi_{c}^{\prime}\bar{\Xi}_{c}^{\prime}$,
$\Omega_{c}\Omega_{c}$ and $\Omega_{c}\bar{\Omega}_{c}$ systems, the tensor
forces lead to $S$-$D$ wave mixing. Probably only the $\Sigma_{c}\Sigma_{c}$
molecules $\Psi_{\Sigma_{c}\Sigma_{c}}^{[0,1]}$ and
$\Psi_{\Sigma_{c}\Sigma_{c}}^{[1,3]}$ exist. For the
$\Sigma_{c}\bar{\Sigma}_{c}$ system, the $\omega$ and $\rho$ exchanges are
crucial to form the bound states $\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}^{[0,1]}$,
$\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}^{[1,1]}$ and
$\Psi_{\Sigma_{c}\bar{\Sigma}_{c}}^{[2,3]}$. If one considers the $\pi$
exchange only for the $\Xi_{c}^{\prime}\bar{\Xi}_{c}^{\prime}$ system, there
may exist one bound state
$\Psi_{\Xi_{c}^{\prime}\bar{\Xi}_{c}^{\prime}}^{[0,3]}$.
The states $\Psi_{\Xi_{c}\Xi_{c}}^{[0,3]}$ and
$\Psi_{\Xi_{c}^{\prime}\Xi_{c}^{\prime}}^{[0,3]}$ are very interesting. They
are similar to the deuteron. In particular, $\Psi_{\Xi_{c}\Xi_{c}}^{[0,3]}$ and
$\Psi_{\Xi_{c}^{\prime}\Xi_{c}^{\prime}}^{[0,3]}$ carry the same quantum
numbers as the deuteron. For $\Psi_{\Xi_{c}\Xi_{c}}^{[0,3]}$ the $S$-$D$ mixing
is negligible, whereas for the deuteron this mixing raises the $D$-wave
probability to $4.25\%\sim 6.5\%$ Mach87 ; Rijken ; SprungEtc . The $D$-wave
probability of $\Psi_{\Xi_{c}^{\prime}\Xi_{c}^{\prime}}^{[0,3]}$ is $2.12\%\sim
3.17\%$.
The other two states $\Psi_{\Xi_{c}\Xi_{c}}^{[1,1]}$ and
$\Psi_{\Xi_{c}^{\prime}\Xi_{c}^{\prime}}^{[1,1]}$ are very loosely bound $S$
wave states. Recall that the binding energy of the deuteron is about
$2.22\mathrm{~{}MeV}$ HoukAndLeun with an RMS radius $r_{rms}\approx
1.96\mathrm{~{}fm}$ BerardAndSimon . The binding energy and RMS radius of
$\Psi_{\Xi_{c}^{\prime}\Xi_{c}^{\prime}}^{[1,1]}$ are quite close to those of
the deuteron. In contrast, the state $\Psi_{\Xi_{c}\Xi_{c}}^{[1,1]}$ is much
more loosely bound: its binding energy is only about a tenth of the deuteron's.
However, the binding mechanisms for the deuteron and the above four bound
states are very different. For the deuteron, the attraction is from the $\pi$
and vector-meson exchanges. But for these four states the $\pi$ exchange
contribution is very small; either the $\sigma$ (for $\Xi_{c}\Xi_{c}$) or the
vector-meson (for $\Xi_{c}^{\prime}\Xi_{c}^{\prime}$) exchange alone provides
enough attraction to bind the two heavy baryons.
Although very difficult, it may be possible to produce the charmed dibaryons
at RHIC and LHC. Once produced, the states $\Xi_{c}\Xi_{c}$ and
$\Xi_{c}^{\prime}\Xi_{c}^{\prime}$ are stable, since $\Xi_{c}$ and
$\Xi_{c}^{\prime}$ decay only via the weak or electromagnetic interaction,
with lifetimes around $10^{-15}\mathrm{~{}s}$ pdg2010 . On the other hand,
$\Sigma_{c}$ mainly decays into $\Lambda_{c}^{+}\pi$; however, its width is only
$2.2\mathrm{~{}MeV}$ pdg2010 . The relatively long lifetime of $\Sigma_{c}$
allows the formation of the molecular states
$\Psi_{\Sigma_{c}\Sigma_{c}}^{[0,1]}$ and
$\Psi_{\Sigma_{c}\Sigma_{c}}^{[1,3]}$. These states may decay into
$\Sigma_{c}\Lambda_{c}^{+}\pi$ or $\Lambda_{c}^{+}\Lambda_{c}^{+}\pi\pi$ if
the binding energies are less than $131\mathrm{~{}MeV}$ or $62\mathrm{~{}MeV}$
respectively. Another very interesting decay mode is $\Xi_{cc}N$ with the
decay momentum around one hundred MeV. In addition, a baryonium can decay into
a charmonium state plus light mesons; in most cases such a decay mode is
kinematically allowed. These decay patterns are characteristic and useful for
future experimental searches for these baryonium states.
Up to now, many charmonium-like “XYZ” states have been observed
experimentally. Some of them are close to the two charmed meson threshold.
Moreover, the Belle collaboration observed a near-threshold enhancement in the
$e^{+}e^{-}\to\Lambda_{c}\bar{\Lambda}_{c}$ ISR process, with mass and width
$m=(4634^{+8}_{-7}(stat.)^{+5}_{-8}(sys.))\mathrm{~{}MeV}/c^{2}$ and
$\Gamma_{tot}=(92^{+40}_{-24}(stat.)^{+10}_{-21}(sys.))\mathrm{~{}MeV}$
respectively Pakhlova:2008vn . The BaBar collaboration also studied correlated
$\Lambda_{c}\bar{\Lambda}_{c}$ production Aubert:2010yz . Our
investigation indicates there does exist strong attraction through the
$\sigma$ and $\omega$ exchange in the $\Lambda_{c}\bar{\Lambda}_{c}$ channel,
which mimics the correlated two-pion and three-pion exchange to some extent.
Recently, the ALICE collaboration observed the production of nuclei and
antinuclei in $pp$ collisions at the LHC Collaboration:2011yf . A significant
number of light nuclei and antinuclei, such as (anti)deuterons, (anti)tritons,
(anti)helium-3 and possibly (anti)hypertritons, were observed in a
high-statistics sample of over $350\mathrm{~{}M}$ events. Hopefully the heavy
dibaryons and heavy baryon-antibaryon pairs may also be produced at the LHC.
The heavy baryon-antibaryon pairs may also be studied at other facilities such
as PANDA, J-PARC and the Super-B factories in the future.
## Acknowledgments
We thank Profs. Wei-Zhen Deng, Jun He, Gui-Jun Ding and Jean-Marc Richard for
useful discussions. This project is supported by the National Natural Science
Foundation of China under Grants No. 11075004, No. 11021092, and the Ministry
of Science and Technology of China (No. 2009CB825200).
## References
* (1) S. K. Choi, S. L. Olsen, et al. [Belle Collaboration], Phys. Rev. Lett. 91, 262001 (2003); Phys. Rev. Lett. 94, 182002 (2005); Phys. Rev. Lett. 100, 142001 (2008); X. L. Wang, et al. [Belle Collaboration], Phys. Rev. Lett. 99, 142002 (2007); P. Pakhlov, et al. [Belle Collaboration], Phys. Rev. Lett. 100, 202001 (2008).
* (2) B. Aubert, et al. [BaBar Collaboration], Phys. Rev. Lett. 101, 082001 (2008); Phys. Rev. Lett. 95, 142001 (2005); Phys. Rev. Lett. 98, 212001 (2007).
* (3) D. E. Acosta, T. Affolder, et al. [CDF Collaboration], Phys. Rev. Lett. 93, 072001 (2004).
* (4) V. M. Abazov, et al. [D0 Collaboration], Phys. Rev. Lett. 93, 162002 (2004).
* (5) N. Brambilla et al., Eur. Phys. J. C 71, 1534 (2011) arXiv:1010.5827 [hep-ph].
* (6) E. S. Swanson, Phys. Rept. 429, 243 (2006).
* (7) A. D. Rujula, H. Georgi and S. L. Glashow, Phys. Rev. Lett. 38, 317 (1977).
* (8) N. A. Tornqvist, Z. Phys. C 61, 525 (1994).
* (9) E. S. Swanson, Phys. Lett. B 588, 189 (2004).
* (10) C. Y. Wong, Phys. Rev. C 69, 055202 (2004).
* (11) F. E. Close and P. R. Page, Phys. Lett. B 578, 119 (2004).
* (12) M. B. Voloshin, Phys. Lett. B 579, 316 (2004).
* (13) C. E. Thomas and F. E. Close, Phys. Rev. D 78, 034007 (2008).
* (14) Y. R. Liu, X. Liu, W. Z. Deng and S. L. Zhu, Eur. Phy. J. C 56 63 (2008); X. Liu, Y. R. Liu, W. Z. Deng and S. L. Zhu, Phys. Rev. D 77 094015 (2008); Phys. Rev. D 77, 034003 (2008).
* (15) X. Liu., Z. G. Luo, Y. R. Liu and S. L. zhu, Eur. Phys. J. C 61, 411 (2009); L. L. Shen, X. L. Chen, et al., Eur. Phys. J. C 70, 183 (2010); B. Hu, X. L. Chen, et al., Chin. Phys. C 35, 113 (2011); X. Liu and S. L. Zhu, Phys. Rev. D 80, 017502 (2009); X. Liu, Z. G. Luo and S. L. Zhu, arXiv:1011.1045 [hep-ph].
* (16) J. Ping, H. Pang, F. Wang and T. Goldman, Phys. Rev. C 65, 044003 (2002) [arXiv:nucl-th/0012011].
* (17) G. J. Ding, Phys. Rev. D 80, 034005 (2009); G. J. Ding, J. F. Liu and M. L. Yan, Phys. Rev. D 79, 054005 (2009).
* (18) X. Liu, Eur. Phys. J. C 54, 471 (2008).
* (19) F. Huang and Z. Y. Zhang, Phys. Rev. C 72, 068201 (2005); Y. R. Liu and Z. Y. Zhang, Phys. Rev. C 79, 035206 (2009); Q. B. Li, P. N. Shen, Z. Y. Zhang and Y. W. Yu, Nucl. Phys. A 683, 487 (2001).
* (20) Y. R. Liu and M. Oka, arXiv:1103.4624 [hep-ph].
* (21) Y. D. Chen and C. F. Qiao, arXiv:1102.3487 [hep-ph]
* (22) R. Machleidt, K. Holinde and C. Elster, Phys. Rept. 149, 1 (1987).
* (23) R. Machleidt, Phys. Rev. C 63, 024001 (2001).
* (24) M. M. Nagels, T. A. Rijken and J. D. de Swart, Phys. Rev. D 12 744 (1975); Phys. Rev. D 15 2547 (1977).
* (25) T. M. Yan, et al. Phys. Rev. D 46, 1148 (1992).
* (26) T. Barnes, N. Black, D. J. Dean and E. S. Swanson, Phys. Rev. C 60, 045202 (1999) [arXiv:nucl-th/9902068].
* (27) E. Klempt, F. Bradamante, et al. Phys. Rept. 368 (2002)
* (28) D. O. Riska and G. E. Brown, Nucl. Phys. A 679, 577 (2001).
* (29) X. Cao, B. S. Zou and H. S. Xu, Phys. Rev. C 81, 065201 (2010).
* (30) A. G. Abrashkevich, D. G. Abrashkevich, M. S. Kaschiev and I. V. Puzynin, Comput. Phys. Commun. 85 65-81 (1995).
* (31) K. Nakamura, et al. (Particle Data Group), J. Phys. G 37, 075021 (2010).
* (32) A. T. M. Aerts and C. B. Dover, Phys. Rev. D 28, 450 (1983).
* (33) Y. Iwasaki, T. Yoshie and Y. Tsuboi, Phys. Rev. Lett. 60, 1371 (1988).
* (34) R. W. Stotzer et al. [BNL E836 Collaboration], Phys. Rev. Lett. 78, 3646 (1997).
* (35) J. K. Ahn et al. [E224 Collaboration], Phys. Lett. B 378 (1996) 53.
* (36) F. Fröemel, B. Juliá-Díaz and D. O. Riska, Nucl. Phys. A 750, 337 (2005) [arXiv:nucl-th/0410034].
* (37) B. Juliá-Díaz and D. O. Riska, Nucl. Phys. A 755, 431 (2005) [arXiv:nucl-th/0405061].
* (38) R. de Tourreil, B. Rouben and D. W. L. Sprung, Nucl. Phys. A 242 445 (1975); M. Lacombe et al., Phys. Rev. C 21 861 (1980); R. Blankenbecler and R. Sugar, Phys. Rev. 142 (1966) 1051; R. B. Wiringa, R. A. Smith and T. L. Ainsworth, Phys. Rev. C 29 1207 (1984).
* (39) T. L. Houk, Phys. Rev. C 3 1886 (1971); C. van der Leun and C. Alderliesten, Nucl. Phys. A 380 261 (1982).
* (40) G. G. Simon, Ch. Schmitt and V. H. Walther, Nucl. Phys. A 364 (1981) 285; R. W. Bérard et al., Phys. Lett. B 47 355 (1973).
* (41) G. Pakhlova et al. [Belle Collaboration], Phys. Rev. Lett. 101, 172001 (2008) arXiv:0807.4458 [hep-ex].
* (42) B. Aubert et al. [BABAR Collaboration], Phys. Rev. D 82, 091102 (2010) arXiv:1006.2216 [hep-ex].
* (43) N. Sharma, for the ALICE Collaboration, arXiv:1104.3311 [nucl-ex].
## APPENDIX
### V.1 The functions $H_{0}$, $H_{1}$, $H_{2}$ and $H_{3}$
The functions $H_{0}$, $H_{1}$, $H_{2}$ and $H_{3}$ are defined as Ding

$H_{0}(\Lambda,m,r)=\frac{1}{mr}\left(e^{-mr}-e^{-\Lambda r}\right)-\frac{\Lambda^{2}-m^{2}}{2m\Lambda}e^{-\Lambda r},$

$H_{1}(\Lambda,m,r)=-\frac{1}{mr}\left(e^{-mr}-e^{-\Lambda r}\right)+\Lambda\frac{\Lambda^{2}-m^{2}}{2m^{3}}e^{-\Lambda r},$

$H_{2}(\Lambda,m,r)=\left(1+\frac{1}{mr}\right)\frac{e^{-mr}}{m^{2}r^{2}}-\left(1+\frac{1}{\Lambda r}\right)\frac{\Lambda}{m}\frac{e^{-\Lambda r}}{m^{2}r^{2}}-\frac{\Lambda^{2}-m^{2}}{2m^{2}}\frac{e^{-\Lambda r}}{mr},$

$H_{3}(\Lambda,m,r)=\left(1+\frac{3}{mr}+\frac{3}{m^{2}r^{2}}\right)\frac{e^{-mr}}{mr}-\left(1+\frac{3}{\Lambda r}+\frac{3}{\Lambda^{2}r^{2}}\right)\frac{\Lambda^{2}}{m^{2}}\frac{e^{-\Lambda r}}{mr}-\frac{\Lambda^{2}-m^{2}}{2m^{2}}\left(1+\Lambda r\right)\frac{e^{-\Lambda r}}{mr}.$ (45)
With Fourier transformations we have

$\frac{1}{m^{2}+\bm{Q}^{2}}\rightarrow\frac{m}{4\pi}H_{0}(\Lambda,m,r),\qquad\frac{\bm{Q}^{2}}{m^{2}+\bm{Q}^{2}}\rightarrow\frac{m^{3}}{4\pi}H_{1}(\Lambda,m,r),$

$\frac{\bm{Q}}{m^{2}+\bm{Q}^{2}}\rightarrow\frac{im^{3}\bm{r}}{4\pi}H_{2}(\Lambda,m,r),$

$\frac{Q_{i}Q_{j}}{m^{2}+\bm{Q}^{2}}\rightarrow-\frac{m^{3}}{12\pi}\left\{H_{3}(\Lambda,m,r)\left(3\frac{r_{i}r_{j}}{r^{2}}-\delta_{ij}\right)-H_{1}(\Lambda,m,r)\delta_{ij}\right\}.$ (46)
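As a quick numerical cross-check (a sketch of ours, not part of the paper; the sample values of $m$ and $r$ are assumptions), the $\Lambda\to\infty$ limit of $H_{0}$ should recover the plain Yukawa form $e^{-mr}/(mr)$, and at $\Lambda=m$ the regularization cancels the potential entirely:

```python
import math

def H0(Lam, m, r):
    """Regularized Yukawa radial function H0(Lambda, m, r) of Eq. (45)."""
    return (math.exp(-m * r) - math.exp(-Lam * r)) / (m * r) \
        - (Lam**2 - m**2) / (2.0 * m * Lam) * math.exp(-Lam * r)

def H1(Lam, m, r):
    """Companion function H1(Lambda, m, r) of Eq. (45)."""
    return -(math.exp(-m * r) - math.exp(-Lam * r)) / (m * r) \
        + Lam * (Lam**2 - m**2) / (2.0 * m**3) * math.exp(-Lam * r)

# Sample values (assumed): m ~ pion mass in fm^-1, r a typical separation in fm.
m, r = 0.7, 2.0
yukawa = math.exp(-m * r) / (m * r)

# 1) For Lambda >> m the form factor is removed and H0 -> e^{-mr}/(mr).
print(abs(H0(50.0, m, r) - yukawa))   # tiny
# 2) At Lambda = m the regularization cancels the potential entirely.
print(H0(m, m, r))                    # exactly 0
```

In the same limit $H_{1}\to-e^{-mr}/(mr)$, since the $\Lambda^{3}e^{-\Lambda r}$ term is exponentially suppressed.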
### V.2 The coupling constants of the heavy baryons and light mesons
In the quark model we have
$g_{\pi\Xi_{c}\Xi_{c}}=0,\;\;g_{\pi\Sigma_{c}\Sigma_{c}}=\frac{4}{3}g_{\pi qq}\frac{m_{\Sigma_{c}}}{m_{q}},\;\;g_{\pi\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=\frac{2}{3}g_{\pi qq}\frac{m_{\Xi^{\prime}_{c}}}{m_{q}},$

$g_{\eta\Lambda_{c}\Lambda_{c}}=0,\;\;g_{\eta\Xi_{c}\Xi_{c}}=0,\;\;g_{\eta\Sigma_{c}\Sigma_{c}}=\frac{4}{3}g_{\eta qq}\frac{m_{\Sigma_{c}}}{m_{q}},\;\;g_{\eta\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=-\frac{2}{3}g_{\eta qq}\frac{m_{\Xi_{c}^{\prime}}}{m_{q}},\;\;g_{\eta\Omega_{c}\Omega_{c}}=-\frac{8}{3}g_{\eta qq}\frac{m_{\Omega_{c}}}{m_{q}},$

$g_{\rho\Xi_{c}\Xi_{c}}=g_{\rho qq},\;\;f_{\rho\Xi_{c}\Xi_{c}}=-g_{\rho qq},\;\;g_{\rho\Sigma_{c}\Sigma_{c}}=2g_{\rho qq},\;\;f_{\rho\Sigma_{c}\Sigma_{c}}=2g_{\rho qq}\left(\frac{2}{3}\frac{m_{\Sigma_{c}}}{m_{q}}-1\right),\;\;g_{\rho\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=g_{\rho qq},\;\;f_{\rho\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=g_{\rho qq}\left(\frac{2}{3}\frac{m_{\Xi_{c}^{\prime}}}{m_{q}}-1\right),$

$g_{\omega\Lambda_{c}\Lambda_{c}}=2g_{\omega qq},\;\;f_{\omega\Lambda_{c}\Lambda_{c}}=-2g_{\omega qq},\;\;g_{\omega\Xi_{c}\Xi_{c}}=g_{\omega qq},\;\;f_{\omega\Xi_{c}\Xi_{c}}=-g_{\omega qq},\;\;g_{\omega\Sigma_{c}\Sigma_{c}}=2g_{\omega qq},\;\;f_{\omega\Sigma_{c}\Sigma_{c}}=2g_{\omega qq}\left(\frac{2}{3}\frac{m_{\Sigma_{c}}}{m_{q}}-1\right),\;\;g_{\omega\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=g_{\omega qq},\;\;f_{\omega\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=g_{\omega qq}\left(\frac{2}{3}\frac{m_{\Xi_{c}^{\prime}}}{m_{q}}-1\right),$

$g_{\phi\Xi_{c}\Xi_{c}}=g_{\phi qq},\;\;f_{\phi\Xi_{c}\Xi_{c}}=-g_{\phi qq},\;\;g_{\phi\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=g_{\phi qq},\;\;f_{\phi\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=g_{\phi qq}\left(\frac{2}{3}\frac{m_{\Xi_{c}^{\prime}}}{m_{q}}-1\right),\;\;g_{\phi\Omega_{c}\Omega_{c}}=2g_{\phi qq},\;\;f_{\phi\Omega_{c}\Omega_{c}}=2g_{\phi qq}\left(\frac{2}{3}\frac{m_{\Omega_{c}}}{m_{q}}-1\right),$

$g_{\sigma\Lambda_{c}\Lambda_{c}}=2g_{\sigma qq},\;\;g_{\sigma\Xi_{c}\Xi_{c}}=2g_{\sigma qq},\;\;g_{\sigma\Sigma_{c}\Sigma_{c}}=2g_{\sigma qq},\;\;g_{\sigma\Xi_{c}^{\prime}\Xi_{c}^{\prime}}=2g_{\sigma qq},\;\;g_{\sigma\Omega_{c}\Omega_{c}}=2g_{\sigma qq}.$ (47)
Because nucleons do not interact directly with the $\phi$ meson in the quark
model, we cannot obtain $g_{\phi qq}$ in this way. However, using $SU(3)$
flavor symmetry, we have $g_{\phi qq}=\sqrt{2}g_{\rho qq}$. Since $g_{\rho
qq}$ is related to $g_{\rho NN}$, all coupling constants of the heavy charmed
baryons to the $\phi$ can be expressed in terms of $g_{\rho NN}$.
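A few representative relations of Eq. (47) can be encoded directly; the sketch below is ours (the function name and the sample numerical inputs are assumptions, not values from the paper), and it checks the $SU(3)$ relation $g_{\phi qq}=\sqrt{2}g_{\rho qq}$ quoted above:

```python
import math

def quark_model_couplings(g_rho_qq, g_sigma_qq, m_Sigma_c, m_q):
    """A few representative quark-model relations from Eq. (47).
    All names and inputs here are illustrative."""
    g_phi_qq = math.sqrt(2.0) * g_rho_qq  # SU(3) flavor-symmetry relation
    return {
        "g_rho_Sigma_c":    2.0 * g_rho_qq,
        "f_rho_Sigma_c":    2.0 * g_rho_qq * (2.0 / 3.0 * m_Sigma_c / m_q - 1.0),
        "g_sigma_Lambda_c": 2.0 * g_sigma_qq,
        "g_phi_qq":         g_phi_qq,
        "g_phi_Omega_c":    2.0 * g_phi_qq,
    }

# Illustrative inputs (assumed, not the paper's fitted values).
c = quark_model_couplings(g_rho_qq=2.9, g_sigma_qq=3.65,
                          m_Sigma_c=2455.0, m_q=450.0)
print(c["g_phi_qq"] / 2.9)   # sqrt(2), from SU(3) flavor symmetry
```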
### V.3 The dependence of the binding energy on the cutoff
Finally, we plot the variations of the binding energies with the cutoff.
Figure 5: Dependence of the binding energy on the cutoff, shown in twelve
panels (a)-(l). In panels (f) and (i), only one-pion contributions are
included.
---
arxiv-papers | created 2011-04-21 | added 2024-09-04
License: CC BY 3.0. Authors: Ning Lee, Zhi-Gang Luo, Xiao-Lin Chen and Shi-Lin Zhu. Submitter: Zhi-Gang Luo. URL: https://arxiv.org/abs/1104.4257
---
arXiv:1104.4277
# Two dimensional dipolar scattering with a tilt
Christopher Ticknor Theoretical Division, Los Alamos National Laboratory, Los
Alamos, New Mexico 87545, USA
###### Abstract
We study two body dipolar scattering in two dimensions with a tilted
polarization axis. This tilt reintroduces the anisotropic interaction in a
controllable manner. As a function of this polarization angle we present the
scattering results in both the threshold and semi-classical regimes. We find a
series of resonances as a function of the angle, which allows the scattering to
be tuned. However, the character of the resonances varies strongly as a
function of the angle. Additionally, we study the properties of the molecular
bound states as a function of the polarization angle.
###### pacs:
34.20.Cf,34.50.-s,34.20.-b
## I introduction
There are exciting proposals based on dipolar gases in two-dimensional (2D)
geometries; such theories show that dipolar systems will lead to exotic and
highly correlated quantum systems bar . There have also been tremendous
advances in the production of ultracold polar molecules carr , especially at
JILA, where a group has produced ultracold KRb in a 1D optical lattice
fermi-q2d . The group used an electric field to align the molecular dipole
moments ($\vec{d}$) along the $\hat{z}$ direction, perpendicular to the plane
of motion ($\vec{\rho}$).
The tight trapping geometry and the dipolar interaction were used to inhibit
the molecules from reaching their short range interaction where they would
chemically react silke .
There are alternative molecular systems which will not chemically react, so
less restrictive configurations can be considered. For example, RbCs and NaCs
are chemically stable chemstab . One interesting possibility to control the
properties of these dipolar gases is to tilt the polarization axis into
the plane of motion. Such a scenario has been considered by many-body
theories: for example anisotropic superfluidity has been predicted aniso , 2D
dipolar fermions have been studied tilt-fermi , and few body dipolar complexes
have been investigated wigner-few . However little is known about the nature
of the scattering physics of such a 2D system with some in-plane polarization.
That is the aim of this work; we study the scattering properties of the 2D
dipolar system when the polarization is not fixed out of the plane of motion.
This reintroduces the anisotropy of the interaction controllably: there is a
preferred direction along which the dipoles can line up in a head-to-tail
fashion, which is energetically favored over the side-by-side configuration,
just as in 3D dipolar physics. But in this case the strength of the anisotropy
can be controlled directly by the polarization angle. For small angles there is
only a weakened repulsion in one direction, but for large angles (near $\pi/2$)
there is an attractive interaction.
Some recent work has aimed at understanding the scattering behavior of dipoles
in 2D and quasi-two dimensions (q2D). First, dipolar scattering in pure 2D was
studied in the threshold and semi-classical regimes, determining the limiting
behavior of such scattering ticknor2d . Then q2D was studied
ticknorq2d and more recently Ref. jose introduced an elegant method to solve
the full scattering problem. Other theories have focused on understanding
scattering and chemical reactions in q2D Goulven ; micheli and how to use the
electric field and trap to control the scattering rate. There has also been
some recent work on understanding the scattering and bound state structure of
2D layer dipolar systems layer ; layer2 and in a layered system with a tilted
polarization axis volo .
In the next section we look at the basic scattering of the system and offer
estimates of the scattering as a function of the polarization angle. Then we
look at the character of the scattering resonances. Finally we study the
molecules that can be formed, their binding energies, size, and shape as a
function of polarization angle.
## II Equations of motion
For this work we assume that the length scale of confinement is much smaller
than any other length. This effectively removes it from the scattering
problem. Realistically this length scale will be important, but as a first
study to provide useful estimates of the scattering this assumption is
justified. The Schrödinger equation for two dipoles in 2D is
$\left(-{\hbar^{2}\over 2\mu}\nabla^{2}_{\vec{\rho}}+d^{2}{1-3(\hat{d}\cdot\hat{\rho})^{2}\over\rho^{3}}\right)\psi={E}\psi,$ (1)
where the dipoles are polarized along
$\hat{d}=\hat{z}\cos(\alpha)+\hat{x}\sin(\alpha)$ with magnitude $d$ and
$\vec{\rho}=x\hat{x}+y\hat{y}$. We solve this equation by using partial wave
expansion: $\psi(\vec{\rho})=\sum_{m}\phi_{m}(k\rho)e^{im\phi}/\sqrt{\rho}$.
In this case the tilted polarization axis ruins the cylindrical symmetry,
meaning that different azimuthal symmetries are coupled together. Important
features of the interaction anisotropy can be distilled by looking at the
matrix elements:
$\langle m|1-3(\hat{d}\cdot\hat{\rho})^{2}|m^{\prime}\rangle=U_{mm^{\prime}}=\left(1-{3\over 2}\sin^{2}(\alpha)\right)\delta_{mm^{\prime}}-{3\over 4}\sin^{2}(\alpha)\delta_{m,m^{\prime}\pm 2}$ (2)
For $\alpha=0$, the system is totally repulsive and isotropic. Then as
$\alpha$ is increased the isotropic repulsive term is weakened in the $x$
direction but it is still full strength in the $y$ direction. This anisotropy
enters as couplings between channels with $m$ and $m\pm 2$. At
$\alpha_{c}=\sin^{-1}(\sqrt{2/3})\sim 54.7$ degrees, or $\alpha/\pi\sim 0.3$,
there is no barrier shielding the short range, and past this angle the
diagonal dipolar potential is attractive.
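As a concrete illustration (a sketch of ours; the function name is not from the paper), the matrix elements of Eq. (2) can be coded directly, and one can verify numerically that the diagonal part vanishes at the critical angle $\alpha_{c}=\sin^{-1}(\sqrt{2/3})$ while the $m\to m\pm 2$ coupling survives:

```python
import math

def U(m1, m2, alpha):
    """Matrix element <m|1 - 3 (d.rho)^2 |m'> of Eq. (2)."""
    val = 0.0
    if m1 == m2:
        val += 1.0 - 1.5 * math.sin(alpha) ** 2   # isotropic part
    if abs(m1 - m2) == 2:
        val += -0.75 * math.sin(alpha) ** 2       # anisotropic coupling
    return val

alpha_c = math.asin(math.sqrt(2.0 / 3.0))   # ~54.7 degrees
print(U(0, 0, 0.0))        # 1.0: purely repulsive and isotropic at alpha = 0
print(U(0, 0, alpha_c))    # ~0: the diagonal barrier vanishes at alpha_c
print(U(0, 2, alpha_c))    # -0.5: the m -> m+2 coupling that remains
```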
Using the matrix elements in Eq. (2) and the dipolar length $D=\mu
d^{2}/\hbar^{2}$ to rescale Eq. (1), we obtain a multi-channel radial
Schrödinger equation describing 2D dipolar scattering with tilted polarization
axis:
$\left(-{1\over 2}{d^{2}\over d\tilde{\rho}^{2}}+{m^{2}-1/4\over 2\tilde{\rho}^{2}}-{E\over E_{D}}\right)\phi_{m}(\tilde{\rho})=-\sum_{m^{\prime}}{U_{mm^{\prime}}\over\tilde{\rho}^{3}}\phi_{m^{\prime}}(\tilde{\rho}),$ (3)
where $\tilde{\rho}=\rho/D$ and the dipolar energy is
$E_{D}=\hbar^{6}/(\mu^{3}d^{4})$. To perform the scattering calculation we
start the integration at an inner wall $\rho_{0}/D$. We vary this parameter and use it to control the scattering
properties of the system; $\rho_{0}/D$ is used to tune the $m$=0 scattering or
the scattering length $a/D$. This distance signifies where the interaction
becomes more complicated through transverse modes or system specific
interactions becoming important. Thus a more sophisticated boundary condition
is required at this wall. However for our initial treatment, we only demand
the wave function be zero at $\rho_{0}/D$.
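A quick dimensional sanity check on the rescaling (a sketch; the numerical values of $\hbar$, $\mu$ and $d$ below are arbitrary assumed samples): with $D=\mu d^{2}/\hbar^{2}$, the dipolar energy $E_{D}=\hbar^{6}/(\mu^{3}d^{4})$ is exactly the natural scale $\hbar^{2}/(\mu D^{2})$ of Eq. (3):

```python
# The identity E_D = hbar^2/(mu D^2) is pure algebra, so the particular
# (assumed) numbers below do not matter beyond avoiding overflow.
hbar, mu, d = 1.0546e-34, 2.0e-25, 1.5e-29

D = mu * d**2 / hbar**2         # dipolar length
E_D = hbar**6 / (mu**3 * d**4)  # dipolar energy, as defined in the text

print(abs(E_D - hbar**2 / (mu * D**2)) / E_D)  # ~0: the two forms agree
```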
It is worthwhile to note that varying $\rho_{0}/D$ to parameterize the
scattering should be viewed as varying the electric field: since $D=\mu
d^{2}/\hbar^{2}$ and the induced dipole moment $d$ grows as the electric field
is increased, decreasing $\rho_{0}/D$ mimics an increasing electric field.
Additionally, the correspondence of $\rho_{0}/D$ to $a/D$ is unique, but owing
to the multi-channel nature of the problem it is complicated and must be found
numerically.
Before we present the full numerical scattering calculations, we discuss the
form of the free ($U_{mm^{\prime}}=0$) 2D wavefunctions for both scattering
and bound states. The scattering wavefunction is
$\phi_{m}(k\rho)=\cos(\delta_{m})f_{m}(k\rho)-\sin(\delta_{m})g_{m}(k\rho)$
where $\delta_{m}$ is the scattering phase shift for the $m$ partial wave and
$f_{m}$ ($g_{m}$) is the regular (irregular) free solution. In 2D these are
$\sqrt{k\rho}J_{m}(k\rho)$ ($\sqrt{k\rho}N_{m}(k\rho)$), where $J_{m}$
($N_{m}$) is a Bessel (Neumann) function and $k=\sqrt{2\mu E}$. If the system were bound
then the asymptotic wavefunction for the $m=0$ is
$\phi_{b}=\sqrt{\kappa\rho}K_{0}(\kappa\rho)$ where $K_{0}$ is the modified
Bessel function and in the large $\rho$ limit this decays as $e^{-\kappa\rho}$
with $\kappa=\sqrt{-2\mu E_{b}}$ and $E_{b}$ is the binding energy.
As in 3D, the scattering length is defined by when the zero energy
wavefunction is zero, $\psi(a)=\phi_{0}(a)=0$:
$\phi_{0}(a)=\cot(\delta_{0})f_{0}(a)-g_{0}(a)=0$ verhaar . Then the
scattering length can be computed with $a={2\over k}e^{{\pi\over
2}\cot(\delta_{0})-\gamma}$ where $\gamma\approx 0.577$ is the
Euler-Mascheroni constant. Conversely, the phase shift can be defined by the scattering
length: $\cot(\delta_{0})={2\over\pi}(\ln(ka/2)+\gamma)$ as the first term in
the effective range expansion verhaar . This definition of the scattering
length is effectively energy independent once in the threshold regime,
$Dk<1$.
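The two definitions above are mutual inverses, which is easy to check numerically (a sketch of ours; the sample values of $k$ and $a$ are assumptions):

```python
import math

EULER_GAMMA = 0.5772156649015329   # Euler-Mascheroni constant

def a_from_phase(k, cot_delta0):
    """2D scattering length from the m = 0 phase shift (formula in the text)."""
    return (2.0 / k) * math.exp(0.5 * math.pi * cot_delta0 - EULER_GAMMA)

def cot_delta0_from_a(k, a):
    """First term of the 2D effective-range expansion (formula in the text)."""
    return (2.0 / math.pi) * (math.log(k * a / 2.0) + EULER_GAMMA)

# Round trip: the phase shift built from a reproduces a.
k, a = 1.0e-3, 7.3   # assumed sample values in units of D
print(a_from_phase(k, cot_delta0_from_a(k, a)))   # recovers a = 7.3
```

Algebraically the round trip is exact: $\exp\!\big(\tfrac{\pi}{2}\cdot\tfrac{2}{\pi}(\ln(ka/2)+\gamma)-\gamma\big)=ka/2$, so multiplying by $2/k$ returns $a$.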
Using these wavefunctions to study the basic molecular properties, we start in
the large $a$ limit where the binding energy goes to zero. At moderate $\rho$,
the wavefunction is essentially the same whether the two particles are a near
zero energy scattering state or a loosely bound molecule. Using this fact, we
match log-derivatives of the near-zero energy scattering wavefunction in terms
of the scattering length from the short range to the long range asymptotic
bound wavefunction, $\phi_{b}(\kappa\rho)$. Using the small argument
expansions of both wavefunctions allows us to determine $\kappa$ in terms of
$a$: $\kappa_{a}=2e^{-\gamma}/a$, and the binding energy follows:
$E_{b}=-4\hbar^{2}e^{-2\gamma}/(2\mu a^{2})$ kanj . Using $\phi_{b}(\kappa_{a}\rho)$
will offer us many interesting analytic properties of the molecules in the
large $a$ limit. When compared to the full multi-channel numerical
calculation, these analytic results provide very good estimates of the
molecular properties for large values of $a/D$.
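The large-$a$ relations above are internally consistent, since $E_{b}=-\hbar^{2}\kappa_{a}^{2}/2\mu$ with $\kappa_{a}=2e^{-\gamma}/a$. A minimal check (our sketch, in units $\hbar=\mu=1$; the sample value of $a$ is assumed):

```python
import math

EULER_GAMMA = 0.5772156649015329

def kappa_from_a(a):
    """Decay constant of the 2D halo state, kappa_a = 2 e^{-gamma}/a."""
    return 2.0 * math.exp(-EULER_GAMMA) / a

def E_b(a, mu=1.0, hbar=1.0):
    """Large-a binding energy E_b = -4 hbar^2 e^{-2 gamma}/(2 mu a^2)."""
    return -4.0 * hbar**2 * math.exp(-2.0 * EULER_GAMMA) / (2.0 * mu * a**2)

# Consistency: E_b must equal -hbar^2 kappa_a^2 / (2 mu).
a = 12.0                        # assumed sample value of a/D
kappa = kappa_from_a(a)
print(abs(E_b(a) - (-(kappa**2) / 2.0)))   # ~0
```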
## III Scattering
Now we look at the scattering properties of this system as a function of the
polarization angle. To do this, we solve Eq. (3) with the Johnson Log-
derivative propagator johnson . We then extract the T-matrix, $T_{if}$, which
describes the scattering between channels $i$ and $f$. The total cross section
is $\sigma={1\over k}\sum_{if}|T_{if}|^{2}$. The elastic cross section for $m$
can be written as $\sigma_{m}={4\over k}\sin^{2}(\delta_{m})$, where
$\delta_{m}$ is the scattering phaseshift for the $m$ partial wave gu ; ajp_2d
. Sometimes, the most useful quantity is not the scattering cross section,
rather it is the dimensionless scattering rate $k\sigma$. Plotting this
quantity reveals the system independent or universal scattering
characteristics of the scattering dipolar system, as was shown in 3D by Refs.
universal ; NJP ; roudnev . We now present general trends of the 2D dipolar
system with a tilted polarization axis.
Figure 1: (Color Online) The Born approximation (BA) for fermions shown as a
function of $\alpha/\pi$. We have plotted the energy-independent ratio
$\sigma/\sigma_{BA}^{odd}(\alpha=0)$ as a function of $\alpha$ for
$Dk=10^{-1}$ (red open circle), $10^{-2}$ (blue +), $10^{-3}$ (green dash),
$10^{-4}$ (purple open diamond), and the BA is shown as a bold blue line.
In the threshold regime the Born approximation (BA) offers a good estimate of
the scattering for non-zero partial waves ticknor2d ; ajp_2d . This is most
useful for estimating the cross section for identical fermions. In this model
the dipoles are spinless and the way one models fermions or bosons is by
imposing the symmetric or anti-symmetric requirement on the spatial
wavefunction. This leads to the scattering properties of fermions being a sum
of only the odd partial waves and bosons a sum of only the even. For the case
of distinguishable particles, one sums up all of the partial waves. The BA
result for this system is:

$k\sigma_{BA}^{m\rightarrow m}={4(Dk)^{2}\over(m^{2}-{1\over 4})^{2}}\left(1-{3\over 2}\sin^{2}(\alpha)\right)^{2},$

$k\sigma_{BA}^{\pm 1\rightarrow\mp 1}={4(Dk)^{2}\over(m^{2}-{1\over 4})^{2}}\left({3\over 4}\sin^{2}(\alpha)\right)^{2},$

$k\sigma_{BA}^{m\rightarrow m+2}={4(Dk)^{2}\over\left((m+{1\over 2})(m+{3\over 2})\right)^{2}}\left({3\over 4}\sin^{2}(\alpha)\right)^{2}.$ (4)
There are two basic types of collisions given here: diagonal and off-diagonal
or $m$ changing. For the off-diagonal scattering, there is a special case of
p-wave collisions where $\pm 1$ goes to $\mp 1$ and has the same functional
form as the diagonal contribution. Then for the other off-diagonal terms
$m\rightarrow m+2$, there is a distinct form. For identical fermions, the BA
can be compared directly to the full scattering cross section without
worrying about a short-range phase shift. This comparison is made in Fig. 1
where the full cross section is divided by the $\alpha=0$ BA cross section.
Plotting this removes the energy dependence of the cross section. The BA is
shown as a thick blue line normalized by its $\alpha=0$ value, and the full
cross sections are shown for $Dk=10^{-4}$ (violet open diamonds), $10^{-3}$
(dashed green), $10^{-2}$ (black $+$), and $0.1$ (red open circles). The
relation between $Dk$ and the scattering energy is simply $E/E_{D}=(Dk)^{2}/2$.
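The BA rates of Eq. (4) are simple enough to tabulate directly. The sketch below (our code, with assumed sample values of $Dk$) checks two limiting features noted in the text: the diagonal rate vanishes at the critical angle, and the $m$-changing rates vanish at $\alpha=0$:

```python
import math

def k_sigma_BA_diag(m, Dk, alpha):
    """Dimensionless diagonal rate k*sigma for m -> m, Eq. (4)."""
    return 4.0 * Dk**2 / (m**2 - 0.25)**2 * (1.0 - 1.5 * math.sin(alpha)**2)**2

def k_sigma_BA_up2(m, Dk, alpha):
    """Dimensionless off-diagonal rate for m -> m + 2, Eq. (4)."""
    return 4.0 * Dk**2 / ((m + 0.5) * (m + 1.5))**2 \
        * (0.75 * math.sin(alpha)**2)**2

Dk = 1.0e-2                                   # assumed sample value
alpha_c = math.asin(math.sqrt(2.0 / 3.0))     # critical angle
print(k_sigma_BA_diag(1, Dk, 0.0))       # p-wave rate for aligned dipoles
print(k_sigma_BA_diag(1, Dk, alpha_c))   # ~0: diagonal BA vanishes at alpha_c
print(k_sigma_BA_up2(1, Dk, 0.0))        # 0: no m-changing collisions at alpha=0
```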
In Fig. 1 the agreement is good over the full range of $\alpha$, especially at
small $k$. But it is worth noticing that for $\rho_{0}/D=0.01$ there are two
resonances as $\alpha/\pi$ approaches $1/2$. These are most clearly seen at
$Dk=0.1$. For smaller values of $k$ the resonances are narrow, and the regions
where the cross sections deviate from the BA become increasingly small.
Figure 2: (Color Online) The total and inelastic $\sigma/\sigma_{SC}$ as a
function of $\alpha$ is plotted for four different values of $Dk$: 10 (black
solid line), 35 (red dashed), 53 (blue circles), and 80 (green +). The
inelastic data correspond to collisions in which $m$ changes; in this case
they make up a significant fraction of the scattering.
Alternatively, in the high energy regime we can estimate the cross section
with the Eikonal Approximation ticknor2d : $\sigma_{SC}={4\over k}\sqrt{\pi
Dk}$. This offers a good estimate of the scattering cross section. Dividing
the cross section by $\sigma_{SC}$ removes the energy dependence of the
scattering. In Fig. 2 we have plotted both the total and inelastic
($m$-changing) cross sections over $\sigma_{SC}$. The different energies are
$Dk$=10 (black solid line), 35 (red
dashed), 53 (blue circles), and 80 (green +). This estimate is given for the
distinguishable case; for bosons or fermions $\sigma$ will oscillate about
$\sigma_{SC}$ ticknor2d ; jose . Notice that the elastic scattering never
turns off even though there is no diagonal interaction at $\alpha_{c}/\pi\sim
0.30$. This shows that the scattering is made up of second-order processes:
even though there is no diagonal interaction, there is a significant diagonal
scattering contribution because of the off-diagonal channel couplings. As for
the shape of the curve, the total scattering rate dips to about 40$\%$ of its
original value and then increases up to about 65$\%$. The inelastic rate
starts at zero and quickly climbs until
$\alpha/\pi>0.2$, but after that it only slightly increases. For the lower end
of this regime ($k=10$) scattering, there are still noticeable resonances in
the scattering when a single partial wave makes up a large percentage of the
scattering.
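The semiclassical estimate is a one-liner; as a sketch (our code, taking the quoted formula $\sigma_{SC}=(4/k)\sqrt{\pi Dk}$ at face value), the dimensionless rate $k\sigma_{SC}=4\sqrt{\pi Dk}$ depends only on $Dk$:

```python
import math

def k_sigma_eikonal(Dk):
    """Dimensionless semiclassical rate, k*sigma_SC = 4*sqrt(pi*Dk)."""
    return 4.0 * math.sqrt(math.pi * Dk)

# Multiplying sigma_SC = (4/k)*sqrt(pi*D*k) by k leaves a universal
# function of the single dimensionless parameter Dk.
for Dk in (10.0, 35.0, 53.0, 80.0):   # the values used in Fig. 2
    print(Dk, k_sigma_eikonal(Dk))
```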
Figure 3: (Color Online) The scattering rate, $k\sigma_{0}$, as a function of
$\rho_{0}/D$ at three different angles: a) $\alpha/\pi=0.25$, b)
$\alpha/\pi=0.35$, and c) $\alpha/\pi=0.50$ at $Dk=10^{-2}$ (black). Notice
that these resonances for small angles are narrow and fewer in number than the
case where polarization is in-plane. In that case there are many resonances
and they are wider in $\rho_{0}/D$.
With these simple estimates of the scattering magnitude in hand, we now move
to study the impact of the tilted polarization axis on the scattering for
bosons or distinguishable dipoles, where there is the $m=0$ contribution
relaying information about the short range scattering. For fermions, the
short-range is strongly shielded, and only when one considers specific cases
does the scattering become more involved. For this reason, we are more
interested in the bosonic scattering and general long-range behavior and leave
the case specific fermionic scattering for the future.
As a first look at this scattering behavior, we examine the scattering rate as
a function of $\rho_{0}/D$ at three different angles: a) $\alpha/\pi=0.25$, b)
$\alpha/\pi=0.35$, and c) $\alpha/\pi=0.50$. In Fig. 3 a) the resonances which
occur as $\rho_{0}/D$ is decreased are narrow. Only a few exist because there
is a dipolar barrier to the scattering and thus the inter-channel couplings at
short range must be strong enough to support the bound state. As the
polarization angle is increased, the resonances become wider and more
frequent. As the barrier is turned off, ultimately becoming an attractive
potential, we see more frequent and wider resonances. These
resonances are much like the long range resonances seen in 3D You ; CTPR ; NJP
; universal ; roudnev ; kanj2 .
Figure 4: (Color Online) Energy dependence of the scattering rate,
$k\sigma_{0}$ and the binding energies are shown as a function of $\rho_{0}/D$
for $\alpha/\pi=0.35$. The curves are for $Dk=10^{-3}$ (green with circles),
$10^{-2}$ (black), $10^{-1}$ (dashed red) and $1$ (blue dotted). The binding
energies are also shown for the same $\rho_{0}/D$. Note that the cross sections and
binding energy go to zero simultaneously, and that the width of the
scattering resonance depends strongly on energy.
To study this energy dependence of the scattering further, we look at a
particular angle and vary the energy. In Fig. 4 we have plotted the scattering
rate ($\alpha/\pi=0.35$) in the upper panel at four values of $Dk$: $10^{-3}$
(green with circles), $10^{-2}$ (black), $10^{-1}$ (dashed red) and $1$ (blue
dotted). In the lower panel we have plotted the binding energy. There are a
few points to be made here about the strong energy dependence of the
scattering. First, the peak of the resonance shifts noticeably as the energy
is lowered. Second, the width of the resonance becomes more narrow as the
energy is decreased. Third, as the binding energy goes to zero, the
scattering rate goes to zero; this is in contrast to 3D. Fourth, the
$|m|>0$ resonances are very narrow, and they bind tightly as $\rho_{0}/D$ is
decreased. In these plots these resonances are hard to see because they are so
narrow. They are most easily found by looking at the binding energy where
there are steep lines.
It is worth commenting on the relationship between the scattering length and
the cross section and how these two quantities relate to the binding energy.
First, for identical bosons in 3D, the cross section is $\sigma\sim 8\pi
a^{2}$, but for $ka\gg 1$ it saturates at $8\pi/k^{2}$. In the large $a$ limit,
the binding energy goes as $\left(-{\hbar^{2}\over 2\mu a^{2}}\right)$, and
the maximum of $a$ corresponds to the maximum of $\sigma$. 2D is very
different. Consider $a\rightarrow\infty$: the scattering cross
section goes to zero. This is most easily seen from the effective range
expansion, $\cot(\delta)\propto\ln(ka)\rightarrow\infty$. This leads to
$\delta\sim 0$ and $\sigma\sim 0$ when $a$ is very large. The maximum of the
scattering cross section occurs at $\delta=\pi/2$, and this happens when
$ak=2e^{-\gamma}\sim 1.12$.
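The location of this maximum can be checked numerically. A minimal sketch, assuming the standard low-energy 2D s-wave form $\cot\delta=(2/\pi)[\ln(ka/2)+\gamma]$ (consistent with the proportionality quoted above, but the precise normalization is an assumption here):

```python
import math

gamma = 0.5772156649015329  # Euler-Mascheroni constant

def cot_delta(ka):
    # assumed low-energy 2D s-wave expansion: cot(delta) = (2/pi)[ln(ka/2) + gamma]
    return (2.0 / math.pi) * (math.log(ka / 2.0) + gamma)

# cot(delta) vanishes (delta = pi/2, maximal cross section) at ka = 2 e^{-gamma}
ka_max = 2.0 * math.exp(-gamma)
print(round(ka_max, 2))  # 1.12, the value quoted in the text
```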
Figure 5: (Color Online) The scattering rate, $k\sigma_{0}$, and $a^{2}E_{b}$
are shown as a function of $a$ for many values of $\alpha$. The different
energies are: $Dk=10^{-3}$ (green), $10^{-2}$ (black), $10^{-1}$ (red) and $1$
(blue). Cyan open circles are the effective range at each energy. The energy
dependence of the scattering rate is seen simply in where the scattering rate
peaks, at $ka=2e^{-\gamma}\sim 1.12$. The binding energies times $a^{2}$ are
also shown for all resonances; the blue dashed line is the universal limit,
$\sim 0.63$. Only for $Dk=1$ does the effective range not give a good
estimate of the scattering rate.
A way to clearly demonstrate the energy dependent behavior is shown in Fig. 5,
where we have replotted the scattering rate and binding energy as a function
of $a/D$, not $\rho_{0}/D$. We have replotted all the scattering data (i.e.
many different values of $\alpha$) as a function of $a/D$ for many different
values of $Dk$: $10^{-3}$ (green), $10^{-2}$ (black), $10^{-1}$ (red) and $1$
(blue). The binding energies are now plotted as $a^{2}E_{b}$, so that
when the energies become universal they go to a constant value of
$\sim-0.63$. Replotting the data this way allows us to observe clearly
several points. First, the scattering rate is maximum when
$ka=2e^{-\gamma}\sim 1.12$; this is clear from the four different energies
plotted. It is also clear that the 2D system has strong energy dependence, and
that the scattering rate goes to zero in the large $a$ limit.
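The universal constant $\sim 0.63$ quoted above is numerically consistent with $2e^{-2\gamma}$, suggesting the universal 2D dimer energy takes the form $E_{b}=-2e^{-2\gamma}/a^{2}$ in these units (this identification is an inference from the numerical match, not stated in the text):

```python
import math

gamma = 0.5772156649015329  # Euler-Mascheroni constant
const = 2.0 * math.exp(-2.0 * gamma)
print(round(const, 2))  # 0.63, matching the quoted universal limit
```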
From this plot we can understand why there is such strong energy dependence to
the width of the resonances as a function of $\rho_{0}/D$. $a/D$ is energy
independent once one is in the threshold regime and depends only on
$\rho_{0}/D$. We know that the scattering rate is zero when
$a\rightarrow\infty$ and that it is maximum when $ka\sim 1.12$. Therefore as
energy is decreased, the maximum and minimum of the scattering rate approach
each other in $a/D$ (Fig. 5) or $\rho_{0}/D$ (Fig 4).
The effective range expansion gives a very good estimate of the scattering
rate and therefore the phase shift at low $k$. For $Dk=1$ (blue), we
are leaving the threshold regime, and the effective range description breaks
down.
Moving to the binding energies, we see that $a^{2}E_{b}$ (black circles)
converges to the universal value (blue dashed line) for $a/D>10$; below that it
is strongly system dependent. We also see that at small $a/D$ the binding
energies deviate from the universal trend and $a^{2}E_{b}$ varies widely. In
this figure we have only plotted the binding energies which were numerically
found. Going beyond $a/D$=100 is both computationally challenging and only
returns the universal binding energy.
## IV Molecules
In this system the molecules have widely varying properties depending on the
polarization angle. To study them more closely we pick six angles to explore:
$\alpha/\pi$=0.25, 0.275, 0.3, 0.35, 0.4, and 0.5. For $\alpha/\pi$ smaller
than 0.2, there is very little variation in the scattering, and there are no bound
states for the values of $\rho_{0}/D$ we consider. The first important point
is that for a given angle the properties are robust. This is demonstrated by
obtaining the molecular energies and wavefunctions for the first two $m=0$
resonances for each angle. Then we determine the values of $\rho_{0}/D$ at
each resonance which result in a set of chosen scattering lengths. We pick 40
different scattering lengths between 0.1 and 100$D$ that are found at each of
the first 2 resonances for each of the 6 angles.
This idea of using $a/D$ as the control parameter was used in 3D dipolar
scattering to study three body recombination of dipoles 3body . That work
showed that using the scattering length to characterize the 2-body system in
the calculations, even outside the $a\gg D$ regime, worked well at revealing
universal characteristics of three body recombination.
Figure 6: (Color Online) The binding energies for the first and second $m=0$
resonances at $\alpha/\pi$= 0.25, 0.275, 0.3, 0.35, 0.4, and 0.5. The
universal 2D binding energy at large $a/D$ is shown as a blue dashed line.
In Fig. 6 we plot the binding energies of the molecules for the first (black
line) and second (red circle) resonance for all values of $\alpha$ considered.
For $\alpha/\pi=0.25$ the molecules are most tightly bound at small $a/D$, and
at $\alpha/\pi=0.5$ they are most loosely bound, as expected; there is
roughly two orders of magnitude difference in the binding energies between these
two extreme cases. For the tightly bound case the binding energies plateau as
$a/D$ is lowered. This energy corresponds to the size of the hard sphere and
therefore a minimum size of the molecule. In this case there is a strong
dipolar barrier and it is the inter-channel couplings that form the attractive
short range region where the molecule is found. In contrast for
$\alpha/\pi=0.5$ there is an attractive dipolar term for the $m=0$ case and
the molecules are only mildly multi-channel objects. This will be explained
below.
As $a/D$ is increased, for all of the polarization angles, the molecules become
more loosely bound, and the binding energies are strongly system dependent. Only for
relatively large $a/D$, say 10, do the binding energies truly resemble the
universal value. We have plotted the universal binding energy as a blue dashed
line.
Figure 7: (Color Online) (a) The size of the molecules are shown as a function
of $a/D$ for various angles for both the first (black) and second (red
circles) resonance. The universal form of the size is shown as a dashed blue
line. (b) The partial wave population is shown (first resonance only), for all
six angles as a function of $a/D$. The extreme angles are shown as filled
circles ($\alpha/\pi=0.25$) and dashed lines ($\alpha/\pi=0.5$) for partial
waves $m=0$ (black), $|m|=2$ (red) and $|m|=4$ (blue).
We now consider the size and shape of the molecules. First in Fig. 7 (a) we
look at the expectation value of the molecular size:
$\langle\psi|\rho|\psi\rangle$ as a function of scattering length. All values
of $\alpha$ are plotted for both the first (black line) and second (red
circle) resonance. For $\alpha/\pi=0.25$ and small $a/D$ the molecules are
very small, $\langle\rho\rangle\sim 0.05D$; this is roughly the size of the
hard sphere when the first bound state is captured. In contrast, for
$\alpha/\pi=0.5$ the molecule is about $\langle\rho\rangle\sim 1D$ even for
$a/D\sim 0.1$. In the large $a/D$ limit we find that the size of the molecule
goes to $\pi^{2}e^{\gamma}a/32\sim 0.55a$ (blue dashed) which was obtained
from $\phi_{b}(\kappa_{a}\rho)$. Again we find that the size of the molecules
depends on the polarization until $a/D>10$.
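The numerical coefficient quoted for the universal molecular size can be verified directly:

```python
import math

gamma = 0.5772156649015329  # Euler-Mascheroni constant
coeff = math.pi**2 * math.exp(gamma) / 32.0  # pi^2 e^gamma / 32
print(round(coeff, 3))  # 0.549, i.e. <rho> ~ 0.55 a as stated
```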
Now we look at the shape of the molecules shown in Fig. 7 (b). This is done by
considering the partial wave population
$n_{m}=\langle\phi_{m}|\phi_{m}\rangle$ as a function of scattering length for
each angle. The extreme angles of $\alpha/\pi=0.25$ and $\alpha/\pi=0.5$ are
shown as filled circles and dashed lines for $m=0$ (black), $|m|=2$ (red) and
$|m|=4$ (blue).
For $\alpha/\pi=0.25$ and small $a/D$ the molecules are strongly aligned along
the polarization axis behind the dipolar barrier, which is why they are so
small and highly aligned. This is seen in the fact that the largest
contribution is from $|m|=2$ (red circles), and $|m|=4$ (blue circles) makes up
nearly 10$\%$ of the partial wave population at small scattering length. This
reflects the fact that the dipoles sit behind the dipolar barrier and the
inter-channel couplings are the origin of the molecule. The anisotropy at
small $a/D$ is still true for $\alpha/\pi=0.5$, but the $m$=0 contribution is
nearly 80$\%$. This strong contrast arises because the attractive dipolar
interaction in the $m=0$ channel requires only a slight amount of inter-
channel coupling to form a bound state. For the 3D case, the $l=0$ molecule is
made up of about 60$\%$ s-wave and the rest is essentially d-wave 3body .
It is important to notice that at large $a/D$ the partial wave populations go
to very similar values. The $m=0$ contribution dominates and all other
contributions become small. To better understand the shape of the molecules we
plot the radial weighted molecular densities.
Figure 8: (Color Online) $\rho|\psi(\vec{\rho})|^{2}$ for: $\alpha/\pi=$ a)
0.25, b) 0.35, c) 0.5 for $a/D$ (i) 0.1, (ii) 1, (iii) 10 and (iv) 100. The
contours indicated are drawn every 20$\%$ of the maximum value, with the largest
80$\%$ contour being drawn as red. The solid grey circle in the middle is the
hard sphere short range interaction. The scale changes for every plot on the
left (i, ii), in contrast the scale is the same for the plots on the right
(iii, iv) when the system is in the large $a/D$ regime.
In Fig. 8 we have plotted the radial weighted molecular densities:
$\rho|\psi(\vec{\rho})|^{2}=|\sum_{m}u_{m}(\rho)e^{im\phi}|^{2}$ for
$\alpha/\pi=$ a) 0.25, b) 0.35, c) 0.5 (top to bottom) for $a/D$ of (i) 0.1,
(ii) 1, (iii) 10, and (iv) 100 (left to right). These densities are generated
from the second resonance. The first resonance wavefunctions look the same,
except their inner hard core is larger and overtakes the inner oscillations.
The scale changes for the plots on the left.
This plot clearly shows the change in both shape and size of the molecules as
both $a/D$ and $\alpha/\pi$ are changed. Now starting at $a/D=0.1$ and
$\alpha/\pi$=0.25 (ai) we see that the molecule is very small and highly
anisotropic. The density is along the polarization axis, $x$, and very tightly
bound against the hard core. In fact its spatial extent barely reaches
beyond the hard core in the $y$ direction.
Then as the angle is increased at fixed $a/D$, the size of the molecule grows and the
anisotropy is softened. Observe the change in scale in both (bi) and (ci). In
(ci), the molecule is larger ($\sim 1D$), and still aligned along the
polarization axis. Additionally, compare in (ai) and (ci) the extent of
the density with the width in $y$ of the hard core.
Now for (a), (b), and (c) consider increasing $a/D$. The size of the molecules
gets larger and more isotropic. In (iv) for all angles the system is
isotropic except for a small region near the hard core; the bulk of the
radial weighted density is isotropically distributed at large $\rho$. This is
why the size and shape of the molecules are universal in the large $a/D$
limit.
## V Conclusions
In this paper we have studied the scattering properties of a pure 2D
dipolar system when the polarization can tilt into the plane of motion. We
have shown how the tilt angle impacts the scattering in both the threshold and
semi-classical regimes. We then studied the character of the scattering
resonances generated by altering $\rho_{0}/D$ or electric field as a function
of the polarization angle. We found that at small angles the systems gained
bound states which produced narrow resonances. This is because of the dipolar
barrier. We also found when the polarization is entirely in the plane of
motion, the resonances are frequent and wide because of the partially
attractive potential.
We studied the molecular system generated by the tilt of the polarization. We
showed that at large $a/D$ the molecules have a universal shape independent of
polarization angle, but at small $a/D$ we have found that the molecules have a
wide range of properties which strongly depend on polarization angle. Future
work on this topic will more fully consider a realistic system, including the
effects of confinement, fermionic dipoles, and layered systems.
###### Acknowledgements.
The author gratefully acknowledges support from the Advanced Simulation and
Computing Program (ASC) through the N. C. Metropolis Fellowship and LANL which
is operated by LANS, LLC for the NNSA of the U.S. DOE under Contract No. DE-
AC52-06NA25396.
## References
* (2) M. A. Baranov, Phys. Rep. 464 71 (2008).
* (3) L. D. Carr et al. New J. Phys. 11, 055049 (2009).
* (4) M. H. G. de Miranda et al., Nat. Phys. (2011).
* (5) S. Ospelkaus et al. Science 327, 853 (2010).
* (6) P. S. Zuchowski and J. M. Hutson, Phys. Rev. A 81, 060703 (2010).
* (7) C. Ticknor, R.M.Wilson, and J.L. Bohn, Phys. Rev. Lett. 106, 065301 (2011).
* (8) G. M. Bruun and E. Taylor, Phys. Rev. Lett. 101, 245301 (2008).
* (9) J. C. Cremon, G. M. Bruun, and S. M. Reimann, Phys. Rev. Lett. 105, 255301 (2010).
* (10) C. Ticknor, Phys. Rev. A. 80 052702 (2009).
* (11) C. Ticknor, Phys. Rev. A. 81 042708 (2010).
* (12) J. D’Incao and C. H. Greene, Phys. Rev. A 83, 030702(R) (2011).
* (13) G. Quemener and J. L. Bohn, Phys. Rev. A 81, 060701 (2010); 83 012705 (2011).
* (14) A. Micheli et al. Phys. Rev. Lett. 105, 073202 (2010).
* (15) M. Klawunn, A. Pikovski and L. Santos, Phys. Rev. A 82, 044701 (2010)
* (16) J. R. Armstrong, N. T. Zinner, D. V. Fedorov, and A. S. Jensen, Europhysics Letters 91, 16001 (2010).
* (17) A. G. Volosniev, D. V. Fedorov, A. S. Jensen, and N. T. Zinner, arXiv:1103.1549.
* (18) B. J. Verhaar, et al., J. Phys. A: Math. Gen. 17, 595 (1984).
* (19) K. Kanjilal and D. Blume, Phys. Rev. A 73, 060701 (2006).
* (20) B. R. Johnson, J. Comput. Phys. 13, 445 (1973).
* (21) Z.-Y. Gu and S. W. Quian, Phys. Lett. A 136, 6 (1989).
* (22) I. R. Lapidus, Am. J. Phys. 50, 45 (1982); S. K. Adhikari, Am. J. Phys. 54, 362 (1986).
* (23) C. Ticknor, Phys. Rev. Lett., 100 133202 (2008); Phys. Rev. A 76, 052703 (2007).
* (24) V. Roudnev and M. Cavagnero, Phys. Rev. A, 79 014701 (2009); J. Phys. B, 42, 044017 (2009).
* (25) J. L. Bohn, M. Cavagnero, and C. Ticknor, New J. Phys. 11 055039 (2009).
* (26) K. Kanjilal and D. Blume, Phys. Rev. A 78, 040703(R) (2008).
* (27) M. Marinescu and L. You, Phys. Rev. Lett. 81, 4596 (1998).
* (28) C. Ticknor and J. L. Bohn Phys. Rev. A 72, 032717 (2005).
* (29) C. Ticknor and S. T. Rittenhouse, Phys. Rev. Lett. 105, 013201 (2010).
*(arXiv:1104.4277 — Christopher Ticknor; submitted 2011-04-21; CC BY 3.0; https://arxiv.org/abs/1104.4277)*

---

arXiv:1104.4320
# Organic photovoltaic bulk heterojunctions with spatially varying
composition.
Paul M. Haney1 1Center for Nanoscale Science and Technology, National
Institute of Standards and Technology, Gaithersburg, Maryland 20899-6202, USA
###### Abstract
Models of organic bulk heterojunction photovoltaics which include the effect
of spatially varying composition of donor/acceptor materials are developed and
analyzed. Analytic expressions for the current-voltage relation in simplified
cases show that the effect of varying blend composition on charge transport is
minimal. Numerical results for various blend compositions, including the
experimentally relevant composition of a donor-rich region near the cathode (a
“skin layer” of donor material), show that the primary effect of this
variation on device performance derives from its effect on photocharge
generation. The general relation between the geometry of the blend and its
effect on performance is given explicitly. The analysis shows that the effect
of a skin layer on device performance is small.
## I Introduction
Photovoltaic devices consisting of two types of organic materials (referred to
as donor (D) and acceptor (A)) have attracted considerable scientific interest
in recent years. Their operation consists of the generation of an exciton in
the donor molecule, which is then disassociated into free carriers at the D-A
interface (the electron is transferred to the acceptor molecule’s lowest
unoccupied molecular orbital (LUMO), leaving a hole in the donor molecule’s
highest occupied molecular orbital (HOMO)). Carriers which avoid recombination
are then collected by contacts. The geometry first studied consisted of single
D and A layers, with a single planar interface tang . The resulting
efficiencies were low (1 %), owing in part to the short exciton diffusion
length (10 nm) - only excitons within this short distance from the interface
lead to free carriers. It was subsequently discovered that blending D and A
together throughout the device thickness led to increased efficiencies yu ,
now above 5 % eff1 ; eff2 ; eff3 . This increase in efficiency is attributed
to an increase in D-A interfacial area; carrier transport is sufficiently
robust to the disorder present in the blend to accommodate reasonable quantum
efficiencies. If the organic blend is completely homogeneous, the contacts on
the device must be different in order to break spatial symmetry and permit a
nonzero short-circuit current in a preferred direction. The key difference
between the contacts is their work function: a lower (higher) work function
ensures that the contact preferentially collects and injects electrons
(holes). Hence it is understood that the cathode collects electrons, and the
anode collects holes.
A major thrust of experimental efforts has been to attain control over blend
morphology in order to optimize both exciton disassociation and charge
transport. Recent examples include using nanoimprint lithography to control
the structure of the donor-acceptor molecules’ interfacial profile morph1 , or
using a graded donor-acceptor blend to optimize both carrier collection and
transport holmes . A key challenge of engineering blend morphology is the
measurement and characterization of the structure of the organic blend.
Techniques for accomplishing this include atomic force microscopy hamadani ,
ellipsometry germack , and X-ray photoelectron spectroscopy xu . These
techniques have revealed that typical methods for fabricating devices lead to
a layer of enhanced donor molecule density at the cathode, which has been
attributed to surface energy differences between the active layer and other
components xu . This would seem to present an impediment to good device
performance: the cathode collects electrons, but in its vicinity there are mostly
holes! Nevertheless, internal quantum efficiencies of 90 % have been observed
in these materials schilinksy , indicating that charge collection is still a
relatively efficient process germack .
In this work, I theoretically study the effect of a nonuniform blend on
organic photovoltaic (OPV) device performance. I employ a drift-diffusion
equation to describe electron and hole transport, a field and temperature
dependent generation and recombination rate that captures the exciton physics,
and the Poisson equation for electrostatics. To this model I add the effect of
a spatially varying effective density of states (EDOS) (note that “density of
states” refers to the number of states per volume per unit energy, whereas
“effective density of states” refers to the number of states per volume). Sec.
II describes details of the model. In Sec. III, I present analytic solutions for
the transport under certain approximations; these point to the fact that the
effect of a spatially varying EDOS on charge transport is small. In Sec. IV,
I present numerical results which indicate that the primary effect of a
spatially varying EDOS is on the charge generation and ensuing $J_{\rm sc}$.
It is shown that this can be understood in terms of the overall geometry of
the composition profile. I conclude that since the skin layer near the cathode
is geometrically small on the scale of the device thickness, its effect on
performance is similarly small.
## II Model
The model used to describe the system is similar to that found in Ref. koster,
. Its basic equations are presented here in dimensionless form. Table I shows
the variable scalings used. The dimensionless drift-diffusion/Poisson
equations including a spatially varying EDOS are given as fonash :
$\displaystyle J_{n}$ $\displaystyle=$ $\displaystyle
n\left(-\frac{\partial}{\partial x}V-\frac{\partial}{\partial x}\left[{\rm
ln}N\right]\right)+\frac{\partial}{\partial x}n~{},$ $\displaystyle J_{p}$
$\displaystyle=$ $\displaystyle f_{\mu}\left[p\left(-\frac{\partial}{\partial
x}V+\frac{\partial}{\partial x}\left[{\rm
ln}P\right]\right)-\frac{\partial}{\partial x}p\right]~{},$
$\displaystyle-\frac{\partial}{\partial x}J_{n}$ $\displaystyle=$
$\displaystyle\frac{1}{f_{\mu}}\frac{\partial}{\partial x}J_{p}=G-R~{},$ (1)
$\displaystyle\frac{\partial^{2}}{\partial x^{2}}V$ $\displaystyle=$
$\displaystyle p-n~{},$ (2)
where $f_{\mu}=\mu_{h}/\mu_{e}$ is the ratio of hole to electron mobility, $G$
is the carrier density generation rate, and $R$ is the recombination. $N(x)$
and $P(x)$ are the spatially-dependent electron and hole effective density of
states, respectively. $n$ and $N$ are related via:
$n=Ne^{-\left(E_{c}-E_{F,n}\right)/kT}$, where $E_{F,n}$ is the electron
quasi-Fermi level, $E_{c}$ is the conduction band edge, and all quantities are
position-dependent (the densities are assumed to be such that the system is in
a nondegenerate regime). $p$ and $P$ are related similarly. $N(x)$ and $P(x)$
are fixed material parameters, while $n$ and $p$ are system variables that
depend on applied voltage and illumination. For a single band semiconductor,
the effective density of states $N$ is given by
$\frac{1}{\sqrt{2}}\left(\frac{m^{*}_{n}k_{\rm
B}T}{\pi\hbar^{2}}\right)^{3/2}$, where $m^{*}_{n}$ is the effective electron
mass. In the present context of organic materials, $N$ is more properly
understood as the number of HOMO states per unit volume, and is proportional
to the donor molecule density.
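As a sanity check of the single-band formula, evaluating it with a free-electron mass at room temperature (illustrative values, not parameters from this paper) recovers the familiar order of magnitude for an effective density of states:

```python
import math

kB = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J s
me = 9.1093837015e-31    # electron mass, kg (illustrative choice for m*)
T = 300.0                # K

# N = (1/sqrt(2)) * (m* kB T / (pi hbar^2))^{3/2}
N = (me * kB * T / (math.pi * hbar**2))**1.5 / math.sqrt(2.0)
print(f"{N:.2e} m^-3")  # ~2.5e25 m^-3, a typical semiconductor EDOS scale
```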
Table 1: Normalization to dimensionless variables. In the below $N_{0}$ is the characteristic density (typically chosen to be on the order of $10^{25}~{}{\rm m}^{-3}$), $D_{n}$ is the electron diffusivity, $\epsilon$ is the dielectric constant of the organic blend, $q$ is the magnitude of the electron charge, $T$ is the temperature, and $k_{\rm B}$ is Boltzmann’s constant.

Quantity | Normalization
---|---
density | $N_{0}$
position | $\sqrt{\epsilon k_{\rm B}T/(q^{2}N_{0})}\equiv x_{0}$
charge current | $qD_{n}N_{0}/x_{0}$
electric potential | $k_{\rm B}T/q$
rate density | $N_{0}D_{n}/x_{0}^{2}$
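To make the scalings concrete, the characteristic length $x_{0}$ can be evaluated for representative parameters (the relative permittivity and density below are my illustrative assumptions, not values specified in the paper):

```python
import math

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
kB = 1.380649e-23        # J/K
q = 1.602176634e-19      # elementary charge, C

eps = 3.0 * eps0  # assumed relative permittivity ~3, typical of organic blends
T = 300.0         # K
N0 = 1e25         # m^-3, characteristic density scale

# position normalization x0 = sqrt(eps kB T / (q^2 N0))
x0 = math.sqrt(eps * kB * T / (q**2 * N0))
print(f"x0 = {x0*1e9:.2f} nm")  # a sub-nm screening-type length at this density
```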
The boundary conditions are given as:
$\displaystyle n\left(0\right)$ $\displaystyle=$ $\displaystyle
N\left(0\right)e^{-E_{g}+\phi_{L}},$ $\displaystyle p\left(0\right)$
$\displaystyle=$ $\displaystyle P\left(0\right)e^{-\phi_{L}},$
$\displaystyle~{}n\left(L\right)$ $\displaystyle=$ $\displaystyle
N\left(L\right)e^{-\phi_{R}},$ $\displaystyle~{}p\left(L\right)$
$\displaystyle=$ $\displaystyle P\left(L\right)e^{-E_{g}+\phi_{R}},$ (3)
where $L$ is the device thickness (this represents placing the anode at $x=0$,
and the cathode at $x=L$). $\phi_{L(R)}$ is the absolute value of the
difference between HOMO (LUMO) and left (right) contact Fermi level. The
boundary condition for the Poisson equation is:
$\displaystyle V(L)-V(0)=\left(E_{g}-\phi_{L}-\phi_{R}\right)-V_{\rm A},$ (4)
where $V_{\rm A}$ is the applied voltage (with the sign convention above,
$V_{\rm A}>0$ corresponds to forward bias).
I consider only bimolecular recombination, with (dimensionless) form:
$\displaystyle R=\left(np-n_{i}^{2}\right)~{},$ (5)
where $n_{i}^{2}=n_{0}p_{0}$, and $n_{0}~{}(p_{0})$ is the equilibrium
electron (hole) density. The carrier generation rate density is taken to be
spatially uniform. As described in Ref. koster , adding the exciton density as
a system variable modifies the source term in Eq. (1):
$\displaystyle\left(G-R\right)\rightarrow\tilde{P}\times
G_{0}-\left(1-\tilde{P}\right)\times R,$ (6)
where $G_{0}$ is the exciton density generation rate, and $\tilde{P}$ is a
field and temperature dependent factor which represents the probability for an
exciton to disassociate into free electron and hole koster ; braun . The field
and temperature dependence is described by Braun’s extension of Onsager’s
theory of ion disassociation in electrolytes onsager .
Charge recombination and generation also generally depend on the donor and
acceptor effective density of states. The total source term of Eq. (1)
(denoted here by $U(x)$) is therefore of the generic form:
$\displaystyle U(x)$ $\displaystyle=$ $\displaystyle\tilde{P}\times
G_{0}\times g\left[N(x),P(x)\right]-$ (7)
$\displaystyle~{}~{}\left(1-\tilde{P}\right)\times R\times
r\left[N(x),P(x)\right].$
The appropriate forms for $g$ and $r$ depend on several factors, such as the
dependence of the optical absorption and D-A interface area on relative D-A
composition.
Figure 1: (a) Energy diagram for device model; cartoon of particle flow
depicts dark current in forward bias. (b) Spatial dependence of EDOS: linear
variation (shaded region), and step-like change (dotted line) in both $N(x)$
and $P(x)$.
## III Analytic cases
The set of equations described in Eq. (1) can be solved analytically for
limiting cases, which can provide some insight into the effect of a spatially
varying EDOS on the transport. Two cases are considered here: the first is an
exponentially varying EDOS (which can be extrapolated to a linearly varying
EDOS), and the second is an abrupt, step-like change in the EDOS. I present
both solutions first and discuss the physics they describe second.
In both cases the electric field $E$ is taken to be spatially uniform (so that
$V(x)=-Ex$), and recombination is ignored. I suppose further that $G$ is
constant, and independent of $N,P$ (that is, $g(N,P)=1$ in Eq. (7)). The
exponentially varying EDOS is parameterized as:
$\displaystyle N\left(x\right)$ $\displaystyle=$ $\displaystyle
P\left(x\right)=A_{0}~{}e^{ax/L},$ (8)
where $A_{0}=a/\left(e^{a}-1\right)$ ensures that the total number of states
is independent of $a$. Substituting the expressions for electron (hole)
current density $J_{n(p)}$ into the equation of continuity (Eq. (1)) results
in a second order differential equation for the electron (hole) density $n$
($p$). For the EDOS of Eq. (8), the resulting general solution is:
$\displaystyle n\left(x\right)$ $\displaystyle=$ $\displaystyle
c_{1}e^{\left(a-f\right)x}+c_{2}+\frac{Gx}{a-f}~{},$ $\displaystyle
p\left(x\right)$ $\displaystyle=$ $\displaystyle
c_{1}e^{\left(a+f\right)x}+c_{2}+\frac{Gx}{a+f},$ (9)
where $c_{1},~{}c_{2}$ are determined by the boundary conditions of Eq. (3).
From this solution the current density can be obtained directly.
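The general solution can be verified numerically against the continuity equation. The sketch below checks the electron branch on a grid, taking $L=1$, a uniform field with $-\partial_{x}V=f$ (an assumption consistent with the exponents in Eq. (9)), and arbitrary integration constants:

```python
import numpy as np

a, f, G = 0.3, 3.0, 1e-3  # illustrative parameter values; L = 1
c1, c2 = 0.7, 0.2         # arbitrary integration constants

x = np.linspace(0.0, 1.0, 20001)
n = c1 * np.exp((a - f) * x) + c2 + G * x / (a - f)  # Eq. (9), electron branch

# J_n = n(-dV/dx - d[ln N]/dx) + dn/dx, with -dV/dx = f and d[ln N]/dx = a
Jn = n * (f - a) + np.gradient(n, x)

# continuity with recombination ignored: -dJ_n/dx = G
residual = -np.gradient(Jn, x) - G
print(np.max(np.abs(residual[10:-10])) < 1e-6)  # True: the ansatz solves the ODE
```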
I express the resulting current-voltage relation as a sum of dark current and
light current:
$\displaystyle J\left(V_{\rm A}\right)=J_{D}+GJ_{L}~{}.$ (10)
Both light and dark currents are well described by expanding to lowest order
in the spatial variation of EDOS parameter $a$; I take $\phi_{L}=\phi_{R}=0$,
and express the applied voltage dependence in terms of $f=\left(E_{g}-qV_{\rm
A}\right)/k_{\rm B}T$. $f$ is bigger than 1 in the region of interest
footnote1 , leading to the further approximation that $\sinh f\approx\cosh
f\approx 1/2~{}e^{f}$. It’s useful to express the current-voltage relation in
terms of that for a uniform EDOS and electric field:
$\displaystyle J_{D}^{0}$ $\displaystyle=$
$\displaystyle\frac{2f\left(e^{V_{\rm
A}}-1\right)}{L\left(e^{f}-1\right)e^{V_{\rm A}}}$ $\displaystyle J_{L}^{0}$
$\displaystyle=$ $\displaystyle
L\left(\frac{2}{f}-\coth\left(\frac{f}{2}\right)\right).$ (11)
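These uniform-EDOS expressions behave as expected in the two field limits; a short sketch of the light current (the thickness $L$ is an illustrative choice):

```python
import math

L = 10.0  # illustrative dimensionless device thickness

def JL0(f):
    # light current for uniform EDOS and field, Eq. (11)
    return L * (2.0 / f - 1.0 / math.tanh(f / 2.0))

# strong field (large f): |J_L^0| -> L, i.e. all photogenerated carriers collected
assert abs(JL0(50.0) + L) < 0.05 * L
# weak field (small f): J_L^0 ~ -L f/6 -> 0, no net photocurrent direction
assert abs(JL0(0.01)) < 0.02
```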
The dark and light currents for the exponentially varying profile of Eq. (8) are
then found to be:
$\displaystyle J_{D}^{\rm exp}$ $\displaystyle\approx$ $\displaystyle
J_{D}^{0}\left(1+a^{2}\left(\frac{1}{12}-\frac{1}{2f}\right)+O\left(a^{4}\right)+...\right)~{},$
$\displaystyle J_{L}^{\rm exp}$ $\displaystyle\approx$ $\displaystyle
J_{L}^{0}\left(1+a^{2}\left(-\frac{2}{f^{3}}+e^{-f}\right)+O\left(a^{4}\right)+...\right).$
(12)
I next consider a step function form of $N(x),P(x)$. I use the following form:
$\displaystyle
N\left(x\right)=P\left(x\right)=\left\\{\begin{array}[]{rl}1-a/2&\text{if
}x<L/2,\\\ 1+a/2&\text{if }x\geq L/2.\end{array}\right.$ (15)
The general solutions for each region ($x<L/2,~{}x>L/2$) are of the form given
by Eq. (9) with $a$=0. In addition to the boundary conditions of Eq. (3), this
EDOS requires continuity of the charge and current density
at $x=L/2$. Making the same approximations as above leads to
the following dark and light current:
$\displaystyle J_{D}^{\rm step}$ $\displaystyle\approx$ $\displaystyle
J_{D}^{0}\left(1-a^{2}e^{-f/2}+O\left(a^{4}\right)+...\right)~{},$
$\displaystyle J_{L}^{\rm step}$ $\displaystyle\approx$ $\displaystyle
J_{L}^{0}\left(1-\frac{a^{2}}{2}\frac{fe^{-f/2}}{f-2}+O\left(a^{4}\right)+...\right).$
(16)
A number of interesting and relevant features emerge from these solutions:
first, only even powers of $a$ appear in the expansions. This is a consequence
of the symmetry built into the system: when $\phi_{L}=\phi_{R}$ and
$f_{\mu}=1$, electron particle transport from left to right is equal to hole
particle transport from right to left. In both the exponential and step-like
cases above, holes encounter an expansion in the EDOS along their transport
direction, which increases the hole current. Conversely, electrons encounter a
constriction, which decreases the electron current. To linear order (and all
odd orders) in the expansion/contraction parameter $a$, these effects cancel
each other so that the total charge current only appears with even powers of
$a$. If the electron/hole symmetry is broken, or the symmetry of the EDOS is
reduced (by shifting the step away from the center of the device), then odd
powers of $a$ are present (with prefactors whose magnitude reflects the degree
of symmetry breaking).
The other relevant feature of Eqs. (12) and (16) is the small magnitude of the
$a^{2}$ prefactor. Noting again that $f$ is generally larger than 1 for the
applied voltages of interest to solar cells, it’s clear by inspection that the
prefactors are much smaller than 1. This indicates that the effect on
transport of a spatial variation of the EDOS is quite weak.
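The smallness of these prefactors is easy to see numerically. A minimal sketch of the step-case factors of Eq. (16); the values of $a$ and $f$ below are illustrative choices, not taken from the text:

```python
import math

def step_dark_factor(a, f):
    # Eq. (16): J_D^step / J_D^0, to O(a^2)
    return 1.0 - a**2 * math.exp(-f / 2.0)

def step_light_factor(a, f):
    # Eq. (16): J_L^step / J_L^0, to O(a^2)
    return 1.0 - 0.5 * a**2 * f * math.exp(-f / 2.0) / (f - 2.0)

# f = (E_g - q V_A) / (k_B T) is generally much larger than 1 for
# applied voltages of interest, so the exponential suppression is strong.
a = 0.5  # a 50% step in the EDOS
for f in (5.0, 10.0, 20.0):
    dark = 1.0 - step_dark_factor(a, f)
    light = 1.0 - step_light_factor(a, f)
    print(f"f={f:5.1f}: relative change dark={dark:.2e}, light={light:.2e}")
```

Even a 50% step in the EDOS changes the currents at the sub-percent level once $f\gtrsim 10$, consistent with the weak-transport-effect conclusion.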
Figure 2: Extinction of current when EDOS goes to zero. This is for the step-
like change in EDOS, for parameters $G=10^{-9},~{}V_{\rm A}=0.7$. Both
approximate and exact values are shown (where the approximate expression is
given by Eq. (16)). It is seen that the current decreases substantially only
when the EDOS is nearly zero (or when $a$ is nearly 2).
The intuitive picture that emerges from this analysis is that electrons and
holes can very easily “squeeze” through regions of reduced density. A natural
question concerns the way in which transport is ultimately “pinched off” by letting the density vanish at a point in space. Fig. (2) shows the current in the step-like structure as $a\rightarrow 2$. The
way in which the current vanishes is very steep; it is only at very small
values of EDOS at the cathode that the current drops appreciably (in this
limit, the approximation $a\ll 1$ used in deriving Eq. (16) is not satisfied,
hence the discrepancy between exact solution and Eq. (16)). However, for very
small values of HOMO and LUMO density in real systems, the model presented
here is likely not appropriate. This point is discussed more fully in the
conclusion.
## IV Numerical studies
I next consider the effect of spatially varying EDOS when the Poisson equation
for the electric potential and bimolecular recombination are included. Recall
that the dependence of the generation and recombination on EDOS of electron
$N$ and hole $P$ is described generically as:
$\displaystyle\tilde{P}\times G_{0}\times
g\left(N,P\right)-\left(1-\tilde{P}\right)\times R\times
r\left(N,P\right)~{}.$ (17)
I make the following ansatz for $g$ (the main conclusion can be formulated in
a way that’s independent of this specific choice for $g$):
$\displaystyle g\left(N,P\right)=P^{2}N~{}.$ (18)
This is motivated by the observation that the D-A interfacial area requires
both $P$ and $N$, hence $g$ has a factor of each; an extra factor of $P$ is
added since the exciton is initially generated in the donor. $r$ is taken
simply to be 1, since $R$ already has $N$ and $P$ dependence built in through
$n$ and $p$. Adding a factor of $P$ to the recombination (so that the
$N,P$-dependence of both generation and recombination is the same) has only a
weak effect on the results.
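A minimal sketch of the source term of Eqs. (17)-(18) as it would enter a pointwise numerical evaluation (the argument names and placeholder values are mine, not from the text):

```python
def net_generation(N, P, Ptilde, G0, R):
    """Net local generation, Eq. (17): Ptilde*G0*g(N,P) - (1-Ptilde)*R*r(N,P)."""
    g = P**2 * N  # Eq. (18): the D-A interfacial area requires both N and P,
                  # with an extra factor of P since the exciton starts in the donor
    r = 1.0       # r = 1: R already carries N,P dependence through n and p
    return Ptilde * G0 * g - (1.0 - Ptilde) * R * r

# With N + P = 1, the generation factor g = P**2 * (1 - P) peaks at P = 2/3,
# so a donor-rich local mix generates slightly more than a 50/50 blend.
print(net_generation(0.5, 0.5, Ptilde=1.0, G0=1.0, R=0.0))  # -> 0.125
```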
A range of composition profiles has been explored for the numerical evaluation
of device performance, and I present two representative examples here:
$\displaystyle N_{1}(x)=1-P_{1}(x)$ $\displaystyle=$
$\displaystyle\frac{a}{\left(e^{a}-1\right)}e^{ax/L}~{},$ (19) $\displaystyle
N_{2}(x)=1-P_{2}(x)$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left(1+\left(1-2a\right)\tanh\left(\frac{x-x_{0}}{\lambda}\right)\right).$
Fig. (3) shows the $J-V$ curves for the $\left(N_{2},P_{2}\right)$ case (Eq.
(IV)) for the uniform profile ($a=1/2$), and a sharp S-shaped profile
($a=0.95$). Note that the effect of the EDOS profile on the short circuit
current $J_{\rm sc}$ is substantial, while the effect on open circuit voltage
$V_{\rm oc}$ is small.
Figure 3: Current density-Voltage relation for two spatial profiles of D-A
EDOS profiles. Blue dotted line is for uniform EDOS profile, red line is for
S-shaped EDOS profile, given by Eq. (IV)
The previous analysis can explain the relative insensitivity of $V_{\rm oc}$
to a nontrivial EDOS profile: the effect of a varying EDOS profile on
transport is weak, so that the injected current required to offset the
photogenerated current (and the corresponding required voltage - $V_{\rm oc}$)
is only weakly sensitive to changes in EDOS footnote2 .
The change in $J_{\rm sc}$ can be understood as a direct consequence of the
model construction. $J_{\rm sc}$ is the current collected in the absence of an
applied voltage, that is, in the absence of charge injected from the contacts.
As such it is simply equal to the total charge generation rate in the device:
$J_{\rm sc}=\int dx\left({\rm Generation}(x)-{\rm Recombination}(x)\right)$.
As described above, this is directly parameterized as:
$\displaystyle J_{\rm sc}$ $\displaystyle=$ $\displaystyle\int
dx\left(\tilde{P}\times G_{0}\times g\left[N(x),P(x)\right]-\right.$ (21)
$\displaystyle\left.~{}~{}~{}~{}~{}~{}~{}~{}(1-\tilde{P})\times R\times
r\left[N(x),P(x)\right]\right).$
In analyzing the effect of $N(x),P(x)$ on $J_{\rm sc}$, it is instructive to
separate the $N,P$ dependence of the generation from the above integral. This
leaves a quantity $\delta U$ which depends only on the geometry of the D-A
EDOS profile:
$\displaystyle\delta U$ $\displaystyle=$ $\displaystyle\int
dx~{}g\left[N(x),P(x)\right]$ (22) $\displaystyle=$ $\displaystyle\int
dx~{}N(x)P^{2}(x).$ (23)
Strictly speaking, the integral in Eq. (21) does not factorize in a manner that leads directly to a $\delta U$ term. However, as I show in the following,
$\delta U$ is a good predictor of the effect of the geometry of the EDOS on
the device performance.
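The predictive value of $\delta U$ can be checked directly by evaluating Eq. (23) for the S-shaped profile $N_2$ of Eq. (IV). A minimal sketch on the unit interval ($L=1$, with $x_0=L/2$ and $\lambda=L/8$ as in the caption of Fig. (5)):

```python
import math

def deltaU(a, n=4000, x0=0.5, lam=0.125):
    """Geometrical factor of Eq. (23), (1/L) * int N(x) P(x)^2 dx,
    for the S-shaped profile N2 of Eq. (IV) on the unit interval."""
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n  # midpoint rule
        N = 0.5 * (1.0 + (1.0 - 2.0 * a) * math.tanh((x - x0) / lam))
        P = 1.0 - N
        total += N * P * P
    return total / n

uniform = deltaU(0.5)   # a = 1/2 -> N = P = 1/2 everywhere -> delta_U = 1/8
sharp = deltaU(0.95)    # strongly segregated S-shaped profile
print(uniform, sharp)
```

For $a=1/2$ the blend is uniform and $\delta U=1/8$; pushing $a$ toward 1 segregates the blend and suppresses $\delta U$, consistent with the substantial drop in $J_{\rm sc}$ noted above.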
For each EDOS profile, I also vary other system parameters. The three
different parameterizations are shown in Fig. (4). In system 1, HOMO/LUMO
levels are aligned with cathode/anode Fermi levels ($\phi_{L}=\phi_{R}=0$),
and electron and hole mobility are equal. For system 2, $\phi_{L}=\phi_{R}=0$,
but electron and hole mobilities are not equal ($\mu_{e}=10\mu_{h}$). In
system 3, the HOMO/LUMO are offset from cathode/anode by $0.2~{}{\rm eV}$
($\phi_{L}=\phi_{R}=0.2~{}{\rm eV}$), and electron/hole mobilities are equal.
Figure 4: Cartoon of the three system parameterizations: system 1:
$\phi_{L}=\phi_{R}=0,~{}\mu_{h}=\mu_{e}$, system 2:
$\phi_{L}=\phi_{R}=0,~{}\mu_{h}=10\mu_{e}$, system 3:
$\phi_{L}=\phi_{R}=0.2~{}{\rm eV},~{}\mu_{h}=\mu_{e}$
Figs. (5a) and (5b) show $\delta U$ as the profile parameter $a$ is varied, for the EDOS configurations given by Eqs. (19) and (IV), respectively.
This is shown for the three system parameterizations. The overall device
efficiency $\eta$ tracks $\delta U$ very closely for all of these cases (the
efficiency is proportional to the maximum absolute value of $(JV)$ in the 4th
quadrant of the $J-V$ plane). For this reason I conclude that the primary
effect of a spatially varying EDOS on device performance is to change the
total carrier generation rate and ensuing $J_{\rm sc}$. $\delta U$ in Fig. (5) is calculated using Eq. (23); however, the conclusion is valid for any choice of $g$ I’ve tried. Hence the effect of a nonuniform blend on performance can
be approximately specified in the generic form given by Eq. (22).
Figure 5: The efficiency $\eta$ and geometrical factor $\delta U$ (normalized
by their maximum value) versus geometrical parameter $a$ for (a) exponentially
varying profile ($N_{1}(x)$ of Eq. (19)) (b) S-shaped profile $N_{2}(x)$ of
Eq. (IV), with $x_{0}=L/2$, $\lambda=L/8$), (c) “skin” layer geometry.
Representations of the spatial variation of EDOS as a function of $a$ are
shown above the figure. The gray and white regions represent $N(x)$ and
$P(x)$, respectively. The efficiency closely follows the geometrical factor
$\delta U$ for most cases. For each geometry I use the three system
parameterizations described in Fig. (4) (the subscript of $\eta$ specifies the
system parameterization).
Next I turn to the experimentally motivated geometry of a skin layer of D near
the cathode. It’s parameterized as:
$\displaystyle
N(x)=1-P(x)=\frac{2+a}{4}+\frac{2-a}{4}\tanh\left(\frac{x-x_{0}}{\lambda}\right),$
with $\lambda=0.0075~{}L$, $x_{0}=0.05~{}L$. Fig. (5c) shows how the
efficiency evolves as the skin layer goes from mostly D-like (small $a$), to
an even D-A mix, to mostly A-like (large $a$) (the experimentally realistic
case is smaller $a$). The change in efficiency is rather small for all three cases (a maximum 10% change). Also shown is the geometrical
factor $\delta U$ (solid line). The efficiency of system 1 conforms most
closely to the geometrical factor profile dependence. Inspection of the $J-V$
curves for the three systems reveals subtle differences in the fill-factor
between the three; there is no simple or obvious source for the difference in
behavior between the three system parameterizations. The difference in
behavior between the three systems is more conspicuous for the skin layer
geometry because the effect of a nonuniform blend is smaller for the skin
layer, so that the overall performance is more sensitive to other system
parameters. (When the blend profile leads to larger effects, for example that
shown in Fig. (5b), there is a similar dependence of device performance on
profile for all system parameterizations.) Nevertheless, the important
conclusion common to all three system parameterizations of the skin layer
geometry is that the effect of the skin layer is small. Its smallness can be
understood in terms of the analysis of the previous sections. The analytic
work points to the fact that the effect of blend non-uniformity on charge
transport is generically small (except in extreme cases). The numerical work
of the previous test cases indicates that the effect of blend non-uniformity
can be understood in terms of its effect on charge generation and resulting
$J_{\rm sc}$, and that this effect is essentially geometrical (see Eq.
(22)). Since a skin layer is by definition geometrically small, its effect is
similarly small.
## V Conclusion
In this work I presented a simple model for the effect of nonuniform blend
profiles on OPV device performance. The main effect of a nonuniform D-A blend
is on the charge generation and resulting short-circuit current: in
regions where the blend is primarily of one type at the expense of the other,
there is less charge generation due to a reduced D-A interfacial area. The
details of how charge generation depends on local blend mix are complicated,
and involve almost all aspects of OPV device operation (e.g. optics moule , exciton diffusion holmes , etc.). The influence of a nonuniform blend on
electron and hole transport is a weaker effect.
It’s important to appreciate the simplicity of the model presented here
relative to the complexity of real OPV devices. Two simplifications of the
model are: its treatment of the metal-organic interface, and its restriction
to 1 spatial dimension. I make no attempt to capture the effect of a skin
layer geometry on the metal-organic contact. The physics at this interface is
included most simply as a finite recombination velocity scott (which can also depend on temperature and field lacic ). A hallmark of less effective charge collection/injection at this interface is an S-shaped $J-V$ curve deibel . This feature is correlated with metal contact deposition techniques deibel , and is not ubiquitously observed in devices. I therefore conclude that the details of the metal-organic contact are not directly tied to the phase segregation in the organic blend.
A more severe approximation of this model is its restriction to 1-d. When the
EDOS is small, the charge and current density are also small. However, experiments reveal localized hot-spots of conducting paths hamadani . A 1-d model necessarily averages these localized hot-spots of large current density over the entire cross-sectional area, leading to a diffuse current. As the
overall area of hot-spots decreases, the charge and current density they must
accommodate increases, and current may become space-charge limited. A 1-d
model is unable to capture the physics described in this scenario. However for
less extreme cases, the treatment described here offers the simplest account
for a spatially varying blend structure.
I acknowledge very useful discussions with Behrang Hamadani and Lee Richter.
## References
* (1) C. W. Tang. Two-layer organic photovoltaic cell. Appl. Phys. Lett. 48, 183 (1986).
* (2) G. Yu, J. Gao, J. C. Hummelen, F. Wudl, and A. J. Heeger, Science 270, 1789 (1995).
* (3) Jiangeng Xue, Barry P. Rand, Soichi Uchida, and Stephen R. Forrest. J. Appl. Phys., 9, 124903 (2005).
* (4) J. Peet, J. Y. Kim, N. E. Coates, W. L. Ma, D. Moses, A. J. Heeger, and G. C. Bazan. Nat. Mater. 6 497 (2007).
* (5) S. H. Park, A. Roy, S. Beaupre, S. Cho, N. Coates, J. S. Moon, D. Moses, M. Leclerc, K. Lee, and A. J. Heeger. Nat. Photon. 3 297 (2009).
* (6) D. Cheyns, K. Vasseur, C. Rolin, J. Genoe, J. Poortmans, and P. Heremans. Nanotechn. 19, 424016 (2008).
* (7) R. Pandey and R.J. Holmes, Adv. Mater. 22, 5301-5305 (2010).
* (8) B. H. Hamadani, S. Jung, P. M. Haney, L. J. Richter, and N. B. Zhitenev, Nano Lett. 10, 1611-1617 (2010).
* (9) D.S. Germack, C.K. Chan, R.J. Kline, D.A. Fischer, D.J. Gundlach, M.F. Toney, L.J. Richter, and D.M. DeLongchamp, Macromolecules 43, 3828 (2010).
* (10) Z. Xu, L. Chen, G. Yang, C. Huang, J. Hou, Y. Wu, G. Li, C. Hsu, and Y. Yang, Advanced Functional Materials, 19, 1227 (2009).
* (11) P. Schilinsky, C. Waldauf, and C.J. Brabec, App. Phys. Lett. 81, 3885 (2002).
* (12) S. J. Fonash, Solar Cell Device Physics (Academic Press, Inc., London, 1981).
* (13) B. V. Andersson, A. Herland, S. Masich, and O. Inganäs, Nano Lett. 9, 853 (2009).
* (14) L. J. A. Koster, E. C. P. Smits, V. D. Mihailetchi, and P. W. M. Blom, Phys. Rev. B 72, 085205 (2005).
* (15) The most relevant region of the current-voltage relation is in the 4th quadrant. In this region, $V_{\rm A}<E_{g}$; the difference between $E_{g}$ and $qV_{\rm A}$ is scaled by $1/k_{\rm B}T=40\left({\rm eV}\right)^{-1}$, so that $f$ is generally much larger than 1.
* (16) The change in $J_{\rm sc}$ induced by a spatially varying EDOS has some effect on $V_{\rm oc}$ as well, but this is also a small effect, as $V_{\rm oc}$ generically varies only logarithmically with $J_{\rm sc}$.
* (17) H. K. Gummel, IEEE Transactions on Electron Devices, 11, 455 (1964).
* (18) R. Sokel and R. C. Hughes, J. Appl. Phys, 53, 7414 (1982).
* (19) Adam J. Moulé, Jörg B. Bonekamp, and Klaus Meerholz, J. Appl. Phys. 100, 094503 (2006).
* (20) C. L. Braun, J. Chem. Phys. 80, 4157 (1984).
* (21) L. Onsager, J. Chem. Phys. 2, 599 (1934).
* (22) J.C. Scott and G.G. Malliaras, Chem. Phys. Lett. 299, 115-119 (1999).
* (23) S. Lacic and O. Inganäs, J. Appl. Phys. 97, 124901 (2005).
* (24) A. Wagenpfahl, D. Rauh, M. Binder, C. Deibel, and V. Dyakonov, Phys. Rev. B 82, 115306 (2010).
Centre for High Energy Physics, Indian Institute of Science, Bangalore 560 012, India
Department of Physics, University of Cologne, 50923 Cologne, Germany
# Flavored Co-annihilations
Debtosh Chowdhury, Raghuveer Garani, and Sudhir K. Vempati
debtosh@cts.iisc.ernet.in veergarani@gmail.com vempati@cts.iisc.ernet.in
###### Abstract
Neutralino dark matter in supersymmetric models is revisited in the presence
of flavor violation in the soft supersymmetry breaking sector. We focus on
flavor violation in the sleptonic sector and study the implications for the
co-annihilation regions. Flavor violation is introduced by a single
$\tilde{\mu}_{R}-\tilde{\tau}_{R}$ insertion in the slepton mass matrix.
Limits on this insertion from BR($\tau\to\mu+\gamma$) are weak in some regions
of the parameter space where cancellations happen within the amplitudes. We
look for overlaps in parameter space where both the co-annihilation condition
as well as the cancellations within the amplitudes occur. In mSUGRA, such
overlap regions are not existent, whereas they are present in models with non-
universal Higgs boundary conditions (NUHM). The effect of flavor violation is
two fold: (a) it shifts the co-annihilation regions towards lighter neutralino
masses (b) the co-annihilation cross sections would be modified with the
inclusion of flavor violating diagrams which can contribute significantly.
Even if flavor violation is within the presently allowed limits, this is
sufficient to modify the thermally averaged cross-sections by about (10-15)%
in mSUGRA and (20-30)% in NUHM, depending on the parameter space. In the
overlap regions, the flavor violating cross sections become comparable and in
some cases even dominant to the flavor conserving ones. A comparative study of
the channels is presented for mSUGRA and NUHM cases.
###### Keywords:
mSUGRA, NUHM, Lepton Flavor Violation
arXiv: 1104.4467
## 1 Introduction
Supersymmetric standard models have a natural dark matter candidate namely,
the lightest supersymmetric particle (LSP) if R-parity is conserved
Jungman:1995df . In mSUGRA/CMSSM models, the LSP typically is the lightest
neutralino Goldberg:1983nd ; Ellis:1983ew ; Chankowski:1998za . In most of
mSUGRA /CMSSM parameter space, the lightest neutralino is mostly a bino
($\widetilde{B^{0}}$); the bino component being close to 99%. With the bino
cross-section being small, the neutralinos are overproduced resulting in a
larger dark matter relic density compared to the WMAP wmap7 allowed range. There are, however, some special regions in the mSUGRA parameter space where the neutralino is able to satisfy the relic density limits Baer:2003wx ; Djouadi:2006be (see also Ref. ArkaniHamed:2006mb ). These are the (i) Bulk region,
(ii) Stop ($\tilde{t}$) co-annihilation region, (iii) Stau ($\tilde{\tau}$)
co-annihilation region, (iv) $A-$pole funnel region and (v) Focus point/
Hyperbolic branch regions. The various processes which play an important role in each of these sub-cases are shown in Fig. (1).
Figure 1: Annihilation channels appearing in the $\Omega_{DM}$ calculation.
$V$ and $Z$ are the chargino and neutralino mixing matrices cmv .
The stau–co-annihilation region requires the mass of the lightest stau,
$\tilde{\tau}_{1}$ to be close to the mass of the LSP. The stop–co-
annihilation is typically realized with large $A-$terms, which is also the
case with the bulk region utpala-term . Among the above depicted regions,
discounting the case of large $A-$terms, $\tilde{\tau}$–co-annihilation and
the focus point regions are most sensitive to pre-GUT scale effects and the
see-saw mechanism cmv ; barger ; gomez-lola-kang ; Kang:2009pj ; Biggio:2010me
; Esteves:2010ff . It has been shown that the co-annihilation region gets
completely modified in the $SU(5)$ GUT theory and leads to upper bounds on the neutralino masses cmv . Similarly, in the presence of type I, type II or type
III see-saw mechanisms cmv ; barger ; Biggio:2010me ; Esteves:2010ff
$\tilde{\tau}$–co-annihilation regions get completely modified. Strong
implications can also be felt in the focus point regions unless the right
handed neutrino masses are larger than the GUT scale barger . GUT scale
effects can even revive no-scale models olivereview . It has also been shown
that in the presence of large $A-$terms ‘new’ regions with $\tilde{\tau}$–co-
annihilation appear cmv ; olive1 .
In the present work, we consider flavor violation in the sleptonic sector and
study its implications for the co-annihilation regions. In generic MSSM,
flavor violation can appear either in the left handed slepton sector (LL),
right handed slepton sector (RR) or left-right mixing sector (LR/RL) of the
sleptonic mass matrix. However, we concentrate on the flavor violation in RR
sector as it has some interesting properties related to cancellations in the
lepton flavor violating amplitudes as discussed below. Such flavor mixing is
not difficult to imagine. It appears generically in most supersymmetric grand
unified theories. A classic example is the SUSY SU(5) GUT model. If the
supersymmetry breaking soft terms are considered universal at scales much
above the gauge coupling unification scale ($M_{GUT}$), typically the Planck
scale, then the running of the soft terms between the Planck scale and the GUT
scale could generate the RR flavor violating entries in the sleptonic sector
BHS ; Calibbi:2006nq .
For demonstration purposes, let us consider the superpotential of the $SU(5)$
SUSY-GUT:
$W=h^{u}_{ij}{\bf 10}_{i}{\bf 10}_{j}{\bf\bar{5}}_{H}+h^{d}_{ij}{\bf
10}_{i}{\bf\bar{5}}_{j}{\bf 5}_{H}+\cdots$ (1)
where ${\bf 10}$ contains $\\{q,u^{c},e^{c}\\}$ and ${\bf\bar{5}}$ contains
$\\{d^{c},l\\}$. As supersymmetry is broken above the GUT scale, the soft
terms receive RG (renormalisation group) corrections between the high scale
$M_{X}$ and $M_{GUT}$, which can be estimated using the leading log solution
of the relevant RG equation. For example, the soft mass of ${\bf 10}$ would
receive corrections:
$\Delta^{RR}_{ij}=\left(m^{2}\right)_{{\bf\widetilde{10}}_{ij}}\approx-{3\over
16\pi^{2}}\,h^{2}_{t}\;V_{ti}\,V_{tj}\,\left(3m_{0}^{2}+A_{0}^{2}\right)\,\log\left({M_{X}^{2}\over
M_{GUT}^{2}}\right),$ (2)
where $V_{ij}$ stands for the $ij^{th}$ element of the CKM matrix. Since ${\bf
10}$ contains $e^{c}$, the flavor violation in the CKM matrix (in the basis
where charged leptons and down quarks are diagonal) now appears in the right
handed slepton sector. Below the GUT scale, the RG scaling of the soft masses
just follows the standard mSUGRA evolution and no further flavor violation is
generated in the sleptonic sector in the absence of right handed neutrinos or
any other seesaw mechanism. Assuming $M_{X}\approx 10^{18}$ GeV, the leading log estimates of the ratios of flavor violating entries to the flavor conserving ones, $\delta^{RR}_{ij}\equiv\Delta^{RR}_{ij}/m_{\tilde{l}}^{2}$ (where $m_{\tilde{l}}^{2}$ is the flavor conserving average slepton mass), are given in Table 1. We have taken $A_{0}=0$ and $h_{t}\approx 1$. At the 1-loop level, $\delta$ is roughly independent of $m_{0}$.
Table 1: Flavor Violation generated in the $SU(5)$ Model

$|\delta|$ | Value
---|---
$\left|\delta_{\mu e}^{RR}\right|$ | $7.8\cdot 10^{-5}$
$\left|\delta_{\tau e}^{RR}\right|$ | $2.0\cdot 10^{-3}$
$\left|\delta_{\tau\mu}^{RR}\right|$ | $1.4\cdot 10^{-2}$
From Table 1, we see that the RG-generated $\delta^{RR}_{ij}$ is typically of $\mathcal{O}(10^{-3}-10^{-5})$. Such small values will not have any
implications on the co-annihilation regions or rare flavor violating decays.
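The leading-log estimate of Eq. (2) is straightforward to evaluate. A minimal sketch, assuming $M_{GUT}=2\times 10^{16}$ GeV, $m_{\tilde{l}}\approx m_{0}$ (so the $m_{0}$ dependence cancels), and rough CKM magnitudes (these inputs are my assumptions, not spelled out in the text):

```python
import math

# Leading-log estimate of Eq. (2), normalized by the average slepton mass.
MX, MGUT = 1.0e18, 2.0e16
log_factor = math.log(MX**2 / MGUT**2)
V = {"td": 8.7e-3, "ts": 4.0e-2, "tb": 1.0}  # approximate |V_ti| magnitudes

def delta_RR(Vti, Vtj, h_t=1.0, A0=0.0):
    # delta = (3/16pi^2) h_t^2 V_ti V_tj (3 m0^2 + A0^2) log(MX^2/MGUT^2) / m0^2,
    # with A0 measured in units of m0 so that m0 drops out
    return 3.0 / (16.0 * math.pi**2) * h_t**2 * Vti * Vtj * (3.0 + A0**2) * log_factor

print("delta_mu_e   ~", delta_RR(V["td"], V["ts"]))  # Table 1: 7.8e-5
print("delta_tau_e  ~", delta_RR(V["td"], V["tb"]))  # Table 1: 2.0e-3
print("delta_tau_mu ~", delta_RR(V["ts"], V["tb"]))  # Table 1: 1.4e-2
```

The numbers land within a factor of about two of Table 1; the residual difference reflects the choice of CKM inputs and the precise slepton-mass normalization.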
While non-universality at the GUT scale in this case is RG induced, there are models where non-universal soft terms can arise from non-trivial Kähler metrics in supergravity; this could be the case in models with a flavor symmetry at the high scale à la Froggatt-Nielsen (see, for example, discussions
in fnsoftterms ; Dudas:1996fe ; Barbieri:1997tu ; Kobayashi:2002mx ;
Chankowski:2005qp ; Antusch:2008jf ; Scrucca:2007pj ). In such cases, the
$\delta_{RR}$’s could be much larger, even close to $\mathcal{O}(1)$. These
terms would then receive little corrections through RG as they are evolved
from the GUT scale to the electroweak scale. Recently, in an interesting paper
susylr , supersymmetric models with Left-Right symmetry have been studied with
particular emphasis on leptonic flavor violation. In these models, both left
handed and right handed sleptonic sectors have flavor violation with the
constraint that $\delta_{RR}(\Lambda_{r})\;=\;\delta_{LL}(\Lambda_{r})$, where
$\Lambda_{r}$ is the left-right symmetry breaking scale. In such cases it could be possible to generate $\delta_{RR}\sim\mathcal{O}(10^{-1})$. (Subsequent to the appearance of this work on arXiv, flavored co-annihilations have also been studied in Esteves:2011gk .)
In the present work, we will follow a model-independent approach and assume the presence of a single flavor violating parameter $\Delta^{RR}_{\mu\tau}$, studying its implications for the co-annihilation region. We will consider the simplistic case of universal soft masses at the $M_{GUT}$ scale with non-zero $\delta_{23}^{RR}$, which is treated as a free parameter. To distinguish it from the standard mSUGRA model, we will call this model $\delta$-mSUGRA; similar nomenclature also holds for the other supersymmetry breaking models which we consider in this work.
While flavor violating entries in the sleptonic mass matrices are strongly
constrained in general, the constraints on leptonic $\delta_{23}^{RR}$ entries
are weak in some regions of the parameter space Hisano:1995nq ; Masina:2002mv
; Paradisi:2005fk . This leads to the possibility that large flavor violation
could be present in the sleptonic right handed sector. In these regions
cancellations happen between various contributions to the lepton flavor
violating (LFV) amplitudes. If such cancellation regions overlap with regions
where sleptonic co-annihilations are important, flavor violation has to be
considered in evaluating the co-annihilation cross-sections in the early
universe. This is the basic point of the paper where we show that flavor
violating processes can play a dominant role in the co-annihilation regions of
the supersymmetric breaking soft parameter space. The processes contributing
to relic density in these regions are called flavored co-annihilations.
It turns out that with mSUGRA/CMSSM boundary conditions, the parameter space
where the flavor violating constraints are relaxed does not overlap with the
$\tilde{\tau}_{1}$ co-annihilation regions unless one considers extremely
large values of $\delta\geqslant 0.8$. The overlap is not very significant and
is mostly ruled out by other phenomenological constraints. However, if one relaxes the complete universality in the Higgs sector, i.e., within non-universal Higgs mass models (NUHM), there is an overlap between these regions, paving the way for large flavor violation to coexist with co-annihilation regions.
The fact that in $\delta$-NUHM these regions do overlap has already been
observed independently by Hisano et al. Hisano:2002iy ; Hisano:2008ng .
However, they have studied $\mu\,\rightarrow\,e\,\gamma$ transitions and their
co-annihilating partner is not really a mixed flavor state. Further, they have
not studied the relic density regions in detail.
In the present work we elaborate on these regions and study their consequences. The rest of the paper is organized as follows: In section [2] we discuss the effect of $\delta$ in the co-annihilation regions, both on the mass of the co-annihilating partner and on the cross section. We also show that an overlap between regions of LFV cancellations and co-annihilations is not possible in $\delta$-mSUGRA. In section [3] we show that in $\delta$-NUHM regions do exist where flavored co-annihilations become important. The relative importance of various cross-sections in the flavored co-annihilation regions is elaborated in section [4]. We close with a summary and brief implications for the LHC in [5]. In Appendix [A] we have written down approximate expressions for the soft masses in the mSUGRA and NUHM scenarios for three different values of $\tan\beta$. In Appendix [B] we present $\delta$-mSUGRA in more
detail using approximate results. Description of numerical packages used and
numerical procedures followed are in Appendix [C]. In Appendix [D], we present
loop functions which are relevant to the discussion in the text. In Appendix
[E] we present the analytic form of the cross-sections for some scattering
processes relevant for the present discussions.
Figure 2: The Co-annihilation region with and without flavor mixing. In the
above figure we plot the condition
$m_{\tilde{\tau}_{1}}-m_{\tilde{\chi}_{1}^{0}}=0$ for $\delta=0$ (blue line)
and for $\delta=0.5$ (green line). Here we have chosen $\tan\beta=5$ and
$A_{0}=0$.
## 2 Co-annihilation with Flavor Violation
Co-annihilations play an important role in reducing the (relic) number density
of the dark matter particle by increasing its interactions at the decoupling
point. They require another particle which is almost degenerate in mass with the dark matter particle and shares a quantum number with it
Griest:1990kh . In mSUGRA, $\tilde{\chi}_{1}^{0}$ can have co-annihilations
with $\tilde{\tau}_{1}$ in regions of the parameter space where
$m_{\tilde{\tau}_{1}}\approx m_{\tilde{\chi}_{1}^{0}}$. We will now generalize this condition in the presence of flavor violation (the condition can be more accurately expressed as $m_{\tilde{\tau}_{1}}=m_{\tilde{\chi}_{1}^{0}}+\delta m$, where $\delta m$ lies within 10-15 GeV). As discussed in the introduction, we will consider a single $\mu-\tau$ flavor mixing term in
the RR sector, $\Delta^{\mu\tau}_{RR}$ to be present at the weak scale.
Similar analysis also holds for the $e-\tau$ flavor mixing. The slepton mass
matrix is defined by
$\displaystyle\qquad\qquad\mathcal{L}_{int}\supset-\frac{1}{2}\,\Phi^{T}\,\mathcal{M}^{2}_{\tilde{l}}\,\Phi$
(3) where
$\Phi^{T}=\Big{\\{}\tilde{e}_{L},\tilde{\mu}_{L},\tilde{\tau}_{L},\tilde{e}_{R},\tilde{\mu}_{R},\tilde{\tau}_{R}\Big{\\}}$
and $\displaystyle\mathcal{M}^{2}_{\tilde{l}}$
$\displaystyle=\begin{pmatrix}m_{\tilde{e}_{L}}^{2}&0&0&m_{\tilde{e}_{LR}}^{2}&0&0\\\
0&m_{\tilde{\mu}_{L}}^{2}&0&0&m_{\tilde{\mu}_{LR}}^{2}&0\\\
0&0&m_{\tilde{\tau}_{L}}^{2}&0&0&m^{2}_{\tilde{\tau}_{LR}}\\\
m_{\tilde{e}_{LR}}^{2}&0&0&m_{\tilde{e}_{R}}^{2}&0&0\\\
0&m_{\tilde{\mu}_{LR}}^{2}&0&0&m_{\tilde{\mu}_{R}}^{2}&\Delta^{\mu\tau}_{RR}\\\
0&0&m_{\tilde{\tau}_{LR}}^{2}&0&\Delta^{\mu\tau}_{RR}&m^{2}_{\tilde{\tau}_{R}}\\\
\end{pmatrix},$ (4)
where $m_{\tilde{f}_{LR}}^{2}=m_{f}\left(A_{f}-\mu\tan\beta\right)$ are the flavor conserving left-right mixing terms, $m_{\tilde{f}_{L}}^{2}$ are the left handed slepton mass terms, and $m_{\tilde{f}_{R}}^{2}$ denote the right
handed slepton masses. In the limit of vanishing electron mass and zero flavor mixing in the selectron sector, we can consider the following reduced $4\times 4$ mass matrix (in all our numerical calculations, we have used the full $6\times 6$ mass matrix without any approximations; the approximation here is valid only in models with universal scalar masses, like mSUGRA, NUHM, etc.). This matrix is sufficient and convenient to understand most of the discussion in the paper. It is given by
$\mathcal{M}^{2}_{\tilde{l}}=\begin{pmatrix}m_{\tilde{\mu}_{L}}^{2}&0&m_{\tilde{\mu}_{LR}}^{2}&0\\\
0&m_{\tilde{\tau}_{L}}^{2}&0&m^{2}_{\tilde{\tau}_{LR}}\\\
m^{2}_{\tilde{\mu}_{LR}}&0&m_{\tilde{\mu}_{R}}^{2}&\Delta^{\mu\tau}_{RR}\\\
0&m^{2}_{\tilde{\tau}_{LR}}&\Delta^{\mu\tau}_{RR}&m_{\tilde{\tau}_{R}}^{2}\end{pmatrix},$
(5)
where, we have taken it to be real for simplicity. The lightest eigenvalue of
the above matrix can be easily estimated. The lower $2\times 2$ block can be
diagonalized assuming that the flavor violating $\Delta^{\mu\tau}_{RR}$ is
much smaller than the flavor diagonal entries. A second diagonalization for
the stau LR mixing entry can be done in a similar manner. This leads to a
rough estimate of the lightest eigenvalue as:
$m_{\tilde{l}_{1}}^{2}\;\simeq\;m_{\tilde{\tau}_{R}}^{2}(1-\delta)-m_{\tau}\mu\tan\beta,$
(6)
where
$\delta=\frac{\Delta^{\mu\tau}_{RR}}{\sqrt{m^{2}_{\tilde{\mu}_{R}}\,m^{2}_{\tilde{\tau}_{R}}}}$.
Requiring that the lightest eigenvalue not be tachyonic, we find an upper bound on $\delta$ as follows:
$\delta\;<\;1-{m_{\tau}\mu\tan\beta\over m_{\tilde{\tau}_{R}}^{2}}$ (7)
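As a sanity check on Eqs. (6) and (7), one can diagonalize the lower $2\times 2$ RR block of Eq. (5) exactly and fold in the stau LR mixing as a second-step shift. A minimal sketch with hypothetical weak-scale inputs (for degenerate $\tilde{\mu}_{R}$, $\tilde{\tau}_{R}$ soft masses the two agree exactly):

```python
import math

def lightest_slepton_sq(m_muR2, m_tauR2, Delta, m_tau, mu, tanb):
    """Smaller eigenvalue of the lower 2x2 RR block of Eq. (5), with the
    stau LR mixing -m_tau*mu*tan(beta) folded in as a second-step shift."""
    s = m_muR2 + m_tauR2
    d = m_muR2 - m_tauR2
    lam_minus = 0.5 * (s - math.sqrt(d * d + 4.0 * Delta * Delta))
    return lam_minus - m_tau * mu * tanb

def approx_eq6(m_muR2, m_tauR2, Delta, m_tau, mu, tanb):
    """Rough estimate of Eq. (6): m_tauR^2 (1 - delta) - m_tau mu tan(beta)."""
    delta = Delta / math.sqrt(m_muR2 * m_tauR2)
    return m_tauR2 * (1.0 - delta) - m_tau * mu * tanb

# hypothetical inputs: 500 GeV right-handed sleptons, delta = 0.1,
# mu = 400 GeV, tan(beta) = 10 (all in GeV / GeV^2)
m2 = 500.0**2
exact = lightest_slepton_sq(m2, m2, 0.1 * m2, 1.777, 400.0, 10.0)
appr = approx_eq6(m2, m2, 0.1 * m2, 1.777, 400.0, 10.0)
print(exact, appr)
```

With these inputs $\delta=0.1$ sits well below the bound of Eq. (7), and the lightest eigenvalue stays positive, as the tachyon condition requires.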
This condition becomes important in regions of the parameter space where
$\mu\gg m_{\tilde{\tau}_{R}}^{2}$ and in regions where $\tan\beta$ is very
large such that the second term approaches unity. For co-annihilations,
$\delta$ lowers the lightest eigenvalue of the sleptonic mass matrix. Non-zero
$\delta$ shifts the ‘standard regions’ in mSUGRA towards lower values of
$M_{1/2}$, for a fixed $m_{0}$. In other words, since the sleptons become
lighter, the co-annihilations happen with lighter neutralino masses. To
illustrate this point, let us consider mSUGRA-like universal boundary conditions at the GUT scale. The one exception to the universality of the scalar mass terms (particularly the slepton mass terms) at the GUT scale is the flavor violating mass term $\Delta_{RR}^{\mu\tau}$. We will call this model $\delta$-mSUGRA. Given that the $\Delta^{\mu\tau}_{RR}$ parameter
does not run significantly under RG corrections666This is true as long as we
stick to MSSM like particle spectrum and interactions. Additional interactions
and particles can modify the flavor structure., we can use the MSSM RGE with
mSUGRA boundary conditions to study the low energy phenomenology. In Appendix
[A.1], we have presented approximate solutions for the RGE of soft masses and
couplings in mSUGRA. Using approximate formulae, in Fig. (2) we have plotted,
the $\tilde{\tau}-$co-annihilation condition,
$m_{\tilde{\chi}^{0}_{1}}-m_{\tilde{l}_{1}}\simeq 0$, with and without flavor
mixing. We have chosen $\delta=0.0$, $0.5$ and $\tan\beta=5$. As expected from
Eq. (6), the presence of the flavor violating $\delta$ shifts the co-
annihilation regions more towards the diagonal in the $m_{0}-M_{\frac{1}{2}}$
plane. In table 2, we show the spectrum for two points with $\delta=0$ and
$\delta=0.5$ which demonstrate that for fixed $m_{0}$, a lighter neutralino
can be degenerate with $m_{\tilde{l}_{1}}$ in the presence of $\delta$.
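The two-step estimate leading to Eq. (6) can be cross-checked numerically. The sketch below (with purely illustrative soft-mass inputs, not values from this work) builds the $4\times 4$ matrix of Eq. (5), neglecting the smuon LR entry, and compares its exact lightest eigenvalue with the rough formula; as noted later in the text, the approximation degrades for large $\delta$.

```python
import numpy as np

# Numerical check of the estimate of Eq. (6).  All soft masses (GeV) are
# illustrative placeholders, not fitted values from the paper.
m_muL, m_tauL = 210.0, 205.0      # left-handed soft masses
m_muR, m_tauR = 200.0, 195.0     # right-handed soft masses
m_tau, mu, tanb = 1.777, 500.0, 20.0
delta = 0.5

m2_LR = -m_tau * mu * tanb       # stau LR entry at large tan(beta)
D_RR = delta * m_muR * m_tauR    # Delta^{mu tau}_{RR} reconstructed from delta

# 4x4 matrix of Eq. (5), basis (mu_L, tau_L, mu_R, tau_R); the smuon LR
# entry is neglected (it is suppressed by m_mu).
M2 = np.array([
    [m_muL**2, 0.0,        0.0,       0.0      ],
    [0.0,      m_tauL**2,  0.0,       m2_LR    ],
    [0.0,      0.0,        m_muR**2,  D_RR     ],
    [0.0,      m2_LR,      D_RR,      m_tauR**2],
])

exact = np.linalg.eigvalsh(M2)[0]                       # lightest eigenvalue
approx = m_tauR**2 * (1.0 - delta) - m_tau * mu * tanb  # Eq. (6)
print(exact, approx)
```

For this (large) $\delta$ the rough formula overshoots the reduction, consistent with the caveat that Eq. (6) is not valid for large $\delta$.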
Table 2: Spectrum in the co-annihilation region with and without $\delta$.

Parameter | $\delta=0$ point | $\delta=0.5$ point
---|---|---
$m_{0}$ (GeV) | 200.0 | 200.0
$M_{\frac{1}{2}}$ (GeV) | 1031.0 | 458.0
$\tan\beta$ | 20 | 20
$\delta$ | 0.0 | 0.5
$m_{\chi_{1}^{0}}$ (GeV) | 439.22 | 188.69
$m_{\tilde{\tau}_{1}}$ (GeV) | 439.24 | 188.70
Figure 3: Co-annihilation channels appearing in the $\Omega_{DM}$ calculation
with $\mu-\tau$ flavor violation in the right handed sector. Notice that there
are now new final states in which either a $\mu$ or a $\tau$ can appear.
Eq.(6) is a rough estimate and not valid for large $\delta$. A more accurate
expression is presented in Appendix [B]. As we will see, this does not change
the conclusions of the present discussion much. We will revisit this point in
the next section.
The presence of $\delta$ also affects the relic density computation in the
co-annihilation regions. The thermally averaged cross section, on which the
relic density crucially depends, can be significantly modified by $\delta$,
since flavor violating scatterings are now also allowed. The typical $\tilde{\tau}$
co-annihilation processes in the absence of flavor violation are
$\tilde{\chi}_{1}^{0}\tilde{\chi}_{1}^{0}\rightarrow\tau\bar{\tau},\mu\bar{\mu},e\bar{e}$,
$\tilde{\chi}_{1}^{0}\tilde{\tau}_{1}\rightarrow\tau\gamma$,
$\tilde{\tau}_{1}\tilde{\tau}_{1}\rightarrow\tau\tau$,
$\tilde{\tau}_{1}\tilde{\tau}_{1}^{*}\rightarrow\tau\bar{\tau}$,
$\tilde{\chi}_{1}^{0}\tilde{\tau}_{1}\rightarrow Z\tau$,
$\tilde{\tau}_{1}\tilde{\tau}_{1}^{*}\rightarrow\gamma\gamma$. In the presence
of $\tilde{\mu}_{R}-\tilde{\tau}_{R}$ flavor mixing, the new vertices related
to flavor mixing would contribute to the processes with flavor violating final
states. The corresponding Feynman diagrams are shown in Fig.(3), where
$\mu/\tau$ would mean that the final state could either be a $\mu$ or a
$\tau$. The relevant Boltzmann equations for the neutralino and the lightest
slepton ($\tilde{l}_{1}$) remain the same as in the unflavored co-annihilation
case, though the masses and cross sections appearing in them change.
We have computed all the possible co-annihilation channels including flavor
violation by adding the flavor violating couplings to the MSSM model file of
the well known relic density calculator MicrOMEGAs Belanger:2010gh . The flavor
violating co-annihilations contribute significantly to the total cross section
and their relative importance increases with increasing $\delta$ as expected.
So far we have not addressed the question of whether such large flavor
violating entries in the sleptonic mass matrix are compatible with the
existing constraints from rare decay processes like $\tau\to\mu+\gamma$ or
$\tau\to\mu ee$. Constraints from such processes have been discussed in
several works. The constraints on the right handed (RR) flavor violating
sector differ from those on the left handed (LL) sector, as the former receive
only neutralino contributions and no chargino contributions. Furthermore the
two neutralino contributions (the pure $\tilde{B}^{0}$ and the mixed
$\tilde{B}^{0}-\tilde{H}^{0}$ diagrams, as depicted in Fig. (4)) can cancel
against each other in certain regions of the parameter space, as
elaborated in refs. Hisano:1995nq ; Masina:2002mv ; Paradisi:2005fk .
Following Masina:2002mv , the branching ratio for $\tau\to\mu+\gamma$ can be
written in the generalized mass insertion approximation as
$\text{BR}(\tau\rightarrow\mu\gamma)=5.78\times 10^{-5}\;\frac{M_{W}^{4}M_{1}^{2}\tan^{2}\beta}{|\mu|^{2}}\times\left|\delta^{RR}_{23}(I_{B,R}-I_{R})\right|^{2},$ (8)
where $I_{B,R}$ and $I_{R}$ are loop functions given in Appendix [D].
This amplitude results from the two diagrams shown in the mass-insertion
approximation in Fig. (4). The first is a pure Bino ($\tilde{B}^{0}$)
contribution, whereas the second is a mixed Bino-Higgsino
($\tilde{B}-\tilde{H}_{1}^{0}-\tilde{H}_{2}^{0}$) contribution. There is a
relative sign difference between these two contributions, which leads to
cancellations in some regions of the parameter space. In $\delta$-mSUGRA,
these cancellations occur when $m_{\tilde{\tau}_{R}}\approx 6M_{1}$ or,
equivalently, $\mu^{2}\simeq m^{2}_{\tilde{\tau}_{R}}$ Masina:2002mv . In
regions outside the cancellation region, the limit on $\delta_{RR}$ is of
$\mathcal{O}(10^{-1})$ for $\tan\beta=10$ and a slepton mass of around 400
GeV lucanpb , using the present bound $\text{BR}(\tau\to\mu+\gamma)\leq 4.4\times
10^{-8}$ Nakamura:2010zzi . In the cancellation region, however, the bound on
$\delta$ is very weak and $\delta$ could be $\mathcal{O}(1)$.
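The effect of the cancellation can be made quantitative by evaluating Eq. (8) for hypothetical loop-function values; the placeholders below are not the actual functions of Appendix [D], only numbers of a plausible dimension. When $I_{B,R}$ approaches $I_{R}$ the branching ratio collapses and the bound on $\delta$ evaporates.

```python
# Hedged sketch of Eq. (8).  The loop-function values are placeholders
# (dimension GeV^-2), chosen only to exhibit the cancellation.
M_W, M1, tanb, mu = 80.4, 200.0, 20.0, 500.0
delta_RR = 0.5

def br_tau_mu_gamma(I_BR, I_R):
    # Eq. (8): BR ~ |delta_RR * (I_{B,R} - I_R)|^2
    return (5.78e-5 * M_W**4 * M1**2 * tanb**2 / mu**2
            * abs(delta_RR * (I_BR - I_R))**2)

far = br_tau_mu_gamma(2.0e-9, 1.0e-9)    # away from the cancellation region
near = br_tau_mu_gamma(1.05e-9, 1.0e-9)  # near-complete cancellation
print(far, near, near / far)
```

A 95% cancellation between the two loop functions suppresses the branching ratio by $(0.05)^{2}$, i.e. a factor of 400, which is why $\delta$ becomes essentially unbounded there.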
Figure 4: $\tilde{B}^{0}$ and $\tilde{B}^{0}-\tilde{H}^{0}$ contributions in the RR-insertion. The photon can be attached to the charged internal lines.
Figure 5: $m_{0}-M_{\frac{1}{2}}$ plane in $\delta$-mSUGRA: The contours show
the branching ratio BR($\tau\,\rightarrow\,\mu\,\gamma$) for
$\delta=0.2,\,0.4,\,0.6$ and $0.8$ (from top left, clockwise), for
$\tan\beta=20$, $A_{0}=0$ and sign$(\mu)>0$. The blue line indicates the
region satisfying the WMAP bound. The black shaded region is excluded by the
direct LEP search for the Higgs boson. The violet dots represent the present
limits from the LHC lhcbounds . The red dot-dashed line indicates the 1 TeV
contour for the gluino and the blue dotted line marks the 1 TeV contour for
the first generation squark mass. The regions where the contours of
BR($\tau\,\rightarrow\,\mu\,\gamma$) reach $\lesssim 10^{-10}$ are where
cancellations happen. In this region $\delta^{23}_{RR}$ becomes unbounded
because of the cancellation between the $\tilde{B}^{0}$ and
$\tilde{B}^{0}-\tilde{H}^{0}$ diagrams in Fig. (4).
A large $\delta\sim\mathcal{O}(1)$ (by definition $\delta$ cannot be larger
than 1; here $\mathcal{O}(1)$ means close to 1) would increase the flavor
violating cross sections in the early universe. The current bounds already
push $\delta$ down to $\sim 10^{-1}$ for reasonable values of the slepton mass,
$\sim 400$ GeV, and $\tan\beta\sim 10$. We therefore look for regions where
the bound from $\tau\to\mu+\gamma$ is significantly weakened due to
cancellations among the flavor violating amplitudes. In Fig. [5], we have
presented the numerical results for mSUGRA with each panel representing a
different value of $\delta$ ($0.2,\,0.4,\,0.6$ and $0.8$). $\tan\beta$ is
fixed to be 20 and sign($\mu$) is positive. The details of the numerical
procedures we have followed are presented in Appendix [C]. In all these plots,
we have shown contours of BR$(\tau\rightarrow\mu\gamma)$ and the co-
annihilation regions. The other constraints shown on the plot include: the
purple region, excluded because the LSP is charged
($m_{\tilde{l}_{1}}<m_{\chi_{1}^{0}}$); the translucent black shaded region,
excluded by the LEP search for a light neutral Higgs boson, $m_{h}<114.5\ {\rm
GeV}$; and the light green region, where the chargino mass is excluded by the
Tevatron, $m_{\chi^{\pm}_{i}}<103.5$ GeV. The co-annihilation region has been computed
including the flavor violating diagrams in the thermally averaged cross-
sections. The relic density is fixed by the recent 7-year WMAP data, which
set it to wmap7 ,
$\Omega_{CDM}h^{2}=0.1109\pm 0.0056$ (9)
In the blue shaded region the neutralino relic density ($\Omega_{DM}$) is
within the $3\sigma$ limit of wmap7 , i.e., we require it to be
$0.09\leq\Omega_{DM}h^{2}\leq 0.12\,.$ (10)
From the first panel of the figure, for $\delta=0.2$, we see that there is no
overlap between the region where the cancellation in the amplitudes for
$\tau\to\mu+\gamma$ happens (around BR$(\tau\rightarrow\mu\gamma)\lesssim
10^{-10}$) and the co-annihilation region (blue region). With increasing
$\delta$, as can be seen from the subsequent panels, the co-annihilation
region moves towards the diagonal of the plane as the slepton mass becomes
lighter, and the cancellation region, which requires
$m_{\tilde{\tau}_{R}}\approx 6M_{1}$, also moves towards the diagonal.
However, within $\delta$-mSUGRA these two regions do not coincide, except
partially at the top end of the spectrum close to the upper bound of the
co-annihilation region.
Figure 6: Panels (from top right, clockwise) depict the
$m_{0}$-$M_{\frac{1}{2}}$ plane with $m_{10}=m_{hd}=0.5\cdot m_{0}$ and
$m_{20}=m_{hu}=1.5\cdot m_{0}$ for $\tan\beta=20$, $A_{0}=0$ and
sign$(\mu)>0$, with $\delta=0.2,0.4,0.6,0.7$ respectively. The dark green
region indicates inefficient REWSB. The purple region indicates a
$\tilde{l}_{1}$ LSP. The black shade marks the region excluded by the
unsuccessful LEP search, $m_{h}<114.5\,{\rm GeV}$. The violet dots represent
the present limits from the LHC lhcbounds . The red dot-dashed line indicates
the 1 TeV contour for the gluino and the blue dotted line marks the 1 TeV
contour for the first two generation squark mass. The blue strip bordering the
$\tilde{l}_{1}$ LSP region is the co-annihilation region. The contours mark
$BR(\tau\rightarrow\mu\gamma)$. The regions where the contours of
BR($\tau\,\rightarrow\,\mu\,\gamma)$ reach $\lesssim 10^{-10}$ and below are
where cancellations happen; they can be identified by their 'band' like
structure.
From Fig. (5) we can see that a very large $\delta_{RR}^{23}\gtrsim 0.8$ is
required to make the cancellation region consistent with the co-annihilation
region. In $\delta$-mSUGRA, such a large $\delta$ is consistent only at very
specific points of the parameter space (see Appendix B for more discussion).
Hence, we can infer from the above figures that within the $\delta$-mSUGRA
scenario the cancellation and co-annihilation regions are disparate and no
simultaneous solution exists. While the present discussion was based on
numerical solutions for a particular $\tan\beta$, one can easily convince
oneself that this holds for any $\tan\beta$ by looking at the
analytical formulae. In fact, in the co-annihilation region, the branching
fraction can be evaluated in the limit $\left(m_{\tilde{\tau}_{R}}\rightarrow
M_{1}\right)$ and is given as
$\displaystyle\text{BR}(\tau\rightarrow\mu\gamma)\approx\;$ $\displaystyle
1.134\times
10^{-6}\times\frac{M_{W}^{4}\left|\delta_{23}^{RR}\right|^{2}\tan^{2}\beta}{M_{1}^{4}}$
(11)
where we have used $|\mu|^{2}\approx 0.5m^{2}_{\tilde{\tau}_{R}}+20M^{2}_{1}$
and $m^{2}_{\tilde{\tau}_{L}}\approx m^{2}_{\tilde{\tau}_{R}}+2.5M^{2}_{1}$.
It is important to note that the above expression obviously does not permit
any cancellations. Thus within $\delta$-mSUGRA, flavor violation in the co-
annihilation region, even if present, would be constrained by the existing
leptonic flavor violation limits. In the following we see that this is no
longer true when one relaxes the strict universality of $\delta$-mSUGRA and
considers simple extensions like non-universal Higgs mass models.
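As a numerical illustration of Eq. (11) (inputs are indicative only, with $M_{1}\approx 0.411\,M_{1/2}$ as in the text), the absence of cancellations lets the experimental limit be inverted directly into a bound on $\delta$:

```python
# Sketch: in the co-annihilation limit m_stauR -> M1, Eq. (11) admits no
# cancellation, so BR(tau -> mu gamma) <= 4.4e-8 translates directly into a
# bound on delta.  Numbers are illustrative.
M_W = 80.4
M1 = 0.411 * 500.0          # M_{1/2} = 500 GeV
tanb = 20.0
BR_limit = 4.4e-8           # Nakamura:2010zzi

def br_coann(delta):
    return 1.134e-6 * M_W**4 * delta**2 * tanb**2 / M1**4   # Eq. (11)

# invert Eq. (11) for the maximal allowed delta
delta_max = (BR_limit * M1**4 / (1.134e-6 * M_W**4 * tanb**2)) ** 0.5
print(delta_max)
```

The resulting $\delta_{\max}$ comes out at the $\mathcal{O}(10^{-1})$ level quoted in the text, far below the $\delta\gtrsim 0.8$ needed to make the regions overlap in $\delta$-mSUGRA.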
Before proceeding to $\delta$-NUHM, a couple of observations are in order.
Firstly, away from the cancellation regions, the present limits on
$BR(\tau\to\mu+\gamma)$ constrain $|\delta|\lesssim 0.11-0.12$ for $\tan\beta$
of 20 and a slepton mass of around 200 GeV ($M_{{1\over 2}}\sim 500$ GeV) in
the co-annihilation regions. Since such values of $\delta$ are allowed by the
data, one can consider them to be present in $\delta$-mSUGRA. A larger value
of $\delta$ would be valid for larger slepton masses. As discussed, this would
shift the parameter space of the co-annihilation region relative to mSUGRA.
As a result, the spectrum also shifts compared to mSUGRA, and the thermally
averaged cross sections are modified. The shifts would be largest in the
absence of any constraint from lepton flavor violation; for this reason, we
look for overlapping regions between the cancellation and co-annihilation
regions. Secondly, the cancellation region lies within a small narrow band. To
the left and right of this band there can be regions of partial cancellation,
visible in Fig. (5). A discussion connected with this issue is presented in
Appendix [B].
## 3 Flavored Co-annihilation in $\delta$-NUHM
As we have seen in the previous section, in $\delta$-mSUGRA, the $\mu$
parameter gets tied up with the neutralino mass in the co-annihilation region,
thus leaving little room for cancellations within the flavor violating
amplitudes. In the NUHM models, which are characterized by non-universal soft
masses for the Higgs fields alone Ellis:2002iu , $\mu$ is no longer so
restricted. This can be demonstrated with the approximate formulae presented in
Appendix [A.2]. We denote the high scale mass parameters as
$m^{2}_{H_{u}}(M_{\text{GUT}})\equiv m^{2}_{20}$ and
$m^{2}_{H_{d}}(M_{\text{GUT}})\equiv m^{2}_{10}$. For tan$\beta$ = 20, using
the approximate expressions in the Appendix [A.2], we see that $|\mu|^{2}$ has
the form:
$|\mu|^{2}\approx
0.67~{}m_{0}^{2}+2.87~{}M_{\frac{1}{2}}^{2}-0.027~{}m_{10}^{2}-0.64~{}m_{20}^{2}$
(12)
Setting $m_{0}^{2}\approx m_{\tilde{\tau}_{R}}^{2}-0.15M_{\frac{1}{2}}^{2}$
and $M_{1}\approx 0.411M_{\frac{1}{2}}$ and taking the limit
$m_{\tilde{\tau}_{R}}\rightarrow M_{1}$ in the co-annihilation region, we have
$|\mu|^{2}\approx 17~{}M_{1}^{2}-0.027~{}m_{10}^{2}-0.64~{}m_{20}^{2}$ (13)
thus providing enough freedom in terms of $m_{10}$ and $m_{20}$ (in mSUGRA,
$|\mu|^{2}\approx 20.5M_{1}^{2}$ in this limit, as can be seen from the
expression below Eq. (11)) to allow cancellations in the LFV amplitudes to
co-exist with co-annihilation regions.
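A rough numerical reading of Eq. (13) shows how large $m_{20}$ must be to reach the cancellation condition $\mu^{2}\sim M_{1}^{2}$; dropping the small $m_{10}$ term is a simplification made here purely for illustration.

```python
# Sketch of Eq. (13): |mu|^2 ~ 17 M1^2 - 0.027 m10^2 - 0.64 m20^2 in the
# co-annihilation limit m_stauR -> M1.  Raising m20 pulls |mu|^2 down
# toward the cancellation condition mu^2 ~ M1^2.  Numbers are illustrative.
M12 = 750.0
M1 = 0.411 * M12            # Eq. (14)

def mu2_coann(m10, m20):
    return 17.0 * M1**2 - 0.027 * m10**2 - 0.64 * m20**2   # Eq. (13)

# m20 needed for mu^2 ~ M1^2 when the (small) m10 term is neglected:
m20_needed = (16.0 * M1**2 / 0.64) ** 0.5   # algebraically 5 * M1
print(m20_needed / M1)
```

So the Higgs soft mass must be raised to roughly $5M_{1}$, which is exactly the freedom that the non-universal Higgs boundary conditions provide.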
The dark matter phenomenology of NUHM models has been studied by several
authors Ellis:2002iu ; nonunivHiggs ; Ellis:2008eu ; Ellis:2007by ;
Roszkowski:2009sm ; Das:2010kb . The LSP is a neutralino in large regions of
the parameter space and further, it can admit large Higgsino fractions in its
composition unlike in mSUGRA. For simplicity, we concentrate on Bino dominated
regions in the following. In such a case the lightest neutralino mass, in
terms of SUSY parameters is as in mSUGRA:
$m_{\chi^{0}_{1}}\,\approx\,0.411M_{\frac{1}{2}}$ (14)
For the lightest slepton mass one can use Eq.(6) where now
$m_{\tilde{\tau}_{R}}^{2}$ at weak scale will be determined by the NUHM
boundary conditions at the GUT scale. Similar to the mSUGRA case, approximate
solutions can be derived for the NUHM case also and they are presented in
Appendix (A.2). Using the co-annihilation condition
$m_{\tilde{l}_{1}}\,\approx\,m_{\chi^{0}_{1}}$ and the cancellation condition
$m^{2}_{\tilde{\tau}_{R}}\approx\mu^{2}$, one can derive expressions for
$m_{10}^{2}$ and $m_{20}^{2}$ where flavored co-annihilations are of maximal
importance.
The derived expressions for $m^{2}_{10},m^{2}_{20}$ are, however, complicated.
We found simpler parameterizations for regions where the LSP is Bino dominated
and co-annihilations with the $\tilde{l}_{1}$ are important. Examples of such
regions are (i) $m_{20}=1.5\cdot m_{0}$ and $m_{10}=0.5\cdot m_{0}$ and (ii)
$m_{20}=3\cdot m_{0}$ and $m_{10}=m_{0}$. For these values of $m_{10}$ and
$m_{20}$, flavored co-annihilations can exist for non-zero $\delta$. In Fig.
(6), we present in $m_{0},M_{\frac{1}{2}}$ plane regions consistent with all
constraints for $\delta=0.2,0.4,0.6$ and $0.7$, in a fashion analogous to
those presented in the $\delta$-mSUGRA section, Fig. (5). We have chosen
$m_{20}=1.5\cdot m_{0}$ and $m_{10}=0.5\cdot m_{0}$ for these plots. The purple
region is excluded as the LSP is charged, here
$m_{\tilde{l}_{1}}<m_{\chi_{1}^{0}}$. Dark green region indicates no radiative
electroweak symmetry breaking, $|\mu|^{2}<0$. The translucent black shaded
region is excluded by search for light neutral higgs boson at LEP,
$m_{h}<114.5\,{\rm GeV}$. As in $\delta$-mSUGRA, we see that with increasing
$\delta_{23}^{RR}$ the $\tilde{l}_{1}$-LSP region grows, owing to the
reduction of the $\tilde{l}_{1}$ mass. The impact of non-universality in the
Higgs sector is negligible for $m_{\tilde{l}_{1}}$ in these regions.
Analogously, the regions excluded by the light Higgs search
($m_{h}<114.5\,{\rm GeV}$) are weakly affected by the presence of $\delta$.
Moreover, the region with $|\mu|^{2}<0$ is not affected by $\delta$, as it is
entirely governed by $m_{0},m_{10}$ and $m_{20}$, with the largest
contributions from $m_{20}$ and $m_{0}$. However, as expected, the magnitude
of $BR(\tau\rightarrow\mu\gamma)$, governed by Eq. (11), increases with
$\delta_{23}^{RR}$. The last panel of the figure shows where the cancellation
region overlaps with the co-annihilation region for $\delta=0.7$. For a
different set of values of
$m_{10}$ and $m_{20}$, for example, $m_{20}=3\cdot m_{0}$, $m_{10}=m_{0}$ the
overlap regions can be found for even smaller values of $\delta$. In these
regions flavored co-annihilations play a dominant role.
## 4 Channels
The individual scattering processes involved in the computation of thermally
averaged cross-section are called channels. The typical channels which are
dominant in the co-annihilation region are
$\tilde{l}_{1}\tilde{l}_{1}\rightarrow l\bar{l}$,
$\tilde{\chi}_{1}^{0}\tilde{l}_{1}\rightarrow\gamma l$,
$\tilde{\chi}_{1}^{0}\tilde{l}_{1}\rightarrow Zl$,
$\tilde{\chi}_{1}^{0}\tilde{\chi}_{1}^{0}\rightarrow l\bar{l}$,
$\tilde{l}_{1}\tilde{l}_{1}^{*}\rightarrow l\bar{l}$, etc. (there are about
thirty in total). In the presence of flavor violation the number of
these processes would be enlarged to include flavor violating final states. In
the present section, we analyze the relative importance of the new flavor
violating channels relative to the corresponding flavor conserving ones as a function
of $\delta$. To do this we fix $M_{{1\over 2}}$ and vary $\delta$ and
$m_{0}$. In effect, this corresponds to combining horizontal sections
of the co-annihilation regions of all the panels in Fig. (5) (Fig. (6)) for
$\delta$-mSUGRA ($\delta$-NUHM). In Fig. (7) we plot the dominant channels as
a function of $\delta$ in $\delta$-mSUGRA. All the points satisfy relic
density within WMAP $3\sigma$ bound and lie in the co-annihilation region.
The rest of the phenomenological constraints are also imposed. $m_{0}$ is varied
from 100 to 600 GeV, whereas $M_{1/2}$ is fixed at 500 GeV, $\tan\beta=20$ and
sign$(\mu)>0$. The Y-axis is percentage contribution to the thermally averaged
cross section, $\langle\sigma v\rangle$ defined by
$\displaystyle\%\;\langle\sigma v\rangle_{ij\rightarrow
mn}=\frac{\langle\sigma v\rangle_{ij\rightarrow mn}}{\langle\sigma
v\rangle_{total}}\times 100$ (15)
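As a minimal sketch of the normalization in Eq. (15), the fractions of Point I in Table 3 (restricted to four channels for brevity) can be converted to percentages of their combined total:

```python
# Sketch of Eq. (15): channel percentages from relative thermally averaged
# cross sections.  The weights below are the Point I fractions of Table 3,
# restricted to four channels for brevity.
sigma_v = {
    "chi l1 -> gamma tau": 0.206,
    "chi l1 -> gamma mu":  0.0653,
    "l1 l1 -> tau tau":    0.211,
    "l1 l1 -> tau mu":     0.130,
}
total = sum(sigma_v.values())
percent = {ch: 100.0 * sv / total for ch, sv in sigma_v.items()}
print(percent)
```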
Figure 7: Channels in $\delta$-mSUGRA: The colored dots show the relative contribution of each channel to $\langle\sigma v\rangle_{tot}$. $M_{1/2}=500$ GeV, while $m_{0}$ and $\delta$ are varied to satisfy the co-annihilation condition. All points satisfy the WMAP $3\sigma$ bound (10). $\tan\beta$ is fixed to 20 and sign$(\mu)>0$. Flavor violating constraints are not imposed here.

Table 3: ${\displaystyle\frac{\langle\sigma v\rangle_{channel}}{\langle\sigma v\rangle_{total}}}$ for the dominant channels in $\delta$-mSUGRA

Parameters | Point I: $M_{\frac{1}{2}}=500.0$ GeV, $\tan\beta=20$, $m_{0}=165.6$ GeV | Point II: $M_{\frac{1}{2}}=500.0$ GeV, $\tan\beta=20$, $m_{0}=169.6$ GeV | Point III: $M_{\frac{1}{2}}=500.0$ GeV, $\tan\beta=20$, $m_{0}=249.0$ GeV
---|---|---|---
$\delta$ | 0.197 | 0.202 | 0.5
$\Omega h^{2}$ | 0.0910 | 0.119 | 0.120
$\tilde{\chi}_{1}^{0}\;\tilde{l}_{1}\rightarrow\gamma\;\tau$ | 0.206 | 0.227 | 0.181
$\tilde{\chi}_{1}^{0}\;\tilde{l}_{1}\rightarrow\gamma\;\mu$ | $6.53\times 10^{-2}$ | $7.47\times 10^{-2}$ | 0.13
$\tilde{l}_{1}\;\tilde{l}_{1}\rightarrow\tau\;\tau$ | 0.211 | 0.181 | 0.116
$\tilde{l}_{1}\;\tilde{l}_{1}\rightarrow\tau\;\mu$ | 0.130 | 0.117 | 0.165
$\tilde{l}_{1}\;\tilde{l}_{1}\rightarrow\mu\;\mu$ | $2.10\times 10^{-2}$ | $1.97\times 10^{-2}$ | $5.97\times 10^{-2}$
$\tilde{l}_{1}\;\tilde{l}_{1}^{*}\rightarrow\gamma\;\gamma$ | 0.110 | $9.65\times 10^{-2}$ | $9.93\times 10^{-2}$
$\tilde{\chi}_{1}^{0}\;\tilde{l}_{1}\rightarrow Z\;\tau$ | $5.67\times 10^{-2}$ | $6.23\times 10^{-2}$ | $4.96\times 10^{-2}$
$\tilde{\chi}_{1}^{0}\;\tilde{l}_{1}\rightarrow Z\;\mu$ | $1.76\times 10^{-2}$ | $2.02\times 10^{-2}$ | $3.53\times 10^{-2}$
$\tilde{l}_{1}\;\tilde{l}_{1}^{*}\rightarrow Z\;\gamma$ | $5.00\times 10^{-2}$ | $4.42\times 10^{-2}$ | $5.18\times 10^{-2}$
$\tilde{\chi}_{1}^{0}\;\tilde{\chi}_{1}^{0}\rightarrow\tau\;\bar{\tau}$ | $2.02\times 10^{-2}$ | $2.81\times 10^{-2}$ | $2.27\times 10^{-2}$
$\tilde{\chi}_{1}^{0}\;\tilde{\chi}_{1}^{0}\rightarrow\tau\;\bar{\mu}$ | $6.76\times 10^{-3}$ | $9.50\times 10^{-3}$ | $8.29\times 10^{-3}$
$\tilde{\chi}_{1}^{0}\;\tilde{\chi}_{1}^{0}\rightarrow\mu\;\bar{\mu}$ | $1.73\times 10^{-2}$ | $2.42\times 10^{-2}$ | $1.80\times 10^{-2}$
It should be noted that the flavor violating constraints are not imposed for
$\delta$-mSUGRA in this analysis. The current limits on
BR($\tau\to\mu+\gamma$) constrain $|\delta|\lesssim 0.11$ in the parameter
space presented in the figure. For such values of $\delta$ the flavor
violating channels contribute up to $5\%$ of the dominant channel
contribution. Larger values of $\delta$ are not allowed once this constraint
is imposed, as there is no overlap between the cancellation and
co-annihilation regions in $\delta$-mSUGRA. However, to study the behavior of
the channels with respect to $\delta$, it is useful not to impose the
BR($\tau\to\mu+\gamma$) constraint for the present.
Figure 8: Channels in $\delta$-NUHM: The colored dots show the relative contribution of each channel to $\langle\sigma v\rangle_{tot}$. $M_{1/2}=750$ GeV, while $m_{0}$ and $\delta$ are varied to satisfy the co-annihilation condition. All points satisfy the WMAP $3\sigma$ bound (10). $\tan\beta$ is fixed to 20 and sign$(\mu)>0$. Flavor violating constraints are imposed here, which causes the discontinuous regions in each of the channels.

Table 4: ${\displaystyle\frac{\langle\sigma v\rangle_{channel}}{\langle\sigma v\rangle_{total}}}$ for the dominant channels in $\delta$-NUHM

Parameters | Point IV: $M_{\frac{1}{2}}=750.0$ GeV, $\tan\beta=20$, $m_{0}=199.3$ GeV | Point V: $M_{\frac{1}{2}}=750.0$ GeV, $\tan\beta=20$, $m_{0}=216.0$ GeV | Point VI: $M_{\frac{1}{2}}=750.0$ GeV, $\tan\beta=20$, $m_{0}=592.1$ GeV
---|---|---|---
$\delta$ | 0.01 | 0.12 | 0.767
$\Omega h^{2}$ | 0.115 | 0.116 | 0.111
$\tilde{\chi}_{1}^{0}\;\tilde{l}_{1}\rightarrow\gamma\;\tau$ | 0.190 | 0.168 | 0.116
$\tilde{\chi}_{1}^{0}\;\tilde{l}_{1}\rightarrow\gamma\;\mu$ | $4.74\times 10^{-4}$ | $3.89\times 10^{-2}$ | $9.89\times 10^{-2}$
$\tilde{l}_{1}\;\tilde{l}_{1}\rightarrow\tau\;\tau$ | 0.388 | 0.280 | 0.134
$\tilde{l}_{1}\;\tilde{l}_{1}\rightarrow\tau\;\mu$ | $1.90\times 10^{-3}$ | 0.127 | 0.227
$\tilde{l}_{1}\;\tilde{l}_{1}\rightarrow\mu\;\mu$ | $2.39\times 10^{-6}$ | $1.48\times 10^{-2}$ | $9.37\times 10^{-2}$
$\tilde{l}_{1}\;\tilde{l}_{1}^{*}\rightarrow\gamma\;\gamma$ | $0.115$ | 0.123 | $0.129$
$\tilde{\chi}_{1}^{0}\;\tilde{l}_{1}\rightarrow Z\;\tau$ | $5.50\times 10^{-2}$ | $4.88\times 10^{-2}$ | $3.35\times 10^{-2}$
$\tilde{\chi}_{1}^{0}\;\tilde{l}_{1}\rightarrow Z\;\mu$ | $2.02\times 10^{-6}$ | $1.11\times 10^{-2}$ | $2.28\times 10^{-2}$
$\tilde{l}_{1}\;\tilde{l}_{1}^{*}\rightarrow Z\;\gamma$ | $5.67\times 10^{-2}$ | $6.36\times 10^{-2}$ | $7.49\times 10^{-2}$
$\tilde{\chi}_{1}^{0}\;\tilde{\chi}_{1}^{0}\rightarrow\tau\;\bar{\tau}$ | $1.14\times 10^{-2}$ | $1.13\times 10^{-2}$ | $3.72\times 10^{-3}$
$\tilde{\chi}_{1}^{0}\;\tilde{\chi}_{1}^{0}\rightarrow\tau\;\bar{\mu}$ | $2.80\times 10^{-5}$ | $1.77\times 10^{-3}$ | $3.53\times 10^{-3}$
$\tilde{\chi}_{1}^{0}\;\tilde{\chi}_{1}^{0}\rightarrow\mu\;\bar{\mu}$ | $9.53\times 10^{-3}$ | $9.87\times 10^{-3}$ | $4.49\times 10^{-3}$
The upper left panel shows the $\%\;\langle\sigma v\rangle$ for
$\tilde{\chi}_{1}^{0}\tilde{\chi}_{1}^{0}\rightarrow l\bar{l}$, which
contributes about $\lesssim 5\%$ in total to $\langle\sigma v\rangle$ in this
region of parameter space. In this case, the initial state masses are
independent of $\delta$ and $m_{0}$, and thus the only variation comes from
the mass of the intermediate state particle $(\tilde{l}_{1})$. In Table (3),
we present sample points appearing in the plot. From points I and II of
Table 3, we see that a slight shift of 5 GeV in $m_{0}$ is still allowed by
the WMAP $3\sigma$ limits, while it changes the
$\tilde{\chi}_{1}^{0}\tilde{\chi}_{1}^{0}$ cross section by about $40\%$. This
is why the band of allowed points is broad in this channel. The other
dominant channels are shown in the subsequent panels of the figure. It is
evident that the dominant contributions come from the
$\tilde{\chi}_{1}^{0}\;\tilde{l}_{1}\rightarrow\gamma\;\tau$ and
$\tilde{l}_{1}\;\tilde{l}_{1}\rightarrow\tau\;\tau$ channels, which
contribute about $35\%$ and $25\%$ respectively to $\langle\sigma
v\rangle$. Most of the flavor violating counterparts of these channels behave
as expected, i.e. at large $\delta$ they become comparable to the flavor
conserving ones. One exception is the $\tilde{l}_{1}\tilde{l}_{1}$
channel. Here the initial state composition depends crucially on $\delta$
and also on $\tilde{\tau}_{L}\tilde{\tau}_{R}$ mixing. In such a situation,
it is clear that the initial state cannot be assigned any flavor quantum
number. In fact, we find that the $\tilde{\mu}$ (smuon) component of
$\tilde{l}_{1}$ can be as large as $\sim 50\%$ even for $\delta\approx 0.2$ in
some regions of parameter space. We see from the figure that the flavor
violating final states dominate over the flavor conserving ones as $\delta$
grows beyond $\delta\geqslant 0.2$. The exact crossing point of the flavor
violating channels over the flavor conserving ones depends on the parameter
space chosen, crucially on $\tan\beta$ and $\mu$. This is because the
effective $\tilde{\mu}_{L}\tilde{\tau}_{R}$ and/or
$\tilde{\mu}_{R}\tilde{\tau}_{L}$ couplings generated play an important role in
determining the initial state composition. The last two panels show some of
the channels which contribute negligibly to $\langle\sigma v\rangle$. In
Appendix [E] we give approximate formulae, in the
$m_{\tau}/m_{\mu}\rightarrow 0$ limit, for the dominant cross sections. Using
these, together with the approximate formulae of Appendix A, the features of
the full numerical analysis can be verified. A more detailed analysis of the
cross sections in the presence of flavor violation, in the various dark matter
allowed regions, will be presented elsewhere upcoming .
Figure 9: Dominant channel contributions to $\langle\sigma
v\rangle_{tot}$. Here $\tan\beta$ is fixed to 20, $M_{1/2}=750$ GeV and
$m_{0}$ and $\delta$ are varied to fit the co-annihilation condition. Here all
the points satisfy WMAP $3\sigma$ bound (10).
In Fig. 8, we present similar plots for the channels in the $\delta$-NUHM
case, for the parametrization chosen in the previous section. Here we have
imposed BR$(\tau\rightarrow\mu\gamma)\leqslant 4.4\times 10^{-8}$
along with the relic density constraints. The channels show a similar pattern
here as in $\delta$-mSUGRA. However, as we can see from the panels, there is a
gap between $\delta=0.2$ to $\delta=0.7$ where the parameter space does not
satisfy BR$(\tau\rightarrow\mu\gamma)\leqslant 4.4\times 10^{-8}$. For points
below $\delta\leqslant 0.2$, this constraint is satisfied as ‘$\delta$’ is too
small to generate appreciable $\tau\to\mu+\gamma$ amplitudes. For
$\delta\geqslant 0.7$, the constraint is now satisfied because of the overlap
between the cancellation regions and co-annihilation regions. The relative
contribution in the overlap region is magnified in Fig. 9, where all the
channel contributions are presented for $0.70\leqslant\delta\leqslant
0.85$. As we can see, the flavor violating channels strongly compete with the
flavor conserving ones. A sample of points in $\delta$-NUHM is presented in
Table 4, where points IV and V represent low $\delta$ values, whereas point VI
represents a large $\delta$ value in the overlap region.
Finally, a note about the relative contributions to the relic density. We have
$\Omega h^{2}\propto\frac{1}{\langle\sigma v\rangle_{tot}}=\frac{1}{\displaystyle\sum_{\text{all channels}}\langle\sigma v\rangle_{i}}=\frac{1}{\langle\sigma v\rangle_{tot}\displaystyle\sum_{\text{all channels}}\frac{\langle\sigma v\rangle_{i}}{\langle\sigma v\rangle_{tot}}}\,.$ (16)
For small $\delta$ $\left(\sim\mathcal{O}\left(10^{-2}\right)\right)$, where
the $\langle\sigma v\rangle$ contribution of the flavor violating channels is
small, the relic density estimate does not change much from the flavor
conserving case. However, for large enough $\delta$
$\left(\sim\mathcal{O}\left(10^{-1}\right)\right)$, one tends to overestimate
the relic density if one does not include the flavor violating scatterings
when computing the thermally averaged cross section.
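The size of this overestimate can be made concrete with a one-line estimate; the $30\%$ flavor violating share below is an illustrative number, of the size seen in Tables 3 and 4 at moderate $\delta$.

```python
# Sketch of the remark above: Omega h^2 ~ 1/<sigma v>_tot, so dropping the
# flavor violating channels inflates the relic density estimate.
sv_fc = 1.0     # flavor conserving channels (arbitrary units)
sv_fv = 0.30    # flavor violating channels (illustrative ~30% share)

omega_full  = 1.0 / (sv_fc + sv_fv)   # all channels included
omega_no_fv = 1.0 / sv_fc             # flavor violation neglected
print(omega_no_fv / omega_full)       # relic density overestimated by this factor
```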
## 5 Summary and Outlook
We have generalized the co-annihilation process by including flavor violation
in the sleptonic $\mu-\tau$ (RR) sector. The amount of flavor violation
admissible is constrained to be small by the limit on the
BR($\tau\to\mu+\gamma$). This constraint is significantly weakened in regions
of the parameter space where cancellations in the amplitudes take place. We
look for regions of parameter space where there is a significant overlap
between cancellation regions and co-annihilation regions. The search is done
in mSUGRA and NUHM augmented with a single flavor violating parameter in the
$\mu-\tau$ (RR) sector. We found that while no significant overlap is possible
in $\delta$-mSUGRA, $\delta$-NUHM allows for large regions where significant
overlap is possible.
The presence of flavor violation shifts the lightest slepton co-annihilation
regions towards lighter neutralino masses compared to mSUGRA. While computing
the thermally averaged cross-sections in the overlap regions, we found that
flavor violating processes could contribute with equal strength and in some
cases even dominantly compared to the flavor conserving ones. This is true
even for $\delta\gtrsim 0.2$ in some regions of the parameter space.
Neglecting the flavor violating channels would lead to underestimating the
cross section and thus overestimating the relic density. A point to note is
that if flavor violation is present even within the presently allowed limits,
it could still change the dominant channels by about $5\%$ in $\delta$-mSUGRA
and more in $\delta$-NUHM. Finally, we have probed only a minor region of the
parameter space in the present work, demonstrating the existence of such
regions. A comprehensive analysis of such regions and the associated
phenomenology of their spectrum would be interesting in its own right.
In this respect, a few comments on flavor violation at the LHC and ILC are in
order. Detection of lepton flavor violation at colliders like the LHC is
strongly constrained by experimental limits on rare lepton flavor violating
decays. One standard technique to detect flavor violation at colliders is to
study the slepton mass differences using end-point kinematics of cascade
decays Hinchliffe:2000np . The typical sensitivity being discussed in the
literature is $\frac{\Delta
m_{\tilde{l}}}{m_{\tilde{l}}}(l_{i},l_{j})\,=\,\frac{|m_{\tilde{l}_{i}}-m_{\tilde{l}_{j}}|}{\sqrt{m_{\tilde{l}_{i}}m_{\tilde{l}_{j}}}}\,\simeq\,\mathcal{O}(0.1)\%$
for $\tilde{e}_{L}-\tilde{\mu}_{L}$ and $\mathcal{O}(1)\%$ for
$\tilde{\mu}_{L}-\tilde{\tau}_{L}$ Allanach:2008ib . In the presence of
$\Delta^{\mu\tau}_{RR}$, splittings are generated in all three sectors,
$e-\mu\,,\mu-\tau\,,e-\tau$ calibbi2 . In the case discussed in this
work, the typical splittings are $\mathcal{O}(20)\%$ to $\mathcal{O}(70)\%$ as
the constraints from LFV experiments are evaded. Thus, far less sensitivity is
required to measure these splittings compared to the regular case. Further
investigations in this direction are however needed. Another interesting
aspect of this scenario would be to measure widths for LFV decay processes
like
$\tilde{\chi}_{2}^{0}\rightarrow\tilde{\chi}_{1}^{0}l_{i}^{\pm}l_{j}^{\mp}$.
These widths have been studied for the case of right handed slepton flavor
violation in bartl . In NUHM, with a comparatively smaller value of $\mu$ one
could expect large production cross sections for $\tilde{\chi}_{4}^{0}$ and
$\tilde{\chi}_{2}^{\pm}$ in the decays of colored particles. In fact, a full
Monte Carlo study has been reported by Hisano et al. Hisano:2008ng for a
particular parameter space point in the model.
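The relative mass-splitting observable quoted above is straightforward to evaluate; a minimal sketch follows (the function name and the GeV inputs are our own, purely illustrative):

```python
import math

def rel_mass_splitting(m_i, m_j):
    """Relative slepton mass splitting |m_i - m_j| / sqrt(m_i * m_j)."""
    return abs(m_i - m_j) / math.sqrt(m_i * m_j)

# Hypothetical example: a 1 GeV splitting at ~200 GeV gives roughly 0.5%,
# between the quoted e_L-mu_L (0.1%) and mu_L-tau_L (1%) sensitivities,
# while the O(20-70)% splittings discussed here are far larger.
splitting = rel_mass_splitting(200.0, 201.0)
```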
At the linear collider, it should be possible to identify the $\tilde{\tau}$
co-annihilation region Nojiri:1994it ; Nojiri:1996fp ; Guchait:2002xh ;
Hamaguchi:2004df ; Godbole:2008it by studying the polarization of the decay
$\tilde{\tau}_{1}\rightarrow\tilde{\chi}_{1}^{0}\tau$. In the presence of
flavored co-annihilations one should be able to see flavor violating decays of
$\tilde{\tau}_{1}$. Heavier particles like $\tilde{\tau}_{2}$ and charginos
would also have flavor violating decays.
Finally, let us note that while we have considered the cancellations in the
dipole operator of the $\tau\,\rightarrow\,\mu$ transitions, this does not
guarantee suppression in amplitudes associated with other operators. For
example, in this region $\tau\,\rightarrow\,\mu\,\eta$ or
$\tau\,\rightarrow\,\mu\,\eta^{\prime}$ could be sizable ($\sim
10^{-9}-10^{-10}$) Brignole:2004ah , which could be probed in future
B-factories. In contrast, $\tau\,\rightarrow\,\mu\,\gamma$ will continue to
remain constrained and thus will not be detected.
The focus of the present work has been to introduce new regions of parameter
space where flavor effects in the co-annihilation regions could be important.
More generally, flavor effects could play a role in any dark matter ‘region’
of the SUSY parameter space. Such studies are being explored in upcoming .
###### Acknowledgements.
We thank Ranjan Laha for participating in this project at the initial stages.
We also thank Yann Mambrini, Utpal Chattopadhyay and Alexander Pukhov for
discussions and useful inputs. SKV acknowledges support from DST project
“Complementarity between direct and indirect searches for Supersymmetry” and
also support from DST Ramanujan Fellowship SR/S2/RJN-25/2008. RG acknowledges
support from SR/S2/RJN-25/2008. DC acknowledges partial support from
SR/S2/RJN-25/2008.
## Appendix A Approximate Solutions
### A.1 mSUGRA Case
In the approximation of small Yukawa couplings, we retain only
$Y_{t},\,Y_{b},\,Y_{\tau}$ and solve the RGEs semi-analytically. For the first
two generations the dependence on $\tan\beta$ is very weak,
so we take the expressions to be valid for all $\tan\beta$. In deriving the approximate
expressions we have taken $m_{t}(M_{Z})=165{\rm~{}GeV}$,
$m_{b}(M_{Z})=3{\rm~{}GeV}$ and $m_{\tau}(M_{Z})=1.77{\rm~{}GeV}$. For
$\tan\beta=5$, the first two generation masses at the weak scale are
$\displaystyle(m^{2}_{Q})_{1,2}(M_{Z})\;\simeq\quad$
$\displaystyle\,m_{0}^{2}+6.66\,M_{\frac{1}{2}}^{2}$ (17)
$\displaystyle(m^{2}_{D})_{1,2}(M_{Z})\;\simeq\quad$
$\displaystyle\,m_{0}^{2}+6.19\,M_{\frac{1}{2}}^{2}$ (18)
$\displaystyle(m^{2}_{U})_{1,2}(M_{Z})\;\simeq\quad$
$\displaystyle\,m_{0}^{2}+6.22\,M_{\frac{1}{2}}^{2}$ (19)
$\displaystyle(m^{2}_{L})_{1,2}(M_{Z})\;\simeq\quad$
$\displaystyle\,m_{0}^{2}+0.51\,M_{\frac{1}{2}}^{2}$ (20)
$\displaystyle(m^{2}_{E})_{1,2}(M_{Z})\;\simeq\quad$
$\displaystyle\,m_{0}^{2}+0.17\,M_{\frac{1}{2}}^{2}$ (21)
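The first/second-generation expressions (17)-(21) are easy to tabulate numerically; a small sketch (the function name and GeV inputs are our own choices, not part of the paper):

```python
def first_two_gen_masses_sq(m0, m_half):
    """Approximate weak-scale soft masses squared, eqs. (17)-(21), tan(beta)=5.
    Inputs m0, m_half (= M_1/2) in GeV; returns GeV^2 per sector."""
    coeff = {"Q": 6.66, "D": 6.19, "U": 6.22, "L": 0.51, "E": 0.17}
    return {sector: m0**2 + c * m_half**2 for sector, c in coeff.items()}

# e.g. m0 = 100 GeV, M_1/2 = 400 GeV: gaugino running drives squarks
# far above sleptons, with the right slepton (E) lightest.
masses = first_two_gen_masses_sq(100.0, 400.0)
```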
Third generation masses depend more strongly on $\tan\beta$ than those of the
first two generations. For low $\tan\beta=5$ their values are as follows
$\displaystyle(m^{2}_{Q})_{3}(M_{Z})\simeq$
$\displaystyle-0.036\,A_{0}^{2}+0.65\,m_{0}^{2}+0.16\,A_{0}M_{\frac{1}{2}}+5.66\,M_{\frac{1}{2}}^{2}$
(22) $\displaystyle(m^{2}_{U})_{3}(M_{Z})\simeq$
$\displaystyle-0.070\,A_{0}^{2}+0.31\,m_{0}^{2}+0.30\,A_{0}M_{\frac{1}{2}}+4.26\,M_{\frac{1}{2}}^{2}$
(23) $\displaystyle(m^{2}_{D})_{3}(M_{Z})\simeq$ $\displaystyle-1.70\times
10^{-3}\,A_{0}^{2}+m_{0}^{2}+7.23\times
10^{-3}\,A_{0}M_{\frac{1}{2}}+6.17\,M_{\frac{1}{2}}^{2}$ (24)
$\displaystyle(m^{2}_{L})_{3}(M_{Z})\simeq$ $\displaystyle-7.34\times
10^{-4}\,A_{0}^{2}+m_{0}^{2}+6.29\times
10^{-4}\,A_{0}M_{\frac{1}{2}}+0.51\,M_{\frac{1}{2}}^{2}$ (25)
$\displaystyle(m^{2}_{E})_{3}(M_{Z})\simeq$ $\displaystyle-1.47\times
10^{-3}\,A_{0}^{2}+m_{0}^{2}+1.26\times
10^{-3}\,A_{0}M_{\frac{1}{2}}+0.16\,M_{\frac{1}{2}}^{2}$ (26) $\displaystyle
m_{H_{d}}^{2}(M_{Z})\simeq$ $\displaystyle-3.30\times
10^{-3}\,A_{0}^{2}+0.99\,m_{0}^{2}+0.01\,A_{0}M_{\frac{1}{2}}+0.48\,M_{\frac{1}{2}}^{2}$
(27) $\displaystyle m_{H_{u}}^{2}(M_{Z})\simeq$
$\displaystyle-0.105\,A_{0}^{2}-0.046\,m_{0}^{2}+0.46\,A_{0}M_{\frac{1}{2}}-2.95\,M_{\frac{1}{2}}^{2}$
(28) $\displaystyle|\mu|^{2}(M_{Z})=$
$\displaystyle-4158.72+0.110\,A_{0}^{2}+0.084\,m_{0}^{2}-0.47\,A_{0}M_{\frac{1}{2}}+3.09\,M_{\frac{1}{2}}^{2}$
(29)
For medium $\tan\beta=20$ their values are as follows
$\displaystyle(m^{2}_{Q})_{3}(M_{Z})\simeq$
$\displaystyle-0.048\,A_{0}^{2}+0.62\,m_{0}^{2}+0.20\,A_{0}M_{\frac{1}{2}}+5.54\,M_{\frac{1}{2}}^{2}$
(30) $\displaystyle(m^{2}_{U})_{3}(M_{Z})\simeq$
$\displaystyle-0.070\,A_{0}^{2}+0.33\,m_{0}^{2}+0.30\,A_{0}M_{\frac{1}{2}}+4.32\,M_{\frac{1}{2}}^{2}$
(31) $\displaystyle(m^{2}_{D})_{3}(M_{Z})\simeq$
$\displaystyle-0.023\,A_{0}^{2}+0.91\,m_{0}^{2}+0.10\,A_{0}M_{\frac{1}{2}}+5.86\,M_{\frac{1}{2}}^{2}$
(32) $\displaystyle(m^{2}_{L})_{3}(M_{Z})\simeq$
$\displaystyle-0.011\,A_{0}^{2}+0.97\,m_{0}^{2}+8.38\times
10^{-3}\,A_{0}M_{\frac{1}{2}}+0.50\,M_{\frac{1}{2}}^{2}$ (33)
$\displaystyle(m^{2}_{E})_{3}(M_{Z})\simeq$
$\displaystyle-0.021\,A_{0}^{2}+0.93\,m_{0}^{2}+0.017\,A_{0}M_{\frac{1}{2}}+0.15\,M_{\frac{1}{2}}^{2}$
(34) $\displaystyle m_{H_{d}}^{2}(M_{Z})\simeq$
$\displaystyle-0.046\,A_{0}^{2}+0.83\,m_{0}^{2}+0.16\,A_{0}M_{\frac{1}{2}}+0.01\,M_{\frac{1}{2}}^{2}$
(35) $\displaystyle m_{H_{u}}^{2}(M_{Z})\simeq$
$\displaystyle-0.105\,A_{0}^{2}-0.007\,m_{0}^{2}+0.46\,A_{0}M_{\frac{1}{2}}-2.86\,M_{\frac{1}{2}}^{2}$
(36) $\displaystyle|\mu|^{2}(M_{Z})=$
$\displaystyle-4158.72+0.106\,A_{0}^{2}+0.009\,m_{0}^{2}-0.46\,A_{0}M_{\frac{1}{2}}+2.87\,M_{\frac{1}{2}}^{2}$
(37)
For high $\tan\beta=35$ their values are as follows
$\displaystyle(m^{2}_{Q})_{3}(M_{Z})\simeq$
$\displaystyle-0.058\,A_{0}^{2}+0.53\,m_{0}^{2}+0.25\,A_{0}M_{\frac{1}{2}}+5.26\,M_{\frac{1}{2}}^{2}$
(38) $\displaystyle(m^{2}_{U})_{3}(M_{Z})\simeq$
$\displaystyle-0.064\,A_{0}^{2}+0.33\,m_{0}^{2}+0.27\,A_{0}M_{\frac{1}{2}}+4.35\,M_{\frac{1}{2}}^{2}$
(39) $\displaystyle(m^{2}_{D})_{3}(M_{Z})\simeq$
$\displaystyle-0.052\,A_{0}^{2}+0.727\,m_{0}^{2}+0.23\,A_{0}M_{\frac{1}{2}}+5.26\,M_{\frac{1}{2}}^{2}$
(40) $\displaystyle(m^{2}_{L})_{3}(M_{Z})\simeq$
$\displaystyle-0.027\,A_{0}^{2}+0.89\,m_{0}^{2}+0.02\,A_{0}M_{\frac{1}{2}}+0.49\,M_{\frac{1}{2}}^{2}$
(41) $\displaystyle(m^{2}_{E})_{3}(M_{Z})\simeq$
$\displaystyle-0.055\,A_{0}^{2}+0.78\,m_{0}^{2}+0.03\,A_{0}M_{\frac{1}{2}}+0.12\,M_{\frac{1}{2}}^{2}$
(42) $\displaystyle m_{H_{d}}^{2}(M_{Z})\simeq$
$\displaystyle-0.105\,A_{0}^{2}+0.48\,m_{0}^{2}+0.36\,A_{0}M_{\frac{1}{2}}-0.91\,M_{\frac{1}{2}}^{2}$
(43) $\displaystyle m_{H_{u}}^{2}(M_{Z})\simeq$
$\displaystyle-0.095\,A_{0}^{2}-0.005\,m_{0}^{2}+0.41\,A_{0}M_{\frac{1}{2}}-2.81\,M_{\frac{1}{2}}^{2}$
(44) $\displaystyle|\mu|^{2}(M_{Z})=$
$\displaystyle-4158.72+0.095\,A_{0}^{2}+0.005\,m_{0}^{2}-0.41\,A_{0}M_{\frac{1}{2}}+2.81\,M_{\frac{1}{2}}^{2}$
(45)
### A.2 NUHM case
In our notation $m_{10}=m_{H_{d}}(M_{GUT})$ and $m_{20}=m_{H_{u}}(M_{GUT})$.
For $\tan\beta=5$, at the weak scale the first two generation masses are
$\displaystyle(m^{2}_{Q})_{1,2}(M_{Z})\;\simeq$ $\displaystyle\quad
m_{0}^{2}+6.66\,M_{\frac{1}{2}}^{2}+0.009\,(m_{10}^{2}-m_{20}^{2})$ (46)
$\displaystyle(m^{2}_{D})_{1,2}(M_{Z})\;\simeq$ $\displaystyle\quad
m_{0}^{2}+6.19\,M_{\frac{1}{2}}^{2}+0.018\,(m_{10}^{2}-m_{20}^{2})$ (47)
$\displaystyle(m^{2}_{U})_{1,2}(M_{Z})\;\simeq$ $\displaystyle\quad
m_{0}^{2}+6.22\,M_{\frac{1}{2}}^{2}-0.036\,(m_{10}^{2}-m_{20}^{2})$ (48)
$\displaystyle(m^{2}_{L})_{1,2}(M_{Z})\;\simeq$ $\displaystyle\quad
m_{0}^{2}+0.51\,M_{\frac{1}{2}}^{2}-0.027\,(m_{10}^{2}-m_{20}^{2})$ (49)
$\displaystyle(m^{2}_{E})_{1,2}(M_{Z})\;\simeq$ $\displaystyle\quad
m_{0}^{2}+0.17\,M_{\frac{1}{2}}^{2}+0.053\,(m_{10}^{2}-m_{20}^{2})$ (50)
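The NUHM expressions (46)-(50) differ from the mSUGRA ones only by the S-term proportional to $(m_{10}^{2}-m_{20}^{2})$; a sketch (function and argument names are ours):

```python
def nuhm_first_two_gen_masses_sq(m0, m_half, m10, m20):
    """Approximate weak-scale soft masses squared, eqs. (46)-(50), tan(beta)=5,
    including the NUHM S-term proportional to (m10^2 - m20^2)."""
    gauge = {"Q": 6.66, "D": 6.19, "U": 6.22, "L": 0.51, "E": 0.17}
    s_term = {"Q": 0.009, "D": 0.018, "U": -0.036, "L": -0.027, "E": 0.053}
    d = m10**2 - m20**2
    return {k: m0**2 + gauge[k] * m_half**2 + s_term[k] * d for k in gauge}
```

For $m_{10}=m_{20}$ the S-term vanishes and the mSUGRA expressions (17)-(21) are recovered.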
Third generation masses depend more strongly on $\tan\beta$ than those of the
first two generations. For low $\tan\beta=5$ their values are as follows
$\displaystyle(m^{2}_{Q})_{3}(M_{Z})\simeq$
$\displaystyle-0.036\,A_{0}^{2}+0.77\,m_{0}^{2}+0.16\,A_{0}M_{\frac{1}{2}}+5.66\,M_{\frac{1}{2}}^{2}+7.90\times
10^{-3}\,m_{10}^{2}$ $\displaystyle-0.125\,m_{20}^{2}$ (51)
$\displaystyle(m^{2}_{U})_{3}(M_{Z})\simeq$
$\displaystyle-0.070\,A_{0}^{2}+0.54\,m_{0}^{2}+0.30\,A_{0}M_{\frac{1}{2}}+4.26\,M_{\frac{1}{2}}^{2}-0.035\,m_{10}^{2}$
$\displaystyle-0.196\,m_{20}^{2}$ (52)
$\displaystyle(m^{2}_{D})_{3}(M_{Z})\simeq$ $\displaystyle-1.70\times
10^{-3}\,A_{0}^{2}+\,m_{0}^{2}+7.23\times
10^{-3}\,A_{0}M_{\frac{1}{2}}+6.17\,M_{\frac{1}{2}}^{2}+0.016\,m_{10}^{2}$
$\displaystyle-0.018\,m_{20}^{2}$ (53)
$\displaystyle(m^{2}_{L})_{3}(M_{Z})\simeq$ $\displaystyle-7.34\times
10^{-4}\,A_{0}^{2}+\,m_{0}^{2}+6.29\times
10^{-4}\,A_{0}M_{\frac{1}{2}}+0.51\,M_{\frac{1}{2}}^{2}-0.027\,m_{10}^{2}$
$\displaystyle+0.027\,m_{20}^{2}$ (54)
$\displaystyle(m^{2}_{E})_{3}(M_{Z})\simeq$ $\displaystyle-1.47\times
10^{-3}\,A_{0}^{2}+\,m_{0}^{2}+1.26\times
10^{-3}\,A_{0}M_{\frac{1}{2}}+0.16\,M_{\frac{1}{2}}^{2}+0.052\,m_{10}^{2}$
$\displaystyle-0.053\,m_{20}^{2}$ (55) $\displaystyle
m_{H_{d}}^{2}(M_{Z})\simeq$ $\displaystyle-3.30\times
10^{-3}\,A_{0}^{2}-7.32\times
10^{-3}\,m_{0}^{2}+0.01\,A_{0}M_{\frac{1}{2}}+0.48\,M_{\frac{1}{2}}^{2}+0.969\,m_{10}^{2}$
$\displaystyle+0.027\,m_{20}^{2}$ (56) $\displaystyle
m_{H_{u}}^{2}(M_{Z})\simeq$
$\displaystyle-0.105\,A_{0}^{2}-0.70\,m_{0}^{2}+0.46\,A_{0}M_{\frac{1}{2}}-2.95\,M_{\frac{1}{2}}^{2}+0.027\,m_{10}^{2}$
$\displaystyle+0.625\,m_{20}^{2}$ (57) $\displaystyle|\mu|^{2}(M_{Z})=$
$\displaystyle-4158.72+0.110\,A_{0}^{2}+0.72\,m_{0}^{2}-0.47\,A_{0}M_{\frac{1}{2}}+3.09\,M_{\frac{1}{2}}^{2}+0.012\,m_{10}^{2}$
$\displaystyle-0.650\,m_{20}^{2}$ (58)
For medium $\tan\beta=20$ their values are as follows
$\displaystyle(m^{2}_{Q})_{3}(M_{Z})\simeq$
$\displaystyle-0.048\,A_{0}^{2}+0.75\,m_{0}^{2}+0.20\,A_{0}M_{\frac{1}{2}}+5.54\,M_{\frac{1}{2}}^{2}-6.30\times
10^{-3}\,m_{10}^{2}$ $\displaystyle-0.120\,m_{20}^{2}$ (59)
$\displaystyle(m^{2}_{U})_{3}(M_{Z})\simeq$
$\displaystyle-0.070\,A_{0}^{2}+0.55\,m_{0}^{2}+0.30\,A_{0}M_{\frac{1}{2}}+4.32\,M_{\frac{1}{2}}^{2}-0.034\,m_{10}^{2}$
$\displaystyle-0.190\,m_{20}^{2}$ (60)
$\displaystyle(m^{2}_{D})_{3}(M_{Z})\simeq$
$\displaystyle-0.023\,A_{0}^{2}+0.94\,m_{0}^{2}+0.10\,A_{0}M_{\frac{1}{2}}+5.86\,M_{\frac{1}{2}}^{2}-0.015\,m_{10}^{2}$
$\displaystyle-0.015\,m_{20}^{2}$ (61)
$\displaystyle(m^{2}_{L})_{3}(M_{Z})\simeq$
$\displaystyle-0.011\,A_{0}^{2}+0.98\,m_{0}^{2}+8.38\times
10^{-3}\,A_{0}M_{\frac{1}{2}}+0.50\,M_{\frac{1}{2}}^{2}-0.038\,m_{10}^{2}$
$\displaystyle+0.027\,m_{20}^{2}$ (62)
$\displaystyle(m^{2}_{E})_{3}(M_{Z})\simeq$
$\displaystyle-0.021\,A_{0}^{2}+0.95\,m_{0}^{2}+0.017\,A_{0}M_{\frac{1}{2}}+0.15\,M_{\frac{1}{2}}^{2}+0.030\,m_{10}^{2}$
$\displaystyle-0.053\,m_{20}^{2}$ (63) $\displaystyle
m_{H_{d}}^{2}(M_{Z})\simeq$
$\displaystyle-0.046\,A_{0}^{2}-0.11\,m_{0}^{2}+0.16\,A_{0}M_{\frac{1}{2}}+0.01\,M_{\frac{1}{2}}^{2}+0.913\,m_{10}^{2}$
$\displaystyle+0.030\,m_{20}^{2}$ (64) $\displaystyle
m_{H_{u}}^{2}(M_{Z})\simeq$
$\displaystyle-0.105\,A_{0}^{2}-0.67\,m_{0}^{2}+0.46\,A_{0}M_{\frac{1}{2}}-2.86\,M_{\frac{1}{2}}^{2}+0.030\,m_{10}^{2}$
$\displaystyle+0.634\,m_{20}^{2}$ (65) $\displaystyle|\mu|^{2}(M_{Z})=$
$\displaystyle-4158.72+0.106\,A_{0}^{2}+0.67\,m_{0}^{2}-0.46\,A_{0}M_{\frac{1}{2}}+2.87\,M_{\frac{1}{2}}^{2}-0.027\,m_{10}^{2}$
$\displaystyle-0.636\,m_{20}^{2}$ (66)
For high $\tan\beta=35$ their values are as follows
$\displaystyle(m^{2}_{Q})_{3}(M_{Z})\simeq$
$\displaystyle-0.058\,A_{0}^{2}+0.69\,m_{0}^{2}+0.25\,A_{0}M_{\frac{1}{2}}+5.26\,M_{\frac{1}{2}}^{2}-0.037\,m_{10}^{2}$
$\displaystyle-0.120\,m_{20}^{2}$ (67)
$\displaystyle(m^{2}_{U})_{3}(M_{Z})\simeq$
$\displaystyle-0.064\,A_{0}^{2}+0.55\,m_{0}^{2}+0.27\,A_{0}M_{\frac{1}{2}}+4.35\,M_{\frac{1}{2}}^{2}-0.029\,m_{10}^{2}$
$\displaystyle-0.194\,m_{20}^{2}$ (68)
$\displaystyle(m^{2}_{D})_{3}(M_{Z})\simeq$
$\displaystyle-0.052\,A_{0}^{2}+0.82\,m_{0}^{2}+0.23\,A_{0}M_{\frac{1}{2}}+5.26\,M_{\frac{1}{2}}^{2}-0.081\,m_{10}^{2}$
$\displaystyle-0.010\,m_{20}^{2}$ (69)
$\displaystyle(m^{2}_{L})_{3}(M_{Z})\simeq$
$\displaystyle-0.027\,A_{0}^{2}+0.93\,m_{0}^{2}+0.02\,A_{0}M_{\frac{1}{2}}+0.49\,M_{\frac{1}{2}}^{2}-0.063\,m_{10}^{2}$
$\displaystyle+0.027\,m_{20}^{2}$ (70)
$\displaystyle(m^{2}_{E})_{3}(M_{Z})\simeq$
$\displaystyle-0.055\,A_{0}^{2}+0.85\,m_{0}^{2}+0.03\,A_{0}M_{\frac{1}{2}}+0.12\,M_{\frac{1}{2}}^{2}-0.019\,m_{10}^{2}$
$\displaystyle-0.054\,m_{20}^{2}$ (71) $\displaystyle
m_{H_{d}}^{2}(M_{Z})\simeq$
$\displaystyle-0.105\,A_{0}^{2}-0.35\,m_{0}^{2}+0.36\,A_{0}M_{\frac{1}{2}}-0.91\,M_{\frac{1}{2}}^{2}+0.789\,m_{10}^{2}$
$\displaystyle+0.038\,m_{20}^{2}$ (72) $\displaystyle
m_{H_{u}}^{2}(M_{Z})\simeq$
$\displaystyle-0.095\,A_{0}^{2}-0.67\,m_{0}^{2}+0.41\,A_{0}M_{\frac{1}{2}}-2.81\,M_{\frac{1}{2}}^{2}+0.036\,m_{10}^{2}$
$\displaystyle+0.629\,m_{20}^{2}$ (73) $\displaystyle|\mu|^{2}(M_{Z})=$
$\displaystyle-4158.72+0.095\,A_{0}^{2}+0.67\,m_{0}^{2}-0.41\,A_{0}M_{\frac{1}{2}}+2.81\,M_{\frac{1}{2}}^{2}-0.036\,m_{10}^{2}$
$\displaystyle-0.629\,m_{20}^{2}$ (74)
## Appendix B Lightest Slepton Mass in $\delta$-mSUGRA at Large $\delta$
From the plots presented in section 2, Fig. (5), for the case of
$\delta$-mSUGRA, the following two things can be inferred: (a) the
co-annihilation condition increasingly moves towards the diagonal in the
$\left(m_{0},M_{1/2}\right)$ plane with increasing $\delta$, and (b) the
cancellation regions are almost independent of the value of $\delta$ in the
$\left(m_{0},M_{1/2}\right)$ plane. The question then arises whether there is
some region at large $\delta$ where the two regions coincide. In the present
appendix, we explore this question. The analysis presented here is based on
the approximate solutions of Appendix [A.1] and we will comment on the full
numerical solutions at the end of the section.
The effective $4\times 4$ matrix of eq. (4) can be diagonalized as follows.
First the lower $2\times 2$ block is rotated by an angle $\theta_{\mu\tau}$, given by,
$\displaystyle\tan\,2\theta_{\mu\tau}=\frac{2\,\Delta_{RR}}{m^{2}_{\tilde{\mu}_{R}}-m^{2}_{\tilde{\tau}_{R}}}.$
(75)
The eigenvalues of this lower block can be easily read off from the mass
matrix. They are
Figure 10: $\delta$ contours: upper bounds on $\delta$ in various regions of the
parameter space using the non-tachyonic condition.
$\lambda^{2}_{\pm}=\frac{1}{2}\Bigg{[}(m^{2}_{\tilde{\mu}_{R}}+m^{2}_{\tilde{\tau}_{R}})\pm\sqrt{(m^{2}_{\tilde{\mu}_{R}}-m^{2}_{\tilde{\tau}_{R}})^{2}+4\,\Delta_{RR}^{2}}\Bigg{]}$
(76)
For $m^{2}_{\tilde{\mu}_{R}}\simeq m^{2}_{\tilde{\tau}_{R}}$ (which is true
for low $\tan\beta$ regions), the eigenvalues have the following form:
$\displaystyle\lambda^{2}_{\pm}$
$\displaystyle\simeq\bar{m}^{2}\pm\Delta_{RR}$ (77)
$\displaystyle\simeq\bar{m}^{2}(1\pm\delta_{RR})$ (78)
where
$\bar{m}^{2}\equiv\displaystyle\frac{1}{2}\left(m^{2}_{\tilde{\mu}_{R}}+m^{2}_{\tilde{\tau}_{R}}\right)$.
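The RR-block step, eqs. (75)-(78), can be checked numerically; a minimal sketch (the GeV$^2$ inputs are illustrative, not taken from the paper):

```python
import numpy as np

def rr_block_eigenvalues(m2_muR, m2_tauR, delta_RR):
    """Eigenvalues (ascending) of the 2x2 RR slepton block, eq. (76),
    with Delta_RR = delta_RR * mbar^2, mbar^2 the average mass squared."""
    mbar2 = 0.5 * (m2_muR + m2_tauR)
    Delta = delta_RR * mbar2
    block = np.array([[m2_muR, Delta], [Delta, m2_tauR]])
    return np.linalg.eigvalsh(block)

# Degenerate case m_muR^2 = m_tauR^2 (the low tan(beta) limit): the
# eigenvalues are exactly mbar^2 (1 -/+ delta_RR), as in eq. (78).
lam_minus, lam_plus = rr_block_eigenvalues(4.0e4, 4.0e4, 0.2)
```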
Next we have to diagonalize the $\tilde{\tau}_{LR}$ entry. The eigenvalues
after this rotation are approximately given as
$\displaystyle\Gamma^{2}_{\pm}$
$\displaystyle\simeq\frac{1}{2}\Bigg{[}(m^{2}_{\tilde{\tau}_{L}}+\lambda^{2}_{-})\pm\sqrt{(m^{2}_{\tilde{\tau}_{L}}-\lambda^{2}_{-})^{2}+4\,\cos^{2}\theta_{\mu\tau}\,\Delta^{2}_{{\tilde{\tau}}_{LR}}}\Bigg{]}$
(79) In the limit
$\left(m^{2}_{\tilde{\tau}_{L}}-\lambda^{2}_{-}\right)\gg\Delta_{\tilde{\tau}_{LR}}$
(the corresponding mixing angle is very small in this limit; we will consider
the opposite limit at the end of this section), which is the case for large
$\delta$, we can write the above eigenvalues as
$\displaystyle\Gamma^{2}_{\pm}$
$\displaystyle\simeq\frac{1}{2}\Bigg{[}(m^{2}_{\tilde{\tau}_{L}}+\lambda^{2}_{-})\pm(m^{2}_{\tilde{\tau}_{L}}-\lambda^{2}_{-})\left\\{1+\frac{2\,\cos^{2}\theta_{\mu\tau}\,\Delta^{2}_{{\tilde{\tau}}_{LR}}}{(m^{2}_{\tilde{\tau}_{L}}-\lambda^{2}_{-})^{2}}\right\\}\Bigg{]}$
(80)
Figure 11: Cancellation and co-annihilation regions in $\delta$-mSUGRA.
So, the lightest eigenvalue of the effective $4\times 4$ mass matrix of eq.(4)
is given as
$\displaystyle\Gamma^{2}_{-}$
$\displaystyle\simeq\lambda^{2}_{-}-\frac{\cos^{2}\theta_{\mu\tau}\,\Delta^{2}_{{\tilde{\tau}}_{LR}}}{m^{2}_{\tilde{\tau}_{L}}-\lambda^{2}_{-}}$
(81)
$\displaystyle\simeq\bar{m}^{2}(1-\delta_{RR})-\frac{\cos^{2}\theta_{\mu\tau}\,\Delta^{2}_{{\tilde{\tau}}_{LR}}}{m^{2}_{\tilde{\tau}_{L}}-\bar{m}^{2}(1-\delta_{RR})}$
(82)
This essentially suppresses the left-right mixing term compared to eq. (6).
Demanding that the lightest eigenvalue be non-tachyonic, we get an upper
bound on $\delta_{RR}$ as below
$\displaystyle\delta_{RR}\leq
1-\frac{\cos^{2}\theta_{\mu\tau}\,\Delta^{2}_{{\tilde{\tau}}_{LR}}}{m^{2}_{\tilde{\tau}_{L}}m^{2}_{\tilde{\tau}_{R}}}$
(83)
This matches eq. (7) in the limit $\cos\theta_{\mu\tau}\rightarrow 1$.
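The bound (83) is a one-liner to evaluate; a sketch (argument names are ours; masses squared in any common units):

```python
def delta_rr_upper_bound(m2_stauL, m2_stauR, delta_LR, cos_theta=1.0):
    """Upper bound on delta_RR from the non-tachyonic condition, eq. (83).
    delta_LR stands for Delta_{stau_LR}; cos_theta for cos(theta_mu_tau)."""
    return 1.0 - cos_theta**2 * delta_LR**2 / (m2_stauL * m2_stauR)

# cos(theta_mu_tau) -> 1 reproduces eq. (7); a larger left-right mixing
# entry monotonically tightens the bound.
```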
In Fig.(10) we have plotted the non-tachyonic condition (R.H.S. of eq. (83))
using the approximate results of Appendix [A], for two values of $\tan\beta$,
20 and 35. The contours represent the upper bounds on $\delta$ in those
regions of the parameter space required to avoid tachyonic sleptons. As we can
see, increasing $\tan\beta$ tightens the bound a bit. In Fig.(11) we have
shown the cancellation condition $\mu^{2}\simeq m^{2}_{\tilde{\tau}_{R}}$ and
the co-annihilation condition $m_{\tilde{l}_{1}}\simeq
m_{\tilde{\chi}_{1}^{0}}$ for two values, $\delta=0.8$ and 0.9. In both
panels, the brown and magenta solid lines indicate the co-annihilation
condition for $\delta=0.8$ and $0.9$ respectively. The green dashed line
satisfies the cancellation condition, whereas the orange and red dashed lines
satisfy the cancellation condition with the $\mu$ parameter corrected by 30%
relative to its tree level value. Comparing Figs.(10) and (11) we can see that there could
be some points which could evade both the tachyonic condition as well as have
cancellations amongst the LFV amplitudes and still satisfy the co-annihilation
condition. However, in practice, in the full numerical calculation we could
not find any points consistent with both these conditions, as other
phenomenological constraints rule them out. As can be seen from the figure, a
30% correction to the $\mu$ parameter could shift the overlapping region to
very small values of $\left(m_{0},M_{1/2}\right)$, or remove the overlap
altogether for $\delta\simeq 0.9$. This approximates the implications of
adding the full 1-loop effective corrections to the SUSY scalar potential.
However, the co-annihilation region could allow for partial cancellations in
the LFV amplitudes. Such regions are difficult to distinguish in a numerical
analysis.
We will now return to Eq. (79) and consider the limit
$\left(m^{2}_{\tilde{\tau}_{L}}-\lambda^{2}_{-}\right)\ll\Delta_{\tilde{\tau}_{LR}}$
, which is an interesting limit as it is relevant for the regions which appear
in the channel plots discussed in section 4. From eq. (80), there could be a
value of $\delta$ as well as parameter space in
$\left(m_{0},M_{1/2},\tan\beta\right)$ where
$m^{2}_{\tilde{\tau}_{L}}\simeq\lambda^{2}_{-}$. In these regions, the
corresponding mixing angle is very large and the subsequent diagonalization is
very different. It turns out that at least three mixing angles in the slepton
mass matrix are large in this parameter space. The plots presented in Figs.(7)
and (8) contain these regions. More details of these regions will be discussed
in upcoming .
## Appendix C Numerical Procedures
### C.1 SuSeFLAV and MicrOMEGAs
The numerical analysis is done using the publicly available packages
MicrOMEGAs Belanger:2010gh and SuSeFLAV suseflav_docu . SuSeFLAV is a Fortran
package which computes the supersymmetric spectrum including lepton flavor
violation. The program solves the complete MSSM RGEs with full $3\times 3$
flavor mixing at the 2-loop level and full one-loop threshold corrections
Pierce:1996zz to all MSSM parameters and relevant SM parameters, with
conserved R-parity. Also, the program computes branching ratios and decay
rates for rare flavor violating processes such as $\mu$ $\rightarrow$
e$\gamma$, $\tau$ $\rightarrow$ e$\gamma$, $\tau$ $\rightarrow$ $\mu$
$\gamma$, $\mu^{-}$ $\rightarrow$ $e^{+}$
$e^{-}$ $e^{-}$, $\tau^{-}$ $\rightarrow$ $\mu^{+}$ $\mu^{-}$ $\mu^{-}$,
$\tau^{-}$ $\rightarrow$ $e^{+}$ $e^{-}$ $e^{-}$, $B\,\rightarrow\,s\,\gamma$
and $(g-2)_{\mu}$.
In the present analysis we use $M_{t}^{pole}=173.2\,{\rm GeV}$,
$M_{b}^{pole}=4.23\,{\rm GeV}$ and $M_{\tau}^{pole}=1.77\,{\rm GeV}$. In
determining the lightest Higgs mass ($m_{h}$) we use approximations for the
one-loop corrections, which are mostly top-stop enhanced Heinemeyer:1999be .
We use the complete $6\times 6$ slepton mass matrix to correctly evaluate the
inter-generational mixings and masses in the presence of flavor violation.
Moreover we consider flavor violating couplings stemming from lepton flavor
violation in the RR sector of $\tilde{\tau}-\tilde{\mu}$.
Figure 12: (a) Neutralino-slepton-lepton vertex and (b) slepton-lepton-chargino vertex.
* •
Neutralino-slepton-lepton:
The interaction Lagrangian for neutralino-slepton-lepton is written as
$\displaystyle\mathcal{L}\ =\
\bar{l}_{i}\left(\Sigma^{L}_{iAX}\,P_{L}+\Sigma^{R}_{iAX}\,P_{R}\right)\,\chi^{0}_{A}\,\tilde{l}_{X}+h.c.$
(84)
where the coefficients are defined as
$\displaystyle\Sigma^{R}_{iAX}\ $ $\displaystyle=\ K_{1}\left[\cos\theta_{\rm
W}(O_{N})_{A2}+\sin\theta_{\rm W}(O_{N})_{A1}\right]U_{X,i}M_{\rm W}\cos\beta-
m_{l_{i}}\cos\theta_{\rm W}(O_{N})_{A3}U_{X,i+3}$ (85) and
$\displaystyle\Sigma^{L}_{iAX}\ $ $\displaystyle=-K_{1}\left[2\sin\theta_{\rm
W}\ M_{\rm W}\ \cos\beta\ U_{X,i+3}\ (O_{N})_{A1}+m_{l_{i}}\cos\theta_{\rm W}\
U_{X,i}\ (O_{N})_{A3}\right]$ (86) where $\displaystyle\quad
K_{1}=\frac{e}{\sqrt{2}\sin\theta_{\rm W}}\frac{1}{M_{\rm
W}\cos\beta\cos\theta_{\rm W}}$ (87)
The interaction Lagrangian for chargino-slepton-neutrino is
$\displaystyle\mathcal{L}\ =\
\bar{\nu}_{i}\left(\Pi^{L}_{iBX}\,P_{L}+\Pi^{R}_{iBX}\,P_{R}\right)\,\chi^{+}_{B}\,\tilde{l}_{X}+h.c.$
(88)
where the coefficients are
$\displaystyle\Pi^{R}_{iBX}$ $\displaystyle=-\frac{e}{\sin\theta_{\rm
W}}\,(O_{L})_{B1}U_{X,i}$ (89) $\displaystyle\Pi^{L}_{iBX}$
$\displaystyle=\frac{e}{\sin\theta_{\rm W}}\,\frac{m_{l_{i}}}{\sqrt{2}M_{\rm
W}\cos\beta}\,(O_{L})_{B2}U_{X,i+3}$ (90)
Here, $U_{X,i}$ is the $6\times 6$ matrix which diagonalizes the sleptonic
mass matrix, with indices $i=1$ to $3$ and $X=1$ to $6$. $(O_{N})_{Am}$ is
the $4\times 4$ neutralino mixing matrix, where $A,m=1$ to $4$ and
$(O_{L})_{Bn}$ is the $2\times 2$ chargino left eigenvector matrix, where
$B,n=1,2$. $m_{l_{i}}$ is the mass of the lepton $l_{i}$. In our notation
$P_{L}=\frac{1-\gamma_{5}}{2}$ and $P_{R}=\frac{1+\gamma_{5}}{2}$.
These couplings are programmed into MicrOMEGAs through the CalcHEP package
Pukhov:1999gg .
### C.2 Constraints Imposed
* •
We check for efficient radiative electroweak symmetry breaking, requiring
$|\mu|^{2}>0$ for valid points.
* •
We require $m_{\tilde{\tau}}>m_{\chi^{0}}$, as the LSP must be neutral.
Regions where this condition fails are excluded as $\tilde{\tau}$-LSP regions.
* •
We impose lower bounds on various sparticle masses that result from collider
experiments: $m_{h}>114.1~{\rm GeV}$, $m_{\chi^{\pm}}>103.5~{\rm GeV}$ and
$m_{\tilde{\tau}}>90~{\rm GeV}$ Barate:2003sz .
* •
$2.0\times 10^{-4}\leq\,BR(b\,\rightarrow\,s\,\gamma)\,\leq\,4.5\times
10^{-4}$ Nakamura:2010zzi .
* •
We also check for charge and color breaking minima along the D-flat
directions while checking for the EWSB condition Frere:1983ag ;
AlvarezGaume:1983gj ; Claudson:1983et .
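The cuts above amount to a simple accept/reject filter on each spectrum point; a sketch (the dictionary keys are hypothetical labels of ours, not SuSeFLAV or MicrOMEGAs output names):

```python
def passes_constraints(point):
    """Accept/reject a spectrum point against the cuts listed above.
    `point` maps hypothetical keys to values; masses in GeV."""
    return (
        point["mu_sq"] > 0.0                        # radiative EWSB
        and point["m_stau"] > point["m_chi0"]       # neutral LSP
        and point["m_h"] > 114.1                    # lightest Higgs
        and point["m_chargino"] > 103.5             # chargino bound
        and point["m_stau"] > 90.0                  # stau bound
        and 2.0e-4 <= point["br_bsg"] <= 4.5e-4     # BR(b -> s gamma)
    )
```

The D-flat direction and CCB checks need the full scalar potential and are omitted from this sketch.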
## Appendix D Loop Functions
In this appendix we define the relevant loop functions that contribute to the
amplitude of the flavor violating leptonic process $\tau\rightarrow\mu\gamma$,
as presented in the appendix of Masina:2002mv
$x_{L}=\frac{M_{1}^{2}}{m_{L}^{2}},\,\,\,\,\,x_{R}=\frac{M_{1}^{2}}{m_{R}^{2}},\,\,\,\,\,y_{L}=\frac{|\mu^{2}|}{m_{L}^{2}},\,\,\,\,\,y_{R}=\frac{|\mu^{2}|}{m_{R}^{2}}$
(91)
$I_{B,R}$ and $I_{R}$ are defined as follows,
$\displaystyle
I_{B,R}(M_{1}^{2},m_{L}^{2},m_{R}^{2})=-\frac{1}{m_{R}^{2}-m_{L}^{2}}\left[y_{R}\,h_{1}(x_{R})-\frac{y_{L}\,g_{1}(x_{L})-y_{R}\,g_{1}(x_{R})}{1-\frac{m_{L}^{2}}{m_{R}^{2}}}\right]$
(92)
$I_{R}(m_{R}^{2},M_{1}^{2},\mu^{2})=\frac{1}{m_{R}^{2}}\frac{y_{R}}{y_{R}-x_{R}}[h_{1}(x_{R})-h_{1}(y_{R})]$
(93)
The functions $g_{1}$ and $h_{1}$ are defined as follows,
$g_{1}(x)=\frac{1-x^{2}+2x\ln(x)}{(1-x)^{3}},\,\,\,\,\,h_{1}(x)=\frac{1+4x-5x^{2}+(2x^{2}+4x)\ln(x)}{(1-x)^{4}}$
(94)
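The loop functions translate directly into code; a sketch (valid for arguments away from 1, where both functions have removable singularities with finite limits):

```python
import math

def g1(x):
    """First loop function of eq. (94)."""
    return (1.0 - x**2 + 2.0 * x * math.log(x)) / (1.0 - x)**3

def h1(x):
    """Second loop function of eq. (94)."""
    return (1.0 + 4.0*x - 5.0*x**2
            + (2.0*x**2 + 4.0*x) * math.log(x)) / (1.0 - x)**4

def I_R(m2_R, M1_sq, mu_sq):
    """Eq. (93): (1/m_R^2) * y_R/(y_R - x_R) * [h1(x_R) - h1(y_R)],
    with x_R = M1^2/m_R^2 and y_R = |mu|^2/m_R^2 as in eq. (91)."""
    x_R = M1_sq / m2_R
    y_R = mu_sq / m2_R
    return (1.0 / m2_R) * y_R / (y_R - x_R) * (h1(x_R) - h1(y_R))
```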
## Appendix E Cross-Sections
In this appendix we present the approximate formulae for the relevant cross
sections. We do not attempt to discuss a complete comparison of the analytical
expressions and full numerical results in the present paper; that is left for
an upcoming publication. These expressions generalize the existing expressions
Nihei:2002sc in the literature to include full flavor violation in the
sleptonic sector. The expressions are presented only for the dominant channels
and in the limit $m_{\tau},m_{\mu}\rightarrow 0$. More detailed expressions
and their simplifications will be discussed elsewhere upcoming .
$\displaystyle\sigma_{channel}=\frac{\rm Numerator}{\rm Denominator}$ (95)
### E.1 $\tilde{l}_{1}\tilde{l}_{1}\rightarrow\tau\tau$
The cross-section of $\tilde{l}_{1}\;\tilde{l}_{1}\rightarrow\tau\;\tau$
process is as follows. This process involves $t$- and $u$-channel
$\tilde{\chi}_{1}^{0}$ exchange. In the following and in the rest of the cross-
sections $e$ is the electric charge, $\theta_{\rm W}$ is the weak mixing angle
and $M_{\rm W}$ is the mass of the W-boson. We get the simplified form of the
above cross-section in the limit of $m_{\tau}\rightarrow 0$ as below, where
the numerator is
$\displaystyle e^{4}$
$\displaystyle\left[-\Sigma_{+}^{2}\Sigma_{-}^{2}\sqrt{s\left(s-4m_{\tilde{l}_{1}}^{2}\right)}-\Sigma_{+}^{2}\Sigma_{-}^{2}\left(s+2m_{\tilde{\chi}_{1}^{0}}^{2}-2m_{\tilde{l}_{1}}^{2}\right)\right.$
$\displaystyle\times\log\left|\frac{s+2m_{\tilde{\chi}_{1}^{0}}^{2}-2m_{\tilde{l}_{1}}^{2}-\sqrt{s\left(s-4m_{\tilde{l}_{1}}^{2}\right)}}{s+2m_{\tilde{\chi}_{1}^{0}}^{2}-2m_{\tilde{l}_{1}}^{2}+\sqrt{s\left(s-4m_{\tilde{l}_{1}}^{2}\right)}}\right|$
$\displaystyle-\frac{1}{s+2m_{\tilde{\chi}_{1}^{0}}^{2}-2m_{\tilde{l}_{1}}^{2}}\log\left|\frac{s+2m_{\tilde{\chi}_{1}^{0}}^{2}-2m_{\tilde{l}_{1}}^{2}-\sqrt{s\left(s-4m_{\tilde{l}_{1}}^{2}\right)}}{s+2m_{\tilde{\chi}_{1}^{0}}^{2}-2m_{\tilde{l}_{1}}^{2}+\sqrt{s\left(s-4m_{\tilde{l}_{1}}^{2}\right)}}\right|$
$\displaystyle\times\bigg{\\{}2\Sigma_{+}^{2}\Sigma_{-}^{2}m_{\tilde{\chi}_{1}^{0}}^{4}+2\Sigma_{+}^{2}\Sigma_{-}^{2}m_{\tilde{l}_{1}}^{4}$
$\displaystyle+m_{\tilde{\chi}_{1}^{0}}^{2}\left(\left(\Sigma_{+}^{2}+\Sigma_{-}^{2}\right)^{2}s-4\Sigma_{+}^{2}\Sigma_{-}^{2}m_{\tilde{l}_{1}}^{2}\right)\bigg{\\}}$
$\displaystyle-\frac{\sqrt{s\left(s-4m_{\tilde{l}_{1}}^{2}\right)}}{2\left(m_{\tilde{\chi}_{1}^{0}}^{4}+m_{\tilde{l}_{1}}^{4}+m_{\tilde{\chi}_{1}^{0}}^{2}\left(s-2m_{\tilde{l}_{1}}^{2}\right)\right)}\left.\bigg{\\{}\left.4\Sigma_{+}^{2}\Sigma_{-}^{2}m_{\tilde{\chi}_{1}^{0}}^{4}\right.\right.$
$\displaystyle+\left.\left.4\Sigma_{+}^{2}\Sigma_{-}^{2}m_{\tilde{l}_{1}}^{4}-\left.\left(\Sigma_{+}^{2}-\Sigma_{-}^{2}\right)^{2}m_{\tilde{\chi}_{1}^{0}}^{2}\,s\right.\right.\right.$
$\displaystyle+\left.\left.\left.2\Sigma_{+}^{2}\Sigma_{-}^{2}m_{\tilde{\chi}_{1}^{0}}^{2}\,s-8\Sigma_{+}^{2}\Sigma_{-}^{2}m_{\tilde{\chi}_{1}^{0}}^{2}\,m_{\tilde{l}_{1}}^{2}\right.\right.\right.\bigg{\\}}\bigg{]}$
(96)
And the denominator is
$\displaystyle 32\pi s\;M^{4}_{\rm W}\cos^{4}\beta\cos^{4}\theta_{\rm
W}\sin^{4}\theta_{\rm W}\left(s-4m_{\tilde{l}_{1}}^{2}\right)$ (97)
Following Appendix C, the coupling structure is:
* •
$\bar{\tau}-\tilde{\chi}_{1}^{0}-\tilde{l}_{1}$:
$\displaystyle K_{1}\left(\Sigma_{+}P_{R}+\Sigma_{-}P_{L}\right)$ (98) where
$\displaystyle\Sigma_{+}$ $\displaystyle=\left[\cos\theta_{\rm W}\
ON(1,2)+\sin\theta_{\rm W}\ ON(1,1)\right]\cos\beta\ M_{\rm W}\ U(1,3)$ (99)
and $\displaystyle\Sigma_{-}$ $\displaystyle=-\,2\sin\theta_{\rm W}\ M_{\rm
W}\ \cos\beta\ U(1,6)\ ON(1,1)$ (100)
where $K_{1}$ is defined in eq. (87).
### E.2 $\tilde{l}_{1}\tilde{l}_{1}\rightarrow\mu\tau$
The simplified form of the $\tilde{l}_{1}\;\tilde{l}_{1}\rightarrow\mu\;\tau$
cross-section in the limit of $m_{\tau},m_{\mu}\rightarrow 0$ is calculated
below. This process involves $t$- and $u$-channel $\tilde{\chi}_{1}^{0}$
exchange. The numerator of the cross-section is
$\displaystyle e^{4}\bigg[-\left(\Sigma_{+}^{2}\Lambda_{-}^{2}+\Sigma_{-}^{2}\Lambda_{+}^{2}\right)\sqrt{s\left(s-4m_{\tilde{l}_{1}}^{2}\right)}-\left(\Sigma_{+}^{2}\Lambda_{-}^{2}+\Sigma_{-}^{2}\Lambda_{+}^{2}\right)\left(s+2m_{\tilde{\chi}_{1}^{0}}^{2}-2m_{\tilde{l}_{1}}^{2}\right)\log\left|\frac{s+2m_{\tilde{\chi}_{1}^{0}}^{2}-2m_{\tilde{l}_{1}}^{2}-\sqrt{s\left(s-4m_{\tilde{l}_{1}}^{2}\right)}}{s+2m_{\tilde{\chi}_{1}^{0}}^{2}-2m_{\tilde{l}_{1}}^{2}+\sqrt{s\left(s-4m_{\tilde{l}_{1}}^{2}\right)}}\right|-\frac{2}{s+2m_{\tilde{\chi}_{1}^{0}}^{2}-2m_{\tilde{l}_{1}}^{2}}\log\left|\frac{s+2m_{\tilde{\chi}_{1}^{0}}^{2}-2m_{\tilde{l}_{1}}^{2}-\sqrt{s\left(s-4m_{\tilde{l}_{1}}^{2}\right)}}{s+2m_{\tilde{\chi}_{1}^{0}}^{2}-2m_{\tilde{l}_{1}}^{2}+\sqrt{s\left(s-4m_{\tilde{l}_{1}}^{2}\right)}}\right|\times\bigg\{\left(\Sigma_{+}^{2}\Lambda_{-}^{2}+\Sigma_{-}^{2}\Lambda_{+}^{2}\right)m_{\tilde{\chi}_{1}^{0}}^{4}+\left(\Sigma_{+}^{2}\Lambda_{-}^{2}+\Sigma_{-}^{2}\Lambda_{+}^{2}\right)m_{\tilde{l}_{1}}^{4}+m_{\tilde{\chi}_{1}^{0}}^{2}\left(\left(\Sigma_{+}^{2}+\Sigma_{-}^{2}\right)\left(\Lambda_{-}^{2}+\Lambda_{+}^{2}\right)s-2\left(\Sigma_{+}^{2}\Lambda_{-}^{2}+\Sigma_{-}^{2}\Lambda_{+}^{2}\right)m_{\tilde{l}_{1}}^{2}\right)\bigg\}-\frac{\sqrt{s\left(s-4m_{\tilde{l}_{1}}^{2}\right)}}{m_{\tilde{\chi}_{1}^{0}}^{4}+m_{\tilde{l}_{1}}^{4}+m_{\tilde{\chi}_{1}^{0}}^{2}\left(s-2m_{\tilde{l}_{1}}^{2}\right)}\bigg\{2\left(\Sigma_{+}^{2}\Lambda_{-}^{2}+\Sigma_{-}^{2}\Lambda_{+}^{2}\right)m_{\tilde{\chi}_{1}^{0}}^{4}+2\left(\Sigma_{+}^{2}\Lambda_{-}^{2}+\Sigma_{-}^{2}\Lambda_{+}^{2}\right)m_{\tilde{l}_{1}}^{4}+\left(\Sigma_{+}^{2}\left(2\Lambda_{-}^{2}-\Lambda_{+}^{2}\right)-\Sigma_{-}^{2}\left(\Lambda_{-}^{2}-2\Lambda_{+}^{2}\right)\right)m_{\tilde{\chi}_{1}^{0}}^{2}\,s-4\left(\Sigma_{+}^{2}\Lambda_{-}^{2}+\Sigma_{-}^{2}\Lambda_{+}^{2}\right)m_{\tilde{\chi}_{1}^{0}}^{2}\,m_{\tilde{l}_{1}}^{2}\bigg\}\bigg]$ (101)
And the denominator is

$\displaystyle 32\pi\,s\,M^{4}_{\rm W}\cos^{4}\beta\cos^{4}\theta_{\rm W}\sin^{4}\theta_{\rm W}\left(s-4m_{\tilde{l}_{1}}^{2}\right)$ (102)
Here the coupling structure is:

* •
$\bar{\mu}-\tilde{\chi}_{1}^{0}-\tilde{l}_{1}$:
$\displaystyle K_{1}\left(\Lambda_{+}P_{R}+\Lambda_{-}P_{L}\right)$ (103)
where
$\displaystyle\Lambda_{+}=\left[\cos\theta_{\rm W}\ ON(1,2)+\sin\theta_{\rm W}\ ON(1,1)\right]\cos\beta\ M_{\rm W}\ U(1,2)$ (104)
and
$\displaystyle\Lambda_{-}=-\,2\sin\theta_{\rm W}\ M_{\rm W}\ \cos\beta\ U(1,5)\ ON(1,1)$ (105)
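The couplings (104)–(105) are simple enough to transcribe directly. The sketch below is illustrative only, not code from the paper: it assumes `ON` and `U` are the relevant mixing matrices passed as 0-indexed 2D arrays (so the text's `ON(1,2)` becomes `ON[0][1]`), and `theta_w`, `beta`, `m_w` stand for the weak mixing angle, the angle $\beta$, and the W mass.

```python
import math

def lambda_couplings(theta_w, beta, m_w, ON, U):
    """Evaluate the couplings of eqs. (104)-(105).

    ON and U are assumed (hypothetically) to be the neutralino and
    slepton mixing matrices, 0-indexed, so ON[0][1] plays the role of
    the text's ON(1,2), U[0][4] that of U(1,5), etc.
    """
    # eq. (104): Lambda_+ = [cos(th_W) ON(1,2) + sin(th_W) ON(1,1)] cos(beta) M_W U(1,2)
    lam_plus = (math.cos(theta_w) * ON[0][1] + math.sin(theta_w) * ON[0][0]) \
               * math.cos(beta) * m_w * U[0][1]
    # eq. (105): Lambda_- = -2 sin(th_W) M_W cos(beta) U(1,5) ON(1,1)
    lam_minus = -2.0 * math.sin(theta_w) * m_w * math.cos(beta) * U[0][4] * ON[0][0]
    return lam_plus, lam_minus
```

With these couplings in hand, the numerators and denominators listed above become plain arithmetic in the masses and $s$.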
### E.3 $\tilde{\chi}_{1}^{0}\tilde{\chi}_{1}^{0}\rightarrow\bar{\tau}/\bar{\mu}\ \tau$
In the limit $m_{\tau}\rightarrow 0$ we calculate the cross-section for $\tilde{\chi}_{1}^{0}\ \tilde{\chi}_{1}^{0}\rightarrow\bar{\tau}\ \tau$. This process involves $t$- and $u$-channel $\tilde{l}_{1}$ exchange. The numerator is
$\displaystyle e^{4}\Bigg\{\frac{\sqrt{s\left(s-4m_{\tilde{\chi}_{1}^{0}}^{2}\right)}}{s\,m_{\tilde{l}_{1}}^{2}+\left(m_{\tilde{\chi}_{1}^{0}}^{2}-m_{\tilde{l}_{1}}^{2}\right)^{2}}\Big\{\left(\Sigma_{+}^{4}+4\Sigma_{+}^{2}\Sigma_{-}^{2}+\Sigma_{-}^{4}\right)s\,m_{\tilde{l}_{1}}^{2}+2\left(\Sigma_{+}^{4}+3\Sigma_{+}^{2}\Sigma_{-}^{2}+\Sigma_{-}^{4}\right)\left(m_{\tilde{\chi}_{1}^{0}}^{2}-m_{\tilde{l}_{1}}^{2}\right)^{2}\Big\}-\frac{2}{-s+2m_{\tilde{\chi}_{1}^{0}}^{2}-2m_{\tilde{l}_{1}}^{2}}\log\left[\frac{-\sqrt{s\left(s-4m_{\tilde{\chi}_{1}^{0}}^{2}\right)}+\left(s-2m_{\tilde{\chi}_{1}^{0}}^{2}+2m_{\tilde{l}_{1}}^{2}\right)}{\sqrt{s\left(s-4m_{\tilde{\chi}_{1}^{0}}^{2}\right)}+\left(s-2m_{\tilde{\chi}_{1}^{0}}^{2}+2m_{\tilde{l}_{1}}^{2}\right)}\right]\times\Big\{s\left(-2\Sigma_{+}^{2}\Sigma_{-}^{2}m_{\tilde{\chi}_{1}^{0}}^{2}+\left(\Sigma_{+}^{4}+4\Sigma_{+}^{2}\Sigma_{-}^{2}+\Sigma_{-}^{4}\right)m_{\tilde{l}_{1}}^{2}\right)+2\left(\Sigma_{+}^{4}+3\Sigma_{+}^{2}\Sigma_{-}^{2}+\Sigma_{-}^{4}\right)\left(m_{\tilde{\chi}_{1}^{0}}^{2}-m_{\tilde{l}_{1}}^{2}\right)^{2}\Big\}\Bigg\}$ (106)

And the denominator is

$\displaystyle 128\pi\,s\,M^{4}_{\rm W}\cos^{4}\beta\sin^{4}\theta_{\rm W}\cos^{4}\theta_{\rm W}\left(s-4m_{\tilde{\chi}_{1}^{0}}^{2}\right)$ (107)
Similarly, in the limit $m_{\tau},m_{\mu}\rightarrow 0$ we calculate the cross-section for $\tilde{\chi}_{1}^{0}\ \tilde{\chi}_{1}^{0}\rightarrow\bar{\mu}\ \tau$. This process also involves $t$- and $u$-channel $\tilde{l}_{1}$ exchange. The numerator is
$\displaystyle e^{4}\Bigg\{\frac{\sqrt{s\left(s-4m_{\tilde{\chi}_{1}^{0}}^{2}\right)}}{s\,m_{\tilde{l}_{1}}^{2}+\left(m_{\tilde{\chi}_{1}^{0}}^{2}-m_{\tilde{l}_{1}}^{2}\right)^{2}}\bigg\{\left\{\Sigma_{+}^{2}\left(2\Lambda_{-}^{2}+\Lambda_{+}^{2}\right)+\Sigma_{-}^{2}\left(\Lambda_{-}^{2}+2\Lambda_{+}^{2}\right)\right\}s\,m_{\tilde{l}_{1}}^{2}+\left\{\Sigma_{+}^{2}\left(3\Lambda_{-}^{2}+2\Lambda_{+}^{2}\right)+\Sigma_{-}^{2}\left(2\Lambda_{-}^{2}+3\Lambda_{+}^{2}\right)\right\}\left(m^{2}_{\tilde{\chi}_{1}^{0}}-m^{2}_{\tilde{l}_{1}}\right)^{2}\bigg\}-\frac{2}{-s+2m_{\tilde{\chi}_{1}^{0}}^{2}-2m_{\tilde{l}_{1}}^{2}}\log\left[\frac{-\sqrt{s\left(s-4m_{\tilde{\chi}_{1}^{0}}^{2}\right)}+\left(s-2m_{\tilde{\chi}_{1}^{0}}^{2}+2m_{\tilde{l}_{1}}^{2}\right)}{\sqrt{s\left(s-4m_{\tilde{\chi}_{1}^{0}}^{2}\right)}+\left(s-2m_{\tilde{\chi}_{1}^{0}}^{2}+2m_{\tilde{l}_{1}}^{2}\right)}\right]\times\bigg\{\left\{\Sigma_{+}^{2}\left(3\Lambda_{-}^{2}+2\Lambda_{+}^{2}\right)+\Sigma_{-}^{2}\left(2\Lambda_{-}^{2}+3\Lambda_{+}^{2}\right)\right\}\left(m^{2}_{\tilde{\chi}_{1}^{0}}-m^{2}_{\tilde{l}_{1}}\right)^{2}+s\left\{-\left(\Sigma_{+}^{2}\Lambda_{-}^{2}+\Sigma_{-}^{2}\Lambda_{+}^{2}\right)m_{\tilde{\chi}_{1}^{0}}^{2}+\left(\Sigma_{+}^{2}\left(2\Lambda_{-}^{2}+\Lambda_{+}^{2}\right)+\Sigma_{-}^{2}\left(\Lambda_{-}^{2}+2\Lambda_{+}^{2}\right)\right)m_{\tilde{l}_{1}}^{2}\right\}\bigg\}\Bigg\}$ (108)

And the denominator is

$\displaystyle 128\pi\,s\,M^{4}_{\rm W}\cos^{4}\beta\sin^{4}\theta_{\rm W}\cos^{4}\theta_{\rm W}\left(s-4m_{\tilde{\chi}_{1}^{0}}^{2}\right)$ (109)
### E.4 $\tilde{\chi}_{1}^{0}\tilde{l}_{1}\rightarrow\gamma\tau/\mu$
In the limit $m_{\tau}\rightarrow 0$ we calculate the cross-section for $\tilde{\chi}_{1}^{0}\ \tilde{l}_{1}\rightarrow\gamma\ \tau$. This process involves $s$-channel $\tau$ mediation and $t$-channel $\tilde{l}_{1}$ exchange. The numerator is
$\displaystyle\left(\Sigma_{+}^{2}+\Sigma_{-}^{2}\right)e^{4}\Bigg\{\log\left[\frac{m_{\tilde{\chi}_{1}^{0}}^{2}-\left(s+m_{\tilde{l}_{1}}^{2}\right)-\sqrt{m_{\tilde{\chi}_{1}^{0}}^{4}+\left(-s+m_{\tilde{l}_{1}}^{2}\right)^{2}-2m_{\tilde{\chi}_{1}^{0}}^{2}\left(s+m_{\tilde{l}_{1}}^{2}\right)}}{m_{\tilde{\chi}_{1}^{0}}^{2}-\left(s+m_{\tilde{l}_{1}}^{2}\right)+\sqrt{m_{\tilde{\chi}_{1}^{0}}^{4}+\left(-s+m_{\tilde{l}_{1}}^{2}\right)^{2}-2m_{\tilde{\chi}_{1}^{0}}^{2}\left(s+m_{\tilde{l}_{1}}^{2}\right)}}\right]\times s\left(m_{\tilde{\chi}_{1}^{0}}^{2}-3m_{\tilde{l}_{1}}^{2}\right)+\left(s-2m_{\tilde{\chi}_{1}^{0}}^{2}+2m_{\tilde{l}_{1}}^{2}\right)\sqrt{m_{\tilde{\chi}_{1}^{0}}^{4}+\left(-s+m_{\tilde{l}_{1}}^{2}\right)^{2}-2m_{\tilde{\chi}_{1}^{0}}^{2}\left(s+m_{\tilde{l}_{1}}^{2}\right)}\Bigg\}$ (110)

And the denominator is

$\displaystyle 32\pi\,s\,M^{2}_{\rm W}\cos^{2}\beta\sin^{2}\theta_{\rm W}\cos^{2}\theta_{\rm W}\left\{m_{\tilde{\chi}_{1}^{0}}^{4}+\left(-s+m_{\tilde{l}_{1}}^{2}\right)^{2}-2m_{\tilde{\chi}_{1}^{0}}^{2}\left(s+m_{\tilde{l}_{1}}^{2}\right)\right\}$ (111)
Similarly, in the limit $m_{\mu}\rightarrow 0$ we calculate the cross-section for $\tilde{\chi}_{1}^{0}\ \tilde{l}_{1}\rightarrow\gamma\ \mu$. This process involves $s$-channel $\mu$ mediation and $t$-channel $\tilde{l}_{1}$ exchange. The numerator is
$\displaystyle\left(\Lambda_{+}^{2}+\Lambda_{-}^{2}\right)e^{4}\Bigg\{\log\left[\frac{m_{\tilde{\chi}_{1}^{0}}^{2}-\left(s+m_{\tilde{l}_{1}}^{2}\right)-\sqrt{m_{\tilde{\chi}_{1}^{0}}^{4}+\left(-s+m_{\tilde{l}_{1}}^{2}\right)^{2}-2m_{\tilde{\chi}_{1}^{0}}^{2}\left(s+m_{\tilde{l}_{1}}^{2}\right)}}{m_{\tilde{\chi}_{1}^{0}}^{2}-\left(s+m_{\tilde{l}_{1}}^{2}\right)+\sqrt{m_{\tilde{\chi}_{1}^{0}}^{4}+\left(-s+m_{\tilde{l}_{1}}^{2}\right)^{2}-2m_{\tilde{\chi}_{1}^{0}}^{2}\left(s+m_{\tilde{l}_{1}}^{2}\right)}}\right]\times s\left(m_{\tilde{\chi}_{1}^{0}}^{2}-3m_{\tilde{l}_{1}}^{2}\right)+\left(s-2m_{\tilde{\chi}_{1}^{0}}^{2}+2m_{\tilde{l}_{1}}^{2}\right)\sqrt{m_{\tilde{\chi}_{1}^{0}}^{4}+\left(-s+m_{\tilde{l}_{1}}^{2}\right)^{2}-2m_{\tilde{\chi}_{1}^{0}}^{2}\left(s+m_{\tilde{l}_{1}}^{2}\right)}\Bigg\}$ (112)

And the denominator is

$\displaystyle 32\pi\,s\,M^{2}_{\rm W}\cos^{2}\beta\sin^{2}\theta_{\rm W}\cos^{2}\theta_{\rm W}\left\{m_{\tilde{\chi}_{1}^{0}}^{4}+\left(-s+m_{\tilde{l}_{1}}^{2}\right)^{2}-2m_{\tilde{\chi}_{1}^{0}}^{2}\left(s+m_{\tilde{l}_{1}}^{2}\right)\right\}$ (113)
Keywords: Galaxy: center — X-rays: ISM — ISM: clouds — cosmic rays
# K-shell Emission of Neutral Iron Line from Sgr B2 Excited by Subrelativistic Protons
Vladimir Dogiel1, Dmitrii Chernyshov1,3, Katsuji Koyama2, Masayoshi Nobukawa2, and Kwong-Sang Cheng3
1P.N. Lebedev Institute, Leninskii pr. 53, 119991 Moscow, Russia (dogiel@lpi.ru)
2Department of Physics, Graduate School of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502
3Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong, China
###### Abstract
We investigate the K$\alpha$ iron line emission from the massive molecular clouds in the Galactic center (GC). We assume that at present the total flux of this emission consists of a time-variable component, generated by primary X-ray photons ejected by Sagittarius A∗ (Sgr A∗) in the past, and a relatively weak quasi-stationary component, excited by the impact of protons that were generated by the accretion of stars onto the central black hole. The level of the background emission is estimated from the rise of the 6.4 keV line intensity in the direction of several molecular clouds, which we interpret as the stage when the X-ray front ejected by Sgr A∗ entered these clouds. The 6.4 keV emission before this intensity jump we interpret as emission generated there by subrelativistic cosmic rays. The cross-section for producing K$\alpha$ vacancies by protons differs from that for electrons or X-rays. Therefore, we expect that these processes can be distinguished by analysing the equivalent width of the iron line, and that time variations of the width can be predicted. The line intensity from the clouds depends on their distance from Sgr A∗ and on the coefficient of spatial diffusion near the Galactic center. We expect that in a few years the line intensity from the cloud G 0.11$-$0.11, which is relatively close to Sgr A∗, will decrease to a level $\lesssim$ 10% of its present value. For the cloud Sagittarius B2 (Sgr B2) the situation is more intricate. If the diffusion coefficient $D\gtrsim 10^{27}$ cm$^{2}$ s$^{-1}$, then the expected stationary flux should be about 10% of its level in 2000. In the opposite case the line intensity from Sgr B2 should drop to zero, because the protons do not reach the cloud.
## 1 Introduction
The bright iron fluorescent K$\alpha$ line in the direction of the molecular
clouds in the Galactic center (GC) region was predicted (Sunyaev et al., 1993)
and then discovered (Koyama et al., 1996) more than twenty years ago. It was
assumed that this flux arose due to the K-absorption of keV photons by dense
molecular clouds irradiated by external X-rays, possibly from the super-
massive black hole, Sagittarius A∗ (Sgr A∗), which was active in the recent
past ($300$–$400$ years ago; Sunyaev et al., 1993; Koyama et al., 1996),
but is almost unseen at present (see e.g. Baganoff et al. (2003) and Porquet
et al. (2003)). Recent observations found a steady decrease of the X-ray flux
from Sagittarius B2 (Sgr B2) (Koyama et al., 2008; Inui et al., 2009; Terrier
et al., 2010; Nobukawa et al., 2011). This is a strong evidence that the
origin of the variable component is, indeed, a reflection of the primary X-ray
flare.
The duration of Sgr A∗ activity is uncertain. Murakami et al. (2003) obtained the luminosity history of the Galactic nucleus Sgr A∗ during the last 500 years. They concluded that Sgr A∗ was as luminous as $F_{fl}\sim 10^{39}$ erg s$^{-1}$ a few hundred years ago, and has dimmed gradually since then. Revnivtsev et al. (2004) found no significant variability of the line flux from Sgr B2 during the period 1993–2001. The constancy of the line flux means that the luminosity of Sgr A∗ remained approximately constant for more than 10 years a few hundred years ago. Inui et al. (2009) confirmed this variability of Sgr A∗ activity with a time scale of $\sim$10 years. Finally, Ponti et al. (2010) concluded that this activity might have started a few hundred years ago and lasted until about 70–150 years ago.
Such a duration of Sgr A∗ X-ray activity can be produced by shocks resulting from the interaction of jets with the dense interstellar medium (Yu et al., 2010).
K$\alpha$ emission from the clouds can also be generated by subrelativistic electrons with energies above 7 keV. This model was proposed by Yusef-Zadeh et al. (2002) (see also Yusef-Zadeh et al. (2007a, b)), who argued that the correlation between the nonthermal radio filaments and the X-ray features, when combined with the distribution of molecular gas, might suggest that the impact of subrelativistic electrons with energies 10–100 keV from local sources on the diffuse neutral gas produces both the bremsstrahlung X-ray continuum and the diffuse 6.4 keV line emission. The excess of supernova remnants detected in the GC region was supposed to be responsible for enhancing the flux of subrelativistic electrons. The characteristic time of K$\alpha$ emission in this case is $\gtrsim 1000$ years, i.e. about the lifetime of the subrelativistic electrons (for the rate of electron energy losses see e.g. Hayakawa (1969)). The total energy release of a supernova is about $10^{51}$ erg.
Observations point to even more energetic phenomena which might occur in the GC. A hot plasma with a temperature of about 10 keV was found in the GC, which can be heated if there are sources with a power $\sim 10^{41}$ erg s$^{-1}$ (see e.g. Koyama et al. (1996)); such heating could be generated by events of huge energy release in the past. It was shown that an energy of about $10^{53}$ erg can be released if the central black hole captures a star (see e.g. Alexander (2005); Cheng et al. (2006, 2007)). As a result, a flux of subrelativistic protons is ejected from the GC, which heats the central region (Dogiel et al., 2009b). These protons can also produce 6.4 keV line emission from molecular clouds (Dogiel et al., 2009a), which is, however, stationary, because the lifetime of these protons, $\tau_{p}\sim 10^{7}$ yr (Dogiel et al., 2009c), is much longer than the characteristic time between star captures by the central black hole ($\tau_{c}\sim 10^{5}$ yr) (Alexander, 2005). This scenario assumes at least two components of the X-ray line and continuum emission from the clouds: the first is a time-variable component generated by X-rays from sources in the GC, and the second is a quasi-stationary component produced by subrelativistic protons interacting with the gas.
The question whether the X-ray emission from the central region (within $\leq\timeform{0.3D}$ radius) is really diffuse was analysed by Koyama et al. (2007b), who showed that the hot plasma distribution in the GC, traced by the 6.7 keV iron line emission, did not correlate with the distribution of point sources derived from IR observations, in contrast to the rest of the disk, where the correlation is quite good. Recently, Revnivtsev et al. (2009) showed from the Chandra data that most ($\sim 88$%) of the ridge emission is clearly explained by dim and numerous point sources. Therefore, at least for the ridge emission, accreting white dwarfs and active coronal binaries are considered to be the main emitters. We note, however, that Revnivtsev et al. (2009) observed regions in the disk located $\timeform{1.5D}$ away from the GC.
Observations of the 6.4 keV flux from Sgr B2 have so far not found any reliable stationary component, although, as predicted by Ponti et al. (2010), the fast decrease of the 6.4 keV emission observed with XMM-Newton for several molecular clouds suggests that the emission generated by low-energy cosmic rays, if present, might become dominant in several years. Nevertheless, for several clouds, including Sgr B2, observations show temporal variations of the 6.4 keV emission, both a rise and a decay of intensity (see Inui et al. (2009); Ponti et al. (2010)). We interpret this rise of emission as the stage when the X-ray front ejected by Sgr A∗ entered these clouds, and the 6.4 keV emission before the intensity jump as the level of the background generated by cosmic rays.

Below we shall show that if this stationary component exists, it can be predicted from the time variations of the line emission from the clouds.
## 2 Equivalent Width of the 6.4 keV Line
In the framework of the reflection model, primary X-rays from an external
source produce fluxes of continuum and line emission from irradiated molecular
clouds. In principle, the surface brightness distribution, the equivalent
width and the shape of the fluorescent line depend on the geometry of the
source-reflector-observer (see Sunyaev & Churazov (1998)) but for rough
estimates we can neglect this effect. The continuum flux from the clouds is
proportional roughly to
$F_{X}\propto n_{H}\sigma_{T}cN_{X}\,,$ (1)
where $n_{H}$ is the hydrogen density in the cloud, $\sigma_{T}$ is the Thomson cross-section, and $N_{X}$ is the total number of primary photons inside the cloud with energy near the K-shell ionization threshold, $E_{X}\sim 7.1$ keV.
The flux of 6.4 keV line is
$F_{6.4}\propto n_{H}\sigma^{X}_{6.4}c\eta N_{X}\,,$ (2)
where $\eta$ is the iron abundance in the cloud and $\sigma^{X}_{6.4}$ is the
cross-section of the line production by the primary X-ray flux. Then the
equivalent width (eW) of the line in the framework of the reflection model is
$eW=\frac{F^{X}_{6.4}}{F_{X}(E_{X}=6.4~{}keV)}\propto\frac{\sigma^{X}_{6.4}\eta}{\sigma_{T}}=f(\eta)\,.$
(3)
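Since eq. (3) is a pure proportionality, it can be encoded directly. The sketch below is illustrative only: the normalisation is not fixed, and any value supplied for $\sigma^{X}_{6.4}$ is a placeholder, not a number taken from the paper. Its one concrete lesson is the linearity in $\eta$, which is what makes eW measurements a probe of the iron abundance later in the text.

```python
SIGMA_T = 6.652e-25  # Thomson cross-section, cm^2

def reflection_ew(sigma_x_64, eta):
    """Relative equivalent width in the reflection model, eq. (3):
    eW ∝ sigma_x_64 * eta / sigma_T.  Returns the dimensionless
    proportionality factor; the overall normalisation is left open."""
    return sigma_x_64 * eta / SIGMA_T
```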
The intensity of the Fe K$\alpha$ line excited by subrelativistic particles
(electrons or protons) in a cloud can be calculated from
$F_{K_{\alpha}}=4\pi\eta\omega_{K}\int\limits_{r}n_{H}(r)r^{2}dr\int\limits_{E}v(E)\sigma_{K}\tilde{N}(E,r)dE\,,$
(4)
where $v$ and $E$ are the velocity and the kinetic energy of subrelativistic
particles, $\sigma_{K}$ is the cross-section for 6.4 keV line production by
subrelativistic particles,
$\sigma_{K}=\sigma_{Z}^{I}\eta\omega_{Z}^{KI}\,.$ (5)
Here $\sigma_{Z}^{I}$ is the cross-section for the K-shell ionization of an atom Z by a charged particle of energy $E$ (see Garcia et al. (1973); Quarles (1976)), and $\omega_{Z}^{KI}$ is the K$\alpha$ fluorescence yield for an atom Z.
The flux of bremsstrahlung radiation is
$\Phi_{x}=4\pi\int\limits_{0}^{\infty}~{}n_{H}(r)r^{2}dr\int\limits_{E}~{}dE\tilde{N}(E,x,t){{d\sigma_{br}\over{dE_{x}}}}v(E)\,.$
(6)
Here $d\sigma_{br}/{dE_{x}}$ is the cross section of bremsstrahlung radiation
(see Hayakawa (1969))
${{d\sigma_{br}\over{dE_{x}}}}={8\over 3}{Z^{2}}{{e^{2}}\over{\hbar
c}}\left({{e^{2}}\over{m{c^{2}}}}\right)^{2}{{m{c^{2}}}\over{E^{\prime}}}{1\over{E_{x}}}\ln{{\left(\sqrt{E^{\prime}}+\sqrt{{E^{\prime}}-{E_{x}}}\right)^{2}}\over{E_{x}}}\,,$
(7)
where $E^{\prime}=E_{e}$ for electrons and $E^{\prime}=(m_{e}/m_{p})E_{p}$ for protons, i.e. a proton of energy $E_{p}=(m_{p}/m_{e})E_{e}$ radiates like an electron of energy $E_{e}$. All these cross-sections can also be found in Tatischeff (2003).
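Eq. (7) is straightforward to evaluate numerically. The sketch below is a direct transcription under the identifications $e^{2}/\hbar c=\alpha$ (fine-structure constant) and $e^{2}/mc^{2}=r_{e}$ (classical electron radius); the function name and the choice of units (keV, cm) are ours, not the paper's.

```python
import math

ALPHA = 1.0 / 137.036     # fine-structure constant, e^2 / (hbar c)
R_E_CM = 2.8179e-13       # classical electron radius e^2 / (m c^2), cm
MEC2_KEV = 511.0          # electron rest energy m c^2, keV

def dsigma_brems(E_x, E_prime, Z=1):
    """Bremsstrahlung cross-section d(sigma)/dE_x of eq. (7), cm^2/keV.

    E_x     : photon energy in keV
    E_prime : E_e for an electron; for a proton of kinetic energy E_p,
              pass the equivalent electron energy (m_e/m_p) * E_p
    """
    if E_prime <= E_x:
        return 0.0  # no photon of energy E_x can be emitted
    log_arg = (math.sqrt(E_prime) + math.sqrt(E_prime - E_x)) ** 2 / E_x
    return (8.0 / 3.0) * Z**2 * ALPHA * R_E_CM**2 \
        * (MEC2_KEV / E_prime) * (1.0 / E_x) * math.log(log_arg)
```

By construction, a proton of energy $(m_{p}/m_{e})E_{e}$ then gives exactly the electron value at $E_{e}$, which is why the two bremsstrahlung curves coincide in figure 1.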
In principle the particle and X-ray scenarios can be distinguished from each other through the characteristics of the emission from the clouds, because the cross-sections for the collisional and photoionization mechanisms are quite different. While the photoionization cross-sections are steep functions of energy (they fall approximately as $E_{X}^{-3}$ above the ionization thresholds), the cross-sections for collisional ionization have a much harder energy dependence. Therefore, while fluorescence is essentially produced by photons with energy contained in a narrow range of a few keV above the ionization threshold, subrelativistic particles can produce significant X-ray line emission over an extended energy range above the threshold (Tatischeff, 2003). The continuum emission in these two models is also generated by different processes: by bremsstrahlung in the collisional scenario and by Thomson scattering in the photoionization scenario.
The cross-sections of bremsstrahlung and K$\alpha$ production by
subrelativistic protons and electrons are shown in figure 1.
Figure 1: Cross-sections of electron and proton bremsstrahlung radiation at the energy 6.4 keV, $d\sigma_{br}/{dE_{x}}$ (dashed line), and the cross-sections $\sigma_{K}$ of K$\alpha$ production for electrons (thick solid line) and protons (thin solid line). Here $E^{\prime}=E_{e}$ for electrons and $E^{\prime}=(m_{e}/m_{p})E_{p}$ for protons, $\omega_{K}$ equals 0.3, and $\eta$ is taken to be twice solar. The data for this figure were kindly sent to us by Vincent Tatischeff.
As one can see from the figure, the bremsstrahlung cross-section for a proton with energy $E_{p}=(m_{p}/m_{e})E_{e}$ is exactly the same as for an electron with energy $E_{e}$; both are shown in figure 1 by the dashed line. However, the cross-sections $\sigma_{K}$ of K$\alpha$ line production by electrons (thick solid line) and by protons (thin solid line) are quite different. While for electrons the cross-section $\sigma_{K}$ of the iron line has a sharp cut-off at $E=7.1$ keV, for protons it is rather smooth, and the contribution from protons with relatively small energies can be significant.
The photoionization and collisional scenarios can also be distinguished by the equivalent width of the iron line. The eW depends on the chemical abundance, which is poorly known for the GC. Direct estimations of the iron abundance there provided by the Suzaku group (Koyama et al., 2007b, 2009) gave values from 1 to 3.5 solar. Revnivtsev et al. (2004) estimated the iron abundance for the cloud Sgr B2 at about 1.9 solar. Nobukawa et al. (2010) found that the equivalent width of the line emission from a cloud near Sgr A requires an abundance higher than solar. For the line emission due to the impact of subrelativistic electrons, the iron abundance in Sgr B2 should be about $4$ solar, while the X-ray scenario requires $\sim 1.6$ solar. Therefore, Nobukawa et al. (2010) concluded that the irradiation model seemed more attractive than the electron impact scenario. This abundance is compatible with the value $\eta=1.3$ solar estimated by Nobukawa et al. (2011) from the iron absorption edge at 7.1 keV.
The eW for the case of particle impact depends on the particle spectrum. Its value for power-law spectra of particles ($N\propto E^{\gamma}$) is a function of the spectral index $\gamma$ and the abundance $\eta$:
${\it eW}=\eta\omega_{K}\frac{\int_{E}v(E)\,\sigma_{K}(E)\,E^{\gamma}\,dE}{\int_{E}E^{\gamma}\left(d\sigma_{br}(\bar{E},E)/dE_{x}\right)v(E)\,dE}=f(\eta,\gamma)\,.$ (8)
For the solar iron abundance the eW for electrons and protons is shown in
figure 2. It was assumed here that the proton spectrum has a cut-off ($N=0$ at
$E>E_{inj}$, see below).
Figure 2: Equivalent width of the K$\alpha$ line, for solar iron abundance, produced by electrons (thick solid line) and by protons (thin solid line for injection energy $E_{inj}=80$ MeV, dashed line for $E_{inj}=50$ MeV) as a function of the spectral index $\gamma$.
One can see that the equivalent width of the K$\alpha$ line generated by electrons depends weakly on $\gamma$, varying from $\sim 250$ eV for soft spectra to $\sim 500$ eV for hard electron spectra (see also in this respect Yusef-Zadeh et al. (2007a)). In the case of protons the variations are significant, with the eW reaching its maximum for very soft proton spectra. As one can see from the figure, the equivalent width depends only weakly on the maximum energy of the protons, $E_{inj}$.
Sources of high-energy particles in the Galaxy generate spectra with a rather wide range of characteristics, though the most effective acceleration process in space, acceleration by shocks, provides particle spectra with a spectral index $\gamma$ close to $-2$. For the case of accretion we approximated the spectrum of proton injection by a delta-function distribution, which is then modified by Coulomb losses into a power-law spectrum with $\gamma=0.5$ (see Dogiel et al. (2009c)). We note, however, that this delta-function approximation is a simplification of the injection process. As shown by Ginzburg et al. (2004) for jets, at the first stages of evolution the jet material moves by inertia; then, because of the excitation of plasma instabilities in the flux, the particle distribution functions, which were initially delta functions both in angle and in energy, develop complex angular and energy dependences.
Below we briefly present the parameters of the proton spectrum for the case of a star captured by a massive black hole (for details see Dogiel et al. (2009a, c)).
## 3 Model of Proton Injection in the GC
We note first of all that penetration of subrelativistic protons into molecular clouds should be a rather natural process in the Galaxy. Indeed, investigations have shown that heating and ionization of Galactic molecular clouds can be produced by subrelativistic protons penetrating them (see e.g. Dalgarno & McCray (1972); Spitzer & Jenkins (1975); Nath & Biermann (1994); Dogiel et al. (2009a); Crocker et al. (2010)). If so, one also expects a flux of 6.4 keV line and continuum emission from these clouds, generated by these protons.
In the GC subrelativistic protons can be generated by the accretion of stars onto the super-massive black hole. Once it passes the pericenter, the star is tidally disrupted into a very long and dilute gas stream. The outcome of tidal disruption is that some energy is extracted from the orbit to unbind the star and accelerate the debris. Initially about 50% of the stellar mass becomes tightly bound to the black hole, while the remaining 50% is forcefully ejected (Ayal et al., 2000). Then the total number of subrelativistic protons produced in each capture of a one-solar-mass star is $N_{k}\simeq 10^{57}$.
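As a back-of-envelope check (ours, not a calculation from the paper), half a solar mass of ejected material indeed contains of order $10^{57}$ nucleons:

```python
# Rough count of nucleons in the ejected ~50% of a one-solar-mass star,
# using cgs values for the solar mass and the proton mass.
M_sun = 1.989e33   # g
m_p = 1.6726e-24   # g

N_k = 0.5 * M_sun / m_p   # ~6e56, i.e. of order 10^57
```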
The kinetic energy carried by the ejected debris is a function of the
penetration parameter $b^{-1}=r_{t}/r_{p}$, where $r_{p}$ is the periapse
distance (distance of closest approach) and $r_{t}$ is the tidal radius. This
energy can significantly exceed that released by a normal supernova ($\sim
10^{51}$ erg) if the orbit is highly penetrating (Alexander, 2005),
$W\sim 4\times 10^{52}\left(\frac{M_{\ast}}{M_{\odot}}\right)^{2}\left(\frac{R_{\ast}}{R_{\odot}}\right)^{-1}\left(\frac{M_{\rm bh}/M_{\ast}}{10^{6}}\right)^{1/3}\left(\frac{b}{0.1}\right)^{-2}~\mbox{erg}\,,$ (9)
where $M_{\ast}$ and $R_{\ast}$ are the mass and radius of the captured star, and $M_{\rm bh}$ is the mass of the black hole.
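Equation (9) is straightforward to evaluate; the sketch below (ours, using the fiducial values $M_{\ast}=M_{\odot}$, $R_{\ast}=R_{\odot}$, and the black-hole mass $4.31\times 10^{6}M_{\odot}$ quoted later in the text) confirms that a deeply penetrating orbit releases much more than the $\sim 10^{51}$ erg of a normal supernova:

```python
def debris_energy_erg(m_star=1.0, r_star=1.0, m_bh=4.31e6, b=0.1):
    """Kinetic energy of the ejected debris, equation (9), in erg.
    Masses in solar masses, radius in solar radii."""
    return (4e52 * m_star**2 / r_star
            * ((m_bh / m_star) / 1e6)**(1.0 / 3.0) * (b / 0.1)**-2)

W = debris_energy_erg()   # ~6.5e52 erg, well above the ~1e51 erg of a normal SN
```

A shallower orbit (e.g. $b=0.3$) already reduces the released energy by roughly an order of magnitude, illustrating the strong $b^{-2}$ dependence.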
For a star capture time $\tau_{s}\sim 10^{4}-10^{5}$ years (Alexander, 2005), this gives a power input of $W\sim 10^{41}$ erg s$^{-1}$. The mean kinetic energy per escaping nucleon is given by
$E_{\rm inj}\sim
42\left(\frac{\eta}{0.5}\right)^{-1}\left(\frac{M_{\ast}}{M_{\odot}}\right)\left(\frac{R_{\ast}}{R_{\odot}}\right)^{-1}\left(\frac{M_{\rm
bh}/M_{\ast}}{10^{6}}\right)^{1/3}\left(\frac{b}{0.1}\right)^{-2}~{}\mbox{MeV}\,,$
(10)
where $\eta M_{\ast}$ is the mass of escaping material. For the black-hole
mass $M_{\rm bh}=4.31\times 10^{6}~{}M_{\odot}$ the energy of escaping
particles is
$E_{\rm inj}\sim 68(\eta/0.5)^{-1}(b/0.1)^{-2}~{}\mbox{MeV nucleon${}^{-1}$}$
(11)
when a one-solar-mass star is captured.
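A quick numerical check (ours) shows that equation (10), evaluated with $M_{\rm bh}=4.31\times 10^{6}M_{\odot}$ and the default parameters $\eta=0.5$, $b=0.1$, indeed reproduces the $\sim 68$ MeV per nucleon of equation (11):

```python
def e_inj_mev(eta=0.5, m_star=1.0, r_star=1.0, m_bh=4.31e6, b=0.1):
    """Mean kinetic energy per escaping nucleon, equation (10), in MeV.
    Masses in solar masses, radius in solar radii."""
    return (42.0 * (eta / 0.5)**-1 * m_star / r_star
            * ((m_bh / m_star) / 1e6)**(1.0 / 3.0) * (b / 0.1)**-2)

e_inj_mev()   # ~68 MeV per nucleon, matching equation (11)
```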
Subrelativistic protons lose their energy by collision with background
particles and the lifetime of subrelativistic protons in the GC with energies
$E\leq 100$ MeV is of the order of
$\tau_{p}\sim\sqrt{\frac{E_{p}^{3}}{2m_{p}}}\frac{m_{e}}{3\pi
ne^{4}\ln\Lambda}\sim 10^{7}~{}\mbox{years}$ (12)
where $n\sim 0.1$ cm$^{-3}$ is the plasma density in the GC, $e$ and $m_{e}$ are the electron charge and rest mass, respectively, and $\ln\Lambda$ is the Coulomb logarithm. Because $\tau_{s}\ll\tau_{p}$, the proton injection can be considered quasi-stationary.
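Equation (12) can be verified directly in cgs units; the sketch below (ours, with an assumed Coulomb logarithm $\ln\Lambda\approx 20$) reproduces the quoted $\sim 10^{7}$-year lifetime for a 100 MeV proton in a plasma with $n\sim 0.1$ cm$^{-3}$:

```python
import math

def tau_p_years(E_MeV=100.0, n=0.1, lnL=20.0):
    """Coulomb-loss lifetime of a subrelativistic proton, equation (12), in years."""
    m_p, m_e = 1.6726e-24, 9.109e-28   # proton and electron masses, g
    e = 4.803e-10                      # elementary charge, esu
    E = E_MeV * 1.602e-6               # proton energy, erg
    tau = math.sqrt(E**3 / (2 * m_p)) * m_e / (3 * math.pi * n * e**4 * lnL)
    return tau / 3.156e7               # seconds -> years

tau_p_years()   # ~3e7 yr, i.e. of order 10^7 years
```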
The spatial and energy distribution of these protons in the central GC region
can be calculated from the equation
$\frac{\partial N}{\partial t}-\nabla\left(D\nabla
N\right)+\frac{\partial}{\partial E}\left(\frac{dE}{dt}N\right)=Q(E,t)\,,$
(13)
where $dE/dt$ is the rate of Coulomb energy losses, $D$ is the spatial diffusion coefficient in the intercloud medium, and the right-hand-side term $Q$ describes the proton injection process in the GC:
$Q(E,{\bf
r},t)=\sum\limits_{k=0}N_{k}\delta(E-E_{inj})\delta(t-t_{k})\delta({\bf
r})\,,$ (14)
where $N_{k}$ is the number of injected protons and $t_{k}=k\times T$ is the
injection time.
The proton distribution inside molecular clouds is described by a similar equation, but with a different diffusion coefficient and rate of energy losses:
$\frac{\partial}{\partial
E}\left(b_{c}(E)\tilde{N}\right)-D_{c}\frac{\partial^{2}}{\partial
x^{2}}\tilde{N}=0\,,$ (15)
with the boundary conditions
$\tilde{N}|_{x=0}=N_{c},\qquad\tilde{N}|_{x=\infty}=0\,,$ (16)
where $N_{c}$, the proton density at the cloud surface, is calculated from equation (13), and $D_{c}$ and $b_{c}$ are the diffusion coefficient and the rate of energy losses inside the cloud. The value of $D_{c}$ for the clouds is uncertain, though theoretical estimates by Dogiel et al. (1987) give $\sim 10^{24}-10^{25}$ cm$^{2}$ s$^{-1}$. For details of the calculations see Dogiel et al. (2009a).
## 4 Stationary and Time-Variable Components of X-Ray Emission from the GC
Molecular Clouds
The following analysis is based on the cloud observations by XMM-Newton obtained by Ponti et al. (2010). These clouds showed different time variations of the line emission, which were interpreted by the authors in terms of the reflection model. The distances of the clouds from Sgr A∗ were chosen so as to explain the observed variations of the line emission for each of these clouds. Several clouds of the Bridge complex show a rather low flux before a sudden jump of the 6.4 keV intensity by about one order of magnitude, which was interpreted as radiation from the X-ray front that had just reached these clouds (see figures 5, 6, and 11 in Ponti et al. (2010)). Based on these observations we make two key assumptions:
1. The low level of 6.4 keV intensity from the clouds before the jump represents a stationary component of the emission from the clouds. This assumption does not seem unreasonable: Suzaku observations also show faint 6.4 keV emission which is more or less uniformly distributed over the GC region (see Koyama et al. (2009)), and we cannot exclude that this extended diffuse emission also represents a stationary line component.
2. This emission from the clouds before the jump is generated by proton impact.
For our analysis we also used parameters of the two other clouds which showed time variations of the 6.4 keV emission. With the modeling of proton penetration into the clouds described in the previous section we can calculate the stationary components of continuum and 6.4 keV line emission from the clouds produced by protons. The diffusion coefficient $D$ in the GC is unknown and is therefore a free parameter of the problem. For the calculations we took the parameters of the three clouds listed in Ponti et al. (2010):
* •
Bridge, the density $n_{H}=1.9\cdot 10^{4}$ cm-3, the radius of the cloud
$r=1.6$ pc, the distance from Sgr A∗ $R=63$ pc;
* •
the same for the cloud G 0.11$-$0.11, $n_{H}=1.8\cdot 10^{3}$ cm-3, $r=3.7$
pc, $R=30$ pc;
* •
the same for Sgr B2, $n_{H}=3.8\cdot 10^{4}$ cm-3, $r=7$ pc, $R=164$ pc.
The intensity of the 6.4 keV line produced by photoionization depends on the number of primary photons penetrating into a cloud. The density of the primary X-ray flux from Sgr A∗ decreases with the distance $R$ as $\propto R^{-2}$. Then, with the known parameters of X-ray and proton production by Sgr A∗, we can calculate for each of these clouds the ratio of the stationary component of the 6.4 keV line produced by the protons, $F^{p}_{6.4}$, to the time-variable component at its peak value from irradiation by primary X-rays, $F^{X}_{6.4}$. To do this we use the observed ratio $F^{p}_{6.4}/F^{X}_{6.4}=0.1$ for the Bridge, as follows from the XMM-Newton data (Ponti et al., 2010). For the clouds G 0.11$-$0.11 and Sgr B2 this ratio as a function of the diffusion coefficient $D$ is shown in figure 3.
Figure 3: The ratio $F^{p}_{6.4}/F^{X}_{6.4}$ for the clouds G 0.11$-$0.11 and Sgr B2 as a function of the diffusion coefficient $D$.
One can see that protons can contribute to the total 6.4 keV flux from Sgr B2 if the diffusion coefficient is large enough, $D\gtrsim 10^{27}$ cm$^{2}$ s$^{-1}$. In that case the expected stationary flux should be one order of magnitude less than the 6.4 keV emission observed from Sgr B2 near its maximum in 2000. For small values of $D$ there is no chance to observe 6.4 keV emission from Sgr B2 once the X-ray front has crossed the cloud. For the cloud G 0.11$-$0.11, which according to Ponti et al. (2010) is relatively close to Sgr A∗, the situation is different: the intensity of the stationary component is quite high almost independently of $D$ and may, in principle, be detected in several years.
Ponti et al. (2010) estimated the front width of primary X-rays from the non-detection of 6.4 keV emission from two molecular clouds (the 20 and 50 km s$^{-1}$ clouds) with masses above $10^{4}M_{\odot}$ (Tsuboi et al. (1999)) which lie within 15 pc of Sgr A∗. Ponti et al. (2010) assumed that the X-ray front had already passed these clouds, which are very close to Sgr A∗, and that they therefore no longer shine. From figure 3 it follows that in this case a stationary 6.4 keV component should be seen after the front passage. We note that the distances to these clouds were estimated from the assumption that the envelopes of nearby SN remnants interact with them (Coil & Ho (2000)). If this is true, then it is very surprising that no continuum or line emission is observed from these clouds at all (as would be expected in the model of Yusef-Zadeh et al. (2002)). As follows from Bykov et al. (2000), when a SN shock front interacts with a molecular cloud, energetic electrons generated at the shock produce an intense flux of hard X-rays from the cloud. So it is very strange that in such a situation no X-ray emission is observed from these two clouds if the interpretation of cloud-SN interaction is correct; if one accepts this interpretation, then very special conditions for high-energy particle propagation have to be assumed around the clouds. Besides, as follows from Sofue (1995) and Sawada et al. (1999), it is not easy to determine the distances between these clouds and Sgr A∗. In principle, the XMM-Newton data also do not exclude a stationary component of the 6.4 keV flux from these clouds below the derived upper limit.
## 5 Predicted Variations of the Sgr B2 eW in the Near Future
Observations show that the flux of the 6.4 keV line emission from Sgr B2 is rapidly decreasing with time (see the left panel of figure 4). The question is whether we can find any evidence for a possible stationary component of Sgr B2. In figure 4 (right panel) we present the expected variations of the Sgr B2 equivalent width when the flux generated by the primary X-rays, $F^{X}_{6.4}$, drops to 20% (solid lines) and 10% (dashed lines) of the maximum value at the rate shown in the left panel of the figure. The calculations were done for protons with different spectral indices $\gamma$ and for electrons with $\gamma=-2.7$. One can see from the figure that if these particles are electrons, the value of the eW decreases (almost independently of the electron spectral index, see figure 2). In the case of protons the situation is more intricate: for soft proton spectra (negative $\gamma$) the value of the $eW$ should increase with time, while for spectra with a positive spectral index it drops. However, production of spectra with a positive $\gamma$ in the Galaxy seems doubtful. In this figure we also show the measured values of the $eW$ for the years 2005 and 2009 (see Nobukawa et al. (2011)). Unfortunately, it is difficult to derive a time trend of the $eW$ variations because of the relatively large error bars.
These calculations show that the equivalent width should in principle change if there is a component of the Sgr B2 emission generated by subrelativistic particles. It follows from figure 4 that if the eW decreases with time, the impact component is due to electrons; in the opposite case the stationary component of the 6.4 keV emission is produced by subrelativistic protons. If future observations do not find any time variation of the eW of the 6.4 keV line, that will be strong evidence in favour of a pure photoionization origin.
Recent Suzaku observations may have found iron line emission produced by subrelativistic particles (Fukuoka et al., 2009; Tsuru et al., 2010). For the clump G 0.174$-$0.233 with $eW\simeq 950$ eV they concluded that the X-ray reflection nebula (XRN) scenario was favored. On the other hand, for the 6.4 keV clump G 0.162$-$0.217 with $eW\simeq 200$ eV they assumed that the emission was due to low-energy cosmic-ray electrons (LECRe). They also found that the $eW$ of the 6.4 keV emission line detected in the X-ray faint region (outside the Galactic molecular clouds) is significantly lower than expected in the XRN scenario but higher than that of the LECRe model. In this respect we note that for the proton spectrum in the interstellar medium of the GC with spectral index $\gamma=0.5$, as derived by Dogiel et al. (2009c), the $eW$ of emission produced by protons is smaller than that of photoionization, which may explain these new Suzaku results (see figures 2 and 4 for the proton spectral index $\gamma=0.5$).
Figure 4: Left: The evolution of the Fe K$\alpha$ line luminosity and X-ray
continuum as observed for Sgr B2. Right: The possible evolution of Fe
K$\alpha$ line equivalent width. Dashed lines correspond to
$F^{p}_{6.4}/F^{X}_{6.4}=0.1$, solid lines correspond to
$F^{p}_{6.4}/F^{X}_{6.4}=0.2$.
Future experiments can also distinguish the line origin from its width. While electrons and X-rays generate a very narrow 6.4 keV line with a width of about 1 eV, the line produced by subrelativistic protons is rather broad, $<100$ eV (see Dogiel et al. (1998)). The estimated width of the Fe K line for the model presented in Dogiel et al. (2009a) is about 40 eV. If there is a noticeable proton component of the 6.4 keV flux from the clouds, the width of the line should broaden with time.
Measurements of the line with present X-ray telescopes contain broadening which depends on photon statistics and calibration uncertainties. The energy resolution of CCD detectors at 6 keV is $\sim$130 eV, which can be reduced by a de-convolution procedure (see Koyama et al. (2007b); Ebisawa et al. (2008)). However, even with this procedure it is not easy to derive a true line width of about 40 eV from observations. For more reliable results a detector with eV-level energy resolution, such as the Astro-H/SXS micro-calorimeter, is necessary.
## 6 Conclusion
We investigated the parameters of the K$\alpha$ line emission from the molecular clouds in the GC when it is excited by a flux of subrelativistic protons generated by accretion onto the super-massive black hole. We concluded that:
* •
If these protons are generated by accretion processes, they produce a quasi-stationary component of 6.4 keV line and continuum hard X-ray emission from molecular clouds in the GC because of their very long lifetime. In this situation two components of X-ray radiation should be observed: a time-variable emission due to photoionization by primary X-ray photons emitted by Sgr A∗, and a quasi-stationary component generated by proton impact.
* •
Since the cross-sections of continuum and iron line production are different for these two processes, we expect that they can be distinguished by analysis of the equivalent width of the iron line, and we can predict the time variations of the eW when the photoionization flux drops after the passage of the X-ray front injected by Sgr A∗.
* •
Whether or not the stationary component excited by protons can be observed depends on the distance of the cloud from Sgr A∗ and on the coefficient of spatial diffusion in the GC medium. For the cloud G 0.11$-$0.11, which is relatively close to Sgr A∗, we expect to observe in a few years a stationary component of the 6.4 keV emission at the level $\lesssim$ 10% of its present value. For the cloud Sgr B2 the situation is more intricate: if the diffusion coefficient $D\gtrsim 10^{27}$ cm$^{2}$ s$^{-1}$, the expected stationary flux should be about 10% of its level in 2000; in the opposite case the line intensity from Sgr B2 should drop to zero because the protons do not reach the cloud.
* •
When the front of primary X-rays is passing through the clouds, the density of primary X-ray photons decreases, and the relative contribution of the stationary iron line emission, if present, to the total flux increases. Therefore, the parameters of the emission from the clouds change with time. We expect that the spectrum of the charged particles generating the stationary component can be derived from the time variability of the line equivalent width.
* •
We showed that the equivalent width of the iron line excited by charged particles depends on their charge composition and spectral index $\gamma$. The equivalent width of the K$\alpha$ line generated by electrons depends weakly on $\gamma$, varying from $\sim 250$ eV for soft spectra to $\sim 500$ eV for hard electron spectra. In the case of protons the variations are significant, with the eW reaching its maximum for very soft proton spectra.
* •
If future observations find a time variation of the eW of the 6.4 keV line, then a decreasing eW would mean that the impact line component is produced by electrons, while an increasing eW would point to subrelativistic protons.
The authors are grateful to Vincent Tatischeff for the data shown in figure 1 and to the anonymous referee, whose comments did much to improve the text.
VAD and DOC are partly supported by the NSC-RFBR Joint Research Project
RP09N04 and 09-02-92000-HHC-a. This work is also supported by Grant-in-Aids
from the Ministry of Education, Culture, Sports, Science and Technology (MEXT)
of Japan, Scientific Research A, No. 18204015 (KK). MN is supported by JSPS
Research Fellowship for Young Scientists. KSC is supported by a GRF grant of
Hong Kong Government 7011/10p.
## References
* Alexander (2005) Alexander, T. 2005, PhR, 419, 65
* Ayal et al. (2000) Ayal, S., Livio, M., & Piran, T. 2000, ApJ, 545, 772
* Baganoff et al. (2003) Baganoff, F. K. et al. 2003, ApJ, 591, 891
* Bykov et al. (2000) Bykov, A. M., Chevalier, R. A., Ellison, D. C., & Uvarov, Yu. A. 2000, ApJ, 538, 203
* Cheng et al. (2006) Cheng, K.-S., Chernyshov, D. O., & Dogiel, V. A. 2006, ApJ, 645, 1138
* Cheng et al. (2007) Cheng, K.-S., Chernyshov, D. O., & Dogiel, V. A. 2007, A&A, 473, 351
* Coil & Ho (2000) Coil, A. L., & Ho, P. T. P. 2000, ApJ, 533, 245
* Crocker et al. (2010) Crocker, R. M., Jones, D. I., Aharonian, F., Law, C. J., Melia, F., Oka, T., & Ott, J. 2010, arXiv1011.0206
* Dalgarno & McCray (1972) Dalgarno A., & McCray R. A. 1972, ARA&A, 10, 375
* Dogiel et al. (1987) Dogiel, V. A., Gurevich, A. V., Istomin, Ia. N., & Zybin, K. P. 1987, MNRAS, 228, 843
* Dogiel et al. (1998) Dogiel, V. A., Ichimura, A., Inoue, H., & Masai, K. 1998, PASJ, 50, 567
* Dogiel et al. (2009a) Dogiel, V. et al. 2009a, PASJ, 61, 901
* Dogiel et al. (2009b) Dogiel, V. et al. 2009b, PASJ, 61, 1099
* Dogiel et al. (2009c) Dogiel, V. A., Tatischeff, V., Cheng, K.-S., Chernyshov, D. O., Ko, C. M., & Ip, W. H. 2009c, A&A, 508, 1
* Ebisawa et al. (2008) Ebisawa, K. et al. 2008, PASJ, 60, 223
* Fukuoka et al. (2009) Fukuoka, R., Koyama, K., Ryu, S. G., Tsuru, T. G. 2009, PASJ, 61, 593
* Garcia et al. (1973) Garcia, J. D., Fortner, R. J., & Kavanagh, T. M. 1973, RvMP, 45, 111
* Ginzburg et al. (2004) Ginzburg, S. L., D’Yachenko, V. F., Paleychik, V. V., Sudarikov, A. L., & Chechetkin, V. M. 2004, Astronomy Letters, 30, 376
* Hayakawa (1969) Hayakawa, S. 1969, Cosmic Ray Physics (Wiley-Interscience)
* Inui et al. (2009) Inui, T., Koyama, K., Matsumoto, H., & Tsuru, T. Go 2009, PASJ, 61, S241
* Koyama et al. (1996) Koyama, K., Maeda, Y., Sonobe, T., Takeshima, T., Tanaka, Y., & Yamauchi, S. 1996, PASJ, 48, 249
* Koyama et al. (2007a) Koyama, K. et al. 2007a, PASJ, 59, S221
* Koyama et al. (2007b) Koyama, K. et al. 2007b, PASJ, 59, S245
* Koyama et al. (2008) Koyama, K., Inui, T., Matsumoto, H. & Tsuru, T. G. 2008, PASJ, 60, S201
* Koyama et al. (2009) Koyama, K., Takikawa, Y., Hyodo, Y., Inui, T., Nobukawa, M., Matsumoto, H., & Tsuru, T. G. 2009, PASJ, 61, S255
* Murakami et al. (2003) Murakami, H., Senda, A., Maeda, Y., & Koyama, K. 2003, Astronom. Nachr. Suppl., 324, 125
* Nath & Biermann (1994) Nath, B. B., & Biermann, P. L. 1994, MNRAS, 267, 447
* Nobukawa et al. (2010) Nobukawa, M., Koyama, K., Tsuru, T. G., Ryu, S. G., & Tatischeff, V. 2010, PASJ, 62, 423
* Nobukawa et al. (2011) Nobukawa, M., Ryu, S. G., Tsuru, T. G., & Koyama, K. 2011, submitted to ApJL
* Ponti et al. (2010) Ponti, G., Terrier, R., Goldwurm, A., Belanger, G., & Trap, G. 2010, ApJ, 714, 732
* Porquet et al. (2003) Porquet, D. et al. 2003, A&A, 407, L17
* Quarles (1976) Quarles, C. A. 1976, PhRvA, 13, 1278
* Revnivtsev et al. (2004) Revnivtsev, M. G. et al. 2004, A&A, 425, L49
* Revnivtsev et al. (2009) Revnivtsev, M., Sazonov, S., Churazov, E. et al. 2009 Nature, 458, 1142
* Sawada et al. (1999) Sawada, T. et al. 1999, AdSpR, 23, 985
* Sofue (1995) Sofue, Y. 1995, PASJ, 47, S527
* Spitzer & Jenkins (1975) Spitzer L., & Jenkins E. B. 1975, ARA&A, 13, 133
* Sunyaev et al. (1993) Sunyaev, R. A., Markevitch, M., & Pavlinsky, M. 1993, ApJ, 407, 606
* Sunyaev & Churazov (1998) Sunyaev, R., & Churazov, E. 1998, MNRAS, 297, 1279
* Tatischeff (2003) Tatischeff, V. 2003, EAS, 7, 79
* Terrier et al. (2010) Terrier, R. et al. 2010, ApJ, 719, 143
* Tsuboi et al. (1999) Tsuboi, M., Handa, T., & Ukita, N., 1999, ApJS, 120, 1
* Tsuru et al. (2010) Tsuru, T. G., Uchiyama, H., Nobukawa, M., Sawada, M., Ryu, S. G., Fukuoka, R., & Koyama, K. 2010, astro-ph 1007.4863
* Yu et al. (2010) Yu, Y.-W., Cheng, K.-S., Chernyshov, D. O., & Dogiel, V. A. 2010, astro-ph 1010.1312, accepted in MNRAS
* Yusef-Zadeh et al. (2002) Yusef-Zadeh, F., Law, C., & Wardle, M. 2002, ApJ, 568, L121
* Yusef-Zadeh et al. (2007a) Yusef-Zadeh, F., Muno, M., Wardle, M., & Lis, D. C. 2007a, ApJ, 656, 847
* Yusef-Zadeh et al. (2007b) Yusef-Zadeh, F., Wardle, M., & Roy, S. 2007b, ApJL, 665, 123
|
arxiv-papers
| 2011-04-22T18:10:10 |
2024-09-04T02:49:18.392616
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Vladimir Dogiel, Dmitrii Chernyshov, Katsuji Koyama, Masayoshi\n Nobukawa and Kwong-Sang Cheng",
"submitter": "Vladimir Dogiel",
"url": "https://arxiv.org/abs/1104.4484"
}
|
1104.4674
|
# K-Median Clustering, Model-Based Compressive Sensing, and Sparse Recovery
for Earth Mover Distance††thanks: This research has been supported in part by
the David and Lucille Packard Fellowship, MADALGO (the Center for Massive Data
Algorithmics, funded by the Danish National Research Association) and NSF
grant CCF-0728645. E. Price has been supported in part by an NSF Graduate
Research Fellowship.
Piotr Indyk Eric Price
(24 April 2011)
###### Abstract
We initiate the study of sparse recovery problems under the Earth-Mover
Distance (EMD). Specifically, we design a distribution over $m\times n$
matrices $A$ such that for any $x$, given $Ax$, we can recover a $k$-sparse
approximation to $x$ under the EMD distance. One construction yields
$m=O(k\log(n/k))$ and a $1+\epsilon$ approximation factor, which matches the
best achievable bound for other error measures, such as the $\ell_{1}$ norm.
Our algorithms are obtained by exploiting novel connections to other problems
and areas, such as streaming algorithms for k-median clustering and model-
based compressive sensing. We also provide novel algorithms and results for
the latter problems.
## 1 Introduction
In recent years, a new “linear” approach for obtaining a succinct approximate
representation of $n$-dimensional vectors (or signals) has been discovered.
For any signal $x$, the representation is equal to $Ax$, where $A$ is an
$m\times n$ matrix, or possibly a random variable chosen from some
distribution over such matrices. The vector $Ax$ is often referred to as the
measurement vector or linear sketch of $x$. Although $m$ is typically much
smaller than $n$, the sketch $Ax$ often contains plenty of useful information
about the signal $x$.
A particularly useful and well-studied problem is that of stable sparse
recovery. The problem is typically defined as follows: for some norm
parameters $p$ and $q$ and an approximation factor $C>0$, given $Ax$, recover
an “approximation” vector $x^{*}$ such that
$\left\lVert x-x^{*}\right\rVert_{p}\leq C\min_{k\text{-sparse
}x^{\prime}}\left\lVert x-x^{\prime}\right\rVert_{q}$ (1)
where we say that $x^{\prime}$ is $k$-sparse if it has at most $k$ non-zero
coordinates. Sparse recovery has applications to numerous areas such as data
stream computing [Mut05, Ind07] and compressed sensing [CRT06, Don06], notably
for constructing imaging systems that acquire images directly in compressed
form (e.g., [DDT+08, Rom09]). The problem has been a subject of extensive
study over the last few years, with the goal of designing schemes that enjoy
good “compression rate” (i.e., low values of $m$) as well as good algorithmic
properties (i.e., low encoding and recovery times). It is known by now that there exist matrices $A$ and associated recovery algorithms that produce approximations $x^{*}$ satisfying Equation (1) with $\ell_{p}=\ell_{q}=\ell_{1}$, constant approximation factor $C$, and sketch length $m=O(k\log(n/k))$: in particular, a random Gaussian matrix [CRT06] or a random sparse binary matrix ([BGI+08], building on [CCFC02, CM04, CM06]) has this property with overwhelming probability (see [GI10] for an overview). It is also known that this sketch length is asymptotically optimal [DIPW10, FPRU10]. Results for other combinations of $\ell_{p}$/$\ell_{q}$ norms are known as well.
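For concreteness, under the $\ell_{p}$ norms the best $k$-sparse approximation appearing in Equation (1) is simply the vector retaining the $k$ largest-magnitude coordinates; a minimal sketch (ours, not code from the paper):

```python
def best_k_sparse(x, k):
    """Best k-sparse approximation of x under any l_p norm:
    keep the k largest-magnitude entries, zero out the rest."""
    keep = set(sorted(range(len(x)), key=lambda i: abs(x[i]))[-k:])
    return [xi if i in keep else 0.0 for i, xi in enumerate(x)]

x = [3.0, -1.0, 0.5, 4.0]
best_k_sparse(x, 2)   # [3.0, 0.0, 0.0, 4.0]; remaining l1 error is 1.5
```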
However, limiting the error measures to variants of $\ell_{p}$ norms is quite inconvenient in many applications. First, the distances induced by $\ell_{p}$ norms are typically only rather crude approximations of the perceptual differences between images. As a result, in the field of computer vision, several more elaborate notions have been proposed (e.g., in [RTG00, Low04,
several more elaborate notions have been proposed (e.g., in [RTG00, Low04,
Lyu05, GD05]). Second, there are natural classes of images for which the
distances induced by the $\ell_{p}$ norm are virtually meaningless. For
example, consider images of “point clouds”, e.g., obtained via astronomical
imaging. If we are given two such images, where each point in the second image
is obtained via small random translation of a point in the first image, then
the $\ell_{p}$ distance between the images will be close to the largest
possible, even though the images are quite similar to each other.
Motivated by the above considerations, we initiate the study of sparse recovery under non-$\ell_{p}$ distances. In particular, we focus on the Earth-Mover Distance (EMD) [RTG00]. Informally, for the case of two-dimensional $\Delta\times\Delta$ images (say, $x,y:[\Delta]^{2}\to\mathbb{R}_{+}$) which have the same $\ell_{1}$ norm, the EMD is defined as the cost of the min-cost flow that transforms $x$ into $y$, where the cost of transporting a unit of mass from a pixel $p\in[\Delta]^{2}$ of $x$ to a pixel $q\in[\Delta]^{2}$ of $y$ is equal to the $\ell_{1}$ distance between $p$ and $q$ (one can also use the $\ell_{2}$ distance; the two differ by at most a factor of $\sqrt{2}$ for two-dimensional images). The EMD metric can be viewed as induced by a norm $\left\lVert\cdot\right\rVert_{EMD}$, such that $\text{EMD}(x,y)=\left\lVert x-y\right\rVert_{EMD}$; see Section 2 for a formal definition. The Earth-Mover Distance and its variants are popular metrics for comparing similarity between images, feature sets, etc. [RTG00, GD05].
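The contrast between the $\ell_{p}$ norms and EMD is easy to see in one dimension, where the EMD between two equal-mass histograms on a unit grid equals the $\ell_{1}$ distance between their cumulative sums. A toy sketch (ours):

```python
def emd_1d(x, y):
    """EMD between two equal-mass 1-D histograms on a unit-spaced grid:
    the l1 distance between their cumulative distribution functions."""
    total = cdf_diff = 0.0
    for xi, yi in zip(x, y):
        cdf_diff += xi - yi
        total += abs(cdf_diff)
    return total

n = 100
x, y = [0.0] * n, [0.0] * n
x[10], y[11] = 1.0, 1.0        # the same unit spike, shifted by one pixel

sum(abs(a - b) for a, b in zip(x, y))   # l1 distance: 2.0, the maximum possible
emd_1d(x, y)                            # EMD: 1.0, reflecting the tiny shift
```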
Results. In this paper we introduce three sparse recovery schemes for the Earth-Mover Distance. Each scheme provides a matrix (or a distribution over matrices) $A$, with $m$ rows and $n$ columns for $n=\Delta^{2}$, such that for any vector $x$, given $Ax$, one can reconstruct a vector $x^{*}$ such that
$\left\lVert x-x^{*}\right\rVert_{EMD}\leq C\min_{k\text{-sparse }x^{\prime}}\left\lVert x-x^{\prime}\right\rVert_{EMD}$ (2)
for some approximation factor $C>0$. We call any recovery scheme satisfying
Equation (2) an _EMD/EMD recovery scheme_. If $A$ is a distribution over
matrices (that is, the scheme is randomized), the guarantee holds with some
probability. The other parameters of the constructions are depicted in Figure
1.
Determinism | Sketch length | Decode time | Approx.
---|---|---|---
Deterministic | $k\log n\log(n/k)$ | $n\log^{O(1)}n$ | $\epsilon$
Deterministic | $k\log(n/k)$ | $n^{O(1)}$ | $\sqrt{\log(n/k)}$
Randomized | $k\log(n/k)$ | $k\log(n/k)$ | $\epsilon$
Figure 1: Our results
In particular, two of our constructions yield sketch lengths $m$ bounded by
$O(k\log(n/k))$, which mimics the best possible bound achievable for sparse
recovery in the $\ell_{1}$ distance [DIPW10]. Note, however, that we are not
able to show a matching lower bound for the EMD case.
#### Connections and applications
What does sparse recovery with respect to the Earth-Mover Distance mean?
Intuitively, a sparse approximation under EMD yields a short “signature” of
the image $x$ that approximately preserves its “EMD properties”. For example,
if $x$ consists of a small number of sparse point clouds (e.g., as in
astronomical imaging), sparse approximation of $x$ will approximately identify
the locations and weights of the clouds. Our preliminary experiments with a
heuristic algorithm for such data [GIP10] show that this approach can yield
substantial improvements over the usual sparse recovery.
Another application [RTG00] stems from the original paper, where such short
signatures were constructed for general images, to extract their color or
texture information. (In fact, the algorithm in [RTG00] vaguely resembles our
approach, in that it uses a kd-tree decomposition to partition the images.)
The images were then replaced by their signatures during the experiments,
which significantly reduced the computation time.
The above intuitions can be formalized as follows. Let $x^{\prime}$ be the
minimizer of $\left\lVert x-x^{\prime}\right\rVert_{EMD}$ over all $k$-sparse
vectors. Then one can observe that the non-zero entries of $x^{\prime}$
correspond to the cluster centers in the best $k$-median clustering of $x$.
(For completeness, in our context the $k$-median is defined as follows. First,
each pixel $p\in[\Delta]^{2}$ is interpreted as a point with weight $x_{p}$.
The goal is to find a set $C\subset[\Delta]^{2}$ of $k$ “medians” that
minimizes the objective function $\sum_{p\in[\Delta]^{2}}\min_{c\in
C}\left\lVert p-c\right\rVert_{1}x_{p}$.) Moreover, for each such center $c$,
the value of $x^{\prime}_{c}$ is equal to the total weight of pixels in the
cluster centered at $c$. Thus, a solution to the $k$-median problem provides a
solution to our sparse recovery problem as well, provided the algorithm
reports both the medians and the weights of the clusters.
There has been prior work on the $k$-median problem in the streaming model
under insertions and deletions of points [FS05, Ind04]. Such algorithms
utilize linear sketches, and therefore implicitly provide schemes for
approximating the $k$-median of $x$ from a linear sketch of $x$ (although they
do not necessarily provide the cluster weights, which are needed for the
sparse recovery problem). Both algorithms yield a method for approximating the
$k$-median from $\Omega(k^{2}\log^{O(1)}n)$ measurements, with the algorithm
of [FS05] providing an approximation factor of $1+\epsilon$. (The paper
[Ind04] claims $m=k\log^{O(1)}n$. Unfortunately, that is an error, caused by
ignoring the dependencies between the queries and the answers provided by the
randomized data structure MediEval. Fixing this problem requires reducing the
probability of failure of the algorithm so that it is inversely exponential in
$k$, which yields another factor of $k$ in the space bound.) In contrast, our
result achieves an approximation factor of $1+\epsilon$ with a sketch length
$m$ that is as low as $O(k\log(n/k))$.
Thanks to this connection, our results also yield short sketches for the
$k$-median problem. Although the solution $x^{*}$ output by our algorithm need
not be $k$-sparse (i.e., we might output more than $k$ medians), one can
post-process the output by computing the best $k$-sparse approximation to
$x^{*}$ using any off-the-shelf (weighted) $k$-median algorithm (e.g.,
[HPM04]). This reduces the number of clusters to $k$, while (by the triangle
inequality for EMD) multiplying the approximation factor by a constant that
depends on the approximation constant of the chosen $k$-median algorithm. See
Appendix C for more details.
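The post-processing step above can be sketched for tiny instances: treat the support of $x^{*}$ as weighted points and pick the $k$ centers minimizing the weighted $k$-median objective. The brute-force search below is our own hypothetical stand-in for an off-the-shelf algorithm such as [HPM04]; it only illustrates the reduction and is exponential in the support size.

```python
from itertools import combinations

def reduce_to_k_medians(points, weights, k):
    """Pick k centers among the support of x* minimizing the weighted
    k-median objective under l1 distance. Brute force over center sets,
    so only usable for tiny supports; a sketch, not [HPM04].
    """
    def cost(centers):
        return sum(w * min(abs(p[0] - c[0]) + abs(p[1] - c[1]) for c in centers)
                   for p, w in zip(points, weights))
    return min(combinations(points, k), key=cost)
```

For example, with support $\{(0,0)\mapsto 3,\ (0,1)\mapsto 1,\ (9,9)\mapsto 5\}$ and $k=2$, the centers $(0,0)$ and $(9,9)$ are chosen.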
#### Techniques
On a high level, our approach is to reduce the sparse recovery problem under
EMD to sparse recovery under $\ell_{1}$. This is done by constructing a linear
mapping $P$ that maps $\mathbb{R}^{[\Delta]^{2}}$ into some space
$\mathbb{R}^{t}$, with the property that a “good” sparse approximation to
$y=Px$ under $\ell_{1}$ yields a “good” sparse approximation to $x$ under EMD.
(We note that the aforementioned $k$-median algorithms implicitly rely on some
form of sparse recovery; e.g., see Remark 3.10 in [FS05] or the remarks before
Theorem 5 in [Ind04]. However, the bounds provided by those algorithms fall
short of what we aim for.) The list of formal constraints that such a mapping
needs to satisfy is given in Section 3. For concreteness, we define one such
mapping below; another one is given in Section 7. Specifically, the _pyramid_
mapping $P$ [IT03, GD05] (building on [Cha02, AV99]) is defined as follows.
First we impose $\log\Delta+1$ nested grids $G_{i}$ on $[\Delta]^{2}$, with
$G=\bigcup G_{i}$. For each _level_ $i=0\ldots l$, $l=\log_{2}\Delta$, the
grid $G_{i}$ is a partition of the image into _cells_ of side length $2^{i}$.
The cells in the grids can be thought of as forming a $4$-ary tree, with each
node $c$ at level $i$ having a set $C(c)$ of children at level $i-1$. For each
$i$, we define a mapping $P_{i}$ such that each entry in $P_{i}x$ corresponds
to a cell $c$ in $G_{i}$, and its value is equal to the sum of coordinates of
$x$ falling into $c$. The final mapping $P$ is defined as
$\displaystyle Px=[2^{0}P_{0}x,2^{1}P_{1}x,\ldots,2^{l}P_{l}x]$ (3)
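For intuition, the mapping of Equation (3) can be computed directly: level $i$ sums the image over cells of side $2^{i}$ and scales the result by $2^{i}$. A minimal sketch, assuming $\Delta$ is a power of two; the function name is ours.

```python
import numpy as np

def pyramid_transform(x):
    """Compute Px for a Delta x Delta image x: concatenate, for each level i,
    the cell sums over side-2**i cells scaled by 2**i (Equation (3)).
    """
    levels, cur, scale = [], np.asarray(x, dtype=float), 1.0
    while True:
        levels.append(scale * cur.ravel())
        if cur.shape[0] == 1:
            break
        # merge each 2x2 block of cells into its parent cell at the next level
        cur = cur.reshape(cur.shape[0] // 2, 2, cur.shape[1] // 2, 2).sum(axis=(1, 3))
        scale *= 2
    return np.concatenate(levels)
```

For a $4\times 4$ image this produces $16+4+1=21$ coordinates, matching $t=\left\lfloor 4n/3\right\rfloor$ from Section 4.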
It is easy to see that, for a vector $x$ that is $k$-sparse, the vector $Px$
is $O(K)$-sparse for $K=kl$. We also show that for any $x$, there exists an
$O(K)$-sparse $y$ such that the difference $\|y-Px\|_{1}$ is comparable to
$\min_{k\text{-sparse }x^{\prime}}\left\lVert x-x^{\prime}\right\rVert_{EMD}$.
We then find a good approximation $x^{*}$ to $x$ (in the EMD norm) by
“inverting” $P$ on $y$. Since we can recover an $O(K)$-sparse approximation to
$y$ (in the $\ell_{1}$ norm) from a sketch of length $O(K\log(n/K))$, we
obtain the first result from Figure 1.
To improve the sketch length we exploit the particular properties of the
mapping $P$ to recover an $O(K)$-sparse approximation from only $O(K)$
measurements. For any non-negative vector $x$, the coordinates of $Px$ have
the following hierarchical structure: (i) the coordinates are organized into
an $r$-ary tree for $r=4$, and (ii) the value of each internal node is non-
negative and equal to twice the sum of the values of its children. Using one
or both of these properties enables us to reduce the number of measurements.
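Properties (i) and (ii) can be checked numerically: building the pyramid levels directly, every parent cell should equal twice the sum of its four children, because a level-$(i{+}1)$ cell carries scale $2^{i+1}$ while its children carry $2^{i}$ over the same pixels. A small sanity check of this claim, assuming $\Delta$ is a power of two.

```python
import numpy as np

def pyramid_parent_property_holds(delta=8, seed=1):
    """Verify that for Px with x >= 0, each internal node equals twice the
    sum of its 4 children (property (ii)); the 4-ary tree structure is
    property (i).
    """
    rng = np.random.default_rng(seed)
    x = rng.random((delta, delta))
    levels, cur, scale = [], x, 1.0
    while True:
        levels.append(scale * cur)
        if cur.shape[0] == 1:
            break
        cur = cur.reshape(cur.shape[0] // 2, 2, cur.shape[1] // 2, 2).sum(axis=(1, 3))
        scale *= 2
    return all(
        np.allclose(parent,
                    2 * child.reshape(parent.shape[0], 2,
                                      parent.shape[1], 2).sum(axis=(1, 3)))
        for child, parent in zip(levels, levels[1:]))
```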
The second algorithm from Figure 1 is obtained using the property (i) alone.
Specifically, the problem of recovering a sparse approximation whose support
forms a tree has been well-studied in signal processing (the question is
motivated by an empirical observation that large wavelet coefficients tend to
co-occur in this fashion). In particular, the insightful paper [BCDH10] on
model-based compressive sensing (see Section 5 for an overview) gave a
deterministic scheme that recovers such approximation from a sketch of length
$O(K)$. Although the setup given in that paper is somewhat different from what
we need here, we show that one can modify and re-analyze their scheme to
achieve the desired guarantee. This approach, however, leads to an
approximation factor of $O(\sqrt{\log(n/k)})$.
In order to achieve a constant approximation factor, we employ both properties
(i) and (ii), as well as randomization. Specifically, we recover the tree
coefficients top-down, starting from the root of the tree. This is done in a
greedy manner: we only recurse on the children of nodes that are estimated to
be “heavy”. This first pass identifies a superset $S$ of the locations where
$Px$ is large, but estimates some of the values $(Px)_{S}$ quite poorly. The
set of locations $S$ has $\left|S\right|=O(K)$, so we can recover $(Px)_{S}$
accurately with $O(K)$ measurements using the set query sketches of [Pri11].
Finally, we show that we can achieve the first and second result in Figure 1
by replacing the pyramid mapping by a variant of an even more basic transform,
namely the (two-dimensional) Haar wavelet mapping. Our variant is obtained by
rescaling the original Haar wavelet vectors using exponential weights, to
mimic the pyramid scheme behavior. This result relates the two well-studied
notions (EMD and wavelets) in a somewhat unexpected way. As a bonus, it also
simplifies the algorithms, since inverting the wavelet mapping can now be done
explicitly and losslessly.
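The flavor of the rescaled-wavelet idea can be conveyed with a one-dimensional toy: compute unnormalized Haar sums/differences and scale the level-$i$ coefficients by $2^{i}$, mimicking the exponential weighting of the pyramid. This is our own illustration only; the paper's construction (Section 7) is two-dimensional.

```python
def rescaled_haar_1d(x):
    """Toy 1-D analogue of the rescaled Haar mapping: unnormalized Haar
    differences with level-i coefficients scaled by 2**i, plus the scaled
    total sum. Lossless: the original signal is recoverable level by level.
    Assumes len(x) is a power of two.
    """
    coeffs, cur, scale = [], list(x), 1
    while len(cur) > 1:
        sums = [cur[2 * i] + cur[2 * i + 1] for i in range(len(cur) // 2)]
        diffs = [cur[2 * i] - cur[2 * i + 1] for i in range(len(cur) // 2)]
        coeffs.extend(scale * d for d in diffs)   # detail coefficients, rescaled
        cur, scale = sums, scale * 2
    coeffs.append(scale * cur[0])                 # scaled total mass
    return coeffs
```

A constant signal has all detail coefficients zero, with the mass concentrated in the final (coarsest) coefficient.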
## 2 Preliminaries
#### Notation
We use $[n]$ to denote the set $\\{1\ldots n\\}$. For any set $S\subset[n]$,
we use $\overline{S}$ to denote the complement of $S$, i.e., the set
$[n]\setminus S$. For any $x\in\mathbb{R}^{n}$, $x_{i}$ denotes the $i$th
coordinate of $x$, and $x_{S}$ denotes the vector
$x^{\prime}\in\mathbb{R}^{n}$ given by $x^{\prime}_{i}=x_{i}$ if $i\in S$, and
$x^{\prime}_{i}=0$ otherwise. We use $\operatorname{supp}(x)$ to denote the
support of $x$. We use $\mathbb{R}^{[\Delta]^{2}}$ to denote the set of
functions from $[\Delta]\times[\Delta]$ to $\mathbb{R}$; note that
$\mathbb{R}^{[\Delta]^{2}}$ can be identified with $\mathbb{R}^{n}$ since
$n=\Delta^{2}$. We also use $\mathbb{R}_{+}$ to denote $\\{x\in\mathbb{R}\mid
x\geq 0\\}$.
#### EMD
Consider any two non-negative vectors $x,y\in\mathbb{R}_{+}^{[\Delta]^{2}}$
such that $\|x\|_{1}=\|y\|_{1}$. Let $\Gamma(x,y)$ be the set of functions
$\gamma:[\Delta]^{2}\times[\Delta]^{2}\to\mathbb{R}_{+}$, such that for any
$i,j\in[\Delta]^{2}$ we have $\sum_{l}\gamma(i,l)=x_{i}$ and
$\sum_{l}\gamma(l,j)=y_{j}$; that is, $\Gamma$ is the set of possible “flows”
from $x$ to $y$. Then we define
$\text{EMD}^{*}(x,y)=\min_{\gamma\in\Gamma}\sum_{i,j\in[\Delta]^{2}}\gamma(i,j)\|i-j\|_{1}$
to be the min cost flow from $x$ to $y$, where the cost of an edge is its
$\ell_{1}$ distance. This induces a norm $\left\lVert\cdot\right\rVert_{EMD}$
such that $\left\lVert x-y\right\rVert_{EMD}=\text{EMD}^{*}(x,y)$. For general
vectors $w$,
$\left\lVert w\right\rVert_{EMD}=\min_{\begin{subarray}{c}x-y+z=w\\\
\left\lVert x\right\rVert_{1}=\left\lVert y\right\rVert_{1}\\\ x,y\geq
0\end{subarray}}\text{EMD}^{*}(x,y)+D\left\lVert z\right\rVert_{1}$
where $D=2\Delta$ is the diameter of the set $[\Delta]^{2}$. That is,
$\left\lVert w\right\rVert_{EMD}$ is the min cost flow from the positive
coordinates of $w$ to the negative coordinates, with some penalty for
unmatched mass.
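To make the definition concrete: when $x$ and $y$ are equal-size multisets of unit point masses, the optimal flow is a min-cost perfect matching under the $\ell_{1}$ ground distance. The brute-force sketch below (our own illustration, not an algorithm from this paper) computes it for tiny inputs.

```python
from itertools import permutations

def emd_unit_masses(ps, qs):
    """EMD*(x, y) for two equal-size lists of unit point masses on the grid:
    the minimum over matchings of the total l1 distance. Exponential-time
    brute force, for illustration only.
    """
    assert len(ps) == len(qs)  # equal total mass, as the definition requires
    def l1(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    return min(sum(l1(p, q) for p, q in zip(ps, perm))
               for perm in permutations(qs))
```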
#### Signal models
The basic idea of the signal models framework of [BCDH10] is to restrict the
sparsity patterns of the approximations. For some sparsity parameter $K$ (we
use $K$ to denote the sparsity in the context of model-based recovery, as
opposed to $k$, which is used in the context of “standard” recovery), let
${\cal S}_{K}$ be a family of subsets of $[n]$ such that for each $S\in{\cal
S}_{K}$ we have $|S|\leq K$. The family ${\cal S}_{K}$ induces a signal model
${\cal M}_{K}\subset\mathbb{R}^{n}$ where
${\cal M}_{K}=\\{x\in\mathbb{R}^{n}\mid\operatorname{supp}(x)\subseteq
S\text{\ for some\ }S\in{\cal S}_{K}\\}.$
Note that ${\cal M}_{K}$ is a union of $|{\cal S}_{K}|$ subspaces, each of
dimension at most $K$. The signals in ${\cal M}_{K}$ are called ${\cal
M}_{K}$-sparse.
The following two examples of signal models are particularly relevant to our
paper:
1. 1.
General $k$-sparse signals, where ${\cal S}_{k}$ contains all $k$-subsets of
$[n]$. In this case the induced signal model (denoted by $\Sigma_{k}$)
contains all $k$-sparse signals.
2. 2.
Tree sparse signals. In this case, we assume that $n=\frac{c^{l}-1}{c-1}$ for
some (constant) integer $c$ and parameter $l$, and associate each $i\in[n]$
with a node of a full $c$-ary tree $T(c,l)$ of depth $l$. The family ${\cal
S}_{K}$ contains all sets $S$ of size up to $K$ that are connected in $T(c,l)$
and contain the root (so each $S$ corresponds to a graph-theoretic subtree of
$T(c,l)$). The induced signal model is denoted by ${\cal T}^{c}_{K}$, or
${\cal T}_{K}$ for short. (We note that technically this model was originally
defined with respect to the wavelet basis, as opposed to the standard basis
here, and for $c=2$. We adapt that definition to the needs of our paper.)
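The tree-sparse family admits a simple membership test. Indexing the nodes of $T(c,l)$ in level order (root $=0$; the parent of node $i>0$ is $\lfloor(i-1)/c\rfloor$ — an indexing convention of this sketch, not of the paper), a support lies in ${\cal S}_{K}$ iff it has at most $K$ nodes, contains the root, and is closed under taking parents.

```python
def is_tree_sparse(support, c, K):
    """Check membership of a support set in the family S_K of the tree-sparse
    model: size at most K, contains the root, and connected in the full
    c-ary tree (equivalently, closed under parents). Nodes are indexed in
    level order; the parent of i > 0 is (i - 1) // c.
    """
    s = set(support)
    if len(s) > K or (s and 0 not in s):
        return False
    return all(i == 0 or (i - 1) // c in s for i in s)
```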
In order to facilitate signal recovery, one often needs to consider the
differences $x-y$ of two signals $x\in{\cal M}$, $y\in{\cal M}^{\prime}$. For
this purpose we define the Minkowski sum of ${\cal M}_{K}$ and ${\cal
M}^{\prime}_{K}$ as ${\cal M}_{K}\oplus{\cal M}^{\prime}_{K}=\\{x+y:x\in{\cal
M}_{K},y\in{\cal M}^{\prime}_{K}\\}$. To simplify the notation, we define
${\cal M}_{K}^{(t)}$ to be the $t$-wise Minkowski sum of ${\cal M}_{K}$ with
itself. For all signal models considered in this paper, we have ${\cal
M}_{K}^{(t)}\subset{\cal M}_{Kt}$.
Restricting the sparsity patterns makes it possible to recover sparse
approximations from shorter sketches. We defer a more thorough overview of the
results to Section 5.
#### Assumptions
We assume that the sparsity parameters $k$ (and $K$, where applicable) are
smaller than $n/2$. Note that if this assumption does not hold, the problem
becomes trivial, since one can define the measurement matrix $A$ to be equal
to the identity matrix.
## 3 Framework for EMD-sparse recovery
In this section we describe our approach to reducing sparse recovery under EMD
into sparse recovery under $\ell_{1}$. We need the following three components:
(i) a $t\times n$ matrix $B$ (that will be used to map the EMD space into the
$\ell_{1}$ space); (ii) a signal model $\mathcal{M}\subset\mathbb{R}^{t}$; and
(iii) an $\ell_{1}/\ell_{1}$ recovery scheme for ${\cal M}$. The latter
involves an $m\times t$ matrix $A^{\prime}$ (or a distribution over such
matrices) such that, for any $x\in\mathbb{R}^{t}$, given $A^{\prime}x$, one
can recover $x^{*}$ such that
$\left\lVert x-x^{*}\right\rVert_{1}\leq C^{\prime}\min_{x^{\prime}\in{\cal
M}}\left\lVert x-x^{\prime}\right\rVert_{1}$ (4)
for an approximation factor $C^{\prime}$. If $A^{\prime}$ is a distribution
over matrices, we require that the guarantee holds with some constant
probability, e.g., 2/3.
The mapping $B$ must satisfy the following three properties:
1. A.
(EMD-to-$\ell_{1}$ expansion.) For all $x\in\mathbb{R}^{n}$,
$\left\lVert x\right\rVert_{EMD}\leq\left\lVert Bx\right\rVert_{1}.$
2. B.
(Model-alignment of EMD with ${\cal M}$.) For all $x\in\mathbb{R}_{+}^{n}$,
there exists a $y\in\mathcal{M}$ with
$\left\lVert y-Bx\right\rVert_{1}\leq\epsilon\min_{k\text{-sparse
}x^{\prime}}\left\lVert x-x^{\prime}\right\rVert_{EMD}.$
3. C.
(Invertibility.) There is an efficient algorithm
$\mathcal{B}^{-1}\colon\mathbb{R}^{t}\to\mathbb{R}^{n}$ such that, for some
constant $D$ and all $y\in\mathbb{R}^{t}$,
$\left\lVert y-B\mathcal{B}^{-1}(y)\right\rVert_{1}\leq
D\min_{x\in\mathbb{R}^{n}}\left\lVert y-Bx\right\rVert_{1}.$
###### Lemma 3.1.
Consider $B,A^{\prime},{\cal M}$ satisfying the above properties. Then the
matrix $A=A^{\prime}B$ supports $k$-sparse recovery for EMD (as defined in
Equation (2)) with approximation factor $C=(1+D)C^{\prime}\epsilon$.
###### Proof.
Consider the recovery of any vector $x\in\mathbb{R}_{+}^{n}$. Let
$E=\min_{k\text{-sparse }x^{\prime}}\left\lVert
x-x^{\prime}\right\rVert_{EMD}.$
By Property B, for any $x\in\mathbb{R}^{n}$, there exists a $y\in\mathcal{M}$
with
$\left\lVert y-Bx\right\rVert_{1}\leq\epsilon E.$
Hence our $\ell_{1}/\ell_{1}$ model-based recovery scheme for ${\cal M}$, when
run on $Ax=A^{\prime}Bx$, returns a $y^{*}$ with
$\left\lVert y^{*}-Bx\right\rVert_{1}\leq C^{\prime}\epsilon E.$
Let $x^{*}=\mathcal{B}^{-1}(y^{*})$. We have by Property C that
$\left\lVert y^{*}-Bx^{*}\right\rVert_{1}\leq
D\min_{x^{\prime}\in\mathbb{R}^{n}}\left\lVert
y^{*}-Bx^{\prime}\right\rVert_{1}\leq D\left\lVert
y^{*}-Bx\right\rVert_{1}\leq DC^{\prime}\epsilon E.$
Hence by Property A
$\displaystyle\left\lVert x^{*}-x\right\rVert_{EMD}$
$\displaystyle\leq\left\lVert B(x^{*}-x)\right\rVert_{1}\leq\left\lVert
y^{*}-Bx\right\rVert_{1}+\left\lVert y^{*}-Bx^{*}\right\rVert_{1}$
$\displaystyle\leq(1+D)C^{\prime}\epsilon E$
as desired. ∎
## 4 Pyramid transform
In this section we will show that the pyramid transform $P$ defined in
Equation (3) of Section 1 satisfies properties B and C of Section 3, with
appropriate parameters.
Property A has been shown to hold for $P$ in many other papers (e.g., [Cha02,
IT03]). The intuition is that the weight of a cell is at least the Earth-Mover
Distance needed to move all mass in the cell from its center to any corner of
the cell, including the corner that lies at the center of the parent cell.
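Property A can be spot-checked on a pair of unit masses, where the EMD is just the $\ell_{1}$ distance between the two pixels: $\left\lVert P(e_{p}-e_{q})\right\rVert_{1}$ should dominate it. A numerical sketch of our own, assuming $\Delta$ is a power of two.

```python
import numpy as np

def pyramid_l1(p, q, delta=8):
    """Return ||P(e_p - e_q)||_1, which by Property A should be at least the
    EMD between the two unit masses, i.e. the l1 distance between p and q.
    """
    x = np.zeros((delta, delta))
    x[p] += 1.0
    x[q] -= 1.0
    total, cur, scale = 0.0, x, 1.0
    while True:
        total += scale * np.abs(cur).sum()
        if cur.shape[0] == 1:
            break
        cur = cur.reshape(cur.shape[0] // 2, 2, cur.shape[1] // 2, 2).sum(axis=(1, 3))
        scale *= 2
    return total
```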
### 4.1 Model-alignment with tree sparsity
In this section we show Property B, where the signal model ${\cal M}$ is equal
to the $K$-tree-sparse model ${\cal T}_{K}$, for $K=O(k\log(n/k))$. In fact,
we show a stronger statement: the trees have their _width_ (the maximum number
of nodes per level) bounded by some parameter $s$. We will exploit the latter
property later in the paper.
###### Lemma 4.1.
For any $x\in\mathbb{R}_{+}^{n}$ there exists a tree $S\subset[t]$ of size $K$
and width $s$ with
$\left\lVert(Px)_{\overline{S}}\right\rVert_{1}\leq\epsilon\min_{k\text{-sparse
}x^{\prime}}\left\lVert x-x^{\prime}\right\rVert_{EMD}$
for $s=O\left(\frac{1}{\epsilon^{2}}k\right)$ and
$K=O(\frac{1}{\epsilon^{2}}k\log(n/k))$.
###### Proof.
Let $x^{\prime}=\operatorname*{arg\,min}_{k\text{-sparse
}x^{\prime}}\left\lVert x-x^{\prime}\right\rVert_{EMD}$ be the $k$-medians
approximation of $x$. Consider the cells that contain each point in the
support of $x^{\prime}$. For each such cell at any level $i$, add the
$O(\frac{1}{\epsilon^{2}})$ other cells of the same level within an $\ell_{1}$
distance of $\frac{2}{\epsilon}2^{i}$. The resulting $S$ has
$s=O\left(\frac{1}{\epsilon^{2}}k\right)$ cells per level, and all the
ancestors of any cell in the result also lie in $S$. So $S$ is a tree of width
$s$. It has $O(s)$ elements in the top $\log_{4}s$ levels, and $O(s)$
elements on each of the $\log_{4}t-\log_{4}s$ remaining levels, for a total
size of $K=O(s\log(t/s))$. We will show that
$\left\lVert(Px)_{\overline{S}}\right\rVert_{1}$ is small.
Define $e_{i}$ for $i\in[\Delta]^{2}$ to be the elementary vector with a $1$
at position $i$, so $x_{i}=x\cdot e_{i}$. Suppose that the distance between
$i$ and the nearest center in $x^{\prime}$ is $v_{i}$. Then we have
$\displaystyle\left\lVert(Px)_{\overline{S}}\right\rVert_{1}=\sum_{i\in[\Delta]^{2}}\left\lVert(Px_{i}e_{i})_{\overline{S}}\right\rVert_{1}=\sum_{i\in[\Delta]^{2}}\left\lVert(Pe_{i})_{\overline{S}}\right\rVert_{1}x_{i}\qquad\text{and}\qquad\left\lVert x-x^{\prime}\right\rVert_{EMD}=\sum_{i\in[\Delta]^{2}}v_{i}x_{i},$
so it suffices to show
$\left\lVert(Pe_{i})_{\overline{S}}\right\rVert_{1}\leq\epsilon v_{i}$ for
every $i$.
Let $h$ be the highest level such that $e_{i}$ is not contained in a cell at
level $h$ in $S$. If no such $h$ exists,
$\left\lVert(Pe_{i})_{\overline{S}}\right\rVert_{1}=0$. Otherwise,
$v_{i}\geq\frac{2}{\epsilon}2^{h}$, or else $S$ would contain $e_{i}$’s cell
in level $h$. But then
$\displaystyle\left\lVert(Pe_{i})_{\overline{S}}\right\rVert_{1}=\sum_{j=0}^{h}2^{j}$
$\displaystyle<2^{h+1}\leq\epsilon v_{i}$
as desired. ∎
###### Corollary 4.2.
For any $x\in\mathbb{R}_{+}^{n}$, there exists a $y\in\mathcal{T}_{K}$ with
$\left\lVert y-Px\right\rVert_{1}\leq\epsilon\min_{k\text{-sparse
}x^{\prime}}\left\lVert x-x^{\prime}\right\rVert_{EMD}.$
### 4.2 Invertibility
Given an approximation $b$ to $Px$, we would like to find a vector $y$ with
$\left\lVert b-Py\right\rVert_{1}$ small. Note that this task can be
formulated as a linear program, and therefore solved in time that is
polynomial in $n$. In Appendix A we show a much faster approximate algorithm
for this problem, needed for our fast recovery algorithm:
###### Lemma 4.3.
Given any approximation $b$ to $Px$, we can recover a $y$ in
$O(\left|\operatorname{supp}(b)\right|)$ time with
$\left\lVert Py-Px\right\rVert_{1}\leq 8\left\lVert b-Px\right\rVert_{1}.$
Recall that $P$ has $t=\left\lfloor 4n/3\right\rfloor$ rows. This means
standard $\ell_{1}/\ell_{1}$ $K$-sparse recovery for $Px$ is possible with
$m=O(K\log(t/K))=O(\frac{1}{\epsilon^{2}}k\log^{2}(n/k))$. Hence by Lemma 3.1,
using $B=P$ and standard sparse recovery techniques on the model ${\cal
M}=\Sigma_{K}$ gives the first result in Figure 1:
###### Theorem 4.4.
There exists a deterministic EMD/EMD recovery scheme with
$m=O(\frac{1}{\epsilon^{2}}k\log^{2}(n/k))$ and $C=\epsilon$. Recovery takes
$O(n\log^{c}n)$ time for some constant $c$.
## 5 Tree-sparse recovery
To decrease the number of measurements required by our algorithm, we can use
the stronger signal model $\mathcal{T}_{K}$ instead of $\Sigma_{K}$. The paper
[BCDH10] gives an algorithm for model-based sparse recovery of
$\mathcal{T}_{K}$, but their theorem does not give an $\ell_{1}/\ell_{1}$
guarantee. In Appendix B we review the prior work and convert their theorem
into the following:
###### Theorem 5.1.
There exists a matrix $A$ with $O(K)$ rows and a recovery algorithm that,
given $Ax$, returns $x^{*}$ with
$\left\lVert x-x^{*}\right\rVert_{1}\leq
C\sqrt{\log(n/K)}\min_{x^{\prime}\in{\cal T}_{K}}\left\lVert
x-x^{\prime}\right\rVert_{1}$
for some absolute constant $C>1$. As long as the coefficients of $x$ are
integers bounded by $n^{O(1)}$, the algorithm runs in time
$O(K^{2}n\log^{c}n)$ for some constant $c$.
By Lemma 3.1, using this on $B=P$ and ${\cal M}=\mathcal{T}_{K}$ gives the
second result in Figure 1:
###### Theorem 5.2.
There exists a deterministic EMD/EMD recovery scheme with
$m=O(\frac{1}{\epsilon^{2}}k\log(n/k))$ and distortion
$C=O(\epsilon\sqrt{\log(n/k)})$. Recovery takes $O(k^{2}n\log^{c}n)$ time for
some constant $c$.
## 6 Beyond tree sparsity
The previous section achieved $O(\sqrt{\log n})$ distortion deterministically
with $O(k\log(n/k))$ rows. In this section, we improve the distortion to an
arbitrarily small constant $\epsilon$ at the cost of making the algorithm
randomized. To do this, we show that EMD under the pyramid transform is
aligned with a stronger model than just tree sparsity—the model can restrict
the values of the coefficients as well as the sparsity pattern. We then give a
randomized algorithm for $\ell_{1}/\ell_{1}$ recovery in this model with
constant distortion.
###### Definition 6.1.
Define $T_{K}^{s}$ to be the family of sets $S\subseteq[t]$ such that (i) $S$
corresponds to a connected subset of $G$ containing the root and (ii)
$\left|S\cap G_{i}\right|\leq s$ for all $i$. We say that such an $S$ is _$K$
-tree-sparse with width $s$_.
###### Definition 6.2.
Define $\mathcal{M}\subset\mathcal{T}_{K}$ as
$\mathcal{M}=\left\\{y\in\mathbb{R}^{t}\middle|\begin{array}[]{l}\operatorname{supp}(y)\subseteq
S\text{ for some $S\in T_{K}^{s}$, and}\\\ y_{i}\geq 2\left\lVert
y_{C(i)}\right\rVert_{1}\forall i\in[t]\end{array}\right\\}.$
where $s=O(\frac{1}{\epsilon^{2}}k)$ comes from Lemma 4.1.
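Definition 6.2 can also be made concrete with a membership test. As an illustration we index the nodes of the 4-ary tree in level order (root $=0$, children of $i$ at $4i+1,\ldots,4i+4$); this indexing and the function name are our own, not the paper's.

```python
from collections import Counter

def in_model(y, s, K):
    """Check the two conditions of Definition 6.2 on a vector y indexed by
    the nodes of a full 4-ary tree in level order (parent of i > 0 is
    (i - 1) // 4): the support is a rooted subtree of width at most s and
    size at most K, and every entry is at least twice the sum of its
    children's entries (hence y is non-negative).
    """
    supp = [i for i, v in enumerate(y) if v != 0]
    if len(supp) > K or any(v < 0 for v in y):
        return False

    def depth(i):
        d = 0
        while i > 0:
            i, d = (i - 1) // 4, d + 1
        return d

    # (i): support is closed under parents and has width at most s per level
    if any(i != 0 and y[(i - 1) // 4] == 0 for i in supp):
        return False
    if any(cnt > s for cnt in Counter(depth(i) for i in supp).values()):
        return False
    # (ii): y_i >= 2 * ||y_{C(i)}||_1 for every node i
    return all(v >= 2 * sum(y[j] for j in range(4 * i + 1, 4 * i + 5)
                            if j < len(y))
               for i, v in enumerate(y))
```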
Note that every $y\in\mathcal{M}$ is non-negative, and
$(Px)_{S}\in\mathcal{M}$ for all $x\in\mathbb{R}_{+}^{n}$. With Lemma 4.1,
this implies:
###### Lemma 6.3.
There is model-alignment of $P$ with ${\cal M}$, i.e., they satisfy Property
B.
We will give a good algorithm for $\ell_{1}/\ell_{1}$ recovery over
$\mathcal{M}$.
### 6.1 Randomized $\ell_{1}/\ell_{1}$ recovery of $\mathcal{M}$
###### Theorem 6.4.
There is a distribution over $m\times t$ matrices $A$ with
$m=O(\frac{1}{\epsilon^{2}}k\log(n/k))$, and an algorithm that recovers
$y^{*}$ from $Ay$ in $O(\frac{1}{\epsilon^{2}}k\log(n/k))$ time with
$\left\lVert y^{*}-y\right\rVert_{1}\leq
C\min_{y^{\prime}\in\mathcal{M}}\left\lVert y-y^{\prime}\right\rVert_{1}$
with probability $1-k^{-\Omega(1)}$, for some constant $C$. We assume
$k=\Omega(\log\log n)$.
We will give an algorithm to estimate the support of $y$. Given a sketch of
$y$, it recovers a support $S\in T_{K}^{2s}$ with
$\left\lVert y_{\overline{S}}\right\rVert_{1}\leq
10\min_{y^{\prime}\in\mathcal{M}}\left\lVert y-y^{\prime}\right\rVert_{1}.$
We can then use the set query algorithm [Pri11] to recover a $y^{*}$ from a
sketch of size $O(\left|S\right|)$ with
$\left\lVert y^{*}-y_{S}\right\rVert_{1}\leq\left\lVert
y_{\overline{S}}\right\rVert_{1}.$
Then
$\left\lVert y^{*}-y\right\rVert_{1}\leq\left\lVert
y^{*}-y_{S}\right\rVert_{1}+\left\lVert y-y_{S}\right\rVert_{1}\leq
2\left\lVert y_{\overline{S}}\right\rVert_{1}\leq
20\min_{y^{\prime}\in\mathcal{M}}\left\lVert y-y^{\prime}\right\rVert_{1},$
as desired. Hence estimating the support of $y$ is sufficient.
### 6.2 Finding a good sparse support $S$ to $y$
Vectors $y^{\prime}\in\mathcal{M}$ have two properties that allow us to find
good supports $S\in T_{K}^{s}$ with constant distortion using only
$O(\left|S\right|)$ rows. First, $\operatorname{supp}(y^{\prime})$ forms a
tree, so the support can be estimated from the top down, level by level.
Second, each coefficient has value at least twice the sum of the values of its
children. This means that the cost of making a mistake in estimating the
support (and hence losing the entire subtree below the missing coefficient) is
bounded by twice the weight of the missing coefficient. As a result, we can
bound the global error in terms of the local errors made at each level.
Of course, $y$ may not be in $\mathcal{M}$. But $y$ is “close” to some
$y^{\prime}\in\mathcal{M}$, so if our algorithm is “robust”, it can recover a
good support for $y$ as well. Our algorithm is described in Algorithm 1.
Algorithm 1 Finding sparse support under $\mathcal{M}$
Definition of sketch matrix $A$. The algorithm is parameterized by a width
$s$. Let $h_{i}$ be a random hash function from $G_{i}$ to a range of size
$O(s)$, for $i\in[\log(n/s)]$. Then define $A^{\prime}(i)$ to be the
$O(s)\times\left|G_{i}\right|$ matrix representing $h_{i}$, i.e.,
$A^{\prime}(i)_{ab}=1$ if $h_{i}(b)=a$ and $0$ otherwise. Choose $A$ to be the
vertical concatenation of the $A^{\prime}(i)$’s.
Recovery procedure.
$\triangleright$ Find approximate support $S$ to $y$ from $b=Ay$
procedure FindSupport($A$, $b$)
$T_{\log(n/s)}\leftarrow G_{\log(n/s)}$ $\triangleright$
$\left|T_{\log(n/s)}\right|\leq 2s$
for $i=\log(n/s)-1\dotsc 0$ do
$\triangleright$ Estimate $y$ over $C(T_{i+1})$.
$y^{*}_{j}\leftarrow b_{h_{i}(j)}$ for $j\in C(T_{i+1})$.
$\triangleright$ Select the $2s$ largest elements of our estimate.
$T_{i}\leftarrow{\displaystyle\operatorname*{arg\,max}_{\begin{subarray}{c}T^{\prime}\subseteq
C(T_{i+1})\\\ \left|T^{\prime}\right|\leq 2s\end{subarray}}}\left\lVert
y^{*}_{T^{\prime}}\right\rVert_{1}$
end for
$S\leftarrow\displaystyle\bigcup_{i=0}^{\log(n/s)}T_{i}\cup\bigcup_{i\geq\log(n/s)}G_{i}$
end procedure
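The recovery procedure above can be sketched in code. The simulation below stores each level as a dictionary, hashes levels into buckets with a random affine map (our stand-in for the $h_{i}$; the bucket count and node indexing are assumptions of this sketch), and recovers the support top-down, keeping the $2s$ children with the largest bucket estimates.

```python
import random
from collections import defaultdict

def find_support(y_levels, s, seed=0):
    """Top-down support recovery in the spirit of Algorithm 1. y_levels[i]
    maps node -> value at level i (level 0 = finest); the children of node
    j at level i+1 are nodes 4*j .. 4*j+3 at level i. Returns a set of
    (level, node) pairs. The top stored level is taken wholesale, as the
    algorithm does for the coarsest levels.
    """
    rng = random.Random(seed)
    L = len(y_levels)
    u = 32 * 8 * s                      # hash-table size, as in the analysis
    params = [(rng.randrange(1, u), rng.randrange(u)) for _ in range(L)]

    def h(i, j):                        # affine hash, standing in for h_i
        a, b = params[i]
        return (a * j + b) % u

    buckets = [defaultdict(float) for _ in range(L)]
    for i in range(L):                  # simulate the sketch b = Ay
        for j, v in y_levels[i].items():
            buckets[i][h(i, j)] += v

    T = list(y_levels[L - 1])           # take the whole top level
    support = {(L - 1, j) for j in T}
    for i in range(L - 2, -1, -1):
        children = [4 * j + b for j in T for b in range(4)]
        est = {c: buckets[i][h(i, c)] for c in children}   # overestimates
        T = sorted(children, key=lambda c: -est[c])[:2 * s]
        support |= {(i, j) for j in T}
    return support
```

On a toy input whose mass sits on a single root-to-leaf path, the path survives the greedy pruning.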
###### Lemma 6.5.
Algorithm 1 uses a binary sketching matrix of $O(s\log(n/s))$ rows and takes
$O(s\log(n/s))$ time to recover $S$ from the sketch.
###### Proof.
The algorithm looks at $O(\log(n/s))$ levels. At each level it finds the top
$2s$ of $4\times 2s$ values, which can be done in linear time. The algorithm
requires a sketch with $O(\log(n/s))$ levels of $O(s)$ cells each. ∎
The algorithm estimates the value of $y_{C(T_{i+1})}$ by hashing all of
$y_{G_{i}}$ into an $O(s)$ size hash table, then estimating $y_{j}$ as the
value in the corresponding hash table cell. Since $y$ is non-negative, this is
an overestimate. We would like to claim that the $2s$ largest values in our
estimate approximately contain the $s$ largest values in $y_{C(T_{i+1})}$. In
particular, we show that any $y_{j}$ we miss is either (i) not much larger
than $s$ of the coordinates we do output or (ii) very small relative to the
coordinates we already missed at a previous level.
###### Lemma 6.6.
In Algorithm 1, for every level $i$ let $w_{i}=\max_{q\in C(T_{i+1})\setminus
T_{i}}y_{q}$ denote the maximum value that is skipped by the algorithm and let
$f_{i}=\left\lVert y_{G_{i+1}\setminus T_{i+1}}\right\rVert_{1}$ denote the
error from coordinates not included in $T_{i+1}$. Let $c_{i}$ denote the
$s$-th largest value in $y_{T_{i}}$. Then with probability at least
$1-e^{-\Omega(s)}$, $w_{i}\leq\max\\{\frac{f_{i}}{4s},2c_{i}\\}$ for all
levels $i$.
###### Proof.
Define $s^{\prime}=8s\geq\left|C(T_{i+1})\right|$. We make the hash table size
at each level equal to $u=32s^{\prime}$. We will show that, with high
probability, there are at most $s$ coordinates $p$ where $y^{*}_{p}$ is more
than $f_{i}/s^{\prime}$ larger than $y_{p}$. Once this is true, the result
comes as follows: $y^{*}$ is an overestimate, so the top $2s$ elements of
$y^{*}$ contain at least $s$ values that have been overestimated by at most
$f_{i}/s^{\prime}$. Because the algorithm passes over an element of value
$w_{i}$, each of these $s$ values must actually have value at least
$w_{i}-f_{i}/s^{\prime}$. Hence either
$w_{i}<2f_{i}/s^{\prime}=\frac{f_{i}}{4s}$ or all $s$ values are at least
$w_{i}/2$.
To bound the number of badly overestimated coordinates, we split the noise in
two components: the part from $G_{i}\setminus C(T_{i+1})$ and the part from
$C(T_{i+1})$. We will show that, with probability $1-e^{-\Omega(s)}$, the
former is at most $f_{i}/s^{\prime}$ in all but $s/4$ locations and the latter
is zero in all but $3s/4$ locations.
WLOG we assume that the function $h_{i}$ is first fixed for $G_{i}\setminus
C(T_{i+1})$, then randomly chosen for $C(T_{i+1})$. Let $O_{i}\subset[u]$ be
the set of “overflow buckets” $l$ such that the sum $s_{l}=\sum_{p\notin
C(T_{i+1}),h_{i}(p)=l}y_{p}$ is at least $f_{i}/s^{\prime}$. By the definition
of $f_{i}$, $\sum_{l}s_{l}=f_{i}/2$, so
$\frac{|O_{i}|}{u}\leq\frac{f_{i}/2}{f_{i}/s^{\prime}}\cdot\frac{1}{u}=\frac{s^{\prime}/2}{32s^{\prime}}=\frac{1}{64}.$
Thus, the probability that a fixed child $q\in C(T_{i+1})$ is mapped to
$O_{i}$ is at most $1/64$. This is independent over $C(T_{i+1})$, so the
Chernoff bound applies. Hence with probability at least $1-e^{-\Omega(s)}$,
the number of $q\in C(T_{i+1})$ mapping to $O_{i}$ is at most twice its
expectation, or $\left|C(T_{i+1})\right|/32=s/4$.
We now bound the collisions within $C(T_{i+1})$. Note that our process falls
into the “balls into bins” framework, but for completeness we will analyze it
from first principles.
Let $Z$ be the number of cells in $C(T_{i+1})$ that collide. $Z$ is a function
of the independent random variables $h_{i}(p)$ for $p\in C(T_{i+1})$, and $Z$
changes by at most $2$ if a single $h_{i}(p)$ changes (because $p$ can cause
at most one otherwise non-colliding element to collide). Hence by McDiarmid’s
inequality,
$\Pr[Z\geq\operatorname{E}[Z]+t]\leq e^{-t^{2}/(2s^{\prime})}$
But we know that the chance that a specific $p$ collides with any of the
others is at most $s^{\prime}/u=1/32$. Hence $\operatorname{E}[Z]\leq
s^{\prime}/32$, and
$\Pr[Z\geq(\frac{1}{32}+\epsilon)s^{\prime}]\leq
e^{-\epsilon^{2}s^{\prime}/2}.$
By setting $\epsilon=2/32$ we obtain that, with probability $1-e^{-\Omega(s)}$
we have that $Z\leq\frac{3s^{\prime}}{32}=3s/4$.
Hence with probability $1-e^{-\Omega(s)}$, only $3s/4$ locations have non-zero
corruption from $C(T_{i+1})$, and we previously showed that with the same
probability only $s/4$ locations are corrupted by more than
$f_{i}/s^{\prime}$ from outside $C(T_{i+1})$. By the union bound, this is true
for all levels with probability at least $1-(\log
n)e^{-\Omega(s)}=1-e^{-\Omega(s)}$. ∎
###### Lemma 6.7.
Let $S$ be the result of running Algorithm 1 on $y\in\mathbb{R}^{t}$. Then
$\left\lVert y_{\overline{S}}\right\rVert_{1}\leq
10\min_{y^{\prime}\in\mathcal{M}}\left\lVert y-y^{\prime}\right\rVert_{1}$
with probability at least $1-e^{-\Omega(s)}$.
###### Proof.
From the algorithm definition, $T_{i}=S\cap G_{i}$ for each level $i$. Let
$y^{\prime}\in\mathcal{M}$ minimize $\left\lVert
y-y^{\prime}\right\rVert_{1}$, and let $U=\operatorname{supp}(y^{\prime})$. By
the definition of $\mathcal{M}$, $U\in T_{K}^{s}$.
For each $i$, define $V_{i}=U\cap C(T_{i+1})\setminus T_{i}$ to be the set of
nodes in $U$ that could have been chosen by the algorithm at level $i$ but
were not. For $q\in U\setminus S$, define $R(q)$ to be the highest ancestor of
$q$ that does not lie in $S$; hence $R(q)$ lies in $V_{i}$ for some level $i$.
Then
$\displaystyle\left\lVert
y^{\prime}_{\overline{S}}\right\rVert_{1}=\left\lVert y^{\prime}_{U\setminus
S}\right\rVert_{1}$ $\displaystyle=\sum_{q\in U\setminus S}y^{\prime}_{q}$
$\displaystyle=\sum_{i}\sum_{p\in V_{i}}\sum_{R(q)=p}y^{\prime}_{q}$
$\displaystyle\leq\sum_{i}\sum_{p\in V_{i}}2y^{\prime}_{p}$
$\displaystyle=2\sum_{i}\left\lVert y^{\prime}_{V_{i}}\right\rVert_{1},$ (5)
where the inequality holds because each element of $y^{\prime}$ is at least
twice the sum of its children. Hence the sum of $y^{\prime}$ over a subtree is
at most twice the value of the root of the subtree.
Define the error term $f_{i}=\left\lVert y_{G_{i+1}\setminus
T_{i+1}}\right\rVert_{1}$, and suppose that the statement in Lemma 6.6
applies, as happens with probability $1-e^{-\Omega(s)}$. Then for any level $i$
and $p\in V_{i}$, if $c_{i}$ is the $s$th largest value in $y_{T_{i}}$, then
$y_{p}\leq\max\\{f_{i}/(4s),2c_{i}\\}$, and in particular $y_{p}\leq\frac{f_{i}}{4s}+2c_{i}$.
Since $y_{T_{i}}$ contains at least $s$ values larger than $c_{i}$, and at
most $\left|U\cap T_{i}\right|=\left|U\cap
C(T_{i+1})\right|-\left|V_{i}\right|\leq s-\left|V_{i}\right|$ of them lie in
$U$, $y_{T_{i}\setminus U}$ must contain at least $\left|V_{i}\right|$ values
larger than $c_{i}$. This, combined with $\left|V_{i}\right|\leq s$, gives
$\displaystyle\left\lVert y_{V_{i}}\right\rVert_{1}\leq f_{i}/4+2\left\lVert
y_{T_{i}\setminus U}\right\rVert_{1}.$ (6)
Combining Equations (5) and (6), we get
$\displaystyle\left\lVert y^{\prime}_{\overline{S}}\right\rVert_{1}$
$\displaystyle\leq
2[\sum_{i}\left\lVert(y-y^{\prime})_{V_{i}}\right\rVert_{1}+\left\lVert
y_{V_{i}}\right\rVert_{1}]$ $\displaystyle\leq
2\left\lVert(y-y^{\prime})_{U}\right\rVert_{1}+\sum_{i}\left(4\left\lVert
y_{T_{i}\setminus U}\right\rVert_{1}+f_{i}/2\right)$ $\displaystyle\leq
2\left\lVert(y-y^{\prime})_{U}\right\rVert_{1}+4\left\lVert y_{S\setminus
U}\right\rVert_{1}+\left\lVert y_{\overline{S}}\right\rVert_{1}/2$
$\displaystyle=2\left\lVert(y-y^{\prime})_{U}\right\rVert_{1}+4\left\lVert(y-y^{\prime})_{S\setminus
U}\right\rVert_{1}+\left\lVert y_{\overline{S}}\right\rVert_{1}/2$
$\displaystyle\leq 4\left\lVert y-y^{\prime}\right\rVert_{1}+\left\lVert
y_{\overline{S}}\right\rVert_{1}/2.$
Therefore
$\displaystyle\left\lVert y_{\overline{S}}\right\rVert_{1}$
$\displaystyle\leq\left\lVert y-y^{\prime}\right\rVert_{1}+\left\lVert
y^{\prime}_{\overline{S}}\right\rVert_{1}$ $\displaystyle\leq 5\left\lVert
y-y^{\prime}\right\rVert_{1}+\left\lVert y_{\overline{S}}\right\rVert_{1}/2$
$\displaystyle\left\lVert y_{\overline{S}}\right\rVert_{1}$ $\displaystyle\leq
10\left\lVert y-y^{\prime}\right\rVert_{1}$
as desired. ∎
### 6.3 Application to EMD recovery
By Lemma 3.1 our $\ell_{1}/\ell_{1}$ recovery algorithm for ${\cal M}$ gives
an $\text{EMD}/\text{EMD}$ recovery algorithm.
###### Theorem 6.8.
Suppose $k=\Omega(\log\log n)$. There is a randomized EMD/EMD recovery scheme
with $m=O(\frac{1}{\epsilon^{2}}k\log(n/k))$, $C=\epsilon$, and success
probability $1-k^{-\Omega(1)}$. Recovery takes
$O(\frac{1}{\epsilon^{2}}k\log(n/k))$ time.
## 7 Wavelet-based method
We can also instantiate the framework of Section 3 using a reweighted Haar
wavelet basis instead of $P$ for the embedding $B$. We will have $\mathcal{M}$
be the tree-sparse model $\mathcal{T}_{O(\frac{1}{\epsilon^{2}}k\log(n/k))}$,
and use the $\ell_{1}/\ell_{1}$ recovery scheme of Section 5.
The details are deferred to Appendix D. We obtain an embedding $W$ defined by
a Haar transform $H$ (after rescaling the rows), and the following theorem:
###### Theorem 7.1.
There exists a matrix $A$ with $O(k\log(n/k))$ rows such that we can recover
$x^{*}$ from $Ax$ with
$\left\lVert x^{*}-x\right\rVert_{EMD}\leq
C\min_{y\in\mathcal{T}_{K}}\left\lVert Wx-y\right\rVert_{1}\leq
C\min_{k\text{-sparse }x^{\prime}}\left\lVert x-x^{\prime}\right\rVert_{EMD}$
for some distortion $C=O(\sqrt{\log(n/k)})$.
Note that if we ignore the middle term, this gives the same EMD/EMD result as
in Section 5. However, the middle term may be small for natural images even if
the rightmost term is not. In particular, it is well known that images tend to
be tree-sparse under $H$.
#### Acknowledgements
The authors would like to thank Yaron Rachlin from Draper Lab for numerous
conversations and the anonymous reviewers for helping clarify the
presentation.
## References
* [AV99] P.K. Agarwal and K. Varadarajan. Approximation algorithms for bipartite and non-bipartite matching in the plane. SODA, 1999.
* [BCDH10] R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde. Model-based compressive sensing. IEEE Transactions on Information Theory, 56(4):1982–2001, 2010.
* [BGI+08] R. Berinde, A. Gilbert, P. Indyk, H. Karloff, and M. Strauss. Combining geometry and combinatorics: a unified approach to sparse signal recovery. Allerton, 2008.
* [CCFC02] M. Charikar, K. Chen, and M. Farach-Colton. Finding frequent items in data streams. ICALP, 2002.
* [CDDD01] A. Cohen, W. Dahmen, I. Daubechies, and R. DeVore. Tree approximation and optimal encoding. Applied and Computational Harmonic Analysis, 2001.
* [Cha02] M. Charikar. Similarity estimation techniques from rounding. In STOC, pages 380–388, 2002.
* [CIHB09] V. Cevher, P. Indyk, C. Hegde, and RG Baraniuk. Recovery of clustered sparse signals from compressive measurements. SAMPTA, 2009.
* [CM04] G. Cormode and S. Muthukrishnan. Improved data stream summaries: The count-min sketch and its applications. Latin, 2004.
* [CM06] G. Cormode and S. Muthukrishnan. Combinatorial algorithms for compressed sensing. Sirocco, 2006.
* [CRT06] E. J. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59(8):1208–1223, 2006.
* [DDT+08] M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk. Single-pixel imaging via compressive sampling. IEEE Signal Processing Magazine, 2008.
* [DIPW10] K. Do Ba, P. Indyk, E. Price, and D. Woodruff. Lower bounds for sparse recovery. SODA, 2010.
* [Don06] D. L. Donoho. Compressed Sensing. IEEE Trans. Info. Theory, 52(4):1289–1306, Apr. 2006.
* [FPRU10] S. Foucart, A. Pajor, H. Rauhut, and T. Ullrich. The gelfand widths of lp-balls for $0<p\leq 1$. preprint, 2010.
* [FS05] G. Frahling and C. Sohler. Coresets in dynamic geometric data streams. STOC, 2005.
* [GD05] K. Grauman and T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. ICCV, 2005.
* [GI10] A. Gilbert and P. Indyk. Sparse recovery using sparse matrices. Proceedings of IEEE, 2010.
* [GIP10] R. Gupta, P. Indyk, and E. Price. Sparse recovery for earth mover distance. Allerton, 2010.
* [HPM04] S. Har-Peled and S. Mazumdar. Coresets for k-means and k-medians and their applications. STOC, 2004.
* [Ind04] P. Indyk. Algorithms for dynamic geometric problems over data streams. STOC, 2004.
* [Ind07] P. Indyk. Sketching, streaming and sublinear-space algorithms. Graduate course notes, available at `http://stellar.mit.edu/S/course/6/fa07/6.895/`, 2007.
* [IT03] P. Indyk and N. Thaper. Fast color image retrieval via embeddings. Workshop on Statistical and Computational Theories of Vision (at ICCV), 2003.
* [Low04] D. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
* [Lyu05] S. Lyu. Mercer kernels for object recognition with local features. CVPR, 2005.
* [Mut05] S. Muthukrishnan. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science, 2005.
* [NT08] D. Needell and J. A. Tropp. Cosamp: Iterative signal recovery from incomplete and inaccurate samples. Arxiv math.NA 0803.2392, 2008.
* [Pri11] E. Price. Efficient sketches for the set query problem. SODA, 2011.
* [Rom09] J. Romberg. Compressive sampling by random convolution. SIAM Journal on Imaging Science, 2009.
* [RTG00] Y. Rubner, C. Tomasi, and L. Guibas. The earth mover’s distance as a metric for image retrieval. International Journal of Computer Vision, 40(2):99–121, 2000.
* [SDS95] E.J. Stollnitz, A.D. DeRose, and D.H. Salesin. Wavelets for computer graphics: a primer. Computer Graphics and Applications, 1995.
## Appendix A Invertibility of Pyramid Transform
If $b$ were $(Px)_{S}$ for some $S$, then the problem would be fairly easy,
since $b$ tells us the mass $p_{q}$ in each cell $q$ (in particular, if $q$ is at
level $i$, $p_{q}=\frac{b_{q}}{2^{i}}$). Define the _surplus_
$s_{q}=p_{q}-\sum_{r\in C(q)}p_{r}$ to be the mass estimated in the cell that
is not found in the cell’s children.
We start from the case when all surpluses are non-negative (as is the case for
$(Px)_{S}$). In this case, we can minimize $\left\lVert b-Py\right\rVert_{1}$
by creating $s_{q}$ mass anywhere in cell $q$.
Algorithm 1 Recovering $y$ from $b$ to minimize $\left\lVert
b-Py\right\rVert_{1}$ when all surpluses are non-negative.
For every cell $q\in G$, let $e_{q}\in\mathbb{R}^{n}$ denote an elementary
unit vector with the $1$ located somewhere in $q$ (for example, at the center
of $q$). Then return
$y=\sum_{q\in G}s_{q}e_{q}.$
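A minimal Python sketch of Algorithm 1 follows. The tree representation (`children`, `level`, and a representative coordinate `rep[q]` for each cell) is hypothetical, chosen only for illustration; masses follow the convention $p_{q}=b_{q}/2^{i}$ for $q$ at level $i$:

```python
# Sketch of Algorithm 1 (illustrative representation, not the paper's code).
# A cell q at level i holds mass p_q = b_q / 2^i; the surplus
# s_q = p_q - (sum of children's masses) is placed at a representative
# coordinate rep[q] inside q.

def recover_y(b, children, level, rep, n):
    """b: dict cell -> b_q; children: dict cell -> list of child cells;
    level: dict cell -> level i; rep: dict cell -> index in [0, n)."""
    y = [0.0] * n
    for q, bq in b.items():
        p_q = bq / 2 ** level[q]
        p_kids = sum(b.get(r, 0.0) / 2 ** level[r]
                     for r in children.get(q, []))
        s_q = p_q - p_kids            # surplus (assumed non-negative here)
        y[rep[q]] += s_q
    return y
```

On a two-level example with $b_{r}=2$ at level $1$ and children $b_{a}=0.5$, $b_{b}=0.25$ at level $0$, the surpluses telescope so that the recovered $y$ has total mass exactly $p_{r}=1$.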
###### Lemma A.1.
Suppose $b$ is such that $s_{q}\geq 0$ for all $q\in G$. Let $y$ be the result
of running Algorithm 1 on $b$. Then $y$ minimizes $\left\lVert
b-Py\right\rVert_{1}$.
###### Proof.
The vector $y$ has the property that $(Py)_{q}\geq b_{q}$ for all $q\in G$,
and for the root node $r$ we have $(Py)_{r}=b_{r}$. Because the weights are
exponential in the level value, any $y^{\prime}$ minimizing $\left\lVert
b-Py^{\prime}\right\rVert_{1}$ must have $(Py^{\prime})_{r}\geq b_{r}$, or
else increasing any coordinate of $y^{\prime}$ would decrease $\left\lVert
b-Py^{\prime}\right\rVert_{1}$. But then
$\displaystyle\left\lVert b-Py^{\prime}\right\rVert_{1}$
$\displaystyle=\sum_{i=0}^{\log\Delta}\sum_{q\in
G_{i}}\left|(Py^{\prime})_{q}-b_{q}\right|$
$\displaystyle\geq\sum_{i=0}^{\log\Delta}\sum_{q\in
G_{i}}(Py^{\prime})_{q}-b_{q}$
$\displaystyle=\sum_{i=0}^{\log\Delta}\left(2^{i-\log\Delta}(Py^{\prime})_{r}-\sum_{q\in
G_{i}}b_{q}\right)$
$\displaystyle=(2-2^{-\log\Delta})(Py^{\prime})_{r}-\left\lVert
b\right\rVert_{1}$ $\displaystyle\geq(2-2^{-\log\Delta})b_{r}-\left\lVert
b\right\rVert_{1}.$
Equality holds if and only if $(Py^{\prime})_{q}\geq b_{q}$ for all $q\in G$
and $(Py^{\prime})_{r}=b_{r}$. Since $y$ has these properties, $y$ minimizes
$\left\lVert b-Py\right\rVert_{1}$. ∎
Unfortunately, finding the exact solution is harder when some surpluses
$s_{q}$ may be negative. Then in order to minimize $\left\lVert
b-Py\right\rVert_{1}$ one must do a careful matching up of positive and
negative surpluses. In order to avoid this complexity, we instead find a
greedy 8-approximation. We modify $b$ from the top down, decreasing values of
children until all the surpluses are non-negative.
Algorithm 2 Modifying $b$ to form all non-negative surpluses
Perform a preorder traversal of $G$. At each node $q$ at level $i$, compute
the surplus $s_{q}$. If $s_{q}$ is negative, arbitrarily decrease $b$ among
the children of $q$ by a total of $2^{i-1}\left|s_{q}\right|$, so that $b$
remains non-negative.
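Algorithm 2 can be sketched in the same hypothetical tree representation. The children of a level-$i$ node sit on level $i-1$, so decreasing their $b$ values by a total of $2^{i-1}\left|s_{q}\right|$ removes exactly $\left|s_{q}\right|$ mass:

```python
# Sketch of Algorithm 2 (same illustrative representation as above).
# Preorder traversal: at node q on level i with negative surplus s_q,
# decrease b among q's children by a total of 2^(i-1) * |s_q|.

def fix_surpluses(b, children, level, root):
    b = dict(b)                       # work on a copy
    stack = [root]
    while stack:
        q = stack.pop()
        kids = children.get(q, [])
        s_q = b.get(q, 0.0) / 2 ** level[q] - sum(
            b.get(r, 0.0) / 2 ** level[r] for r in kids)
        if s_q < 0:
            deficit = 2 ** (level[q] - 1) * (-s_q)
            for r in kids:            # arbitrary split: greedy left-to-right
                take = min(b.get(r, 0.0), deficit)
                b[r] = b.get(r, 0.0) - take
                deficit -= take
        stack.extend(kids)            # every node is visited before its kids
    return b
```

After the traversal every surplus is non-negative and $b$ stays non-negative, as the algorithm requires.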
###### Lemma A.2.
Suppose we run Algorithm 2 on a vector $b$ to get $b^{\prime}$. Then
$\left\lVert b-b^{\prime}\right\rVert_{1}\leq 3\min_{y}\left\lVert
Py-b\right\rVert_{1}.$
###### Proof.
Let $y$ minimize $\left\lVert Py-b\right\rVert_{1}$. As with $Py^{\prime}$ for
any $y^{\prime}$, $Py$ has zero surplus at every node.
At the point when we visit a node $q$, we have updated our estimate of $b$ at
$q$ but not at its children. Therefore if $q$ is at level $i$ we compute
$s_{q}=\frac{1}{2^{i}}b^{\prime}_{q}-\frac{1}{2^{i-1}}\sum_{s\in C(q)}b_{s}$.
Then, because $Py$ has zero surplus,
$\displaystyle\left|s_{q}\right|$
$\displaystyle=\left|\frac{1}{2^{i}}b^{\prime}_{q}-\frac{1}{2^{i}}(Py)_{q}-\frac{1}{2^{i-1}}\sum_{s\in
C(q)}(b_{s}-(Py)_{s})\right|$
$\displaystyle\leq\frac{1}{2^{i}}\left|b^{\prime}_{q}-b_{q}\right|+\frac{1}{2^{i}}\left|b_{q}-(Py)_{q}\right|+\frac{1}{2^{i-1}}\sum_{s\in
C(q)}\left|b_{s}-(Py)_{s}\right|.$
Define $f_{i}=\sum_{q\in G_{i}}\left|b_{q}-(Py)_{q}\right|$ to be the original
$\ell_{1}$ error on level $i$, and $g_{i}=\sum_{q\in
G_{i}}\left|b^{\prime}_{q}-b_{q}\right|$ to be a bound on the amount of error
we add when running the algorithm. Because we only modify values enough to
rectify the surplus of their parent, we have
$\displaystyle g_{i-1}$ $\displaystyle\leq 2^{i-1}\sum_{q\in
G_{i}}\left|s_{q}\right|$ $\displaystyle\leq\sum_{q\in
G_{i}}\frac{1}{2}\left|b^{\prime}_{q}-b_{q}\right|+\frac{1}{2}\left|b_{q}-(Py)_{q}\right|+\sum_{s\in
C(q)}\left|b_{s}-(Py)_{s}\right|$
$\displaystyle\leq\frac{1}{2}g_{i}+\frac{1}{2}f_{i}+f_{i-1}.$
Unrolling the recursion, we get
$\displaystyle g_{i}$ $\displaystyle\leq
f_{i}+\sum_{j=1}^{\log\Delta-i}\frac{1}{2^{j-1}}f_{i+j}$
$\displaystyle\left\lVert
b^{\prime}-b\right\rVert_{1}=\sum_{i=0}^{\log\Delta}g_{i}$
$\displaystyle\leq\sum_{i=0}^{\log\Delta}3f_{i}=3\left\lVert
Py-b\right\rVert_{1}$
as desired. ∎
This lets us prove Lemma 4.3.
* Lemma 4.3.
Given any approximation $b$ to $Px$, running the previous two algorithms gives
a $y$ with
$\left\lVert Py-Px\right\rVert_{1}\leq 8\left\lVert b-Px\right\rVert_{1}$
in $O(\left|\operatorname{supp}(b)\right|)$ time.
###### Proof.
By running Algorithm 2 on $b$, we get $b^{\prime}$ with $\left\lVert
b-b^{\prime}\right\rVert_{1}\leq 3\left\lVert Px-b\right\rVert_{1}$. Then we
run Algorithm 1 on $b^{\prime}$ to get $y$ that minimizes $\left\lVert
Py-b^{\prime}\right\rVert_{1}$. Then
$\displaystyle\left\lVert Py-Px\right\rVert_{1}$ $\displaystyle\leq\left\lVert
Py-b^{\prime}\right\rVert_{1}+\left\lVert Px-b^{\prime}\right\rVert_{1}$
$\displaystyle\leq 2\left\lVert Px-b^{\prime}\right\rVert_{1}$
$\displaystyle\leq 2(\left\lVert Px-b\right\rVert_{1}+\left\lVert
b^{\prime}-b\right\rVert_{1})$ $\displaystyle\leq 8\left\lVert
Px-b\right\rVert_{1}.$
To bound the recovery time, note that after Algorithm 2 visits a node with
value $0$, it sets the value of every descendant of that node to $0$. So it
can prune its descent when it first leaves $\operatorname{supp}(b)$, and run
in $O(\left|\operatorname{supp}(b)\right|)$ time. Furthermore, this means
$\left|\operatorname{supp}(b^{\prime})\right|\leq\left|\operatorname{supp}(b)\right|$
and $\operatorname{supp}(b^{\prime})$ is a top-down tree. Hence Algorithm 1
can iterate through the support of $b^{\prime}$ in linear time. ∎
## Appendix B Model-based compressive sensing
In this section we first provide a quick review of model-based sparse
recovery, including the relevant definitions, algorithms and their guarantees.
We then show how to augment the algorithm so that it provides the guarantees
that are needed for our EMD algorithms.
### B.1 Background
#### Model-based RIP
Given a signal model ${\cal M}_{K}$, we can formulate the ${\cal
M}_{K}$-restricted isometry property (${\cal M}_{K}$-RIP) of an $m\times n$
matrix $A$, which suffices for performing sparse recovery.
###### Definition B.1.
A matrix $A$ satisfies the ${\cal M}_{K}$-RIP with constant $\delta$ if for
any $x\in{\cal M}_{K}$, we have
$(1-\delta)\left\lVert x\right\rVert_{2}\leq\left\lVert
Ax\right\rVert_{2}\leq(1+\delta)\left\lVert x\right\rVert_{2}$
It is known that random Gaussian matrices with $m=O(k\log(n/k))$ rows satisfy
the $\Sigma_{k}$-RIP (i.e., the “standard” RIP), with very high probability,
and that this bound cannot be improved [DIPW10]. In contrast, it has been
shown that in order to satisfy the ${\cal T}_{K}$-RIP, only $m=O(K)$ rows
suffice [BCDH10]. The intuitive reason behind this is that the number of
rooted trees of size $K$ is $2^{O(K)}$ while the number of sets of size $k$ is
$\binom{n}{k}=2^{\Theta(k\log(n/k))}$.
#### Algorithms
Given a matrix $A$ that satisfies the ${\cal M}_{K}$-RIP, one can show how to
recover an approximation to a signal from its sketch. The specific theorem
(proven in [BCDH10] and re-stated below) considers $\ell_{2}$ recovery of a
“noisy” sketch $Ax+e$, where $e$ is an arbitrary “noise” vector, while
$x\in{\cal M}_{K}$. In the next section we will use this theorem to derive an
$\ell_{1}$ result for a different scenario, where $x$ is an arbitrary vector,
and we are given its exact sketch $Ax$.
###### Theorem B.2.
Suppose that a matrix $A$ satisfies ${\cal M}^{(4)}_{K}$-RIP with constant
$\delta<0.1$. Moreover, assume that we are given a procedure that, given
$y\in\mathbb{R}^{n}$, finds $y^{*}\in{\cal M}_{K}$ that minimizes
$\|y-y^{*}\|_{2}$. Then there is an algorithm that, for any $x\in{\cal
M}_{K}$, given $Ax+e$, $e\neq 0$, finds $x^{*}\in{\cal M}_{K}$ such that
$\|x-x^{*}\|_{2}\leq C\|e\|_{2}$
for some absolute constant $C>1$. The algorithm runs in time
$O((n+T+MM)\log(\|x\|_{2}/\|e\|_{2}))$, where $T$ is the running time of the
minimizer procedure, and $MM$ is the time needed to perform the multiplication
of a vector by the matrix $A$.
Note that the algorithm in the theorem has a somewhat unexpected property: if
the sketch is nearly exact, i.e., $e\approx 0$, then the running time of the
algorithm becomes unbounded. The reason for this phenomenon is that the
algorithm iterates to drive the error down to $\left\lVert e\right\rVert_{2}$,
which takes longer when $e$ is small. However, as long as the entries of the
signals $x,x^{*}$ and the matrix $A$ have bounded precision, e.g., are
integers in the range $1,\dotsc,L$, one can observe that $O(\log L)$
iterations suffice.
The task of minimizing $\|y-y^{*}\|_{2}$ over $y^{*}\in{\cal M}_{K}$ can
typically be accomplished in time polynomial in $K$ and $n$. In particular,
for ${\cal M}_{K}={\cal T}_{K}$, there is a simple dynamic programming
algorithm solving this problem in time $O(k^{2}n)$. See, e.g., [CIHB09] for a
streamlined description of the algorithms for (a somewhat more general)
problem and references. For more mathematical treatment of tree
approximations, see [CDDD01].
The following lemma (from [NT08]) will help us bound the value of $\|e\|_{2}$.
###### Lemma B.3.
Assume that the matrix $A$ satisfies the (standard) $\Sigma_{s}$-RIP with
constant $\delta$. Then for any vector $z$, we have
$\|Az\|_{2}\leq\sqrt{1+\delta}(\|z_{S}\|_{2}+\|z\|_{1}/\sqrt{s})$, where $S$
is the set of the $s$ largest (in magnitude) coefficients of $z$.
For completeness, we also include a proof. It is different, and somewhat
simpler than the original one. Moreover, we will re-use one of the arguments
later.
###### Proof.
We partition the coordinates of $z$ into sets $S_{0},S_{1},$
$S_{2},\dotsc,S_{t}$, with $S_{0}=S$, such that (i) the coordinates in the set
$S_{j}$ are no larger (in magnitude) than the coordinates in the set
$S_{j-1}$, $j\geq 1$, and (ii) all sets but $S_{t}$ have size $s$. We have
$\displaystyle\|Az\|_{2}$ $\displaystyle\leq$
$\displaystyle\sum_{j=0}^{t}\|Az_{S_{j}}\|_{2}$ $\displaystyle\leq$
$\displaystyle\sqrt{1+\delta}(\|z_{S_{0}}\|_{2}+\sum_{j=1}^{t}\|z_{S_{j}}\|_{2})$
$\displaystyle\leq$
$\displaystyle\sqrt{1+\delta}(\|z_{S_{0}}\|_{2}+\sum_{j=1}^{t}\sqrt{s}(\|z_{S_{j-1}}\|_{1}/s))$
$\displaystyle\leq$
$\displaystyle\sqrt{1+\delta}(\|z_{S}\|_{2}+\|z\|_{1}/\sqrt{s})$
∎
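The shelling step above — each block's $\ell_{2}$ norm is bounded by the previous block's $\ell_{1}$ norm divided by $\sqrt{s}$ — can be verified numerically. The following sketch is illustrative, not part of the proof:

```python
import math

# Illustrative check of the shelling argument: sort |z| decreasingly and
# cut into blocks of size s. Each block's l2 norm is at most the previous
# block's l1 norm over sqrt(s), so the tail blocks past S_0 contribute at
# most ||z||_1 / sqrt(s) in total.

def shelling_bound_holds(z, s):
    mags = sorted((abs(v) for v in z), reverse=True)
    blocks = [mags[i:i + s] for i in range(0, len(mags), s)]
    tail_l2 = sum(math.sqrt(sum(v * v for v in blk)) for blk in blocks[1:])
    return tail_l2 <= sum(mags) / math.sqrt(s) + 1e-9

print(shelling_bound_holds([3.0, -2.0, 1.5, 1.0, -0.5, 0.25, 0.1], 2))
```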
### B.2 New result
We start from the following observation relating general sparsity and tree
sparsity. Consider $k$ and $K$ such that $K=c^{\prime}k\log(n/k)$ for some
constant $c^{\prime}$.
###### Claim B.4.
Assume $n=\frac{c^{l}-1}{c-1}$ for some (constant) integer $c$. Then there
exists a constant $c^{\prime}$ such that $\Sigma_{k}\subset{\cal T}_{K}$.
###### Proof.
It suffices to show that for any $S\subset[n]$ of size $k$ there exists a
rooted connected subset $T$ of $T(c,l)$ of size $K$ such that $S\subset T$.
The set $T$ is equal to $T^{\prime}\cup T^{\prime\prime}$, where (i)
$T^{\prime}$ consist of all nodes in the tree $T(c,l)$ up to level
$\lceil\log_{c}k\rceil$ and (ii) $T^{\prime\prime}$ consists of all paths from
the root to node $i$, for $i\in S$. Note that $|T^{\prime}|=O(k)$, and
$|T^{\prime\prime}\setminus T^{\prime}|=O(k(\log n-\log k))=O(k\log(n/k))$. ∎
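For intuition, here is a sketch of the construction for a complete binary tree ($c=2$) with heap indexing $1,\dotsc,n$ (node $v$ has parent $\lfloor v/2\rfloor$); the representation is ours, not the paper's:

```python
import math

# Illustrative Claim B.4 construction for a complete binary tree (c = 2)
# with heap indexing 1..n (node v has depth bit_length(v) - 1):
# T = T' (the top ~log2(k) levels) union T'' (the root-to-node path
# of each i in the sparse support S).

def tree_superset(S, n, k):
    top = math.ceil(math.log2(k)) if k > 1 else 0
    T = {v for v in range(1, n + 1) if v.bit_length() - 1 <= top}
    for i in S:
        while i >= 1:                 # walk the root-to-i path
            T.add(i)
            i //= 2
    return T
```

For $n=127$ and a support of size $k=4$, the resulting $T$ contains $S$, is rooted-connected, and has $O(k\log(n/k))$ nodes.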
This claim is used in the following way. As we will see later, in order to
provide the guarantee for recovery with respect to the model ${\cal T}_{K}$,
we will need to perform the recovery with respect to the model ${\cal
T}_{K}\oplus\Sigma_{k}$. From the claim it follows that we can instead perform
the recovery with respect to the model ${\cal T}^{(2)}_{K}\subset{\cal
T}_{2K}$.
Specifically, we show the following.
###### Theorem B.5.
Suppose that we are given a matrix and minimizer subroutine as in Theorem B.2
for ${\cal T}_{2K}$. Then, for any $x$, given the vector $Ax$, the
approximation $x^{*}$ computed by the algorithm in Theorem B.2 satisfies
$\|x-x^{*}\|_{1}\leq(1+2C\sqrt{(1+\delta)c^{\prime}\log(n/k)})\min_{x^{\prime}\in{\cal
T}_{K}}\|x-x^{\prime}\|_{1}$
###### Proof.
Let $x^{\prime}\in{\cal T}_{K}$ be the minimizer of $\|x-x^{\prime}\|_{1}$.
Let $T$ be a tree of size $K$ such that $x^{\prime}=x_{T}$, and define the
“$\ell_{1}$ approximation error”
$E=\|x-x^{\prime}\|_{1}=\|x_{\overline{T}}\|_{1}$
Let $P\subseteq\overline{T}$ be the set of the $k$ largest (in magnitude)
coordinates of $x_{\overline{T}}$. By Claim B.4 it follows that $P\subseteq
T^{\prime}$, for some $T^{\prime}\in{\cal T}_{K}$. Let $T^{\prime\prime}=T\cup
T^{\prime}$.
We decompose $Ax$ into
$Ax_{T^{\prime\prime}}+Ax_{\overline{T^{\prime\prime}}}=Ax_{T^{\prime\prime}}+e$.
Since $x_{T^{\prime\prime}}\in{\cal T}_{2K}$, by Theorem B.2 we have
$\displaystyle\|x_{T^{\prime\prime}}-x^{*}\|_{2}$ $\displaystyle\leq$
$\displaystyle C\|e\|_{2}$ (7)
Let $T^{*}$ be the support of $x^{*}$. Note that $|T^{*}|\leq 2K$.
Since $A$ satisfies the (standard) RIP of order $k$ with constant
$\delta=0.1$, by Lemma B.3 we have
$\left\lVert e\right\rVert_{2}\leq\sqrt{1+\delta}[\left\lVert
x_{S}\right\rVert_{2}+\left\lVert x_{\overline{T\cup
P}}\right\rVert_{1}/\sqrt{k}]$
where $S\subset\overline{T\cup P}$ is the set of the $k$ largest (in
magnitude) coordinates of $x_{\overline{T\cup P}}$. By the definition of $P$,
every coordinate of $\left|x_{S}\right|$ is not greater than the smallest
coordinate of $|x_{P}|$. By the same argument as in the proof of Lemma B.3 it
follows that $\left\lVert x_{S}\right\rVert_{2}\leq\left\lVert
x_{P}\right\rVert_{1}/\sqrt{k}$, so
$\displaystyle\left\lVert e\right\rVert_{2}$
$\displaystyle\leq\sqrt{(1+\delta)/k}\left\lVert
x_{\overline{T}}\right\rVert_{1}.$ (8)
We have
$\displaystyle\left\lVert x-x^{*}\right\rVert_{1}$
$\displaystyle=\left\lVert(x-x^{*})_{T^{\prime\prime}\cup
T^{*}}\right\rVert_{1}+\left\lVert(x-x^{*})_{\overline{T^{\prime\prime}\cup
T^{*}}}\right\rVert_{1}$ $\displaystyle\leq\left\lVert
x_{T^{\prime\prime}}-x^{*}\right\rVert_{1}+\left\lVert x_{T^{*}\setminus
T^{\prime\prime}}\right\rVert_{1}+\left\lVert(x-x^{*})_{\overline{T^{\prime\prime}\cup
T^{*}}}\right\rVert_{1}$ $\displaystyle=\left\lVert
x_{T^{\prime\prime}}-x^{*}\right\rVert_{1}+\left\lVert
x_{\overline{T^{\prime\prime}}}\right\rVert_{1}$ $\displaystyle\leq\left\lVert
x_{T^{\prime\prime}}-x^{*}\right\rVert_{1}+E$
$\displaystyle\leq\sqrt{4K}\left\lVert
x_{T^{\prime\prime}}-x^{*}\right\rVert_{2}+E$
$\displaystyle\leq\sqrt{4K}C\left\lVert e\right\rVert_{2}+E$
$\displaystyle\leq\sqrt{4K}C\sqrt{(1+\delta)/k}\left\lVert
x_{\overline{T}}\right\rVert_{1}+E$
$\displaystyle=(1+2C\sqrt{(1+\delta)K/k})E$
$\displaystyle=(1+2C\sqrt{(1+\delta)c^{\prime}\log(n/k)})E$
by Equations 7 and 8. ∎
## Appendix C Strict sparse approximation
In this section we show how to reduce the sparsity of an approximation down to
$k$ for an arbitrary norm $\|\cdot\|$. This reduction seems to be folklore, but we
could not find an appropriate reference, so we include it for completeness.
Consider a sparse approximation scheme that, given $Ax$, returns (not
necessarily sparse) vector $x^{*}$ such that $\|x^{*}-x\|\leq
C\min_{k\text{-sparse }x^{\prime}}\|x^{\prime}-x\|$; let $x^{\prime}$ be the
minimizer of the latter expression. Let $\hat{x}$ be the approximately
best $k$-sparse approximation to $x^{*}$, i.e., such that $\|\hat{x}-x^{*}\|\leq
C^{\prime}\min_{k\text{-sparse }x^{\prime\prime}}\|x^{\prime\prime}-x^{*}\|$;
let $x^{\prime\prime}$ be the minimizer of the latter expression. Note that
since $x^{\prime}$ is $k$-sparse, it follows that
$\|x^{\prime\prime}-x^{*}\|\leq\|x^{\prime}-x^{*}\|$.
###### Claim C.1.
We have
$\|\hat{x}-x\|\leq[(C^{\prime}+1)C+C^{\prime}]\|x^{\prime}-x\|$
###### Proof.
$\displaystyle\|\hat{x}-x\|$ $\displaystyle\leq$
$\displaystyle\|\hat{x}-x^{*}\|+\|x^{*}-x\|$ $\displaystyle\leq$
$\displaystyle C^{\prime}\|x^{\prime\prime}-x^{*}\|+\|x^{*}-x\|$
$\displaystyle\leq$ $\displaystyle C^{\prime}\|x^{\prime}-x^{*}\|+\|x^{*}-x\|$
$\displaystyle\leq$ $\displaystyle
C^{\prime}[\|x^{\prime}-x\|+\|x-x^{*}\|]+\|x^{*}-x\|$ $\displaystyle=$
$\displaystyle(C^{\prime}+1)\|x^{*}-x\|+C^{\prime}\|x^{\prime}-x\|$
$\displaystyle\leq$
$\displaystyle(C^{\prime}+1)C\|x^{\prime}-x\|+C^{\prime}\|x^{\prime}-x\|$
$\displaystyle=$ $\displaystyle[(C^{\prime}+1)C+C^{\prime}]\|x^{\prime}-x\|$ ∎
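Instantiated with the $\ell_{1}$ norm and an exact top-$k$ truncation (so $C^{\prime}=1$, giving a $(2C+1)$-approximation), the reduction can be sketched as follows; the names are illustrative:

```python
# Illustrative sketch of the reduction with the l1 norm: truncate the
# (possibly dense) output x_star to its k largest entries. An exact
# top-k step has C' = 1, so Claim C.1 gives a (2C + 1)-approximation.

def top_k(x, k):
    keep = sorted(range(len(x)), key=lambda i: -abs(x[i]))[:k]
    out = [0.0] * len(x)
    for i in keep:
        out[i] = x[i]
    return out

def l1(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))
```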
## Appendix D Wavelet-based method
We start by recalling the definition of the non-standard two-dimensional Haar
wavelet basis (see [SDS95] for an overview). Let $H\in\mathbb{R}^{n\times n}$
be the matrix with rows corresponding to the basis vectors. We will define $H$
in terms of the grids $G_{i}$. The first row of $H$ has all coordinates equal
to $1/n$. The rest of $H$ consists of three rows for each cell $C\in G_{i}$
for $i\geq 1$. For each cell $C$, the corresponding rows contain zeros outside
of the coordinates corresponding to $C$. The entries corresponding to $C$ are
defined as follows: (i) one row has entries equal to $2^{-i}$ for each entry
corresponding to the left half of $C$ and equal to $-2^{-i}$ for each entry
corresponding to the right half of $C$; (ii) the second row has entries equal
to $2^{-i}$ for the top half of $C$ and to $-2^{-i}$ for the bottom half; (iii)
and the third row has entries equal to $2^{-i}$ for the top left and bottom
right quadrants of $C$, and equal to $-2^{-i}$ for the other two quadrants.
We define $W$ to transform into the same basis as $H$, but with rescaled basis
vectors. In particular, the basis vectors from level $i$ are smaller by a
factor of $2^{2i-2}$, so the non-zero entries have magnitude $2^{2-3i}$. This
is equivalent to changing the coefficients of the corresponding rows of $W$ to
be $2^{i-2}$ rather than $2^{-i}$. Similarly, we rescale the all-positive
basis vector to have coefficients equal to $1/n^{3}$. Then $W=DH$ for some
diagonal matrix $D$.
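The row construction can be made concrete for a $4\times 4$ grid ($n=16$, with one level-$2$ cell and four level-$1$ cells). The sketch below builds the constant row plus three rows per cell, with entries $\pm 2^{-i}$, and checks that all rows are pairwise orthogonal, so $H$ is invertible:

```python
# Illustrative construction of the non-standard 2D Haar rows for a 4x4
# grid (n = 16). A cell of side 2^i (level i) contributes three rows with
# entries +-2^{-i}: left/right, top/bottom, and diagonal sign patterns.

def haar_rows_4x4():
    n_side, n = 4, 16
    idx = lambda r, c: r * n_side + c
    rows = [[1.0 / n] * n]                      # all-constant row
    cells = [(0, 0, 4), (0, 0, 2), (0, 2, 2), (2, 0, 2), (2, 2, 2)]
    for (r0, c0, size) in cells:
        v, mid = 1.0 / size, size // 2          # entries are +-2^{-i}
        for kind in ("lr", "tb", "diag"):
            row = [0.0] * n
            for dr in range(size):
                for dc in range(size):
                    left, top = dc < mid, dr < mid
                    sign = {"lr": left, "tb": top,
                            "diag": left == top}[kind]
                    row[idx(r0 + dr, c0 + dc)] = v if sign else -v
            rows.append(row)
    return rows

rows = haar_rows_4x4()
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
print(len(rows), all(abs(dot(rows[i], rows[j])) < 1e-12
                     for i in range(16) for j in range(i + 1, 16)))
```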
This rescaling is such that the columns of $W^{-1}$, call them $v_{i}$, all
have $\left\lVert v_{i}\right\rVert_{EMD}=1$. This is because the min-cost
matching moves each of $2^{2i}/2$ coefficients by $2^{i}/2$. So we have
$\displaystyle\left\lVert x\right\rVert_{EMD}$
$\displaystyle=\left\lVert\sum(Wx)_{i}v_{i}\right\rVert_{EMD}$
$\displaystyle\leq\sum\left\lVert(Wx)_{i}v_{i}\right\rVert_{EMD}$
$\displaystyle=\sum\left|(Wx)_{i}\right|=\left\lVert Wx\right\rVert_{1},$
which is Property A of the framework.
Property C is easy since $W$ has a known inverse (namely $H^{T}D^{-1}$),
giving $\left\lVert y-WW^{-1}y\right\rVert_{1}=0$ for all $y$. All that
remains to show is Property B.
###### Lemma D.1.
For all $x\in\mathbb{R}_{+}^{n}$, there exists a
$y\in\mathcal{T}_{O(\frac{1}{\epsilon^{2}}k\log(n/k))}$ with
$\left\lVert y-Wx\right\rVert_{1}\leq\epsilon\min_{k\text{-sparse
}x_{k}}\left\lVert x-x_{k}\right\rVert_{EMD}.$
###### Proof.
We will show this using Lemma 4.1 as a black box. We know there exists a
support $S$ of $Px$ corresponding to a tree of grid cells such that
$\left\lVert(Px)_{\overline{S}}\right\rVert_{1}\leq\epsilon\min_{k\text{-sparse
}x_{k}}\left\lVert x-x_{k}\right\rVert_{EMD}.$
Let $S^{\prime}$ be a support of $Wx$ that contains the all-constant basis
vector as well as, for each cell $C\in G_{i}$ in $S$ with $i\geq 1$, the three
coefficients in $Wx$ corresponding to $C$. Then $S^{\prime}$ is also a tree.
For any cell $C\in G_{i}$, let $u$ be the row in $P$ corresponding to $C$ and
$v$ be any of the three rows in $W$ corresponding to $C$. Then
$\left\lVert v\right\rVert_{\infty}=2^{i-2}=\frac{1}{4}\left\lVert
u\right\rVert_{\infty}.$
So the only difference between $v$ and $u$ is that (i) $v$ has one fourth the
magnitude in each coefficient and (ii) some coefficients of $v$ are negative,
while all of $u$ are positive. Hence for positive $x$, $\left|v\cdot
x\right|\leq\frac{1}{4}\left|u\cdot x\right|$. This gives
$\left\lVert(Wx)_{\overline{S}^{\prime}}\right\rVert_{1}\leq\frac{3}{4}\left\lVert(Px)_{\overline{S}}\right\rVert_{1}\leq\frac{3}{4}\epsilon\min_{k\text{-sparse
}x_{k}}\left\lVert x-x_{k}\right\rVert_{EMD},$
as desired. ∎
###### Theorem D.2.
Combining the above results, we obtain
$\left\lVert x^{*}-x\right\rVert_{EMD}\leq
C\min_{y\in\mathcal{T}_{K}}\left\lVert Wx-y\right\rVert_{1}\leq
C\min_{k\text{-sparse }x^{\prime}}\left\lVert x-x^{\prime}\right\rVert_{EMD}$
for some distortion $C=O(\sqrt{\log(n/k)})$.
∎
|
arxiv-papers
| 2011-04-25T03:49:54 |
2024-09-04T02:49:18.405758
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Piotr Indyk and Eric Price",
"submitter": "Eric Price",
"url": "https://arxiv.org/abs/1104.4674"
}
|
1104.4977
|
# On the formation location of Uranus and Neptune as constrained by dynamical
and chemical models of comets
J.J. Kavelaars Herzberg Institute of Astrophysics, National Research Council
of Canada, 5071 West Saanich Road, Victoria, BC V9E 2E7, Canada
JJ.Kavelaars@nrc.gc.ca Olivier Mousis Jean-Marc Petit Institut UTINAM, CNRS-
UMR 6213, Observatoire de Besançon, BP 1615, 25010 Besançon Cedex, France
Harold A. Weaver Space Department, Johns Hopkins University Applied Physics
Laboratory, 11100 Johns Hopkins Road, Laurel, MD 20723-6099, USA
###### Abstract
The D/H enrichment observed in Saturn’s satellite Enceladus is remarkably
similar to the values observed in the nearly-isotropic comets. Given the
predicted strong variation of D/H with heliocentric distance in the solar
nebula, this observation links the primordial source region of the nearly-
isotropic comets with the formation location of Enceladus. That is, comets
from the nearly-isotropic class were most likely fed into their current
reservoir, the Oort cloud, from a source region near the formation location of
Enceladus. Dynamical simulations of the formation of the Oort cloud indicate
that Uranus and Neptune are, primarily, responsible for the delivery of
material into the Oort cloud. In addition, Enceladus formed from material that
condensed from the solar nebula near the location at which Saturn captured its
gas envelope, most likely at or near Saturn’s current location in the solar
system. The coupling of these lines of evidence appears to require that Uranus
and Neptune were, during the epoch of the formation of the Oort cloud, much
closer to the current location of Saturn than they are currently. Such a
configuration is consistent with the Nice model of the evolution of the outer
solar system. Further measurements of the D/H enrichment in comets,
particularly in ecliptic comets, will provide an excellent discriminator among
various models of the formation of the outer solar system.
comets: general — Kuiper belt: general — planets and satellites: composition —
planets and satellites: dynamical evolution and stability — protoplanetary
disks
††slugcomment: Submitted to the Astrophysical Journal
## 1 Introduction
Levison (1996), following on previous work by Carusi et al. (1987) and others,
proposes two broad classes of comets, the ecliptic and the nearly isotropic.
Objects are selected into these dynamical classes by their Tisserand parameter
with respect to Jupiter. Levison finds that the value $T_{J}\sim 2$ results in
a secure boundary between comets from different reservoirs. Different
reservoirs likely indicate different source regions within the primordial
solar nebula. Determining the source regions from which the comet reservoirs
were first populated, and modeling the chemical evolution of those source
regions as constrained by observations of comets, will provide important clues
on the physical and chemical structure of the primordial solar system.
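The Tisserand parameter used for this classification has the standard form $T_{J}=a_{J}/a+2\cos i\sqrt{(a/a_{J})(1-e^{2})}$, where $a$, $e$, and $i$ are the comet's semimajor axis, eccentricity, and inclination, and $a_{J}$ is Jupiter's semimajor axis. A small sketch follows; the example orbits are illustrative, not taken from the paper:

```python
import math

# Standard Tisserand parameter with respect to Jupiter. Ecliptic comets
# have T_J > 2 and nearly-isotropic comets T_J < 2 in the classification
# referenced above. a in AU, inc in radians; example values are illustrative.

def tisserand_jupiter(a, e, inc, a_jup=5.2):
    return a_jup / a + 2.0 * math.cos(inc) * math.sqrt(
        (a / a_jup) * (1.0 - e ** 2))

# A Jupiter-family-like orbit vs. a retrograde Halley-type orbit:
print(tisserand_jupiter(3.5, 0.4, math.radians(10)))
print(tisserand_jupiter(17.9, 0.97, math.radians(162)))
```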
A comet’s origins in the primitive nebula can be probed by examining the
degree to which fossil deuterium is enriched compared to the protosolar
abundance. Calculations of the temporal and radial evolution of the deuterium
enrichment in the solar nebula can reproduce existing D/H measures for comets
(Drouart et al. 1999; Mousis et al. 2000; Horner et al. 2007). These
calculations show that the deuterium enrichment in water ice strongly depends
on the distance from the Sun at which the ice was formed. Comparing the D/H
value measured in comets with those predicted by such models allows retrieval
of their formation location.
The measurement of the D/H ratio at Enceladus by the Ion and Neutral Mass
Spectrometer aboard the $Cassini$ spacecraft (Waite et al. 2009) provides a
new, and tighter, constraint on the deuterium enrichment profile in the outer
solar nebula, prompting us to reconsider models presented in previous works. We
pay particular attention, in this analysis, to the source region of the
reservoir of nearly-isotropic comets under the conditions described in the
Nice model scenario (Levison et al. 2008) of the formation of the outer solar
system. We demonstrate that the measured D/H abundance ratios for Oort cloud
comets are consistent with their formation having been in the 10-15 AU zone of
the solar system. Further, comets with (D/H)${}_{\tt{H_{2}O}}\lesssim 5\times
10^{-4}$ are precluded from forming more than $\sim 15$ AU from the Sun.
## 2 Reservoirs of comets and their source regions
The ‘cometary reservoir’ is the region of semi-stable phase space from which
comets are currently being delivered, while the ‘source regions’ are those
parts of the primitive nebula in which the comets formed and were then
delivered to the reservoirs. Ecliptic and isotropic comets are being delivered
from at least two distinct reservoirs and, as such, are likely from different
source regions.
The reservoir of the ecliptic comets has been demonstrated to be the Kuiper
belt and may be, more precisely, the ‘scattered disk’ component of that
population (Duncan & Levison 1997). The source region of the Kuiper belt is a
matter of current debate. In the Nice model, Uranus and Neptune originate in
the 10–15 AU region of the primordial solar system and later are transported
to their current locations via dynamical interactions. During this process,
material in the 20–30 AU region is deposited into the Kuiper belt and
scattered disk. More classically, the source of the Kuiper belt may be the
remnant of an in situ population. Regardless, the ecliptic comets now being
delivered from some part of the Kuiper belt formed beyond the formation
location of Neptune.
For the isotropic comets the reservoir region is, generically, the Oort cloud
(see Dones et al. 2004 for a good review). Some fraction of the isotropic
comets with $a<20,000$ AU may arrive from the ‘innermost’ component of this
distribution (Kaib & Quinn 2009), the remainder coming from the outer Oort
cloud. Modelling of delivery into the Oort cloud reservoir (e.g., Dones et al.
2004) generally finds this process to be controlled by Uranus-Neptune
scattering. The discovery of objects with large peri-centres, such as 2000
CR105 (Gladman et al. 2002) and (90377) Sedna (Brown, Trujillo & Rabinowitz
2004), motivated Brasser, Duncan & Levison (2006) and Kaib & Quinn (2008) to
examine the dynamics of Oort cloud formation in the presence of a stellar
birth cluster. They find that material from the Uranus-Neptune region of the
primordial solar system is effectively transported into the inner and outer
Oort cloud regions, the reservoirs of future nearly-isotropic comets.
Including the effect of gas-drag in the solar nebula (Brasser, Duncan &
Levison 2007) allows material in the ‘innermost’ Oort cloud to also be
delivered by Jupiter and Saturn. Uranus and Neptune, however, dominate the
post-nebula delivery. Thus, the Uranus-Neptune region appears to be the likely
source of material that now inhabits the inner and outer Oort clouds.
If Uranus and Neptune originated at (roughly) 12 and 15 AU then material
currently being delivered from the Oort cloud reservoir should have originated
from a source much closer to the Sun than in cases where Uranus and Neptune
formed at or near their current locations ($\sim$20 & 30 AU). A tracer of the
chemical evolution of the primordial solar system that is sensitive to
variations in the physical conditions between 10 and 30 AU, an example of
which is described in the next section, provides a discriminator between these
formation scenarios.
## 3 Isotopic fractionation of deuterium in the solar nebula
The main reservoir of deuterium in the solar nebula was molecular hydrogen (HD
vs. H2), and ion-molecule reactions in the interstellar medium (see e.g. Brown
& Millar 1989) cause fractionation among deuterated species. Consequently,
in the pre-solar cloud, fractionation resulted in heavier molecules being
enriched in deuterium. As the second most abundant hydrogen bearer in the
solar nebula, water became the second largest deuterium reservoir.
We follow the approaches of Drouart et al. (1999) and Mousis et al. (2000) who
described the evolution of the deuterium enrichment factor, $f$, that results
from the exchange between HD and HDO. $f$ is defined as the ratio of D/H in
H2O to that in molecular H2. Here we consider an additional constraint that
tightens the deuterium enrichment profiles calculated in Mousis et al. (2000).
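Numerically, $f$ is just the measured (D/H) in water divided by the protosolar value; a quick check against the central values in Table 1 (a sketch — the function name and structure are ours):

```python
# Deuterium enrichment factor f = (D/H)_H2O / (D/H)_protosolar.
# Protosolar D/H from Geiss & Gloeckler (1998); measured values from Table 1.
PROTOSOLAR_DH = 0.21e-4

def enrichment_factor(dh_water: float, dh_ref: float = PROTOSOLAR_DH) -> float:
    """Enrichment of D/H in water relative to the protosolar value."""
    return dh_water / dh_ref

# Central values from Table 1:
for name, dh in [("Enceladus", 2.9e-4),
                 ("C/1995 O1 (Hale-Bopp)", 3.3e-4),
                 ("C/2001 Q4 (NEAT)", 4.6e-4)]:
    print(f"{name}: f = {enrichment_factor(dh):.1f}")
# Enceladus: f = 13.8, Hale-Bopp: f = 15.7, C/2001 Q4: f = 21.9
```

These reproduce the $f$ column of Table 1 to the quoted precision.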
The recent measurement by the $Cassini$ spacecraft of the D/H ratio in the
plumes of Enceladus, one of the ice-rich regular moons of Saturn, shows that
this value is in the same range as those measured in comets (see Table 1). If
Enceladus formed near the current location of Saturn (which itself likely
formed within $\sim$1 AU of its current location), we can then pin the value of
$f$ at this location in the nebula.
We use the diffusion equation describing the evolution of $f$, together with
the solar nebula model of Mousis et al. (2000), in which the disk viscously
spreads out with time under the action of turbulence. The diffusion equation
takes into account the isotopic exchange between HDO and H2 in the
vapor phase, and turbulent diffusion throughout the solar nebula. The
diffusion equation remains valid as long as H2O does not condense, which
implies that the value of $f$ is “frozen” into the microscopic ices present at
the time and location of condensation. As the grains reach millimeter size,
they begin to decouple from the gas, leading to the formation of planetesimals.
This implies that the enrichment factor $f$ acquired by planetesimals is that
of the microscopic grains from which they formed, irrespective of the
planetesimals’ subsequent evolution. We consider the case where the
cometesimals (planetesimals that find their way to the cometary reservoirs)
were accreted only from icy grains formed locally to the reservoir source.
This statement is consistent with Horner et al. (2007), who conclude that there
is little diffusion due to turbulence, with grain transport limited to only a
few AU. This implies that the D/H ratio in the deuterated ices in comets is
the value at the time and location at which they condensed and may be used to
discriminate among models of the outer solar system’s evolution.
Figure 1 shows the evolution of $f$ as a function of distance from the Sun
in the case of the solar nebula defined by the parameters $\alpha$ = 0.003,
$R_{D0}$ = 15 AU and $M_{D0}$ = 0.06, each lying within the range
of possible values determined by Mousis et al. (2000). As in previous work, we
assume that $f$ is constant at $t$ = 0 irrespective of the heliocentric
distance and corresponds to the value measured in the highly enriched
component found in LL3 meteorites (D/H = $(73~{}\pm~{}12)~{}\times~{}10^{-5}$;
Deloule et al. 1998) compared to the protosolar value ($(2.1\pm 0.4)\times
10^{-5}$; Geiss & Gloeckler 1998). The highly enriched component in LL3
meteorites is presumed to originate from ISM grains that were not reprocessed
when entering the nebula (Mousis et al. 2000), and is consistent with D/H
measurements from the Infrared Space Observatory of grain mantles in W33A
(Teixeira et al. 1999).
For the adopted set of parameters, the deuterium enrichment profile
simultaneously matches the nominal D/H value measured in H2O in the moderately
enriched component of LL3 meteorites at 3 AU and, at the current heliocentric
distance of Saturn, the D/H enrichment of Enceladus. We were unable, in
this investigation, to find models matching both the moderately enriched
component of the LL3 meteorites at 3 AU and the value at Enceladus at 10 AU
that did not also require the value of $f$ to increase to much larger values
in the region beyond 15 AU. Thus, the result that $f$ in the 20-30 AU zone
should have exceeded $\sim 25$ is a generic outcome of the temperature
evolution of the disk, when constrained by the D/H measured at Enceladus, and
not particularly dependent on the model of that evolution.
## 4 Interpretation of the deuterium to hydrogen ratio measured at Enceladus
by the $Cassini$ spacecraft
One could argue that the building blocks of Enceladus were formed in Saturn’s
subnebula, implying that the D/H ratio in H2O measured at this satellite by
the $Cassini$ spacecraft might not be representative of the one acquired by
planetesimals condensed in Saturn’s feeding zone in the solar nebula. In order
to show that this hypothesis is unlikely, we have performed calculations of
the evolution of the D/H ratio in H2O in Saturn’s initially hot subnebula. The
hypothesis of an initially hot subnebula is required if one wants to assume
that the building blocks of the regular icy satellites, including Enceladus,
were formed $in~{}situ$. To do so, we have used the same turbulent disk model
utilized to describe the evolution of the D/H ratio in water in the solar
nebula, but in a version scaled to the plausible size and properties of
Saturn’s subnebula. This model has already been used to describe the
thermodynamic evolution of cold subnebulae around Saturn and Jupiter (Mousis
et al. 2002a; Mousis et al. 2002b; Mousis & Gautier 2004). Here we consider
the subdisk parameters of the initially hot Saturnian subnebula described by
Alibert & Mousis (2007), whose evolution was constrained by Saturn’s
formation models. The viscosity parameter, the initial mass, and the outer edge
of our Saturnian subnebula have been set to 2 $\times$ $10^{-4}$, 7 $\times$
$10^{-3}$ Saturn masses, and 200 Saturnian radii, respectively.
Figure 2 shows the temporal evolution of the temperature profile in the
midplane of Saturn’s subnebula. Because the initial temperature of the
subnebula is very high, any icy planetesimal entering the subdisk at early
epochs of its evolution should be devolatilized and would then enrich the gas
phase of the disk. In this model, ice forms again at the outer edge of the
subnebula at $t$ $\sim$ 3 $\times$ $10^{3}$ yr (once the gas temperature has
decreased down to $\sim$155 K at the corresponding pressure conditions) and
its condensation front reaches the orbit of Enceladus after only a few tens of
thousands of years of evolution.
Figure 3 represents the evolution of the D/H ratio in H2O in the subnebula
described with the same approach as in Section 3. We have assumed that the
deuterium enrichment factor, $f$, is equal to 13.8 (i.e., the value measured
at Enceladus by the $Cassini$ spacecraft) in the whole subnebula at $t$ = 0.
Due to the high temperature and pressure conditions that favor the isotopic
exchange between H2O and H2 within the subnebula, $f$ rapidly diminishes and
converges toward 1 in about 1000 years, prior to the condensation of ice (see
dashed curve in Figure 3). We find that planetesimals should present D/H
ratios in H2O very close to the protosolar value if they were condensed within
Saturn’s subnebula. The isotopic exchange is so efficient at the temperature
and pressure ranges likely to have been present in Saturn’s subnebula that $f$
would converge towards $\sim$1 for nearly any choice of initial value. The
$Cassini$ measurement at Enceladus shows that the D/H ratio in H2O present in
the plumes is strongly over-solar and we conclude that the building blocks of
this satellite must have formed in the solar nebula.
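The rapid convergence of $f$ toward 1 can be illustrated with a zero-dimensional toy model (ours, not the paper's subnebula calculation): exponential relaxation toward isotopic equilibrium at an assumed e-folding time.

```python
import math

# Toy relaxation of the enrichment factor toward equilibrium (f -> 1) in a
# hot subnebula.  The e-folding time tau_yr is an assumed illustrative value;
# the result quoted above is that f ~ 1 is reached in about 10^3 yr.
def f_relax(t_yr: float, f0: float = 13.8, tau_yr: float = 200.0) -> float:
    return 1.0 + (f0 - 1.0) * math.exp(-t_yr / tau_yr)

print(round(f_relax(0.0), 1))     # 13.8  (the Enceladus value, taken as start)
print(round(f_relax(1000.0), 2))  # 1.09  -> essentially protosolar
```

The actual calculation (Figure 3) solves the full exchange-plus-diffusion problem; this sketch only shows why the initial value hardly matters.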
## 5 Implications for the primordial origin of comets
The D/H ratio for cometary water ice is available for only a limited sample of
comets, two of which have been measured twice. These measurements (see
Table 1 and references therein) have been conducted using a variety of
methods: remote UV spectroscopy (C/2001 Q4 (NEAT)), mass spectroscopy
(1P/Halley), radio spectroscopy (C/1996 B2 (Hyakutake) and C/1995 O1 (Hale-
Bopp)), and infrared spectroscopy (8P/Tuttle). Despite the variety of
techniques used for the cometary measurements and the limitations of each (see
the footnotes to Table 1), a remarkably narrow range of D/H values has been
reported. Table 1 summarizes these results and also includes the result for
the D/H ratio of Enceladus from $Cassini$. The taxonomic classification using
the system of Levison (1996) is also provided. All of these comets are members
of the nearly-isotropic class. Comets like C/2001 Q4 (NEAT) almost certainly
originated from the outer Oort cloud reservoir, while the ‘external’
and ‘Halley type’ comets may, in fact, come from the innermost Oort cloud
(Kaib & Quinn 2009).
### 5.1 Isotropic comets
The isotropic comets have their origin in some part (inner-most, inner or
outer) of the Oort cloud. Based on the value of $f$ observed in the nearly-
isotropic comets ($\sim 13-23$) and our modeling of the evolution of $f$, the
cometesimals are most likely to have been delivered into the Oort cloud from a
source region between 10 and 14 AU from the Sun. We find that the value of $f$
interior to $\sim$10 AU is too low for the nearly-isotropic comets, implying
that Jupiter and Saturn were not responsible for populating this reservoir.
Further, in the classical picture of solar system formation, where Uranus and
Neptune form near their current locations of 20 and 30 AU, the ice-giants
would have delivered cometesimals to the Oort cloud with values of $f>25$,
which is not seen. We find that, for our model of deuterium evolution, having
a value of $f\sim 15$ (as required by the Enceladus measurement) at 10 AU and
$f\sim 15$ at 25 AU is not possible.
The Nice model for the formation of the solar system, however, asserts that
the formation location of Uranus/Neptune, and presumably then the region from
which they delivered the majority of the material into the Oort cloud, was
considerably nearer to present day Saturn, between 11 and 13 AU for Uranus and
13.5 and 17 AU for Neptune (Tsiganis et al., 2005). This is precisely the
zone of the primordial solar system in which our modeling indicates
cometesimals would have formed with values of $f$ similar to those observed in
the nearly-isotropic comets. Thus, the currently measured values of $f$ in the
isotropic comet population appear to support a more compact configuration for
the early solar system. Our knowledge of the dynamics of the formation of the
Oort cloud from a compact configuration remains uncertain; indeed, the origins
of the Oort cloud comets may be varied (Clube & Napier 1984, for example). The
homogeneity of D/H measures in Oort cloud comets, and the similarity of those
values to that measured for Enceladus, provide an interesting constraint for
such scenarios.
### 5.2 Ecliptic comets
At present, no comets in the ecliptic class have known D/H levels. The
$Rosetta$ mission, currently en route to the ecliptic comet 67P/Churyumov-
Gerasimenko, may alter this situation. Dynamical processes that populate the
ecliptic comet reservoir (either the Kuiper belt, scattering disk, or some
combination) all draw their source populations from beyond the orbit of
Neptune (at least beyond 17 AU). Based on our model of the radial dependence
of $f$ (see Figure 1), we predict that the measured D/H ratio in the ecliptic
comet population should exceed 24 times solar.
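In absolute terms, that prediction corresponds to a floor on the measurable ratio (a one-line check using the protosolar value quoted in Table 1):

```python
PROTOSOLAR_DH = 0.21e-4   # Geiss & Gloeckler (1998)
f_min_ecliptic = 24       # predicted minimum enrichment for ecliptic comets
dh_floor = f_min_ecliptic * PROTOSOLAR_DH
print(f"(D/H)_H2O > {dh_floor:.1e}")   # (D/H)_H2O > 5.0e-04
```

This is the same $\sim 5\times 10^{-4}$ threshold quoted in the introduction for comets precluded from forming beyond $\sim$15 AU.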
## 6 Conclusions
1P/Halley, 8P/Tuttle, C/1995 O1 (Hale-Bopp), C/1996 B2 (Hyakutake) and C/2001
Q4 (NEAT) all have D/H values that are consistent with or slightly larger than
that of Enceladus. These comets are all members of the nearly-isotropic class
and are, thus, drawn from a reservoir in some part of the Oort cloud. Based on
dynamical arguments, the Oort cloud itself was fed by material from the
Uranus/Neptune region. Our modeling of the dependence of $f$ (pinned by the
measured deuterium enrichment of Enceladus) on formation location (see Figure
1) precludes these comets from having formed beyond $\sim$15 AU from the Sun. This
implies that Uranus and Neptune were originally closer to the current location
of Saturn than observed today, a configuration quite similar to that preferred
in the Nice model. Future space probe missions and improved remote sensing
capabilities will likely provide a larger number and variety of cometary D/H
measurements and will surely increase the constraints on the primordial
configuration from which the planetary system evolved to its current state.
Helpful advice provided by Ramon Brasser is gratefully acknowledged. J.
Kavelaars acknowledges support provided by Embassy France. O. Mousis
acknowledges support provided by the Centre National d’Etudes Spatiales.
## References
* Alibert & Mousis (2007) Alibert, Y., & Mousis, O. 2007, A&A, 465, 1051
* Balsiger et al. (1995) Balsiger, H., Altwegg, K., & Geiss, J. 1995, J. Geophys. Res., 100, 5827
* Bockelee-Morvan et al. (1998) Bockelee-Morvan, D., et al. 1998, Icarus, 133, 147
* Brasser, Duncan & Levison (2006) Brasser, R., Duncan, M. J., & Levison, H. F. 2006, Icarus, 184, 59
* Brasser, Duncan & Levison (2007) Brasser, R., Duncan, M. J., & Levison, H. F. 2007, Icarus, 191, 413
* Brown et al. (2004) Brown, M. E., Trujillo, C., & Rabinowitz, D., 2004, ApJ, 617, 645
* Brown & Millar (1989) Brown, P. D., & Millar, T. J. 1989, MNRAS, 240, 25P
* Carusi et al. (1987) Carusi, A., Kresak, E. & Valsecchi, G., 1987, A&A, 187, 899
* Clube & Napier (1984) Clube, S. V. M., & Napier, W. M. 1984, MNRAS, 208, 575
* Crovisier et al. (2004) Crovisier, J., Bockelée-Morvan, D., Colom, P., Biver, N., Despois, D., Lis, D. C., & the Team for target-of-opportunity radio observations of comets 2004, A&A, 418, 1141
* Deloule et al. (1998) Deloule, E., Robert, F., & Doukhan, J. C. 1998, Geochim. Cosmochim. Acta, 62, 3367
* Dones et al. (2004) Dones, L., Weissman, P. R., Levison, H. F., & Duncan, M. J. 2004, Comets II, 153
* Drouart et al. (1999) Drouart, A., Dubrulle, B., Gautier, D., & Robert, F. 1999, Icarus, 140, 129
* Duncan & Levison (1997) Duncan, M. J., & Levison, H. F. 1997, Science, 276, 1670
* Eberhardt et al. (1995) Eberhardt, P., Reber, M., Krankowsky, D., & Hodges, R. R. 1995, A&A, 302, 301
* Geiss & Gloeckler (1998) Geiss, J., & Gloeckler, G. 1998, Space Science Reviews, 84, 239
* Gladman et al. (2002) Gladman, B., Holman, M., Grav, T., Kavelaars, J., Nicholson, P., Aksnes, K. and Petit, J.-M., 2002, Icarus, 157, 269
* Horner et al. (2007) Horner, J., Mousis, O., & Hersant, F. 2007, Earth Moon and Planets, 100, 43
* Kaib & Quinn (2008) Kaib, N. A., & Quinn, T. 2008, Icarus, 197, 221
* Kaib & Quinn (2009) Kaib, N. A., & Quinn, T. 2009, Science, 325, 1234
* Lellouch et al. (2001) Lellouch, E., Bézard, B., Fouchet, T., Feuchtgruber, H., Encrenaz, T., & de Graauw, T. 2001, A&A, 370, 610
* Levison (1996) Levison, H. F. 1996, in ASP Conf. Ser. 107, Completing the Inventory of the Solar System, ed. T. W. Rettig & J. M. Hahn (San Francisco, CA: ASP), 173
* Levison et al. (2008) Levison, H. F., Morbidelli, A., Vanlaerhoven, C., Gomes, R., & Tsiganis, K. 2008, Icarus, 196, 258
* Meier et al. (1998) Meier, R., Owen, T. C., Matthews, H. E., Jewitt, D. C., Bockelee-Morvan, D., Biver, N., Crovisier, J., & Gautier, D. 1998, Science, 279, 842
* Mousis et al. (2002a) Mousis, O., Gautier, D., & Bockelée-Morvan, D. 2002a, Icarus, 156, 162
* Mousis & Gautier (2004) Mousis, O., & Gautier, D. 2004, Planet. Space Sci., 52, 361
* Mousis et al. (2002b) Mousis, O., Gautier, D., & Coustenis, A. 2002b, Icarus, 159, 156
* Mousis et al. (2000) Mousis, O., Gautier, D., Bockelée-Morvan, D., Robert, F., Dubrulle, B., & Drouart, A. 2000, Icarus, 148, 513
* Teixeira et al. (1999) Teixeira, T. C., Devlin, J. P., Buch, V., & Emerson, J. P. 1999, A&A, 347, L19
* Tsiganis et al. (2005) Tsiganis, K., Gomes, R., Morbidelli, A., & Levison, H.F. 2005, Nature, 435, 459
* Villanueva et al. (2009) Villanueva, G. L., Mumma, M. J., Bonev, B. P., Di Santi, M. A., Gibb, E. L., Böhnhardt, H., & Lippi, M. 2009, ApJ, 690, L5
* Waite et al. (2009) Waite, J. H., Jr., et al. 2009, Nature, 460, 487
* Weaver et al. (2008) Weaver, H. A., A’Hearn, M. F., Arpigny, C., Combi, M. R., Feldman, P. D., Tozzi, G.-P., Dello Russo, N., & Festou, M. C. 2008, LPI Contrib., 1405, 8216
Table 1: Deuterium measurements in H2O in Enceladus and in different comets

Name | (D/H)${}_{\tt H_{2}O}$ ($\times~{}10^{-4}$) | $f$${}^{a}$ | Reference | Object Class
---|---|---|---|---
LL3 (high) | $7.3~{}\pm~{}1.2$ | 34.8 | Deloule et al. (1998) |
LL3 (low) | $0.88~{}\pm~{}0.11$ | 4.2 | Deloule et al. (1998) |
Enceladus | $2.9^{+1.5}_{-0.7}$ | 13.8 | Waite et al. (2009)${}^{b}$ | Regular icy satellite of Saturn
C/2001 Q4 (NEAT) | $4.6\pm 1.4$ | 21.9 | Weaver et al. (2008)${}^{c}$ | Isotropic, new
1P/Halley | $3.1^{+0.4}_{-0.5}$ | 14.7 | Balsiger et al. (1995)${}^{d}$ | Isotropic, returning, Halley type
$\cdots$ | $3.2~{}\pm~{}0.3$ | 15.0 | Eberhardt et al. (1995)${}^{e}$ | $\cdots$
C/1996 B2 (Hyakutake) | $2.9~{}\pm~{}1.0$ | 13.8 | Bockelée-Morvan et al. (1998)${}^{f}$ | Isotropic, returning, external
C/1995 O1 (Hale-Bopp) | $3.3~{}\pm~{}0.8$ | 15.7 | Meier et al. (1998)${}^{f}$ | Isotropic, returning, external
$\cdots$ | $4.7~{}\pm~{}1.1$ | 22.4 | Crovisier et al. (2004)${}^{f,g}$ | $\cdots$
8P/Tuttle | $4.1~{}\pm~{}1.5$ | 19.5 | Villanueva et al. (2009)${}^{h}$ | Isotropic, returning, Halley type

${}^{a}$ Enrichment of D/H in H2O compared to the protosolar D/H value of $(0.21~{}\pm~{}0.05)\times 10^{-4}$ (Geiss & Gloeckler 1998).
${}^{b}$ D/H in molecular hydrogen in the plume of material ejected from Enceladus; D/H in molecular hydrogen should be representative of D/H in water.
${}^{c}$ Ultraviolet measurements of atomic D and H in the coma; assumes HDO and H2O photolysis are the exclusive sources of D and H.
${}^{d}$ Ion mass spectrometer measurements of D/H in the hydronium ion (H3O+); assumes the same ratio holds in water.
${}^{e}$ Neutral and ion mass spectrometer measurements of D/H in the hydronium ion (H3O+), corrected for fractionation in the ratio for water.
${}^{f}$ HDO production rate derived from the measurement of a single submillimeter HDO line and a water production rate obtained from other observations made at a different time.
${}^{g}$ The authors also reported an upper limit of D/H $\lesssim 1.8~{}\times~{}10^{-4}$ using a different line, which is inconsistent with their detections from two other HDO lines.
${}^{h}$ The listed D/H is consistent, at the 3$\sigma$ level, with D/H $<4.35~{}\times~{}10^{-4}$.
Figure 1: Enrichment factor $f$ as a function of the heliocentric distance.
The dashed curves correspond to the evolution of $f$ in the gas phase prior to
condensation terminated by dots at the heliocentric distance where H2O
condenses at the given epoch. The solid curve represents the value of $f$
acquired by ice as a function of its formation distance in the nebula. D/H
enrichments in LL3 (low and high) meteorites and Enceladus are shown for
comparison. We take the LL3 (high) value as the initial value. The
vertical dotted lines enclose the source region of Uranus and Neptune in the
Nice model. The gray area corresponds to the dispersion of the central values
of $f$ in the comets for which measurements are available (see Table 1).
Figure 2: Temperature profiles at different epochs in the midplane of the
Saturnian subnebula, at times (from top to bottom) $t$ = 0, 5, 200, 400,
$10^{3}$, 2 $\times$ $10^{3}$, 3 $\times$ $10^{3}$, 5 $\times$ $10^{3}$, 7
$\times$ $10^{3}$, and $10^{4}$ yr as a function of the distance from Saturn
in units of Saturn radii. The dashed curve corresponds to the epoch $t=10^{3}$ yr
at which the deuterium enrichment factor of the D/H ratio in H2O reaches the
protosolar value in the whole subdisk (see Figure 3).
Figure 3: Enrichment factor $f$ of the D/H ratio in H2O with respect to the
protosolar value in the subnebula midplane, as a function of the distance to
Saturn (in units of Saturn radii), at times (from top to bottom) $t$ = 0, 0.1,
5, 20, 50, 100, 200, 400, and $10^{3}$ yr, see the text for details. The value
for $f$ at $t$ = 0 is taken to be equal to 13.8 (the value measured at
Enceladus by the $Cassini$ spacecraft), irrespective of the distance to Saturn
in the subdisk. At the epoch $t=10^{3}$ yr the deuterium enrichment factor in
H2O reaches the protosolar value in the whole subdisk. For Saturn D/H
$=1.7^{+0.75}_{-0.45}\times 10^{-5}$ (Lellouch et al., 2001) resulting in
$f\sim 0.8$.
arXiv:1104.4977 · Jj Kavelaars, Olivier Mousis, Jean-Marc Petit, and Harold A. Weaver · submitted 2011-04-26 · Public Domain

arXiv:1104.5088
# Tsallis entropy approach to radiotherapy treatments
O. Sotolongo-Grau osotolongo@dfmf.uned.es D. Rodriguez-Perez
daniel@dfmf.uned.es O. Sotolongo-Costa osotolongo@fisica.uh.cu J. C. Antoranz
jcantoranz@dfmf.uned.es Hospital General Gregorio Marañón, Laboratorio de
Imagen Médica, 28007 Madrid, Spain UNED, Departamento de Física Matemática y
de Fluidos, 28040 Madrid, Spain University of Havana, Cátedra de Sistemas
Complejos Henri Poincaré, Havana 10400, Cuba
###### Abstract
The biological effect of one single radiation dose on a living tissue has been
described by several radiobiological models. However, fractionated
radiotherapy requires accounting for a new magnitude: time. In this paper we
explore the biological consequences posed by the mathematical extension of a
single-dose model to fractionated treatment. Nonextensive composition rules are
introduced to obtain the survival fraction and equivalent physical dose in
terms of a time-dependent factor describing the tissue’s trend towards
recovering its radioresistance (a kind of repair coefficient). Interesting
(known and new) behaviors are described regarding the effectiveness of the
treatment, which is shown to be fundamentally bound to this factor. The
continuous limit, applicable to brachytherapy, is also analyzed in the
framework of nonextensive calculus. Also here a coefficient arises that rules
the time behavior. All the results are discussed in terms of the clinical
evidence and their major implications are highlighted.
###### keywords:
Radiobiology, Fractionated Radiotherapy, Survival fraction, Entropy
††journal: Physica A
## 1 Introduction
The effects of fractionated radiotherapy and single dose radiation may be
quite different depending on the gap between consecutive fractions. The larger
the gap is, the larger the difference, owing to the characteristic times of the
tissue’s recovery capabilities. Fractionated therapies are usually modeled by
including correction factors in single-dose expressions. Here, we will explore how to
include fractionation in a recently introduced model derived using the Tsallis
entropy definition [1] and the maximum entropy principle. As can be seen in
[2] (and other works in the same issue) nonextensive Tsallis entropy has
become a successful tool to describe a vast class of natural systems. The new
radiobiological model [3] (maxent model in what follows) takes advantage of
the Tsallis formulation to describe the survival fraction as a function of the
radiation dose, based on a minimum number of statistical and biologically
motivated hypotheses.
The maxent model assumes the existence of a critical dose, $D_{0}$, that
annihilates every single cell in the tissue. The radiation dose can be written
as a dimensionless quantity in terms of that critical dose as $x=d/D_{0}$,
where $d$ is the radiation dose. Then the support of the cell death
probability density function, $p(x)$, in terms of the received dose $x$,
becomes $\Omega=\left[0;1\right]$. A Tsallis entropy functional can be
written,
$S_{q}=\frac{1}{q-1}\left[1-\int_{0}^{1}p^{q}\left(x\right)dx\right],$ (1)
where $q$ is the nonextensivity index. The survival fraction of cells will be
given by $f\left(x\right)=\int_{x}^{1}p\left(x^{\prime}\right)dx^{\prime}$, that is, the
complement of the fraction of cells killed by radiation. In order to maximize
functional (1) we must consider the normalization condition,
$\int_{0}^{1}p\left(x\right)dx=1$ (2)
Also, following [4], we must assume the existence of a finite $q$-mean value
(or the mean value of the escort probability distribution),
$\int_{0}^{1}p^{q}\left(x\right)xdx=\left\langle x\right\rangle_{q}$ (3)
Then the Lagrange multipliers method leads to,
$p\left(x\right)=\gamma\left(1-x\right)^{\gamma-1},$ (4)
with $\gamma=\frac{q-2}{q-1}$. So, the survival fraction predicted by the model is
$f\left(x\right)=(1-x)^{\gamma},$ (5)
valid for $x\in\Omega$ and $\gamma>1$. This model has shown a remarkable
agreement with experimental data [3, 5], in particular in those limits where
previous models are less accurate, mainly at high doses. The analysis of the
model also provides new hints about the tissue response to radiation: first,
the interaction of a tissue with the radiation is universal and characterized
by a single exponent (not dependent on the radiation exposure); second, the
model includes a cutoff radiation dose above which every single cell dies.
Furthermore, previous models can be obtained as particular limiting cases.
Finally, as for those models, its mathematical expression is simple and can be
easily plotted and interpreted.
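Equation (5) is indeed simple to evaluate; a minimal implementation (variable names are ours):

```python
def survival_fraction(dose: float, d0: float, gamma: float) -> float:
    """Maxent survival fraction, Eq. (5): f(x) = (1 - x)^gamma with x = d/D0.
    Every cell dies at or above the critical dose D0; gamma > 1 is assumed."""
    x = dose / d0
    if x >= 1.0:
        return 0.0            # the model's cutoff: no survivors beyond D0
    return (1.0 - x) ** gamma

print(survival_fraction(5.0, 10.0, 2.0))   # 0.25
print(survival_fraction(12.0, 10.0, 2.0))  # 0.0
```

Note that the cutoff at $d=D_{0}$ is built in, unlike in linear-quadratic-type models where the survival fraction is strictly positive at every finite dose.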
Assuming (5) valid for a single radiation dose, the total survival fraction,
$F_{n}$, for a fractionated treatment consisting of $n$ doses should be found
as a composition of the survival probabilities of the successive radiation
doses. However, the survival fraction now lacks the extensivity property.
Indeed, if two doses, $x_{A}$ and $x_{B}$, are applied to a tissue, the
resulting survival fraction from their composition has two possible values. If
the dose is assumed to be additive, $f_{AB}=\left(1-x_{A}-x_{B}\right)^{\gamma}$,
the survival probabilities of individual cells under the $A$ and $B$ radiation
events could not be treated as independent probabilities, $f_{AB}\neq
f_{A}f_{B}$. On the other hand, if survival fractions are multiplicative,
$f_{AB}=\left(1-x_{A}\right)^{\gamma}\left(1-x_{B}\right)^{\gamma}$, doses would
not fulfill the superposition principle for the equivalent physical dose,
$x_{AB}\neq x_{A}+x_{B}$.
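The incompatibility of the two naive choices is easy to see numerically (illustrative numbers, ours):

```python
gamma, xA, xB = 2.0, 0.2, 0.3

f_additive = (1.0 - xA - xB) ** gamma            # doses add (Eq. (7)-style)
f_product  = ((1.0 - xA) * (1.0 - xB)) ** gamma  # fractions multiply (Eq. (6)-style)

print(round(f_additive, 4), round(f_product, 4))  # 0.25 0.3136 -- they disagree
```

Additive doses always kill more cells than independent fractions of the same total dose, which is why a single composition rule interpolating between the two limits is needed.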
The subject of this manuscript is to develop the composition rules that
would lead to the survival fraction and the equivalent physical dose of a
fractionated treatment, and to derive the biological implications of such
rules. This will be approached within the framework of the $q$-algebra [6, 7,
8], as far as it is the natural one for the nonextensive maxent model.
## 2 Composition rules for fractionation
As has just been discussed, if those composition rules are defined keeping
the superposition principle for the dose, the survival fractions are not
independent of each other and, _vice versa_, if the survival fractions are
multiplicative, the dose becomes non-additive [9]. So, other biophysical
properties of radiation tissue interaction need to be taken into account in
order to perform a meaningful composition.
Let us suppose several radiation events occur so separated in time that the
physical consequences of each are independent of the others. In other
words, survival probabilities for individual radiation doses are independent.
The survival fraction should then be the product of partial fractions, and for
the whole treatment,
$F_{n}=\prod_{i=1}^{n}\left(1-x_{i}\right)^{\gamma},$ (6)
where $i$ runs along the radiation sessions.
However, if the radiation doses occur simultaneously, _i.e._ , coming from
different beams concurrent at the same point of the tissue, the doses must be
additive and the total survival fraction follows,
$F_{n}=\left(1-\sum_{i=1}^{n}x_{i}\right)^{\gamma}$ (7)
In order to deal with a real treatment, new generalized sum and product
operations need to be introduced. Notice that it is possible to write (7) as a
product, by finding the expression that turns $F$ after $n-1$ fractions into
$F$ after $n$ fractions. Expression (7) can thus be written as,
$F_{n}=\left(1-\frac{x_{n}}{1-\sum_{k=1}^{n-1}x_{k}}\right)^{\gamma}F_{n-1}=\prod_{i=1}^{n}\left(1-\frac{x_{i}}{1-\sum_{k=1}^{i-1}x_{k}}\right)^{\gamma}$
(8)
This expression can be interpreted as a modified version of (6) in which the
denominator, which plays the role of the annihilation dose, gets reduced, in
practice, by an amount $x_{i}$ after the addition of the $i$-th fraction. For
independent fractions, on the other hand, this critical dose remains constant
along the treatment.
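As a quick numerical check, the telescoping product (8) can be verified to reproduce the additive expression (7); this is a sketch with illustrative, non-clinical dose values:

```python
import numpy as np

gamma = 8.0                              # illustrative tissue exponent
x = np.array([0.05, 0.10, 0.08, 0.12])   # illustrative dimensionless doses x_i

# Eq. (7): simultaneous (additive) doses.
F_additive = (1.0 - x.sum()) ** gamma

# Eq. (8): the same quantity as a telescoping product, where the effective
# annihilation dose is reduced by the doses already delivered.
F_product = 1.0
delivered = 0.0
for xi in x:
    F_product *= (1.0 - xi / (1.0 - delivered)) ** gamma
    delivered += xi

assert np.isclose(F_additive, F_product)
```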
Let us introduce a new nonextensive sum, $\bigoplus$, and product,
$\bigotimes$, operators consistently defined to hold,
$F_{n}=\bigotimes_{i=1}^{n}\left(1-x_{i}\right)^{\gamma}=\left(1-\bigoplus_{i=1}^{n}x_{i}\right)^{\gamma}=\prod_{i=1}^{n}\left(1-\frac{x_{i}}{1-\epsilon\bigoplus_{k=1}^{i-1}x_{k}}\right)^{\gamma},$
(9)
subject to the condition
$\bigoplus_{i=1}^{n}x_{i}\rightarrow\sum_{i=1}^{n}x_{i}$, for
$\epsilon\rightarrow 1$. Then, the coefficient $\epsilon\in\left[0,1\right]$
relates equations (6) and (8) such that $\epsilon=1$ implies radiation
fractions are completely correlated, while $\epsilon=0$ means they are fully
independent. According to the interpretation of both limits, the value of
$\epsilon$ will depend on the time between fractions and also on the tissue
repair or recovery capabilities.
## 3 Biological and physical implications
### 3.1 Isoeffect relationship
A single radiation fraction can be found whose effective dimensionless dose $X$
is equivalent to that of the whole fractionated treatment, such that,
$F_{n}=\left(1-X\right)^{\gamma}=\left(1-\bigoplus_{i=1}^{n}x_{i}\right)^{\gamma}$
(10)
After the $i$-th fraction, the dimensionless effective dose becomes,
$X_{i}=X_{i-1}+x_{i}\left(\frac{1-X_{i-1}}{1-\epsilon X_{i-1}}\right),$ (11)
assuming $X_{1}=x_{1}$. When the $n$-th fraction is given, then $X_{n}=X$.
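The recursion (11) interpolates between the two limits: for $\epsilon=1$ it reduces to plain dose additivity, Eq. (7), while for $\epsilon=0$ it reproduces the product of independent survival fractions, Eq. (6). A minimal sketch with illustrative values:

```python
import numpy as np

def effective_dose(x, eps):
    """Effective dimensionless dose X after all fractions, via the recursion (11)."""
    X = 0.0  # starting from X = 0 gives X_1 = x_1, as assumed in the text
    for xi in x:
        X = X + xi * (1.0 - X) / (1.0 - eps * X)
    return X

gamma = 8.0
x = np.array([0.05, 0.10, 0.08, 0.12])  # illustrative dimensionless doses

# eps = 1: fully correlated fractions -> doses are simply additive, Eq. (7)
assert np.isclose(effective_dose(x, 1.0), x.sum())

# eps = 0: fully independent fractions -> survival fractions multiply, Eq. (6)
X0 = effective_dose(x, 0.0)
assert np.isclose((1.0 - X0) ** gamma, np.prod((1.0 - x) ** gamma))
```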
All fractionated treatments sharing the same value of $X$ provide the same
value of the survival fraction. Thus, equality of $X$ provides the isoeffect
criterion for the fractionated therapy.
In order to check the reliability of the model, it has been fitted to data from
[10, 11, 12] using a weighted least-squares algorithm. These data sets are
considered a reliable source of clinical parameters (such as the $\alpha/\beta$
ratio of the LQ model [13]). The results of the fit are shown in Figure 1.
Figure 1: Isoeffect relationship data reported for mouse lung by [10]
($\epsilon=0.50$, $D_{0}=11.3\textrm{ Gy}$), mouse skin by [11]
($\epsilon=0.58$, $D_{0}=24.0\textrm{ Gy}$) and mouse jejunal crypt cells by
[12] ($\epsilon=0.62$, $D_{0}=16.1\textrm{ Gy}$), fitted to our model.
The obtained $\epsilon$ coefficients show a survival-fraction behavior far
from the pure $q$-algebraic limits ($\epsilon=0,1$). Since the $\epsilon$
values for usual tissue reactions differ from the limiting values, it is worth
studying further the biophysical interpretation of this new parameter.
Figure 2: Isoeffect curves for mouse jejunal crypt cells by [12]. Curves are
calculated based on fitted parameters $\epsilon=0.62$ and $D_{0}=16.1\textrm{
Gy}$ for different $X$ values of our model, shown for every plot.
Every $X$ value provides a different isoeffect relationship, as shown in
Figure 2. Once the coefficients involved in a treatment ($\epsilon$ and
$D_{0}$) are known, the treatment can be tuned to obtain the desired effective
dose by changing $n$ and $d$.
### 3.2 Critical dosage
Assuming the same physical dose per fraction, $x$, as is the case in many
radiotherapy protocols, expression (11) becomes a recursive map, describing
the behavior of the effective dose in a treatment. For a given $\epsilon$
there is a critical value of $x$,
$x_{c}=1-\epsilon,$ (12)
dividing the plane $(\epsilon,x)$ into two different regions (see Figure 3).
For a treatment with $x<x_{c}$, there will always be a surviving portion of the
tissue, since $X_{n}<1$ for all $n$. However, if $x>x_{c}$, after enough
fractions $X_{n}>1$, meaning that the effective dose has reached the critical
value and every single cell of the tissue has been removed by the treatment.
It is then possible to find, for a given therapy protocol, $n_{0}$, the
threshold value of $n$ that kills every cell. This is shown in the inset of
Figure 3.
Figure 3: The larger plot represents $n_{0}$ isolines as a function of $x$ and
$\epsilon$ (dashed lines) above $x_{c}(\epsilon)$ (solid line); below this
line, killing all tissue cells is impossible. The small one represents
critical values $n_{0}$ in terms of $x_{c}$.
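The critical behavior around $x_{c}=1-\epsilon$ can be reproduced directly from the map (11). The sketch below iterates $\delta_{n}=1-X_{n}$ (which avoids floating-point rounding near $X=1$) for an illustrative $\epsilon$: a dose per fraction below $x_{c}$ never kills all cells, while one above it does after a finite $n_{0}$.

```python
def n_to_kill(x, eps, n_max=200):
    """Smallest number n0 of equal fractions x such that X_n >= 1 (all cells
    killed), or None if X_n stays below 1 up to n_max.  Iterates delta = 1 - X,
    which by Eq. (11) obeys
        delta_n = delta_{n-1} * (1 - x / (1 - eps*(1 - delta_{n-1})))."""
    delta = 1.0
    for n in range(1, n_max + 1):
        delta *= 1.0 - x / (1.0 - eps * (1.0 - delta))
        if delta <= 0.0:
            return n
    return None

eps = 0.6          # illustrative correlation coefficient
x_c = 1.0 - eps    # critical dose per fraction, Eq. (12)

assert n_to_kill(0.9 * x_c, eps) is None       # below x_c: some tissue always survives
assert n_to_kill(1.1 * x_c, eps) is not None   # above x_c: all cells killed after n0 fractions
```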
If the desired result is the elimination of the irradiated tissue cells, _i.e._
surrounding tissue is not a concern for treatment planning, $n_{0}$ represents
the minimum number of sessions needed to achieve this goal; any session after
that will be unnecessary. On the contrary, if the therapy goal requires the
conservation of tissue cells (for instance in order to preserve an organ),
then the number of sessions must be lower than $n_{0}$.
The parameter $\epsilon$ is a cornerstone of isoeffect relationships. A
fractionated therapy of fully independent fractions requires a greater
radiation dose per fraction, or more fractions, in order to reach the same
isoeffect as a treatment with more correlated fractions. The $\epsilon$
coefficient acts here as a relaxation term. Immediately after radiation damage
occurs ($\epsilon=1$) the tissue begins to recover, as $\epsilon$ decreases,
until it eventually regains its initial radiation response capacity
($\epsilon=0$). In other words, the previously applied radiation results in a
decrease of the annihilation dose (initially equal to $D_{0}$) that describes
the effect of the next fraction. The more correlated a fraction is to the
previous one, the larger the value of $\epsilon$ and, thus, the larger the
effect on the critical dose. Notice that, unlike $\gamma$, which characterizes
the primary response of the tissue to radiation, $\epsilon$ characterizes the
tendency of the tissue to recover its previous radioresistance.
The correlation between fractions can be translated into the language of the
late and acute tissue effects of radiobiology. Indeed, the repair and recovery
capabilities of damaged tissue should determine the value of $\epsilon$. Given
a dosage protocol, an early-responding tissue would correspond to $\epsilon$
close to $0$, whereas a late-responding tissue would have $\epsilon$ closer to
$1$. Notice that in current working models for hyperfractionated therapies
these repair and recovery effects are introduced as empirical correction
factors [14], as will also be required for $\epsilon$.
As shown in [3], for single doses the nonextensive properties of the tissue
response to radiation are more noticeable at higher doses than predicted by
current models. In contrast, for fractionated therapies it is a lower dose per
fraction that brings out the nonextensive properties. Indeed, for high dosage
only a few fractions are applied in a treatment, and no change in $n$ is
required for different $\epsilon$ values. In the lower-dosage case, however,
more radiation fractions need to be applied and the $\epsilon$ parameter may
become crucial: the values of $n$ for isoeffect treatments with different
$\epsilon$ move apart from each other. So, in order to achieve the desired
therapy effects, a fractionated radiotherapy must be planned for a tissue
described by $\gamma$, varying $x$ according to $\epsilon$. This $\epsilon$
coefficient should be studied experimentally, as its value tunes the
annihilation dose along a radiotherapy protocol.
## 4 Continuous formulation
### 4.1 Continuous limit
For some radiation treatments, such as brachytherapy, the irradiation is
applied in a single session but over a prolonged period of time. If the
discrete irradiation sessions were close enough in time, (11) could be written as,
$\dot{X}=r\frac{1-X}{1-\epsilon X}$ (13)
In continuous irradiation the effective dose is in general small, and it is
possible to assume $\epsilon X\ll 1$ and $\frac{1}{1-\epsilon X}\simeq
1+\epsilon X$. Then,
$\dot{X}\simeq r\left[1-\left(1-\epsilon\right)X\right],$ (14)
where the terms of second order in $X$ and above have been neglected.
### 4.2 Continuous irradiation
It is obvious from the dose additivity properties that, in the continuous
irradiation case and for two time instants $t_{0}$ and $t_{1}$ close enough,
$X=\int_{t_{0}}^{t_{1}}rdt,$ (15)
where $r$ is the dose rate. However, if both instants of time are far enough
apart that the tissue recovery capabilities become relevant, this expression
becomes invalid. So, whereas the usual integration process is valid over a
short time period, this is not true for longer intervals and, in the same way
as was already done for the sum operation, a new definition of integration
must be introduced.
This can be done following [7] and introducing the $q$-algebraic sum and
difference,
$\begin{array}[]{c}x\boxplus y=x+y-\theta xy\\\ x\boxminus
y=\frac{x-y}{1-\theta y}\end{array}$ (16)
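Since $1-\theta\left(x\boxplus y\right)=\left(1-\theta x\right)\left(1-\theta y\right)$, the deformed sum is associative, and $\boxminus$ inverts $\boxplus$. A small numerical sanity check, with an illustrative value of $\theta$:

```python
theta = 0.4  # illustrative recovery coefficient

def qsum(x, y):   # x [+] y, Eq. (16)
    return x + y - theta * x * y

def qdiff(x, y):  # x [-] y, Eq. (16)
    return (x - y) / (1.0 - theta * y)

x, y, z = 0.2, 0.3, 0.1

# the deformed difference undoes the deformed sum
assert abs(qdiff(qsum(x, y), y) - x) < 1e-12

# the deformed sum is associative, since
# 1 - theta*(x [+] y) = (1 - theta*x)*(1 - theta*y)
assert abs(qsum(qsum(x, y), z) - qsum(x, qsum(y, z))) < 1e-12
```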
In these terms, a nonextensive derivative operation follows,
$\frac{{\cal D}}{dt}f=\lim_{t\rightarrow t_{0}}\frac{f\left(t\right)\boxminus
f\left(t_{0}\right)}{t-t_{0}}=\frac{\dot{f}}{1-\theta f}$ (17)
Then we can define the physical dose rate, $r$, as the nonextensive time
derivative of the equivalent dose,
$r=\frac{{\cal D}}{dt}X=\frac{\dot{X}}{1-\theta X}$ (18)
Expression (18) can be rewritten as a standard ODE,
$\dot{X}+\theta rX=r,$ (19)
which can be solved in the usual way, taking into account that $\theta$ and $r$
are in general functions of time. Due to the applied radiation ($r$), the
effective dose tends to increase linearly. However, a resistance term ($\theta
rX$), which depends not only on the tissue recovery characteristics but also
on the dose rate and on the effective dose itself, slows down this increase.
In order to illustrate the behavior of (19), let us suppose that $r$ is
constant (a common case in clinical practice) and that $\theta$ varies slowly
in time, so that it too can be taken as a constant. Then one straightforwardly
obtains,
$X=\frac{1}{\theta}\left\\{1-\exp\left(-\theta rt\right)\right\\},$ (20)
allowing one to find the irradiation time needed to kill every cell in the
tissue ($X=1$),
$t_{k}=-\frac{\ln\left(1-\theta\right)}{\theta r},$ (21)
and showing that effective dose increases at a decreasing speed,
$\dot{X}=r\exp\left(-\theta rt\right),$ (22)
until the tissue cells are annihilated at time $t_{k}$ ($X=1$). Under
continuous irradiation, the survival fraction decreases fastest at the
beginning of the irradiation process. Depending on the dose rate and on the
$\theta$ coefficient, the killing speed then slows down until eventually every
cell is killed. If the recovery capacity is very high ($\theta=1$), the
radiation effects accumulate slowly and there will always be surviving tissue
cells ($t_{k}=\infty$). The radiation damage accumulates faster the less
capable the tissue cells are of repairing it, and if there are no repair
processes at all ($\theta=0$) the effective radiation dose grows linearly in
time and the cells are killed fastest ($t_{k}=1/r$). This shortening of the
killing time with decreasing repair rate is also shown by other
radiobiological models [15].
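The closed-form results above are easy to verify numerically: the profile (20) satisfies the ODE (19), its rate is (22), and at the time $t_{k}$ of Eq. (21) the effective dose reaches $X=1$. A sketch with illustrative $\theta$ and $r$:

```python
import math

theta, r = 0.5, 0.02  # illustrative recovery coefficient and (constant) dose rate

def X(t):     # Eq. (20)
    return (1.0 - math.exp(-theta * r * t)) / theta

def Xdot(t):  # Eq. (22)
    return r * math.exp(-theta * r * t)

# X(t) solves the ODE (19): dX/dt + theta*r*X = r
for t in (0.0, 10.0, 50.0, 200.0):
    assert abs(Xdot(t) + theta * r * X(t) - r) < 1e-12

# at t_k of Eq. (21) every cell is killed: X(t_k) = 1
t_k = -math.log(1.0 - theta) / (theta * r)
assert abs(X(t_k) - 1.0) < 1e-12
```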
Comparing (19) and (14) we see that, in the limit of continuous dosage, they
become the same expression, with $\theta\simeq 1-\epsilon$. However, this
relation may become invalid at high exposures, as the effective dose becomes
larger and $\epsilon X$ gets closer to $1$; at this point the fractionated and
continuous treatments differ. So $\theta$ must be studied independently of
$\epsilon$, but if a continuous alternative therapy is desired, known
$\epsilon$ values can be a good starting point for finding $\theta$.
## 5 Conclusions
The use of Tsallis entropy and the second law of thermodynamics has allowed
us to write a simple nonextensive expression for the single-dose survival
fraction. The mathematical constraints required to define the composition of
probabilities, such that the two limiting behaviors are described, introduce a
new parameter relating the radiation sessions. The fits to the available
experimental data show that usual treatments have nontrivial values of this
parameter, _i.e._ , values not close to the limiting behaviors. This makes the
study of this coefficient relevant for clinical treatments and experimental
setups.
The existence of a critical dosage arises from these composition rules,
providing a criterion to adjust a treatment so as to kill every tumor cell or
to minimize the damage to healthy tissue. This could be achieved by changing
the number of sessions or the radiation dose per session, allowing one to
switch between isoeffective treatments.
An expression for the effective dose in continuous irradiation treatments has
also been found, and shown to be phenomenologically linked to the fractionated
one. This has the potential to provide isoeffect relationships in
continuous-dose treatments such as brachytherapy. Besides, a relation between
fractionated and continuous therapies could be established from the obtained
coefficients.
## Acknowledgments
The authors acknowledge the financial support from the Spanish Ministerio de
Ciencia e Innovación under the ITRENIO project (TEC2008-06715-C02-01).
## References
* [1] C. Tsallis. Nonextensive statistics: Theoretical, experimental and computational evidences and connections. Brazilian Journal of Physics, 29:1–35, 1999.
* [2] H. Swinney and C. Tsallis. Anomalous distributions, nonlinear dynamics, and nonextensivity. Physica D: Nonlinear Phenomena, 193(1-4):1 – 2, 2004.
* [3] O. Sotolongo-Grau, D. Rodríguez-Pérez, J. C. Antoranz, and O. Sotolongo-Costa. Tissue radiation response with maximum Tsallis entropy. Phys. Rev. Lett., 105(15):158105, 2010.
* [4] A. Plastino and A. R. Plastino. Tsallis entropy and Jaynes’ information theory formalism. Brazilian Journal of Physics, 29:50–60, 1999.
* [5] O. Sotolongo-Grau, D. Rodriguez-Perez, J.C. Antoranz, and O. Sotolongo-Costa. Non-extensive radiobiology. In A. Mohammad-Djafari, J-F. Bercher, and P. Bessiere, editors, Bayesian inference and maximum entropy methods in science and engineering (Proceedings of the 30th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, 4-9 July 2010, Chamonix, France), volume 1305 of AIP Conference Proceedings, pages 219–226. AIP, 2010.
* [6] L. Nivanen, A. Le Mehaute, and Q. A. Wang. Generalized algebra within a nonextensive statistics. Reports On Mathematical Physics, 52, 2003.
* [7] E. P. Borges. A possible deformed algebra and calculus inspired in nonextensive thermostatistics. Physica A, 340:95, 2004.
* [8] N. Kalogeropoulos. Algebra and calculus for Tsallis thermo-statistics. Physica A: Statistical Mechanics and its Applications, 356(2-4):408 – 418, 2005.
* [9] G. Kaniadakis, M. Lissia, and A. M. Scarfone. Two-parameter deformations of logarithm, exponential, and entropy: A consistent framework for generalized statistical mechanics. Physical Review E, 71, 2005.
* [10] C. S. Parkins, J. F. Fowler, R. L. Maughan, and M. J. Roper. Repair in mouse lung for up to 20 fractions of x rays or neutrons. Br J Radiol, 58(687):225–41, 1985.
* [11] B. G. Douglas and J. F. Fowler. The effect of multiple small doses of x rays on skin reactions in the mouse and a basic interpretation. Radiation Research, 66(2):401–426, 1976.
* [12] H. D. Thames, R. Withers, K. A. Mason, and B. O. Reid. Dose-survival characteristics of mouse jejunal crypt cells. International Journal of Radiation Oncology*Biology*Physics, 7(11):1591 – 1597, 1981.
* [13] A.J. van der Kogel and C.C.R. Arnout. Calculation of isoeffect relationships. In G.G. Steel, editor, Basic Clinical Radiobiology for Radiation Oncologists, pages 72–80. Edward Arnold Publishers, London, 1993.
* [14] M. C. Joiner. The linear-quadratic approach to fractionation. In G.G. Steel, editor, Basic Clinical Radiobiology for Radiation Oncologists, pages 55–64. Edward Arnold Publishers, London, 1993.
* [15] L. A. M. Pop, J. F. C. M. van den Broek, A. G. Visser, and A. J. van der Kogel. Constraints in the use of repair half times and mathematical modelling for the clinical application of hdr and pdr treatment schedules as an alternative for ldr brachytherapy. Radiotherapy and Oncology, 38(2):153 – 162, 1996.
# Double symmetry breaking of solitons in one-dimensional virtual photonic
crystals
Yongyao Li1,4 yongyaoli@gmail.com Boris A. Malomed2,3 Mingneng Feng1 Jianying
Zhou1 stszjy@mail.sysu.edu.cn 1State Key Laboratory of Optoelectronic
Materials and Technologies,
Sun Yat-sen University, Guangzhou 510275, China
2Department of Physical Electronics, School of Electrical Engineering, Faculty
of Engineering, Tel Aviv University, Tel Aviv 69978, Israel
3ICFO-Institut de Ciencies Fotoniques, Mediterranean Technology Park,08860
Castelldefels (Barcelona), Spain††thanks: temporary Sabbatical address
4Department of Applied Physics, South China Agricultural University, Guangzhou
510642, China
###### Abstract
We demonstrate that spatial solitons undergo two consecutive spontaneous
symmetry breakings (SSBs), with the increase of the total power, in nonlinear
photonic crystals (PhCs) built as arrays of alternating linear and nonlinear
stripes, in the case when maxima of the effective refractive index coincide
with minima of the self-focusing coefficient, and vice versa, i.e., the
corresponding linear and nonlinear periodic potentials are in competition.
This setting may be induced, as a virtual PhC, by means of the EIT
(electromagnetically-induced-transparency) technique, in a uniform optical
medium. It may also be realized as a Bose-Einstein condensate (BEC) subject to
the action of combined periodic optical potential and periodically modulated
Feshbach resonance. The first SSB happens at the center of a linear stripe,
pushing a broad low-power soliton into an adjacent nonlinear stripe and
gradually suppressing side peaks in the soliton’s shape. Then, the soliton
restores its symmetry, being pinned to the midpoint of the nonlinear stripe.
The second SSB occurs at higher powers, pushing the narrow soliton off the
center of the nonlinear channel, while the soliton keeps its internal
symmetry. The results are obtained by means of numerical and analytical
methods. They may be employed to control switching of light beams by means of
the varying power.
###### pacs:
42.65.Tg; 42.70.Qs; 05.45.Yv; 03.75.Lm
## I Introduction
It is commonly known that linear eigenstates supported by symmetric
potentials, in contexts such as quantum mechanics and photonic crystals
(PhCs), may be classified according to representations of the underlying
symmetry group LL . The addition of the nonlinearity frequently gives rise to
the effect of the spontaneous symmetry breaking (SSB), i.e., reduction of the
full symmetry group to its subgroups, in the generic situation. The basic
feature of the SSB is the transition from the symmetric ground state (GS) to
that which does not follow the symmetry of the potential. The simplest
manifestations of the SSB, which were predicted in early works early , and
then in the model of nonlinear dual-core optical fibers Snyder ; fibers ,
occur in settings based on symmetric double-well potentials, or similarly
designed nonlinear pseudopotentials pseudo (the latter name for effective
potentials induced by a spatially inhomogeneous nonlinearity is used in solid-
state physics Harrison ). In the quantum theory, the nonlinearity naturally
appears in the context of the mean-field description of Bose-Einstein
condensates (BECs), with the Schrödinger equation replaced by the Gross-
Pitaevskii equation BEC . Similarly, the self-focusing makes PhCs a medium in
which the linear symmetry competes with the nonlinearity, in two-dimensional
(2D) Valencia and 1D Soukoulis -defocusing settings.
A natural extension of the SSB in the double-well potential is the transition
from symmetric to asymmetric solitons in the geometry which adds a uniform
transverse dimension to the potential, extending the potential from the double
well into a double trough. In this setting, the soliton self-traps in the free
direction due to the self-focusing nonlinearity. The SSB effect in the
solitons may be described by means of the effectively 1D two-mode
approximation, which is tantamount to the usual temporal-domain model of dual-
core optical fibers fibers , and may also be applied to the BEC loaded into a
pair of parallel tunnel-coupled cigar-shaped traps Arik . In the application
to the BEC, a more accurate description of the symmetry-broken solitons was
developed too, in the framework of the 2D Gross-Pitaevskii equation, for the
linear Marek and nonlinear Hung double-trough potentials.
PhC media are modeled by combinations of linear and nonlinear potentials,
which correspond to the alternation of material elements and voids in the PhC
structure Valencia -defocusing . A similar setting, emulating the PhC, may be
induced by means of the electromagnetically-induced-transparency (EIT)
technique in a uniform medium we . For BEC, a counterpart of the PhC may be
generated as a combination of the linear potential, induced by the optical
lattice, and a nonlinear pseudopotential imposed by a periodically patterned
magnetic or optical field modifying the local nonlinearity via the Feshbach-
resonance effect. Actually, the latter setting may be more general than the
PhC, as the so designed linear and nonlinear potentials may be created with
incommensurate periodicities HS .
Solitons in periodic linear and nonlinear potentials have been studied
theoretically in many works, as reviewed in Refs. Barcelona and Barcelona2 .
In particular, specific types of gap solitons were predicted in 1D models of
PhCs featuring the competition between the periodically modulated refractive
index and _self-defocusing_ material nonlinearity defocusing . Spatial optical
solitons, supported by effective linear potentials, were created in various
experimental setups experiment .
Unlike the double-well settings, periodic potentials usually do not give rise
to SSB in solitons, although examples of asymmetric solitons were found in
some 1D models Kominis (2006). Indeed, 1D optical media built as periodic
alternations of self-focusing material elements and voids feature no
competition between the effective linear and nonlinear potentials, as minima
and maxima of both types of the potentials coincide, hence there is no drive
for the SSB in the medium (as mentioned above, the competition takes place if
the material nonlinearity is self-defocusing, but in that case the
corresponding gap solitons do not feature SSB either defocusing ). Competing
potentials leading to SSB effects might be possible if maxima of the
refractive index would correspond to minima of the self-focusing nonlinearity.
While this is impossible in usual PhCs, composed of material stripes separated
by voids, the effective potential structures induced by EIT patterns in
uniform media admit such a situation we (as said above, a similar setting is
also possible in BEC HS ; Barcelona2 ). The objective of this work is to study
the SSB for spatial solitons in the virtual PhC of this type, following the
gradual increase of the total power of the soliton. It will be demonstrated
that the symmetry breaking happens twice in this setting, first to a low-power
soliton centered around a midpoint of a linear channel, and then to a high-
power beam situated at the center of the nonlinear stripe.
The setting under consideration is displayed in Fig. 1(a), obeying the
following version of the nonlinear Schrödinger equation for the local amplitude
$u(x,z)$, which is a function of propagation distance $z$ and transverse
coordinate $x$:
$iu_{z}=-{(1/2)}u_{xx}+V(x)\left(1-|u|^{2}\right)u,$ (1)
where the Kronig-Penney (KP) potential function, $V(x)$, is defined as per
Fig. 1(b). We stress that the self-focusing sign of the nonlinearity makes the
model different from ones with the competition between the linear and
nonlinear potentials provided by the self-defocusing defocusing . We consider
the version of the system with $d_{1}=d_{2}\equiv d$ in Fig. 1(a).
Figure 1: (Color online) (a) The scheme of the virtual photonic crystal with
period $D=d_{1}+d_{2}$. The blue (darker) and gray (lighter) slabs represent
the nonlinear and linear stripes,respectively. (b) The corresponding Kronig-
Penney modulation function.
The scaling is fixed by setting the modulation depth to be $V_{0}=0.02$, which
leaves half-period $d$ of the potential as a free parameter. In Section 2, we
present numerical results, at first, for $d=10$, which illustrates a generic
situation. Then, we demonstrate the results for other values of the period,
namely, $d=5,8,11,$ and $14$. In particular, it will be demonstrated that the
second SSB vanishes at large values of $d$. In Section 3, we report analytical
approximations.
## II Solitons and their symmetry breakings (numerical results)
### II.1 Results for $d=10$
GS (ground-state) solutions to Eq. (1) with the above-mentioned values of the
parameters, $d_{1}=d_{2}=10$ and $V_{0}=0.02$, were found by means of the
imaginary-time-propagation method Chiofalo (2000). The characteristic features
of GS solitons, which distinguish them from bound complexes of two or several
solitary beams, are the presence of a pronounced central peak and the absence
of nodes (zero crossings).
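A minimal split-step imaginary-time relaxation for Eq. (1) can be sketched as follows. The stripe layout ($V=V_{0}$ on one half-period, $0$ on the other) is one plausible reading of Fig. 1(b), and the grid size, time step, and iteration count are illustrative choices, not the authors' values:

```python
import numpy as np

d, V0, P = 10.0, 0.02, 29.0      # half-period, modulation depth, target power
L, N = 160.0, 1024               # illustrative domain size and grid
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Kronig-Penney modulation function (one plausible reading of Fig. 1(b))
V = np.where(np.floor(x / d).astype(int) % 2 == 0, V0, 0.0)

u = np.exp(-0.1 * x ** 2).astype(complex)  # initial guess
dt = 0.05
kinetic = np.exp(-0.5 * dt * k ** 2)       # exact kinetic step in Fourier space
for _ in range(10000):
    u *= np.exp(-dt * V * (1.0 - np.abs(u) ** 2))    # potential + nonlinear step
    u = np.fft.ifft(kinetic * np.fft.fft(u))         # kinetic step
    u *= np.sqrt(P / (dx * np.sum(np.abs(u) ** 2)))  # renormalize to total power P

profile = np.abs(u)  # converged ground-state profile, localized and node-free
```

Decreasing `dt` and enlarging the grid provides a convergence check; stability can then be probed by real-time propagation starting from the converged profile.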
Figure 2 displays a characteristic set of GS profiles found at different
values of total power, which is defined as
$P=\int_{-\infty}^{+\infty}|u(x)|^{2}dx$. Representing the stationary
solutions as $u\left(x,z\right)=e^{-i\mu z}U(x)$, it is straightforward to
check that wavenumber $-\mu$ of all the GS solitons falls into the semi-
infinite gap, in terms of the spectrum generated by the linearized version of
Eq. (1), see Fig. 3(d) below; this situation is natural for the system with
the self-focusing nonlinearity. Further, real-time simulations of Eq. (1) (not
shown here) demonstrate that all the GS modes are stable against
perturbations. As seen from Fig. 3(d), the stability of the GS solitons is
also supported by the Vakhitov-Kolokolov criterion VK , $d\mu/dP<0$.
Figure 2: (Color online) Profiles of ground-state solitons found at the
following values of the total power: $P=12$ (a), $P=16$ (b), $P=29$ (c), and
$P=50$ (d).
At sufficiently small values of $P$ (in the weakly nonlinear regime), Fig.
2(a) demonstrates that the central and side peaks of the GS soliton are
situated in linear channels, the soliton being symmetric about its central
peak, which exactly coincides with the midpoint of the corresponding linear
stripe. Although the light is chiefly guided by the linear stripes in this
regime, the soliton of course cannot exist without the self-focusing
nonlinearity, even if it is weak.
As seen in Fig. 2(b), the first SSB event happens with the increase of $P$,
breaking the symmetry of the weakly-nonlinear GS soliton and spontaneously
shifting its central peak off the center of the linear channel in which the
peak is trapped. The symmetry of the side peaks gets broken too, although they
remain trapped in the linear channels. It is relevant to stress that
asymmetric GS solitons were not reported in previously studied versions of
nonlinear systems with KP potentials Kominis (2006); defocusing ; Merhasin
(2005), although some exact asymmetric solutions for non-fundamental solitons,
which feature nodes, were found in Ref. Kominis (2006).
With the further increase of the power, the GS soliton undergoes strong self-
compression, which eliminates all side peaks, while the central peak moves
from the linear stripe into an adjacent nonlinear one, ending up in the middle
of the nonlinear stripe. As seen in Fig. 2(c), in the corresponding moderately
nonlinear regime the single-peak soliton eventually restores its symmetry
about the midpoint of the nonlinear channel into which it has shifted.
Accordingly, light is guided by the nonlinear stripe in this regime.
Finally, in the strongly nonlinear regime the _second_ SSB event happens, as
seen in Fig. 2(d), where the narrow GS soliton spontaneously shifts from the
midpoint of the nonlinear stripe, although staying in it. To the best of our
knowledge, such a _repeated_ SSB of solitons has not been reported in other
models of nonlinear optics or BEC. The second SSB seems a counter-intuitive
effect, as one might “naively” expect that, with indefinite increase of total
power $P$, the narrow high-power soliton would only become more strongly nested
at the center of the nonlinear stripe. Nevertheless, an explanation of this
effect is
possible, as argued below.
To quantify the double SSB, we define the following characteristics of the
soliton, as functions of the total power, $P$: center-of-mass coordinate
$x_{\mathrm{mc}}$, average width $W_{\mathrm{a}}$, linear-stripe duty cycle
(DC), the soliton’s asymmetry measure (AS), and the above-mentioned
propagation constant, $-\mu$ (if the system is realized as the BEC model,
$\mu$ is the chemical potential):
$\displaystyle x_{\mathrm{mc}}=P^{-1}\int_{-\infty}^{+\infty}x|u|^{2}dx,$
$\displaystyle\mathrm{DC}=P^{-1}\int_{\mathrm{Ls}}|u|^{2}dx,~{}\mathrm{AS}=P^{-1}\int_{0}^{\infty}\left|\left[u(x_{\max}-y)\right]^{2}-\left[u(x_{\max}+y)\right]^{2}\right|dy,$
$\displaystyle
W_{\mathrm{a}}^{2}=P^{-1}\int_{-\infty}^{+\infty}(x-x_{\mathrm{mc}})^{2}|u|^{2}dx,~{}\mu=P^{-1}\int_{-\infty}^{+\infty}u^{\ast}\mathbf{\hat{H}}udx,$
(2)
where $\int_{\mathrm{Ls}}$ stands for the integral taken over the linear
stripes, $x_{\max}$ is the location of the maximum value of
$\left|u(x)\right|$, and the Hamiltonian operator is
$\mathbf{\hat{H}}=-{(1/2)}\partial_{xx}+V(x)[1-|u|^{2}]$, consistent with Eq. (1). For
$\mathrm{DC}>50\%$ ($<50\%$), light is mainly guided by the linear (nonlinear)
stripes. The strength of the SSB as a whole is quantified by shift
$x_{\mathrm{mc}}$, while $\mathrm{AS}$ quantifies the related inner symmetry
breaking of the soliton.
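As a concrete illustration of the diagnostics (2), the sketch below evaluates $x_{\mathrm{mc}}$, $W_{\mathrm{a}}$, and AS on a test beam; the sech profile is an illustrative stand-in for a numerically obtained soliton (computing DC would additionally require the stripe layout):

```python
import numpy as np

x = np.linspace(-80.0, 80.0, 4001)
dx = x[1] - x[0]
u = 1.5 / np.cosh(0.4 * (x - 3.0))  # shifted but internally symmetric test beam

dens = np.abs(u) ** 2
P = np.sum(dens) * dx
x_mc = np.sum(x * dens) * dx / P                        # center of mass
W_a = np.sqrt(np.sum((x - x_mc) ** 2 * dens) * dx / P)  # average width

# asymmetry measure AS about the peak position x_max
i_max = int(np.argmax(np.abs(u)))
fwd, bwd = dens[i_max:], dens[i_max::-1]
m = min(len(fwd), len(bwd))
AS = np.sum(np.abs(fwd[:m] - bwd[:m])) * dx / P

# the shifted beam has x_mc != 0 but vanishing internal asymmetry
assert abs(x_mc - 3.0) < 1e-6 and AS < 1e-8
```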
The overall description of the double SSB is provided, in Fig. 3, by plots
showing the evolution of quantities (2) with the increase of $P$. In panel
(a), the dashed lines, $x=0$ and $x=-10$, mark the positions of the two
symmetric points in the KP potential, which correspond, respectively, to
midpoints of the linear and nonlinear stripes. The plots in Figs. 3(a,b)
clearly demonstrate that the low-power GS soliton remains symmetric, being
centered at $x=0$, for $P<15$.
Figure 3: (Color online) (a) The coordinate of the soliton’s center of mass
versus total power $P$. (b) The duty cycle (DC), showing the share of the
power trapped in linear stripes, and the soliton’s asymmetry measure (AS),
versus $P$. (c,d): The soliton’s width ($W_{\mathrm{a}}$) and propagation
constant ($-\mu$) as functions of $P$ (it can be checked that all values of
$\mu$ belong to the semi-infinite gap, in terms of the spectrum of the
linearized system). Note that the $\mu(P)$ dependence satisfies the Vakhitov-
Kolokolov stability criterion VK , $d\mu/dP<0$.
The first SSB event occurs at $P_{\mathrm{SSB}}^{(1)}\approx 15$. The soliton
becomes asymmetric, gradually moving from the midpoint of the linear stripe
($x=0$) towards the center of the adjacent nonlinear one ($x=-10$), in the
interval of $15<P<22$. As seen in Fig. 3(b), the soliton attains the largest
inner asymmetry degree, $\mathrm{AS}>40\%$, around $P\simeq 18$. Observe that
the asymmetric soliton gradually loses its side peaks, simplifying into the
single-peak shape, as seen in Figs. 2(a-c). The dependence $\mathrm{AS}(P)$ in
Fig. 3(b) shows that the SSB occurring at $P=P_{\mathrm{SSB}}^{(1)}$ is of the
supercritical type, i.e., it may be identified with a phase transition of the
second kind Landau (2002).
Next, the moderately high-power soliton remains completely symmetric, centered
at the midpoint of the nonlinear channel, at $22<P<30$. Note also that the
duty cycle falls to values $\mathrm{DC}<50\%$ at $P>22$, which implies the
switch from the quasi-linear guidance to that dominated by the nonlinear
pseudopotential.
The second SSB occurs at $P_{\mathrm{SSB}}^{(2)}\approx 30$. At $P>30$, the
high-power soliton gradually shifts from $x=-10$ towards the edge of the
nonlinear stripe, while keeping a virtually undisturbed symmetric shape, with
$\mathrm{AS}=0$, see Fig. 3(b). This instance of the SSB may also be
considered as a phase transition of the second kind.
### II.2 Extension to other values of the potential’s period
To present the results in a more general form, in Fig. 4 we report the
dependence $x_{\mathrm{mc}}(P)$ for different values of $d$ (still with
$d_{1}=d_{2}=d$), viz., $d=5$, $8$, $11$, and $14$. We notice that, when $d$
increases from $d=5$ in Fig. 4(a) to $d=11$ in Fig. 4(c), the power at which
the second SSB takes place decreases from $P_{\mathrm{SSB}}^{(2)}\approx 120$
to $\simeq 24$. This is explained by the fact that the second SSB requires the
average width of the soliton to be much narrower than the width of the
nonlinear stripe; hence, with smaller $d$, larger $P$ is needed to sustain the
second SSB for a more tightly localized soliton. This argument also explains
the shrinkage of the interval in which the soliton is pinned to the symmetric
position at the midpoint of the nonlinear stripe with the increase of $d$, as
seen in Figs. 4(b) and (c). If $d$ keeps increasing, this interval eventually
disappears at $d>11$, as shown in Fig. 4(d), pertaining to $d=14$.
Further, in Fig. 5 we plot the soliton characteristics $\mathrm{DC}$,
$\mathrm{AS}$, and $W_{\mathrm{a}}$ for $d=14$. The figure exhibits a direct transition
from the original symmetric state to the final asymmetric state, without the
second SSB.
Figure 4: (Color online) $x_{\mathrm{mc}}(P)$ for different values of the
lattice period: (a) $d=5$, (b) $d=8$, (c) $d=11$, and (d) $d=14$.
Figure 5: (Color online) (a) Dependences $\mathrm{DC}(P)$ and $\mathrm{AS}(P)$
for $d=14$. (b) $W_{\mathrm{a}}(P)$ for $d=14$.
## III Analytical considerations
Both instances of the SSB revealed by the numerical findings can be explained
by means of analytical approximations. To address the first SSB, which happens
to the _broad_ low-power GS soliton, we replace the KP modulation function
with period $D\equiv 2d$ in Eq. (1), $V(x)$ (see Fig. 1), by the combination
of its mean value and the first harmonic component, dropping higher-order
harmonic components (cf. Ref. Wang ):
$\tilde{V}(x)=\left(V_{0}/2\right)[1-\left(4/\pi\right)\cos\left(2\pi
x/D\right)].$ (3)
Accordingly, Eq. (1) is replaced by
$iu_{z}=-(1/2)u_{xx}+\tilde{V}(x)\left(1-|u|^{2}\right)u.$ (4)
In the zero-order approximation, one may neglect the variable part of the
modulation function, whose period is much smaller than the width of the soliton.
In this case, Eq. (4) gives rise to the obvious soliton solution,
$u\left(x,z\right)=\sqrt{2/V_{0}}\eta
e^{ikz}\mathrm{sech}\left(\eta\left(x-\xi\right)\right),~{}k=\frac{1}{2}\left(\eta^{2}-V_{0}\right),$
(5)
with coordinate of the soliton’s center $\xi$ and inverse width $\eta$. The
total power of the soliton is $P=4\eta/V_{0}$.
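As a quick numerical sanity check (an illustrative sketch, not part of the original analysis; the parameter values are arbitrary), the relation $P=4\eta/V_{0}$ follows from integrating the $\mathrm{sech}^{2}$ profile of Eq. (5):

```python
import math

V0, eta = 0.02, 0.5   # arbitrary sample values of the modulation depth and inverse width

# |u(x)|^2 from Eq. (5): (2/V0) * eta^2 * sech^2(eta * x)
def usq(x):
    return (2.0 / V0) * eta**2 / math.cosh(eta * x) ** 2

# Riemann sum on a wide grid; the sech^2 tails are negligible beyond |x| = 60
N, L = 200001, 60.0
h = 2 * L / (N - 1)
P = h * sum(usq(-L + i * h) for i in range(N))

assert abs(P - 4 * eta / V0) < 1e-4   # P = 4*eta/V0 = 100 for these values
```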
The Hamiltonian corresponding to Eq. (4) is
$H=\frac{1}{2}\int_{-\infty}^{+\infty}\left[\left|u_{x}\right|^{2}+\tilde{V}(x)\left[2|u|^{2}-|u|^{4}\right]\right]dx.$
(6)
As follows from here, the energy of the interaction of the broad soliton,
taken as per Eq. (5) (which neglects the distortion of the soliton’s shape
under the action of the potential) with the variable part of the modulation
function (3), expressed in terms of the soliton’s power, is
$\displaystyle
U(\xi)\equiv-\left(V_{0}/\pi\right)\int_{-\infty}^{+\infty}\cos\left(2\pi
x/D\right)\left[2|u(x)|^{2}-\left|u(x)\right|^{4}\right]dx$
$\displaystyle=\frac{8\pi}{d}\left[\sinh\left(\frac{4\pi^{2}}{PV_{0}D}\right)\right]^{-1}\left[-1+\frac{2}{3V_{0}}\left(\left(\frac{\pi}{D}\right)^{2}+\frac{P^{2}V_{0}^{2}}{16}\right)\right]\cos\left(\frac{2\pi\xi}{D}\right).$
(7)
For small $P$ (low-power solitons), energy (7) gives rise to a _local minimum_
of the corresponding potential, i.e., a _stable_ equilibrium position of the
soliton at $\xi=0$, provided that
$V_{0}>2\pi^{2}/\left(3D^{2}\right).$ (8)
Note that the parameter values adopted above to generate Figs. 1-3, $D=20$ and
$V_{0}=0.02$, satisfy this condition. Then, with the increase of $P$, the
equilibrium position predicted by potential (7) becomes unstable at
$P>P_{\mathrm{SSB}}^{(1)}\equiv\left(4/V_{0}\right)\sqrt{\left(3/2\right)V_{0}-\left(\pi/D\right)^{2}}.$
(9)
At $P>P_{\mathrm{SSB}}^{(1)}$, the soliton moves away from $\xi=0$, i.e., this
is the point of the first SSB. Note that, for $V_{0}=0.02$ and $D=20$,
expression (9) yields $P_{\mathrm{SSB}}^{(1)}\approx\allowbreak 14.6$, which
practically exactly coincides with the first SSB point identified above from
the numerical data, $P_{\mathrm{SSB}}^{(1)}\approx\allowbreak 15$. If $D$ is
too small and does not satisfy condition (8) for given $V_{0}$, this means
that the simplest approximation cannot be used, and corrections to the average
form of the soliton (5) should be taken into account, which is beyond the
scope of the present analysis.
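For the record, condition (8) and threshold (9) are straightforward to evaluate for the parameters used above; the following sketch (with the paper's values $V_{0}=0.02$, $D=20$) reproduces the quoted estimate:

```python
import math

V0, D = 0.02, 20.0    # parameters used to generate Figs. 1-3

# stability condition (8) for the equilibrium of the low-power soliton at xi = 0
assert V0 > 2 * math.pi ** 2 / (3 * D ** 2)

# first SSB threshold, Eq. (9)
P1 = (4 / V0) * math.sqrt(1.5 * V0 - (math.pi / D) ** 2)
print(round(P1, 1))   # 14.6, to be compared with the numerical value ~15
```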
Proceeding to the analytical consideration of the second SSB, we rewrite Eq.
(1) as
$iu_{z}=-(1/2)u_{yy}-V(y)\left(|u|^{2}-1\right)u,$ (10)
where coordinate $y$ is defined so as to place the center of the nonlinear
stripe at $y=0$. Aiming to consider narrow solitons, trapped in the given
channel, which do not feel the presence of other nonlinear stripes, we define
the modulation function here so that $V(y)=V_{0}$ in the nonlinear stripe, and
$V(y)=0$ outside of it. Then, the narrow soliton with the center located at
point $y=\xi$ (generally, shifted off the center of the nonlinear stripe) has
the following form:
$u\left(z,y\right)=\frac{\eta
e^{ikz}}{\sqrt{V_{0}}}\left\\{\begin{array}[]{c}\mathrm{sech}\left(\eta\left(y-\xi\right)\right),~{}\mathrm{at}~{}~{}|y|~{}<\frac{d_{2}}{2},\\\
2\exp\left[-\eta\left(\frac{d_{2}}{2}-\xi~{}\mathrm{sgn}\left(y\right)\right)-\sqrt{\eta^{2}-2V_{0}}\left(|y|-\frac{d_{2}}{2}\right)\right]~{}\mathrm{at}~{}|y|~{}>\frac{d_{2}}{2},\end{array}\right.$
(11)
where this time the inverse width of the soliton is assumed to be large,
$\eta\gg 1/d_{2}$, $k$ is the same as in Eq. (5), and the total power of the
narrow soliton is $P\approx 2\eta/V_{0}$.
The substitution of the wave field (11) into the Hamiltonian of Eq. (10)
yields the following effective potential of the interaction of the soliton
with the nonlinear stripe which holds it:
$U\left(\xi\right)=-2\eta\left[\frac{\eta^{2}}{V_{0}}\left(1-\sqrt{1-\frac{2V_{0}}{\eta^{2}}}\right)+2\right]e^{-\eta
d_{2}}\cosh\left(2\eta\xi\right)+\frac{4\eta^{3}}{V_{0}}e^{-2\eta
d_{2}}\cosh\left(4\eta\xi\right),$ (12)
cf. Eq. (7). As might be expected, the last term in this potential tends to
keep the soliton at the center ($\xi=0$), while the other terms push it
towards the edge of the nonlinear stripe. The equilibrium position is defined
by equation $dU/d\xi=0$. The substitution of potential (12) into this equation
yields two solutions: either $\xi=0$, which corresponds to the soliton placed
exactly at the center, or the off-center equilibrium, determined by the
following expression:
$\cosh\left(2\eta\xi\right)=\frac{1}{8}\left[\left(1-\sqrt{1-\frac{2V_{0}}{\eta^{2}}}\right)+\frac{2V_{0}}{\eta^{2}}\right]e^{\eta
d_{2}}.$ (13)
Solution (13) exists if it yields $\cosh\left(2\eta\xi\right)>1$, i.e.,
$e^{\eta
d_{2}}>8\left[\left(1-\sqrt{1-2V_{0}/\eta^{2}}\right)+2V_{0}/\eta^{2}\right]^{-1}.$
(14)
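The equilibrium condition (13) can be cross-checked by minimizing potential (12) directly on a grid. The sketch below uses sample values $V_{0}=0.02$, $d_{2}=10$, $\eta=0.4$ (chosen only for illustration, with $\eta d_{2}=4\gg 1$) and recovers the analytic off-center position:

```python
import math

V0, d2, eta = 0.02, 10.0, 0.4        # sample values, eta*d2 = 4 >> 1

A = (eta**2 / V0) * (1 - math.sqrt(1 - 2 * V0 / eta**2)) + 2

def U(xi):
    # effective soliton-stripe interaction potential, Eq. (12)
    return (-2 * eta * A * math.exp(-eta * d2) * math.cosh(2 * eta * xi)
            + (4 * eta**3 / V0) * math.exp(-2 * eta * d2) * math.cosh(4 * eta * xi))

# analytic off-center equilibrium, Eq. (13)
c = ((1 - math.sqrt(1 - 2 * V0 / eta**2)) + 2 * V0 / eta**2) / 8 * math.exp(eta * d2)
xi_eq = math.acosh(c) / (2 * eta)

# brute-force minimum of U over half the stripe, 0 <= xi <= d2/2
step = (d2 / 2) / 100000
xi_num = min((i * step for i in range(100001)), key=U)

assert abs(xi_num - xi_eq) < 1e-3    # the grid minimum sits at the root of dU/d(xi)
```

Here $\xi=0$ is a local maximum of $U$ for these values, so the grid minimum lands on the off-center solution, consistent with the SSB picture.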
Following the assumption that the soliton is narrow, we assume
$2V_{0}/\eta^{2}\ll 1$, hence Eqs. (13) and (14) are simplified as follows:
$\displaystyle\cosh\left(V_{0}P\xi\right)=3\left(2V_{0}P^{2}\right)^{-1}\exp\left(V_{0}d_{2}P/2\right),~{}$
(15) $\displaystyle\exp\left(V_{0}d_{2}P/2\right)>(2/3)V_{0}P^{2}.$ (16)
The second SSB is realized, in the framework of the present approximation, as
the displacement of the soliton from $\xi=0$ to point (13), hence inequality
(16), if replaced by the respective equality, offers a rough approximation for
the second SSB point, $P_{\mathrm{SSB}}^{(2)}$ [“rough” because it was derived
taking into account exponentially small terms in Eq. (12), which is a crude but
meaningful approximation expo ]. In particular, Eq. (16) predicts that, for
$V_{0}=0.02$, the numerically found value reported above,
$P_{\mathrm{SSB}}^{(2)}=30$, corresponds to $d_{2}\simeq 8.3$, which is not
far from $d_{2}=10$ which was actually used. For $P\rightarrow\infty$, Eq.
(15) yields $\xi\rightarrow\pm d_{2}/2$, i.e., within the framework of the
present approximation, the soliton moves to the edge of the nonlinear stripe.
On the other hand, for larger values of $d_{2}$ inequality (16) always holds,
which explains the disappearance of the second SSB in the numerical picture
displayed in Fig. 4(d).
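Treating inequality (16) as an equality gives the rough estimate of the second SSB point quoted above. A one-line check (with the paper's $V_{0}=0.02$ and the numerically found $P_{\mathrm{SSB}}^{(2)}=30$):

```python
import math

V0, P2 = 0.02, 30.0
# Eq. (16) taken as an equality, exp(V0*d2*P/2) = (2/3)*V0*P^2, solved for d2
d2_est = 2 * math.log((2.0 / 3.0) * V0 * P2 ** 2) / (V0 * P2)
print(round(d2_est, 1))   # 8.3, not far from the actual d2 = 10
```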
## IV Conclusion
We have demonstrated the effect of the double SSB (spontaneous symmetry
breaking) for GS (ground-state) stable solitons in the 1D medium with the
competing periodic linear potential and its nonlinear counterpart
(pseudopotential) induced by a periodic modulation of the local self-
attraction coefficient. This medium may be realized as a virtual PhC (photonic
crystal) imprinted by means of the EIT technique into a uniform optical
medium, and also as the BEC setting using a combination of an optical lattice
and the spatially periodic modulation of the nonlinearity via the Feshbach
resonance. The two SSB events occur in the low- and high-power regimes,
pushing the soliton off the symmetric positions at the center of the linear
and nonlinear stripes, respectively. In the former case, the SSB also affects
the shape of the low-power soliton, making it asymmetric and gradually
stripping it of side peaks. In the latter case, the narrow high-power soliton,
while shifting off the midpoint of the nonlinear channel, keeps the symmetric
shape. At intermediate values of the power, the soliton is completely
symmetric, staying pinned at the center of the nonlinear stripe. On the other
hand, the increase of the period of the potential structure leads to the
direct transition from the original symmetric state to the final asymmetric
one, while the second SSB point disappears. These results, which were
obtained by means of systematic numerical computations and explained with the
help of analytical approximations, suggest a possibility to control the
switching of spatial optical solitons in the virtual PhC by varying their
power.
This work may be extended in other directions. In particular, a challenging
problem is to investigate similar settings and effects for 2D solitons, in
terms of PhCs and BEC alike.
Y.L. thanks Prof. X. Sun (Fudan University, Shanghai) for a useful discussion.
B.A.M. appreciates the hospitality of the State Key Laboratory of
Optoelectronic Materials and Technologies at the Sun Yat-sen University
(Guangzhou, China), and of the Department of Mechanical Engineering at the
Hong Kong University. This work was supported by the Chinese agencies NKBRSF
(grant No. G2010CB923204) and CNNSF(grant No. 10934011).
## References
* (1) L. D. Landau and E. M. Lifshitz, _Quantum Mechanics_ (Moscow: Nauka Publishers, 1974).
* (2) E. B. Davies, Comm. Math. Phys. 64, 191 (1979); J. C. Eilbeck, P. S. Lomdahl, and A. C. Scott, Physica D 16, 318 (1985).
* (3) A. W. Snyder, D. J. Mitchell, L. Poladian, D. R. Rowland, and Y. Chen, J. Opt. Soc. Am. B 8, 2101 (1991).
* (4) C. Paré and M. Florjańczyk, Phys. Rev. A 41, 6287 (1990); A. I. Maimistov, Kvant. Elektron. 18, 758 [Sov. J. Quantum Electron. 21, 687 (1991)]; N. Akhmediev and A. Ankiewicz, Phys. Rev. Lett. 70, 2395 (1993); P. L. Chu, B. A. Malomed, and G. D. Peng, J. Opt. Soc. Am. B 10, 1379 (1993); B. A. Malomed, in: Progr. Optics 43, 71 (E. Wolf, editor: North Holland, Amsterdam, 2002).
* (5) L. C. Qian, M. L. Wall, S. Zhang, Z. Zhou, and H. Pu, Phys. Rev. A 77, 013611 (2008); T. Mayteevarunyoo, B. A. Malomed, and G. Dong, ibid. A 78, 053601 (2008).
* (6) W. A. Harrison, _Pseudopotentials in the Theory of Metals_ (Benjamin: New York, 1966).
* (7) L. Pitaevskii and S. Stringari, _Bose-Einstein Condensation_ (Clarendon Press: Oxford, 2003).
* (8) P. Xie, Z.-Q. Zhang, and X. Zhang, Phys. Rev. E 67, 026607 (2003); A. Ferrando, M. Zacarés, P. Fernández de Córdoba, D. Binosi, and J. A. Monsoriu, Opt. Exp. 11, 452 (2003); 12, 817 (2004); J. R. Salgueiro, Y. S. Kivshar, D. E. Pelinovsky, V. Simón, and H. Michinel, Stud. Appl. Math. 115, 157 (2005); A. S. Desyatnikov, N. Sagemerten, R. Fischer, B. Terhalle, D. Träger, D. N. Neshev, A. Dreischuh, C. Denz, W. Królikowski, and Y. S. Kivshar, ibid. 14, 2851 (2006).
* (9) Q. Li, C. T. Chan, K. M. Ho and C. M. Soukoulis, Phys. Rev. B 53, 15577 (1996); E. Lidorikis, Q. Li, and C. M. Soukoulis, ibid. 54, 10249 (1996).
* (10) B. A. Malomed, Z. H. Wang, P. L. Chu, and G. D. Peng, J. Opt. Soc. Am. B 16, 1197 (1999).
* Kominis (2006) Y. Kominis, Phys. Rev. E. 73, 066619 (2006); Y. Kominis and K. Hizanidis, Opt. Exp. 16, 12124 (2008).
* (12) Y. Kominis and K. Hizanidis, Opt. Lett. 31, 2888 (2006); T. Mayteevarunyoo and B. A. Malomed, J. Opt. Soc. Am. B 25, 1854 (2008).
* (13) A. Gubeskys and B. A. Malomed, Phys. Rev. A 75, 063602 (2007); A 76, 043623 (2007).
* (14) M. Matuszewski, B. A. Malomed, and M. Trippenbach, Phys. Rev. A 75, 063621 (2007); M. Trippenbach, E. Infeld, J. Gocalek, M. Matuszewski, M. Oberthaler, and B. A. Malomed, ibid. A 78, 013603 (2008).
* (15) N. V. Hung, P. Ziń, M. Trippenbach, and B. A. Malomed, Phys. Rev. E 82, 046602 (2010).
* (16) C. Hang and V. V. Konotop, Phys. Rev. A 81, 053849 (2010); Y. Li, B. A. Malomed, M. Feng, and J. Zhou, ibid. 82, 633813 (2010).
* (17) H. Sakaguchi and B. A. Malomed, Phys. Rev. A 81, 013624 (2010).
* (18) Y. V. Kartashov, V. A. Vysloukh, and L. Torner, in: Progr. Optics 52, 63 (E. Wolf, editor: North Holland, Amsterdam, 2009).
* (19) Y. V. Kartashov, B. A. Malomed, and L. Torner, Solitons in nonlinear lattices, Rev. Mod. Phys., in press.
* (20) F. Lederer, G. I. Stegeman, D. N. Christodoulides, G. Assanto, M. Segev, and Y. Silberberg, Phys. Rep. 463, 1 (2008).
* Chiofalo (2000) M. L. Chiofalo, S. Succi, and M. P. Tosi, Phys. Rev. E. 62, 7438 (2000).
* Merhasin (2005) I. M. Merhasin, B. V. Gisin, R. Driben, and B. A. Malomed, Phys. Rev. E. 71, 016613 (2005).
* (23) M. Vakhitov and A. Kolokolov, Radiophys. Quantum. Electron. 16, 783 (1973); L. Bergé, Phys. Rep. 303, 259 (1998).
* Landau (2002) L. D. Landau and E. M. Lifshitz, _Statistical Physics_ (Moscow: Nauka Publishers, 1976).
* (25) Yu. S. Kivshar and B. A. Malomed, Phys. Rev. Lett. 60, 164 (1988); Rev. Mod. Phys. 61, 763 (1989).
|
arxiv-papers
| 2011-04-27T16:09:43 |
2024-09-04T02:49:18.436319
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Yongyao Li, Boris A. Malomed, Mingneng Feng, and Jianying Zhou",
"submitter": "Yongyao Li",
"url": "https://arxiv.org/abs/1104.5176"
}
|
1104.5271
|
# Quintessence Ghost Dark Energy Model
Ahmad Sheykhi1,2111sheykhi@mail.uk.ac.ir and Ali Bagheri1 1Department of
Physics, Shahid Bahonar University, P.O. Box 76175, Kerman, Iran
2Research Institute for Astronomy and Astrophysics of Maragha (RIAAM),
Maragha, Iran
###### Abstract
A so-called “ghost dark energy” was recently proposed to explain the present
acceleration of the expansion of the universe. The energy density of the ghost
dark energy, which originates from the Veneziano ghost of QCD, is proportional to the
Hubble parameter, $\rho_{D}=\alpha H$, where $\alpha$ is a constant which is
related to the QCD mass scale. In this paper, we establish the correspondence
between ghost dark energy and quintessence scalar field energy density. This
connection allows us to reconstruct the potential and the dynamics of the
quintessence scalar field according to the evolution of ghost energy density.
## I Introduction
A wide range of cosmological observations, direct and indirect, provide an
impressive evidence in favor of the present acceleration of the cosmic
expansion. To explain this acceleration, in the context of standard cosmology,
we need an anti-gravity fluid with negative pressure, usually dubbed “dark
energy” in the literature. The first and simplest candidate for dark energy is
the cosmological constant with equation of state parameter $w=-1$, which
occupies the central position among dark energy models both in theoretical
investigation and in data analysis wein . However, there are several
difficulties with the cosmological constant. For example, it suffers from the
so-called fine-tuning and cosmic coincidence problems. Besides, its origin is
still very much in doubt. Furthermore, accurate data analyses show that
time-varying dark energy gives a better fit than a cosmological constant; in
particular, $w$ can cross $-1$ around $z=0.2$ from above to
below Alam . Although the galaxy cluster gas mass fraction data do not support
the time-varying $w$ chen , an overwhelming flood of papers has appeared which
attempt to understand the $w=-1$ crossing. Among them are a negative kinetic
scalar field and a normal scalar field Feng , or a single scalar field model
MZ , interacting holographic Wang1 and interacting agegraphic Wei dark
energy models. Other studies on the $w=-1$ crossing Noj and dark energy
models have been carried out in nojiri . For a recent review on dark energy
models see cop . It is worth noting that in most of these dark energy
models, the accelerated expansion is explained by introducing new degree(s)
of freedom or by modifying the underlying theory of gravity.
Recently a very interesting suggestion on the origin of a dark energy is made,
without introducing new degrees of freedom beyond what are already known, with
the dark energy of just the right magnitude to give the observed expansion
Urban ; Ohta . In this proposal, it is claimed that the cosmological constant
arises from the contribution of the ghost fields which are supposed to be
present in the low-energy effective theory of QCD Wit ; Ven ; Ros ; Na ; Kaw .
The ghosts are required to exist for the resolution of the $U(1)$ problem, but
are completely decoupled from the physical sector Kaw . The above claim is
that the ghosts are decoupled from the physical states and make no
contribution in the flat Minkowski space, but once they are in the curved
space or time-dependent background, the cancelation of their contribution to
the vacuum energy is off-set, leaving a small energy density $\rho\sim
H\Lambda^{3}_{QCD}$, where $H$ is the Hubble parameter and $\Lambda_{QCD}$ is
the QCD mass scale of order $100~\mathrm{MeV}$. With $H\sim
10^{-33}~\mathrm{eV}$, this gives the right magnitude $\sim(3\times
10^{-3}~\mathrm{eV})^{4}$ for the observed dark energy density. This numerical
coincidence is remarkable and also means that this model gets rid of the
fine-tuning problem Urban ; Ohta . The advantage of this new model compared to
other dark energy models is that it is totally embedded in the standard model
and general relativity: one does not need to introduce any new parameters or
degrees of freedom, or to modify gravity. The dynamical behavior of the ghost
dark energy (GDE) model in flat CaiGhost and non-flat shmov
universes has been studied in ample detail.
On the other side, the scalar field model can be regarded as an effective
description of an underlying dark energy theory. Scalar fields naturally arise
in particle physics including supersymmetric field theories and string/M
theory. Therefore, scalar field is expected to reveal the dynamical mechanism
and the nature of dark energy. However, although fundamental theories such as
string/M theory do provide a number of possible candidates for scalar fields,
they do not predict its potential $V(\phi)$ uniquely. Consequently, it is
meaningful to reconstruct the potential $V(\phi)$ from some dark energy models
possessing some significant features of the quantum gravity theory, such as
holographic and agegraphic dark energy models. In the framework of holographic
and agegraphic dark energy models, the studies on the reconstruction of the
quintessence potential $V(\phi)$ have been carried out in Zhang and ageQ ,
respectively. Till now, the quintessence reconstruction of the ghost energy
density has not been carried out.
In this paper we are interested in how, if we assume the GDE scenario as the
underlying theory of dark energy, the low-energy effective scalar-field
model can be used to describe it. In this direction, we can establish the
correspondence between the GDE and quintessence scalar field, and describe GDE
in this case effectively by making use of quintessence. We shall reconstruct
the quintessence potential and the dynamics of the scalar field in the light
of the GDE.
## II Quintessence Ghost dark energy
We assume the GDE is accommodated in a flat Friedmann-Robertson-Walker (FRW)
universe whose dynamics is governed by the Friedmann equation
$\displaystyle H^{2}=\frac{1}{3M_{p}^{2}}\left(\rho_{m}+\rho_{D}\right),$ (1)
where $\rho_{m}$ and $\rho_{D}$ are the energy densities of pressureless
matter and GDE, respectively. We define the dimensionless density parameters
as
$\Omega_{m}=\frac{\rho_{m}}{\rho_{\rm cr}},\ \ \
\Omega_{D}=\frac{\rho_{D}}{\rho_{\rm cr}},\ \ $ (2)
where the critical energy density is $\rho_{\rm cr}={3H^{2}M_{p}^{2}}$. Thus,
the Friedmann equation can be rewritten as
$\Omega_{m}+\Omega_{D}=1.$ (3)
The conservation equations read
$\displaystyle\dot{\rho}_{m}+3H\rho_{m}$ $\displaystyle=$ $\displaystyle 0,$
(4) $\displaystyle\dot{\rho}_{D}+3H\rho_{D}(1+w_{D})$ $\displaystyle=$
$\displaystyle 0.$ (5)
The ghost energy density is proportional to the Hubble parameter Ohta ;
CaiGhost
$\rho_{D}=\alpha H,$ (6)
where $\alpha$ is a constant of order $\Lambda_{\rm QCD}^{3}$ and
$\Lambda_{\rm QCD}\sim 100~\mathrm{MeV}$ is the QCD mass scale. Taking the time derivative
of relation (6) and using Friedmann equation (1) we find
$\dot{\rho}_{D}=-\frac{\alpha}{2M_{p}^{2}}\rho_{D}(1+u+w_{D}),$ (7)
where $u=\rho_{m}/\rho_{D}$ is the energy density ratio. Inserting this
relation in continuity equation (5) and using Eq. (3) we find
$w_{D}=-\frac{1}{2-\Omega_{D}}.$ (8)
At the early time where $\Omega_{D}\ll 1$ we have $w_{D}=-1/2$, while at the
late time where $\Omega_{D}\rightarrow 1$ the GDE mimics a cosmological
constant, namely $w_{D}=-1$. In figure 1 we have plotted the evolution of
$w_{D}$ versus scale factor $a$. From this figure we see that $w_{D}$ of the
GDE model cannot cross the phantom divide and the universe has a de Sitter
phase at late time.
Figure 1: The evolution of $w_{D}$ for GDE.
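The limiting behavior of Eq. (8) is easy to verify directly (a minimal sketch, not from the paper):

```python
def w_D(Omega):
    # equation of state of the non-interacting GDE, Eq. (8)
    return -1.0 / (2.0 - Omega)

assert abs(w_D(0.0) + 0.5) < 1e-12       # early time: w_D -> -1/2
assert abs(w_D(1.0) + 1.0) < 1e-12       # late time: de Sitter, w_D -> -1
# w_D stays above the phantom divide for all 0 <= Omega_D < 1
assert all(w_D(0.01 * i) > -1.0 for i in range(100))
```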
Now we are in a position to establish the correspondence between GDE and
quintessence scalar field. To do this, we assume the quintessence scalar field
model of dark energy is the effective underlying theory. The energy density
and pressure of the quintessence scalar field are given by
$\displaystyle\rho_{\phi}=\frac{1}{2}\dot{\phi}^{2}+V(\phi),$ (9)
$\displaystyle p_{\phi}=\frac{1}{2}\dot{\phi}^{2}-V(\phi).$ (10)
Thus the potential and the kinetic energy term can be written as
$\displaystyle V(\phi)=\frac{1-w_{\phi}}{2}\rho_{\phi},$ (11)
$\displaystyle\dot{\phi}^{2}=(1+w_{\phi})\rho_{\phi}.$ (12)
In order to implement the correspondence between GDE and quintessence scalar
field, we identify $\rho_{\phi}=\rho_{D}$ and $w_{\phi}=w_{D}$. Using Eqs. (6)
and (8) as well as relation $\dot{\phi}=H\frac{d\phi}{d\ln a}$ we obtain the
scalar potential and the dynamics of scalar field as
$\displaystyle V(\phi)$ $\displaystyle=$
$\displaystyle\frac{\alpha^{2}}{6M_{p}^{2}}\times\frac{3-\Omega_{D}}{\Omega_{D}(2-\Omega_{D})},$
(13) $\displaystyle\frac{d\phi}{d\ln a}$ $\displaystyle=$
$\displaystyle\sqrt{3}M_{p}\sqrt{\frac{\Omega_{D}(1-\Omega_{D})}{2-\Omega_{D}}}.$
(14)
Integrating yields
$\displaystyle\phi(a)-\phi(a_{0})=\sqrt{3}M_{p}\int_{a_{0}}^{a}{\frac{da}{a}\sqrt{\frac{\Omega_{D}(1-\Omega_{D})}{2-\Omega_{D}}}},$
(15)
Figure 2: The evolution of the scalar field $\phi(a)$ for quintessence GDE,
where $\phi$ is in unit of $\sqrt{3}M_{p}$. Figure 3: The reconstructed
potential $V(\phi)$ for quintessence GDE, where $V(\phi)$ is in unit of
$\alpha^{2}/6M_{p}^{2}$.
where we have set $a_{0}=1$ for the present value of the scale factor. The
analytical form of the potential in terms of the ghost quintessence field
cannot be determined due to the complexity of the equations involved. However,
we can obtain it numerically. The reconstructed quintessence potential
$V(\phi)$ and the evolutionary form of the field are plotted in Figs. 2 and 3,
where we have taken $\phi(a_{0}=1)=0$ for simplicity. From figure 2 we can see
the dynamics of the scalar field explicitly. Obviously, the scalar field
$\phi$ rolls down the potential with the kinetic energy $\dot{\phi}^{2}$
gradually decreasing. In other words, the amplitude of $\phi$ decreases with
time in the past.
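The reconstruction of Eqs. (14)-(15) can be carried out with any standard ODE integrator. The sketch below (our own minimal midpoint scheme, an assumption rather than the authors' code, with $\phi$ in units of $M_{p}$ and $\Omega_{D0}=0.72$) marches backward in $\ln a$, using the non-interacting evolution equation for $\Omega_{D}$ (Eq. (25) of the next section with $b^{2}=0$), and reproduces the qualitative behavior of Fig. 2, i.e., $\phi$ decreasing into the past:

```python
import math

Omega, phi = 0.72, 0.0            # present values: Omega_D0 = 0.72, phi(a0=1) = 0
n_steps = 4000
h = -4.0 / n_steps                # integrate back over 4 e-folds of ln(a)

def dOmega(O):
    # evolution of Omega_D for the non-interacting case (b^2 = 0)
    return 1.5 * O * (1.0 - O / (2.0 - O))

def dphi(O):
    # Eq. (14), with phi measured in units of M_p
    return math.sqrt(3.0) * math.sqrt(O * (1.0 - O) / (2.0 - O))

for _ in range(n_steps):          # midpoint (RK2) backward march in ln(a)
    Om = Omega + 0.5 * h * dOmega(Omega)
    phi += h * dphi(Om)
    Omega += h * dOmega(Om)

assert 0.0 < Omega < 0.72         # dark energy fades going back in time
assert phi < 0.0                  # phi was smaller in the past, as in Fig. 2
```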
## III Interacting Quintessence Ghost dark energy
Next we generalize our discussion to the interacting case. Although at this
point the interaction may look purely phenomenological, different
Lagrangians have been proposed in support of it (see Tsu and references
therein). Besides, in the absence of a symmetry that forbids the interaction
there is nothing, in principle, against it. In addition, given the unknown
nature of both dark energy and dark matter, which are two major contents of
the universe, one might argue that an entirely independent behavior of dark
energy is very special wang1 ; pav1 . Thus, microphysics seems to allow enough
room for the coupling; however, this point is not fully settled and should be
further investigated. The difficulty lies, among other things, in that the
very nature of both dark energy and dark matter remains unknown whence the
detailed form of the coupling cannot be elucidated at this stage. Since we
consider the interaction between dark matter and dark energy, $\rho_{m}$ and
$\rho_{D}$ do not conserve separately; they must rather enter the energy
balances pav1
$\displaystyle\dot{\rho}_{m}+3H\rho_{m}$ $\displaystyle=$ $\displaystyle Q,$
(16) $\displaystyle\dot{\rho}_{D}+3H\rho_{D}(1+w_{D})$ $\displaystyle=$
$\displaystyle-Q,$ (17)
where $Q$ represents the interaction term and we take it as
$Q=3b^{2}H(\rho_{m}+\rho_{D})=3b^{2}H\rho_{D}(1+u),$ (18)
with $b^{2}$ being a coupling constant. Inserting Eqs. (7) and (18) in Eq.
(17) we find
$w_{D}=-\frac{1}{2-\Omega_{D}}\left(1+\frac{2b^{2}}{\Omega_{D}}\right).$ (19)
One can easily check that in the late time where $\Omega_{D}\rightarrow 1$,
the equation of state parameter of interacting GDE necessarily crosses the
phantom line, namely, $w_{D}=-(1+2b^{2})<-1$ independent of the value of
coupling constant $b^{2}$. For the present time with taking $\Omega_{D}=0.72$,
the phantom crossing can be achieved provided $b^{2}>0.1$ which is consistent
with recent observations wang1 . It is worth mentioning that the continuity
equations (16) and (17) imply that the interaction term should be a function
of a quantity with units of inverse of time (a first and natural choice can be
the Hubble factor $H$) multiplied with the energy density. Therefore, the
interaction term could be in any of the following forms: (i) $Q\propto
H\rho_{D}$, (ii) $Q\propto H\rho_{m}$, or (iii) $Q\propto
H(\rho_{m}+\rho_{D})$. We can present the above three choices in one
expression as $Q=\Gamma\rho_{D}$, where
$\displaystyle\begin{array}[]{ll}\Gamma=3b^{2}H\hskip 36.98866pt{\rm for}\ \
Q\propto H\rho_{D},&\\\ \Gamma=3b^{2}Hu\hskip 31.2982pt{\rm for}\ \ Q\propto
H\rho_{m},&\\\ \Gamma=3b^{2}H(1+u)\ \ {\rm for}\ \ Q\propto
H(\rho_{m}+\rho_{D}),&\end{array}$ (23)
It should be noted that the ideal interaction term must be motivated from the
theory of quantum gravity. In the absence of such a theory, we rely on purely
dimensional grounds for choosing an interaction $Q$. To be more general, in this
work we choose expression (iii) for the interaction term. The coupling $b^{2}$
is taken in the range $[0,1]$ HZ . Note that if $b^{2}=0$ then it represents
the noninteracting case while $b^{2}=1$ yields complete transfer of energy
from dark energy to matter ($Q>0$). Although in principle there is no reason
to insist on $Q>0$, and one may take $Q<0$, which means that dark matter
transfers to dark energy, as we will see below this option is disfavored. It is easy to
show that for $Q<0$, Eq. (19) becomes
$w_{D}=-\frac{1}{2-\Omega_{D}}\left(1-\frac{2b^{2}}{\Omega_{D}}\right).$ (24)
In the late time where $\Omega_{D}\rightarrow 1$, we have $w_{D}=-(1-2b^{2})$,
which for $b^{2}>1/3$ leads to $w_{D}>-1/3$. This implies that in the late
time where dark energy dominates we have no acceleration, at least for some
values of the coupling parameter. For the present time, if we take
$\Omega_{D}=0.72$, from Eq. (24) we have $w_{D}=-0.78+2.2b^{2}$. Again, for
$b^{2}>0.20$ we have $w_{D}>-1/3$ at the present time. This means that the
universe would be in a decelerating phase at present, which is ruled out by
recent observations.
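These statements about Eqs. (19) and (24) can be checked numerically (a minimal sketch; the threshold values are the ones quoted in the text):

```python
def w_pos(Omega, b2):
    # Eq. (19): interacting GDE with Q > 0
    return -(1.0 + 2.0 * b2 / Omega) / (2.0 - Omega)

def w_neg(Omega, b2):
    # Eq. (24): the case Q < 0
    return -(1.0 - 2.0 * b2 / Omega) / (2.0 - Omega)

# late time: w_D -> -(1 + 2 b^2), phantom for any b^2 > 0
assert abs(w_pos(1.0, 0.1) + 1.2) < 1e-12
# present time (Omega_D = 0.72): phantom crossing requires b^2 > ~0.1
assert w_pos(0.72, 0.12) < -1.0 < w_pos(0.72, 0.08)
# Q < 0 at present: w_D ~ -0.78 + 2.2 b^2, deceleration (w_D > -1/3) for b^2 > ~0.2
assert w_neg(0.72, 0.21) > -1.0 / 3.0
```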
The behaviour of the equation of state parameter of interacting GDE is shown
in figure 4 for different values of the coupling parameter.
Figure 4: The evolution of $w_{D}$ for interacting GDE.
In the presence of interaction, the evolution of GDE is governed by the
following equation shmov
$\frac{d\Omega_{D}}{d\ln
a}=\frac{3}{2}\Omega_{D}\left[1-\frac{\Omega_{D}}{2-\Omega_{D}}\left(1+\frac{2b^{2}}{\Omega_{D}}\right)\right].$
(25)
Fig. 5 shows that at early times $\Omega_{D}\rightarrow 0$, while at late
times $\Omega_{D}\rightarrow 1$; that is, the ghost dark energy dominates, as
expected.
Figure 5: The evolution of $\Omega_{D}$ for interacting ghost dark energy,
where we take $\Omega_{D0}=0.72$.
Now we implement a connection between interacting GDE and quintessence scalar
field. In this case the potential and scalar field are obtained as
$\displaystyle V(\phi)$ $\displaystyle=$
$\displaystyle\frac{\alpha^{2}}{6M_{p}^{2}}\times\frac{1}{\Omega_{D}(2-\Omega_{D})}\left(3-\Omega_{D}+\frac{2b^{2}}{\Omega_{D}}\right),$
(26) $\displaystyle\frac{d\phi}{d\ln a}$ $\displaystyle=$
$\displaystyle\sqrt{3}M_{p}\sqrt{\frac{\Omega_{D}}{2-\Omega_{D}}\left(1-\Omega_{D}-\frac{2b^{2}}{\Omega_{D}}\right)}.$
(27)
Finally we obtain the evolutionary form of the field by integrating the above
equation. The result is
$\displaystyle\phi(a)-\phi(a_{0})=\sqrt{3}M_{p}\int_{a_{0}}^{a}{\frac{da}{a}\sqrt{\frac{\Omega_{D}}{2-\Omega_{D}}\left(1-\Omega_{D}-\frac{2b^{2}}{\Omega_{D}}\right)}},$
(28)
where $\Omega_{D}$ is now given by Eq. (25). The reconstructed quintessence
potential $V(\phi)$ and the evolutionary form of the field are plotted in
Figs. 6 and 7, where again we have taken $\phi(a_{0}=1)=0$ for the present
time. Selected curves are plotted for different values of the coupling
parameter $b^{2}$. From these figures we find that $\phi$ increases with
time, while the potential $V(\phi)$ becomes steeper with increasing $b^{2}$.
Figure 6: The evolutionary form of the scalar field $\phi(a)$ for interacting
quintessence GDE, where $\phi$ is in unit of $\sqrt{3}M_{p}$. Figure 7: The
reconstructed potential $V(\phi)$ for interacting quintessence GDE, where
$V(\phi)$ is in unit of $(\alpha^{2}/6M_{p}^{2})$.
## IV Conclusion
Considering the quintessence scalar-field model as an effective description
of the underlying theory of dark energy, and assuming the ghost vacuum energy
scenario as that underlying theory, it is interesting to study how the
quintessence scalar-field model can be used to describe the ghost energy
density. Here the quintessence scalar field is taken to be an ordinary scalar
field minimally coupled to gravity, namely the canonical scalar field. It is
remarkable that the resulting model with the reconstructed
potential is the unique canonical single-scalar model that can reproduce the
GDE evolution of the universe. In this paper, we established a connection
between the GDE scenario and the quintessence scalar-field model. The GDE
model is a new attempt to explain the origin of dark energy within the
framework of Veneziano ghost of QCD Ohta . If we regard the quintessence
scalar-field model as an effective description of GDE, we should be capable of
using the scalar-field model to mimic the evolving behavior of the dynamical
ghost energy and reconstructing this scalar-field model according to the
evolutionary behavior of GDE. With this strategy, we reconstructed the
potential of the ghost quintessence and the dynamics of the field according to
the evolution of ghost energy density.
Finally we would like to mention that the aforementioned discussion in this
paper can be easily generalized to other non-canonical scalar fields, such as
K-essence and tachyon. It can also be extended to the non-flat FRW universe.
###### Acknowledgements.
This work has been supported by Research Institute for Astronomy and
Astrophysics of Maragha, Iran.
## References
* (1) S. Weinberg, Rev. Mod. Phys. 61, 1 (1989);
N. Straumann, arXiv:astro-ph/0203330;
T. Padmanabhan, Classical Quantum Gravity 22, L107 (2005).
* (2) U. Alam, V. Sahni and A. Starobinsky, JCAP 0406 (2004) 008;
Y.G. Gong, Class. Quant. Grav. 22 (2005) 2121;
U. Alam, V. Sahni, T. Saini and A. Starobinsky, Mon. Not. Roy. Astron. Soc.
354, (2004) 275;
T. Choudhury and T. Padmanabhan, Astron. Astrophys. 429, (2005) 807.
* (3) G. Chen and B. Ratra, Astrophys. J. 612, (2004) L1.
* (4) B. Feng, X. L.Wang and X. M. Zhang, Phys. Lett. B 607 (2005) 35;
W. Hu, Phys. Rev. D 71, (2005) 047301;
Z.K. Guo, Y.S. Piao, X.M. Zhang and Y.Z. Zhang, Phys. Lett. B 608, (2005) 177.
* (5) M.Z. Li, B. Feng and X.M. Zhang, JCAP 0512 (2005) 002.
* (6) B. Wang, Y. Gong, E. Abdalla, Phys. Lett. B 624, (2006) 141;
B. Wang, C. Y. Lin, E. Abdalla, Phys. Lett. B 637, (2006) 357;
B. Wang, J.D. Zang, C.Y. Lin, E. Abdalla and S. Micheletti, Nucl. Phys. B 778
(2007) 69;
A. Sheykhi, Phys Lett B 681 (2009) 205;
A. Sheykhi, Class. Quantum Grav. 27 (2010) 025007.
* (7) H. Wei and R. G. Cai, Phys. Lett. B 660 (2008) 113;
H. Wei and R. G. Cai, Eur. Phys. J. C 59 (2009) 99;
A. Sheykhi, Phys. Lett. B 680 (2009) 113;
A. Sheykhi, Phys. Rev. D 81 (2010) 023525;
A. Sheykhi, Phys. Lett. B 682 (2010) 329.
* (8) S. Nojiri and S.D. Odintsov, Phys. Lett. B 562, (2003) 147;
S. Nojiri and S. D. Odintsov, Phys. Rev. D 70 (2004) 103522;
S. Nojiri, S.D. Odintsov and S. Tsujikawa, Phys. Rev. D 71 (2005) 063004;
A. Vikman, Phys. Rev. D 71, (2005) 023515;
A. Anisimov, E. Babichev and A. Vikman, JCAP 0506, (2005) 006;
A. Sheykhi, B. Wang and N. Riazi, Phys. Rev. D 75, 123513 (2007).
* (9) S. Nojiri, S.D. Odintsov, Phys. Lett. B 637 (2006) 139;
S. Nojiri, S.D. Odintsov, Int. J. Geom. Meth. Mod. Phys. 4 (2007) 115;
S. Nojiri, S.D. Odintsov, Phys. Lett. B 659 (2008) 821;
S. Nojiri, S.D. Odintsov, P. V. Tretyakov, Prog. Theor. Phys. Suppl. 172
(2008) 81.
* (10) E. J. Copeland, M. Sami and S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006).
* (11) F. R. Urban and A. R. Zhitnitsky, Phys. Lett. B 688 (2010) 9 ;
F. R. Urban and A. R. Zhitnitsky, Phys. Rev. D 80 (2009) 063001;
F. R. Urban and A. R. Zhitnitsky, JCAP 0909 (2009) 018;
F. R. Urban and A. R. Zhitnitsky, Nucl. Phys. B 835 (2010) 135.
* (12) N. Ohta, Phys. Lett. B 695 (2011) 41.
* (13) E. Witten, Nucl. Phys. B 156 (1979) 269.
* (14) G. Veneziano, Nucl. Phys. B 159 (1979) 213.
* (15) C. Rosenzweig, J. Schechter and C. G. Trahern, Phys. Rev. D 21 (1980) 3388.
* (16) P. Nath and R. L. Arnowitt, Phys. Rev. D 23 (1981) 473.
* (17) K. Kawarabayashi and N. Ohta, Nucl. Phys. B 175 (1980) 477; Prog. Theor. Phys. 66 (1981) 1789;
N. Ohta, Prog. Theor. Phys. 66 (1981) 1408.
* (18) R.G. Cai, Z.L. Tuo, H.B. Zhang, arXiv:1011.3212.
* (19) A. Sheykhi, M. Sadegh Movahed, arXiv:1104.4713.
* (20) X. Zhang, Phys. Lett. B 648 (2007)1.
* (21) J. Zhang, X. Zhang, H. Liu, Eur. Phys. J. C 54 (2008) 303;
J .P Wu, D. Z. Ma, Y. Ling, Phys. Lett. B 663, (2008) 152;
A. Sheykhi, A. Bagheri and M. M. Yazdanpanah, JCAP 09 (2010 ) 017.
* (22) S. Tsujikawa, M. Sami, Phys. Lett. B 603 (2004) 113.
* (23) B. Wang, Y. Gong and E. Abdalla, Phys. Lett. B 624 (2005) 141;
B. Wang, C. Y. Lin and E. Abdalla, Phys. Lett. B 637 (2005) 357.
* (24) D. Pavon, W. Zimdahl, Phys. Lett. B 628 (2005) 206;
N. Banerjee, D. Pavon, Phys. Lett. B 647 (2007) 477.
* (25) H. Zhang and Z. Zhu, Phys. Rev. D 73 (2006) 043518.
|
arxiv-papers
| 2011-04-26T12:19:32 |
2024-09-04T02:49:18.442696
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Ahamd Sheykhi and Ali Bagheri",
"submitter": "Ali Bagheri",
"url": "https://arxiv.org/abs/1104.5271"
}
|
1104.5295
|
# Moment bounds for IID sequences under sublinear expectations††thanks: First
version: Aug. 4, 2009. This is the second version, dated Apr. 15, 2010.
Feng Hu
School of Mathematics
Shandong University
Jinan 250100, China E-mail address: hufengqf@163.com (F. Hu).
Abstract
In this paper, with the notion of independent identically distributed (IID)
random variables under sublinear expectations introduced by Peng [7-9], we
investigate moment bounds for IID sequences under sublinear expectations. We
can obtain a moment inequality for a sequence of IID random variables under
sublinear expectations. As an application of this inequality, we get the
following result: For any continuous function $\varphi$ satisfying the growth
condition $|\varphi(x)|\leq C(1+|x|^{p})$ for some $C>0$, $p\geq 1$ depending
on $\varphi$, the central limit theorem under sublinear expectations obtained
by Peng [8] still holds.
Keywords moment bound, sublinear expectation, IID random variables, $G$-normal
distribution, central limit theorem.
2000 MR Subject Classification 60H10, 60G48
## 1 Introduction
In classical probability theory, it is well known that for IID random
variables with $E[X_{1}]=0$ and $E[|X_{1}|^{r}]<\infty$ $(r\geq 2)$,
$E[|S_{n}|^{r}]=O(n^{\frac{r}{2}})$ holds, and hence
$\sup\limits_{m\geq 0}E[|S_{m+n}-S_{m}|^{r}]=O(n^{\frac{r}{2}}).$ (1)
Bounds of this kind are potentially useful to obtain limit theorems,
especially strong laws of large numbers, central limit theorems and laws of
the iterated logarithm (see, for example, Serfling [10] and Stout [11],
Chapter 3.7).
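As a quick classical sanity check (a numerical illustration, not part of the cited results), the ratio $E[|S_{n}|^{r}]/n^{r/2}$ for iid Rademacher steps can be estimated by Monte Carlo and seen to stay bounded:

```python
import numpy as np

# Monte Carlo illustration of E|S_n|^r = O(n^{r/2}) for iid Rademacher steps.
# (Numerical sketch only; sample sizes are arbitrary choices.)
rng = np.random.default_rng(0)
r = 3.0
ratios = []
for n in (16, 64, 256):
    s = rng.choice([-1.0, 1.0], size=(20000, n)).sum(axis=1)
    ratios.append(np.mean(np.abs(s) ** r) / n ** (r / 2))

# By the classical CLT the ratio tends to E|Z|^3 = 2*sqrt(2)/sqrt(pi) ~ 1.60
# for a standard normal Z, so it stays bounded rather than growing with n.
assert 0.5 < min(ratios) and max(ratios) < 3.0
```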
Since the paper of Artzner et al. [1] on coherent risk measures, people have
become more and more interested in sublinear expectations (or more generally,
convex expectations; see Föllmer and Schied [4] and Frittelli and Rosazza
Gianin [5]). By Peng [9], we know that a sublinear expectation $\hat{E}$ can be
represented as the upper expectation of a subset of linear expectations
$\\{E_{\theta}:\theta\in\Theta\\}$, i.e.,
$\hat{E}[\cdot]=\sup\limits_{\theta\in\Theta}E_{\theta}[\cdot]$. In most
cases, this subset is often treated as an uncertain model of probabilities
$\\{P_{\theta}:\theta\in\Theta\\}$ and the notion of sublinear expectation
provides a robust way to measure a risk loss $X$. In fact, nonlinear
expectation theory provides many rich, flexible and elegant tools.
In this paper, we are interested in
$\overline{E}[\cdot]=\sup\limits_{P\in{\cal P}}E_{P}[\cdot],$
where ${\cal P}$ is a set of probability measures. The main aim of this paper
is to obtain moment bounds for IID sequences under sublinear expectations.
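To make the supremum representation concrete, here is a minimal numerical sketch (a toy family assumed purely for illustration, not taken from the paper): $\cal P$ is a handful of centered normal laws with standard deviation between $0.5$ and $1$, and $\overline{E}$ is approximated by Monte Carlo.

```python
import numpy as np

# Toy supremum expectation E_bar[phi(X)] = sup_P E_P[phi(X)] over an assumed
# family {N(0, s^2) : s in [0.5, 1.0]} (illustration only, not from the paper).
rng = np.random.default_rng(1)
z = rng.standard_normal(200000)            # shared standard-normal samples
sigmas = np.linspace(0.5, 1.0, 6)

def E_bar(phi):
    return max(np.mean(phi(s * z)) for s in sigmas)

# Spot-check the defining properties of a sublinear expectation:
assert abs(E_bar(lambda x: 0.0 * x + 3.0) - 3.0) < 1e-12           # constants
assert abs(E_bar(lambda x: 2.0 * x ** 2)
           - 2.0 * E_bar(lambda x: x ** 2)) < 1e-9                 # pos. homog.
lhs = E_bar(lambda x: x ** 2 + np.abs(x))                          # sub-add.:
assert lhs <= E_bar(lambda x: x ** 2) + E_bar(lambda x: np.abs(x)) + 1e-9
```

The sub-additivity check holds by construction: a supremum of sums is at most the sum of the suprema.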
This paper is organized as follows: in section 2, we give some notions and
lemmas that are useful in this paper. In section 3, we give our main results
including the proofs.
## 2 Preliminaries
In this section, we introduce some basic notions and lemmas. For a given set
${\cal P}$ of multiple prior probability measures on $(\Omega,{\cal F}),$ let
${\cal H}$ be the set of random variables on $(\Omega,{\cal F}).$
For any $\xi\in{\cal H},$ we define a pair of so-called maximum-minimum
expectations $(\overline{E},\underline{E})$ by
$\overline{E}[\xi]:=\sup_{P\in{\cal P}}E_{P}[\xi],\ \ \
\underline{E}[\xi]:=\inf_{P\in{\cal P}}E_{P}[\xi].$
Without confusion, here and in the sequel, $E_{P}[\cdot]$ denotes the
classical expectation under probability measure $P$.
Obviously, $\overline{E}$ is a sublinear expectation in the sense of the
following definition.
Definition 2.1 (see Peng [8, 9]). Let $\Omega$ be a given set and let ${\cal
H}$ be a linear space of real valued functions defined on $\Omega$. We assume
that all constants are in ${\cal H}$ and that $X\in{\cal H}$ implies
$|X|\in{\cal H}$. ${\cal H}$ is considered as the space of our "random
variables". A nonlinear expectation $\hat{E}$ on ${\cal H}$ is a functional
$\hat{E}$ : ${\cal H}\mapsto R$ satisfying the following properties: for all
$X$, $Y\in{\cal H}$, we have
(a) Monotonicity: If $X\geq Y$ then $\hat{E}[X]\geq\hat{E}[Y]$.
(b) Constant preserving: $\hat{E}[c]=c$.
The triple $(\Omega,{\cal H},\hat{E})$ is called a nonlinear expectation space
(compare with a probability space $(\Omega,{\cal F},P)$). We are mainly
concerned with sublinear expectation where the expectation $\hat{E}$ satisfies
also
(c) Sub-additivity: $\hat{E}[X]-\hat{E}[Y]\leq\hat{E}[X-Y]$.
(d) Positive homogeneity: $\hat{E}[\lambda X]=\lambda\hat{E}[X]$,
$\forall\lambda\geq 0$.
If only (c) and (d) are satisfied, $\hat{E}$ is called a sublinear functional.
The following representation theorem for sublinear expectations is very useful
(see Peng [9] for the proof).
Lemma 2.1. Let $\hat{E}$ be a sublinear functional defined on $(\Omega,{\cal
H})$, i.e., (c) and (d) hold for $\hat{E}$. Then there exists a family
$\\{E_{\theta}:\theta\in\Theta\\}$ of linear functionals on $(\Omega,{\cal
H})$ such that
$\hat{E}[X]=\max\limits_{\theta\in\Theta}E_{\theta}[X].$ (2)
If (a) and (b) also hold, then the $E_{\theta}$ are linear expectations for
$\theta\in\Theta$. Suppose furthermore that the following assumption holds:
(H) for each sequence $\\{X_{n}\\}_{n=1}^{\infty}\subset{\cal H}$ such that
$X_{n}(\omega)\downarrow 0$ for each $\omega$, we have
$\hat{E}[X_{n}]\downarrow 0$. Then for each $\theta\in\Theta$, there exists a
unique ($\sigma$-additive) probability measure $P_{\theta}$ defined on
$(\Omega,\sigma({\cal H}))$ such that
$E_{\theta}[X]=\int_{\Omega}X(\omega){\rm d}P_{\theta}(\omega),\ \ X\in{\cal
H}.$ (3)
Remark 2.1. Lemma 2.1 shows that in most cases, a sublinear expectation is
indeed a supremum expectation. That is, if $\hat{E}$ is a sublinear expectation on
${\cal H}$ satisfying (H), then there exists a set (say $\hat{\cal P}$) of
probability measures such that
$\hat{E}[\xi]=\sup_{P\in\hat{\cal P}}E_{P}[\xi],\ \ \
-\hat{E}[-\xi]=\inf_{P\in\hat{\cal P}}E_{P}[\xi].$
Therefore, without confusion, we sometimes refer to supremum expectations as
sublinear expectations.
Moreover, a supremum expectation $\overline{E}$ can generate a pair $(V,v)$ of
capacities denoted by
$V(A):=\overline{E}[I_{A}],\ \ \ v(A):=-\overline{E}[-I_{A}],\ \ \forall
A\in{\cal F}.$
It is easy to check that the pair of capacities satisfies
$V(A)+v(A^{c})=1,\ \ \ \forall A\in{\cal F}$
where $A^{c}$ is the complement set of $A$.
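The duality $V(A)+v(A^{c})=1$ is easy to verify on a finite example; the sketch below uses two hypothetical probability measures on a three-point space (an illustration only).

```python
# Finite illustration of the capacity pair (V, v) generated by a supremum
# expectation over two hypothetical measures on a 3-point space.
P1 = {0: 0.2, 1: 0.3, 2: 0.5}
P2 = {0: 0.4, 1: 0.4, 2: 0.2}

def V(A):   # upper capacity: V(A) = max_P P(A), i.e. E_bar[I_A]
    return max(sum(P[w] for w in A) for P in (P1, P2))

def v(A):   # lower capacity: v(A) = min_P P(A) = -E_bar[-I_A]
    return min(sum(P[w] for w in A) for P in (P1, P2))

omega = {0, 1, 2}
for A in [set(), {0}, {1}, {0, 2}, {1, 2}, omega]:
    assert abs(V(A) + v(omega - A) - 1.0) < 1e-12   # V(A) + v(A^c) = 1
    assert v(A) <= V(A) + 1e-12                      # lower <= upper
```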
The following is the notion of IID random variables under sublinear
expectations introduced by Peng [7-9].
Definition 2.2 (IID under sublinear expectations). Independence: Suppose that
$Y_{1},Y_{2},\cdots,Y_{n}$ is a sequence of random variables such that
$Y_{i}\in{\cal H}$. Random variable $Y_{n}$ is said to be independent of
$X:=(Y_{1},\cdots,Y_{n-1})$ under $\overline{E}$, if for each measurable
function $\varphi$ on $R^{n}$ with $\varphi(X,Y_{n})\in{\cal H}$ and
$\varphi(x,Y_{n})\in{\cal H}$ for each $x\in{R}^{n-1},$ we have
$\overline{E}[\varphi(X,Y_{n})]=\overline{E}[\overline{\varphi}(X)],$
where $\overline{\varphi}(x):=\overline{E}[\varphi(x,Y_{n})]$ and
$\overline{\varphi}(X)\in{\cal H}$.
Identical distribution: Random variables $X$ and $Y$ are said to be
identically distributed, denoted by $X\sim Y$, if for each measurable function
$\varphi$ such that $\varphi(X),\;\varphi(Y)\in{\cal H}$,
$\overline{E}[\varphi(X)]=\overline{E}[\varphi(Y)].$
IID random variables: A sequence of random variables
$\\{X_{i}\\}_{i=1}^{\infty}$ is said to be IID, if $X_{i}\sim X_{1}$ and
$X_{i+1}$ is independent of $Y:=(X_{1},\cdots,X_{i})$ for each $i\geq 1.$
Definition 2.3 (Pairwise independence, see Marinacci [6]). Random variable $X$
is said to be pairwise independent of $Y$ under capacity $\hat{V},$ if for all
sets $D$ and $G\in{\cal B}(R),$
$\hat{V}(X\in D,Y\in G)=\hat{V}(X\in D)\hat{V}(Y\in G).$
The following lemma shows the relation between Peng’s independence and
pairwise independence.
Lemma 2.2. Suppose that $X,Y\in{\cal H}$ are two random variables.
$\overline{E}$ is a sublinear expectation and $(V,v)$ is the pair of
capacities generated by $\overline{E}$. If random variable $X$ is independent
of $Y$ under $\overline{E}$, then $X$ also is pairwise independent of $Y$
under capacities $V$ and $v$.
Proof. If we choose $\varphi(x,y)=I_{D}(x)I_{G}(y)$ for $\overline{E}$, by the
definition of Peng’s independence, it is easy to obtain
$V(X\in D,Y\in G)=V(X\in D)V(Y\in G).$
Similarly, if we choose $\varphi(x,y)=-I_{D}(x)I_{G}(y)$ for $\overline{E}$,
it is easy to obtain
$v(X\in D,Y\in G)=v(X\in D)v(Y\in G).$
The proof is complete.
Let $C_{b}(R^{n})$ denote the space of bounded and continuous functions, let
$C_{l,Lip}(R^{n})$ denote the space of functions $\varphi$ satisfying
$|\varphi(x)-\varphi(y)|\leq C(1+|x|^{m}+|y|^{m})|x-y|\ \ \ \forall x,y\in
R^{n},$
for some $C>0$, $m\in N$ depending on $\varphi$ and let $C_{b,Lip}(R^{n})$
denote the space of bounded functions $\varphi$ satisfying
$|\varphi(x)-\varphi(y)|\leq C|x-y|\ \ \ \forall x,y\in R^{n},$
for some $C>0$ depending on $\varphi$.
From now on, we consider the following sublinear expectation space
$(\Omega,{\cal H},\overline{E})$: if $X_{1},\cdots,X_{n}\in{\cal H}$, then
$\varphi(X_{1},\cdots,X_{n})\in{\cal H}$ for each $\varphi\in
C_{l,Lip}(R^{n})$.
Definition 2.4 ($G$-normal distribution, see Definition 10 in Peng [7]). A
random variable $\xi\in{\cal H}$ under sublinear expectation $\widetilde{E}$
with $\overline{\sigma}^{2}=\widetilde{E}[\xi^{2}]$,
$\underline{\sigma}^{2}=-\widetilde{E}[-\xi^{2}]$ is said to be $G$-normal
distributed, denoted by ${\cal
N}(0;[{\underline{\sigma}}^{2},{\overline{\sigma}}^{2}])$, if for each
function $\varphi\in C_{l,Lip}(R)$, the function
$u(t,x):=\widetilde{E}[\varphi(x+\sqrt{t}\xi)]$, $(t,x)\in[0,\infty)\times R$,
is the unique viscosity solution of the PDE:
$\partial_{t}u-G(\partial^{2}_{xx}u)=0,\ \ u(0,x)=\varphi(x),$
where
$G(x):=\frac{1}{2}(\overline{\sigma}^{2}x^{+}-\underline{\sigma}^{2}x^{-})$
and $x^{+}:=\max\\{x,0\\}$, $x^{-}:=(-x)^{+}$.
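The defining PDE can be solved numerically; the following sketch discretizes $\partial_{t}u=G(\partial^{2}_{xx}u)$ with an explicit finite-difference scheme (the grid sizes and the values $\overline{\sigma}^{2}=1$, $\underline{\sigma}^{2}=0.25$ are illustrative assumptions). For a convex initial condition such as $\varphi(x)=x^{+}$ one has $\partial^{2}_{xx}u\geq 0$, so the $G$-heat equation reduces to the classical heat equation with the larger variance, which gives the check value $u(1,0)=E[(\overline{\sigma}Z)^{+}]=1/\sqrt{2\pi}$.

```python
import numpy as np

# Explicit finite-difference sketch of the G-heat equation
#   du/dt = G(d^2u/dx^2),  G(q) = (sbar2*q^+ - slow2*q^-)/2,  u(0,x) = phi(x).
# Parameters (sbar2 = 1, slow2 = 0.25, grid, phi) are illustrative assumptions.
sbar2, slow2 = 1.0, 0.25
x = np.linspace(-6.0, 6.0, 241)
dx = x[1] - x[0]
dt = 0.4 * dx ** 2 / sbar2                  # CFL-stable step for the worst case
u = np.maximum(x, 0.0)                      # phi(x) = x^+, a convex test function

t = 0.0
while t < 1.0:
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
    G = 0.5 * (sbar2 * np.maximum(uxx, 0.0) - slow2 * np.maximum(-uxx, 0.0))
    u = u + dt * G                          # boundary values stay fixed
    t += dt

# Convex phi => u_xx >= 0, so the PDE degenerates to the classical heat
# equation with variance sbar2, giving u(1,0) = E[Z^+] = 1/sqrt(2*pi).
assert abs(u[len(x) // 2] - 1.0 / np.sqrt(2.0 * np.pi)) < 0.02
```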
With the notion of IID under sublinear expectations, Peng proved a central
limit theorem under sublinear expectations (see Theorem 5.1 in Peng [8]).
Lemma 2.3 (Central limit theorem under sublinear expectations). Let
$\\{X_{i}\\}_{i=1}^{\infty}$ be a sequence of IID random variables. We further
assume that $\overline{E}[X_{1}]=\overline{E}[-X_{1}]=0.$ Then the sequence
$\\{\overline{S}_{n}\\}_{n=1}^{\infty}$ defined by
$\overline{S}_{n}:=\frac{1}{\sqrt{n}}\sum\limits_{i=1}^{n}X_{i}$ converges in
law to $\xi$, i.e.,
$\lim\limits_{n\rightarrow\infty}\overline{E}[\varphi(\overline{S}_{n})]=\widetilde{E}[\varphi(\xi)],$
for any continuous function $\varphi$ satisfying the linear growth condition
(i.e., $|\varphi(x)|\leq C(1+|x|)$ for some $C>0$ depending on $\varphi$),
where $\xi$ is $G$-normal distributed.
## 3 Main results and proofs
Theorem 3.1. Let a random sequence $\\{X_{n}\\}_{n=1}^{\infty}$ be IID under
$\overline{E}$. Denote $S_{n}:=\sum\limits_{i=1}^{n}X_{i}$. Assume that
$\overline{E}[X_{1}]=\overline{E}[-X_{1}]=0$. Then for each $r>2$, there
exists a positive constant $K_{r}$ not depending on $n$ such that for all
$n\in N$,
$\sup\limits_{m\geq 0}\overline{E}[|S_{m+n}-S_{m}|^{r}]\leq
K_{r}n^{\frac{r}{2}}.$
Proof. Let $r=\theta+\gamma$, where $\theta\in N,\theta\geq 2$ and
$\gamma\in(0,1]$. For simplicity, write
$S_{m,n}:=S_{m+n}-S_{m},$ $a_{n}:=\sup\limits_{m\geq
0}\overline{E}[|S_{m,n}|^{r}].$
Firstly, we shall show that there exists a positive constant $C_{r}$ not
depending on $n$ such that for all $n\in N$,
$\overline{E}[|S_{m,2n}|^{r}]\leq 2a_{n}+C_{r}a_{n}^{1-\gamma}n^{\frac{\gamma
r}{2}}.$ (4)
In order to prove (4), we show the following inequalities for all $n\in N$:
$\overline{E}[|S_{m,2n}|^{r}]\leq
2a_{n}+2^{\theta+1}(\overline{E}[|S_{m,n}|^{\gamma}|S_{m+n,n}|^{\theta}]+\overline{E}[|S_{m,n}|^{\theta}|S_{m+n,n}|^{\gamma}]),$ (5)
$\overline{E}[|S_{m,n}|^{\gamma}|S_{m+n,n}|^{\theta}]\leq
a_{n}^{1-\gamma}(\overline{E}[|S_{m,n}||S_{m+n,n}|^{\theta-1+\gamma}])^{\gamma},$ (6)
$\overline{E}[|S_{m,n}|^{\theta}|S_{m+n,n}|^{\gamma}]\leq
a_{n}^{1-\gamma}(\overline{E}[|S_{m,n}|^{\theta-1+\gamma}|S_{m+n,n}|])^{\gamma},$ ($6^{{}^{\prime}}$)
$\overline{E}[|S_{m,n}||S_{m+n,n}|^{\theta-1+\gamma}]\leq
D_{r}n^{\frac{r}{2}},$ (7)
$\overline{E}[|S_{m,n}|^{\theta-1+\gamma}|S_{m+n,n}|]\leq
D_{r}n^{\frac{r}{2}},$ ($7^{{}^{\prime}}$)
where $D_{r}$ is a positive constant not depending on $n$.
To prove (5). Elementary estimates yield the following inequality (*):
$\begin{array}[]{lcl}&&|S_{m,2n}|^{r}=|S_{m,n}+S_{m+n,n}|^{\theta+\gamma}\leq(|S_{m,n}|+|S_{m+n,n}|)^{\theta}(|S_{m,n}|+|S_{m+n,n}|)^{\gamma}\\\
&\leq&\sum_{i=0}^{\theta}C_{\theta}^{i}|S_{m,n}|^{\theta-i}|S_{m+n,n}|^{i}(|S_{m,n}|^{\gamma}+|S_{m+n,n}|^{\gamma})\\\
&\leq&|S_{m,n}|^{\theta+\gamma}+|S_{m+n,n}|^{\theta+\gamma}+2\sum_{i=0}^{\theta}C_{\theta}^{i}(|S_{m,n}|^{\gamma}|S_{m+n,n}|^{\theta}+|S_{m,n}|^{\theta}|S_{m+n,n}|^{\gamma})\\\
&\leq&|S_{m,n}|^{\theta+\gamma}+|S_{m+n,n}|^{\theta+\gamma}+2^{\theta+1}(|S_{m,n}|^{\gamma}|S_{m+n,n}|^{\theta}+|S_{m,n}|^{\theta}|S_{m+n,n}|^{\gamma}).\end{array}$
Since $\\{X_{n}\\}_{n=1}^{\infty}$ is an IID random sequence, by the definition
of IID under sublinear expectations,
$a_{n}=\sup\limits_{m\geq 0}\overline{E}[|S_{m,n}|^{r}]=\sup\limits_{m\geq
0}\overline{E}[|S_{m+n,n}|^{r}].$
Taking $\overline{E}[\cdot]$ on both sides of (*), we have
$\overline{E}[|S_{m,2n}|^{r}]\leq
2a_{n}+2^{\theta+1}(\overline{E}[|S_{m,n}|^{\gamma}|S_{m+n,n}|^{\theta}]+\overline{E}[|S_{m,n}|^{\theta}|S_{m+n,n}|^{\gamma}]).$
Hence, (5) holds.
Since the proof of ($6^{{}^{\prime}}$) is very similar to that of (6), we only
prove (6). Without loss of generality, we assume $\gamma\in(0,1)$. By Hölder’s
inequality,
$\begin{array}[]{lcl}\overline{E}[|S_{m,n}|^{\gamma}|S_{m+n,n}|^{\theta}]&\leq&(\overline{E}[|S_{m,n}||S_{m+n,n}|^{\theta-1+\gamma}])^{\gamma}(\overline{E}[|S_{m+n,n}|^{\frac{\theta-\gamma(\theta-1+\gamma)}{1-\gamma}}])^{1-\gamma}\\\
&\leq&a_{n}^{1-\gamma}(\overline{E}[|S_{m,n}||S_{m+n,n}|^{\theta-1+\gamma}])^{\gamma}.\end{array}$
This proves (6).
To prove (7). By the definition of IID under sublinear expectations and
Schwarz’s inequality, we have
$\overline{E}[|S_{m,n}||S_{m+n,n}|^{\theta-1+\gamma}]=\overline{E}[|S_{m,n}|]\overline{E}[|S_{m+n,n}|^{\theta-1+\gamma}]\leq(\overline{E}[|S_{m,n}|^{2}])^{\frac{1}{2}}\overline{E}[|S_{m+n,n}|^{\theta-1+\gamma}].$ (8)
Next we prove
$\overline{E}[S_{m,n}^{2}]\leq n\overline{E}[X_{1}^{2}],\ \ \forall m\geq 0.$
Indeed, using the definition of IID under sublinear expectations again, we
have
$\begin{array}[]{lcl}&&\overline{E}[S_{m,n}^{2}]=\overline{E}[(S_{m,n-1}+X_{m+n})^{2}]=\overline{E}[S_{m,n-1}^{2}+2S_{m,n-1}X_{m+n}+X_{m+n}^{2}]\\\
&\leq&\overline{E}[S_{m,n-1}^{2}]+\overline{E}[X_{m+n}^{2}]\leq\cdots\leq n\overline{E}[X_{1}^{2}].\end{array}$
So
$\overline{E}[S_{m,n}^{2}]\leq n\overline{E}[X_{1}^{2}]$ (9)
and
$\overline{E}[S_{m+n,n}^{2}]\leq n\overline{E}[X_{1}^{2}]$ (10)
hold. On the other hand, by Hölder’s inequality,
$\overline{E}[|S_{m+n,n}|^{1+\gamma}]\leq(\overline{E}[S_{m+n,n}^{2}])^{\frac{1+\gamma}{2}}\leq
n^{\frac{1+\gamma}{2}}(\overline{E}[X_{1}^{2}])^{\frac{1+\gamma}{2}}.$ (11)
If $\theta=2$, (7) follows from (8), (9), (10) and (11). If $\theta>2$, we
inductively assume
$\overline{E}[|S_{m+n,n}|^{\theta-1+\gamma}]\leq
M_{r}n^{\frac{\theta-1+\gamma}{2}},$ (12)
where $M_{r}$ is a positive constant not depending on $n$. Then (8), (9) and
(12) yield (7). In a similar manner, we can prove that ($7^{{}^{\prime}}$)
holds.
From (5)-($7^{{}^{\prime}}$), it is easy to check that (4) holds. From (4), we
can obtain that for all $n\in N$,
$a_{2n}\leq 2a_{n}+C_{r}a_{n}^{1-\gamma}n^{\frac{\gamma r}{2}}.$
By induction, there exists a positive constant $C_{r}^{{}^{\prime}}$ not
depending on $n$ such that $a_{n}\leq C_{r}^{{}^{\prime}}n^{\frac{r}{2}}$ for
all $n\in\\{2^{k}:k\in N\cup\\{0\\}\\}$.
If $n$ is any positive integer, it can be written in the form
$n=2^{k}+v_{1}2^{k-1}+\cdots+v_{k}\leq 2^{k}+2^{k-1}+\cdots+1$
where $2^{k}\leq n<2^{k+1}$ and each $v_{j}$ is either $0$ or $1$. Then
$S_{m,n}$ can be written as the sum of $k+1$ groups of sums containing
$2^{k},v_{1}2^{k-1},\cdots$ terms and using Minkowski’s inequality,
$\begin{array}[]{lcl}&&a_{n}\leq\sup\limits_{m\geq
0}[(\overline{E}[|S_{m+v_{k}+\cdots+v_{1}2^{k-1},2^{k}}|^{r}])^{\frac{1}{r}}+\cdots+(\overline{E}[|S_{m,v_{k}}|^{r}])^{\frac{1}{r}}]^{r}\\\
&\leq&C_{r}^{{}^{\prime}}[2^{\frac{k}{2}}+\cdots+1]^{r}=C_{r}^{{}^{\prime}}[\frac{2^{\frac{k+1}{2}}-1}{2^{\frac{1}{2}}-1}]^{r}\leq
K_{r}n^{\frac{r}{2}}.\end{array}$
The proof is complete.
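The bound of Theorem 3.1 can be spot-checked numerically when the model ambiguity is a variance interval: the sketch below approximates the supremum over probability models by maximizing over a few deterministic variance paths (a crude stand-in for the full supremum; the step distributions and parameter values are assumptions made for illustration).

```python
import numpy as np

# Toy check of Theorem 3.1 with variance ambiguity sigma in [0.5, 1.0]:
# the supremum over probability models is crudely approximated by maximizing
# E[|S_n|^r] over a few deterministic variance paths (an assumption made for
# illustration; the true supremum ranges over all adapted models).
rng = np.random.default_rng(2)
r, n = 3.0, 256
z = rng.standard_normal((10000, n))

def moment(sig_path):                      # E[|S_n|^r] under one variance path
    return np.mean(np.abs((sig_path * z).sum(axis=1)) ** r)

paths = [
    np.full(n, 1.0),                       # maximal variance throughout
    np.full(n, 0.5),                       # minimal variance throughout
    np.where(np.arange(n) % 2 == 0, 1.0, 0.5),  # alternating
]
worst = max(moment(p) for p in paths)
# Theorem 3.1: the worst-case moment grows like n^{r/2}; the ratio is bounded.
assert worst / n ** (r / 2) < 3.0
```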
Remark 3.1. (i) From the proof of Theorem 3.1, we can check that the
assumption of IID under $\overline{E}$ can be replaced by the weaker
assumption that $\\{X_{n}\\}_{n=1}^{\infty}$ is an IID random sequence under
$\overline{E}$ with respect to the following functions
$\varphi_{1}(x)=x;\ \ \ \varphi_{2}(x)=-x;$
$\varphi_{3}(x_{1},\cdots,x_{n})=|x_{1}+\cdots+x_{n}|^{r},\ \ \ n=1,2,\cdots,\
\ \ r\geq 2;$
$\varphi_{4}(x_{1},\cdots,x_{m},x_{m+1},\cdots,x_{m+n})=|x_{1}+\cdots+x_{m}||x_{m+1}+\cdots+x_{m+n}|^{p},\\\
m,n=1,2,\cdots,\ \ \ p>1;$
and
$\varphi_{5}(x_{1},\cdots,x_{m},x_{m+1},\cdots,x_{m+n})=|x_{1}+\cdots+x_{m}|^{p}|x_{m+1}+\cdots+x_{m+n}|,\\\
m,n=1,2,\cdots,\ \ \ p>1.$
(ii) A close inspection of the proof of Theorem 3.1 reveals that the
definition of IID under sublinear expectations plays an important role in the
proof. The proof of Theorem 3.1 is very similar to the classical arguments,
e.g., in Theorem 1 of Birkel [2].
Applying Theorem 3.1, we can obtain the following result:
Theorem 3.2. Let $\\{X_{i}\\}_{i=1}^{\infty}$ be a sequence of IID random
variables. We further assume that
$\overline{E}[X_{1}]=\overline{E}[-X_{1}]=0.$ Then the sequence
$\\{\overline{S}_{n}\\}_{n=1}^{\infty}$ defined by
$\overline{S}_{n}:=\frac{1}{\sqrt{n}}\sum\limits_{i=1}^{n}X_{i}$ converges in
law to $\xi$, i.e.,
$\lim\limits_{n\rightarrow\infty}\overline{E}[\varphi(\overline{S}_{n})]=\widetilde{E}[\varphi(\xi)],$ (13)
for any continuous function $\varphi$ satisfying the growth condition
$|\varphi(x)|\leq C(1+|x|^{p})$ for some $C>0$, $p\geq 1$ depending on
$\varphi$, where $\xi$ is $G$-normal distributed.
Proof. Indeed, we only need to prove that (13) holds for the case $p>1$. Let
$\varphi$ be an arbitrary continuous function with growth condition
$|\varphi(x)|\leq C(1+|x|^{p})$ ($p>1$). For each $N>0$, we can find two
continuous functions $\varphi_{1}$, $\varphi_{2}$ such that
$\varphi=\varphi_{1}+\varphi_{2}$, where $\varphi_{1}$ has a compact support
and $\varphi_{2}(x)=0$ for $|x|\leq N$, and $|\varphi_{2}(x)|\leq|\varphi(x)|$
for all $x$. It is clear that $\varphi_{1}\in C_{b}(R)$ and
$|\varphi_{2}(x)|\leq\frac{2C(1+|x|^{p+1})}{N},\ \ \ \hbox{for}\ \ x\in R.$
Thus
$\begin{array}[]{lcl}|\overline{E}[\varphi(\overline{S}_{n})]-\widetilde{E}[\varphi(\xi)]|&=&|\overline{E}[\varphi_{1}(\overline{S}_{n})+\varphi_{2}(\overline{S}_{n})]-\widetilde{E}[\varphi_{1}(\xi)+\varphi_{2}(\xi)]|\\\
&\leq&|\overline{E}[\varphi_{1}(\overline{S}_{n})]-\widetilde{E}[\varphi_{1}(\xi)]|+|\overline{E}[\varphi_{2}(\overline{S}_{n})]-\widetilde{E}[\varphi_{2}(\xi)]|\\\
&\leq&|\overline{E}[\varphi_{1}(\overline{S}_{n})]-\widetilde{E}[\varphi_{1}(\xi)]|+\frac{2C}{N}(2+\overline{E}[|\overline{S}_{n}|^{p+1}]+\widetilde{E}[|\xi|^{p+1}]).\end{array}$
Applying Theorem 3.1, we have
$\sup\limits_{n}\overline{E}[|\overline{S}_{n}|^{p+1}]<\infty$. So the above
inequality can be rewritten as
$|\overline{E}[\varphi(\overline{S}_{n})]-\widetilde{E}[\varphi(\xi)]|\leq|\overline{E}[\varphi_{1}(\overline{S}_{n})]-\widetilde{E}[\varphi_{1}(\xi)]|+\frac{\overline{C}}{N},$
where
$\overline{C}=2C(2+\sup\limits_{n}\overline{E}[|\overline{S}_{n}|^{p+1}]+\widetilde{E}[|\xi|^{p+1}])$.
From Lemma 2.3, we know that (13) holds for any $\varphi\in C_{b}(R)$ with a
compact support. Thus, we have
$\limsup\limits_{n\rightarrow\infty}|\overline{E}[\varphi(\overline{S}_{n})]-\widetilde{E}[\varphi(\xi)]|\leq\frac{\overline{C}}{N}$.
Since $N$ can be arbitrarily large, $\overline{E}[\varphi(\overline{S}_{n})]$
must converge to $\widetilde{E}[\varphi(\xi)]$. The proof of Theorem 3.2 is
complete.
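As a numerical illustration of Theorem 3.2 (not a proof), take the convex test function $\varphi(x)=x^{4}$, which has polynomial growth with $p=4$ and so lies outside the linear-growth class of Lemma 2.3. For convex $\varphi$ the worst-case model in a variance interval with $\overline{\sigma}=1$ is maximal variance throughout (a standard feature of $G$-expectations), so $\overline{E}[\varphi(\overline{S}_{n})]$ should approach the $G$-normal value $E[Z^{4}]=3$:

```python
import numpy as np

# Illustration of Theorem 3.2 with phi(x) = x^4 (growth p = 4, beyond the
# linear-growth class of Lemma 2.3). With sigma_bar = 1 and convex phi, the
# worst-case model is maximal variance throughout, so the sublinear CLT limit
# is the classical value E[Z^4] = 3 for a standard normal Z.
rng = np.random.default_rng(3)
for n in (50, 400):
    steps = rng.choice([-1.0, 1.0], size=(20000, n))   # maximal-variance model
    sbar = steps.sum(axis=1) / np.sqrt(n)
    assert abs(np.mean(sbar ** 4) - 3.0) < 0.35        # approaches E[Z^4] = 3
```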
## References
* [1] Artzner P, Delbaen F, Eber J M, Heath D. Coherent measures of risk. Math. Finance, 1999, 9 (3): 203-228
* [2] Birkel T. Moment bounds for associated sequences. Ann. Probab, 1988, 16 (3): 1184-1193
* [3] Doob J L. Stochastic Processes. Wiley, New York, 1953
* [4] Föllmer H, Schied A. Convex measures of risk and trading constraints. Finance and Stochastics, 2002, 6 (4): 429-447
* (5) Frittelli M, Rosazza Gianin E. Dynamic convex risk measures. In: G. Szegö (Ed.), New Risk Measures for the 21st Century, John Wiley & Sons, pp. 227-248, 2004
* [6] Marinacci M. Limit laws for non-additive probabilities and their frequentist interpretation. J. Econom. Theory, 1999, 84: 145-195
* (7) Peng S G. Law of large numbers and central limit theorem under nonlinear expectations. arXiv:math.PR/0702358v1, 2007
* (8) Peng S G. A new central limit theorem under sublinear expectations. arXiv:0803.2656v1, 2008
* [9] Peng S G. Survey on normal distributions, central limit theorem, Brownian motion and the related stochastic calculus under sublinear expectations. Sci China Series A, 2009, 52 (7): 1391-1411
* [10] Serfling R J. Convergence properties of $S_{n}$ under moment restrictions. Ann. Math. Statist, 1970, 41: 1235-1248
* [11] Stout W F. Almost Sure Convergence. Academic, New York, 1974
|
arxiv-papers
| 2011-04-28T05:00:07 |
2024-09-04T02:49:18.447700
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Feng Hu",
"submitter": "Feng Hu Dr.",
"url": "https://arxiv.org/abs/1104.5295"
}
|
1104.5296
|
# General laws of large numbers under sublinear expectations††thanks: This
work has been supported in part by the National Basic Research Program of
China (973 Program) (Grant No. 2007CB814901) (Financial Risk). First version:
Oct. 20, 2010. This is the third version.
Feng Hu
School of Mathematics
Shandong University
Jinan 250100, China E-mail address: hufengqf@163.com (F. Hu).
Abstract
In this paper, under some weaker conditions, we give three laws of large
numbers under sublinear expectations (capacities), which extend Peng’s law of
large numbers under sublinear expectations in [8] and Chen’s strong law of
large numbers for capacities in [1]. It turns out that these theorems are
natural extensions of the classical strong (weak) laws of large numbers to the
case where probability measures are no longer additive.
Keywords sublinear expectation, capacity, law of large numbers, maximal
distribution. 2000 MR Subject Classification 60H10, 60G48
## 1 Introduction
The classical strong (weak) laws of large numbers (strong (weak) LLN) as
fundamental limit theorems in probability theory play a fruitful role in the
development of probability theory and its applications. However, these kinds
of limit theorems have always considered additive probabilities and additive
expectations. In fact, the additivity of probabilities and expectations has
been abandoned in some areas because many uncertain phenomena cannot be well
modelled by using additive probabilities and additive expectations. Motivated
by some problems in mathematical economics, statistics and quantum mechanics,
a number of papers have used non-additive probabilities (called capacities)
and nonlinear expectations (for example Choquet integral/expectation,
$g$-expectation) to describe and interpret the phenomena. Recently, motivated
by the risk measures, super-hedge pricing and model uncertainty in finance,
Peng [5-9] initiated the notion of independent and identically distributed
(IID) random variables under sublinear expectations. Furthermore, he proved a
law of large numbers (LLN) and a central limit theorem (CLT) under sublinear
expectations. In [1], Chen presented a strong law of large numbers for
capacities induced by sublinear expectations with the notion of IID random
variables initiated by Peng.
The purpose of this paper is to investigate one of the very important
fundamental results in the theory of Peng’s sublinear expectations: the law of
large numbers. All of the results on laws of large numbers in [1] and [8]
require that the sequence of random variables is independent and identically
distributed. In this paper we intend to obtain three laws of large numbers
without the requirement of identical distribution. Under some weaker
conditions, we prove three laws of large numbers under Peng’s sublinear
expectations, which extend Peng’s law of large numbers under sublinear
expectations in [8] and Chen’s strong law of large numbers for capacities in
[1].
This paper is organized as follows: in Section 2, we recall some notions and
lemmas under sublinear expectations. In Section 3, we give our main results
including the proofs.
## 2 Notions and Lemmas
In this section, we present some preliminaries in the theory of sublinear
expectations.
DEFINITION 2.1 (see [5-9]). Let $\Omega$ be a given set and let ${\cal H}$ be
a linear space of real valued functions defined on $\Omega$. We assume that
all constants are in ${\cal H}$ and that $X\in{\cal H}$ implies $|X|\in{\cal
H}$. ${\cal H}$ is considered as the space of our "random variables". A
nonlinear expectation $\mathbb{E}$ on ${\cal H}$ is a functional $\mathbb{E}$
: ${\cal H}\mapsto\mathbb{R}$ satisfying the following properties: for all
$X$, $Y\in{\cal H}$, we have
(a) Monotonicity: If $X\geq Y$ then $\mathbb{E}[X]\geq\mathbb{E}[Y]$.
(b) Constant preserving: $\mathbb{E}[c]=c$.
The triple $(\Omega,{\cal H},\mathbb{E})$ is called a nonlinear expectation
space (compare with a probability space $(\Omega,{\cal F},P)$). We are mainly
concerned with sublinear expectation where the expectation $\mathbb{E}$
satisfies also
(c) Sub-additivity: $\mathbb{E}[X]-\mathbb{E}[Y]\leq\mathbb{E}[X-Y]$.
(d) Positive homogeneity: $\mathbb{E}[\lambda X]=\lambda\mathbb{E}[X]$,
$\forall\lambda\geq 0$.
If only (c) and (d) are satisfied, $\mathbb{E}$ is called a sublinear
functional.
The following representation theorem for sublinear expectations is very useful
(see Peng [8, 9] for the proof).
LEMMA 2.1. Let $\mathbb{E}$ be a sublinear functional defined on
$(\Omega,{\cal H})$, i.e., (c) and (d) hold for $\mathbb{E}$. Then there
exists a family $\\{E_{\theta}:\theta\in\Theta\\}$ of linear functionals on
$(\Omega,{\cal H})$ such that
$\mathbb{E}[X]=\max\limits_{\theta\in\Theta}E_{\theta}[X].$
If (a) and (b) also hold, then the $E_{\theta}$ are linear expectations for
$\theta\in\Theta$. Suppose furthermore that the following assumption holds:
(H) for each sequence $\\{X_{n}\\}_{n=1}^{\infty}\subset{\cal H}$ such that
$X_{n}(\omega)\downarrow 0$ for each $\omega$, we have
$\mathbb{E}[X_{n}]\downarrow 0$. Then for each $\theta\in\Theta$, there
exists a unique ($\sigma$-additive) probability measure $P_{\theta}$ defined
on $(\Omega,\sigma({\cal H}))$ such that
$E_{\theta}[X]=\int_{\Omega}X(\omega){\rm d}P_{\theta}(\omega),\ \ X\in{\cal
H}.$
REMARK 2.1. Lemma 2.1 shows that under $(H)$, indeed, a stronger
representation holds. That is, if $\mathbb{E}$ is a sublinear expectation on
${\cal H}$ satisfying (H), then there exists a set (say $\hat{\cal P}$) of
probability measures such that
$\mathbb{E}[\xi]=\sup_{P\in\hat{\cal P}}E_{P}[\xi],\ \ \
-\mathbb{E}[-\xi]=\inf_{P\in\hat{\cal P}}E_{P}[\xi].$
Therefore, without risk of confusion, we sometimes refer to supremum expectations as
sublinear expectations.
Given a sublinear expectation $\mathbb{E}$, denote its conjugate
expectation ${\cal E}$ by
${\cal E}[X]:=-\mathbb{E}[-X],\quad\forall X\in\mathcal{H}.$
Obviously, for all $X\in\mathcal{H},$ ${\cal E}[X]\leq\mathbb{E}[X].$
Furthermore, define the pair $(\mathbb{V},v)$ of capacities by
$\mathbb{V}(A):=\mathbb{E}[I_{A}],\quad v(A):={\cal E}[I_{A}],\quad\forall
A\in{\cal F}.$
It is easy to check that
$\mathbb{V}(A)+v(A^{c})=1,\quad\forall A\in{\cal F}$
where $A^{c}$ is the complement set of $A.$
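The representation in Lemma 2.1 and the capacity pair above are easy to experiment with numerically. The following sketch uses a hypothetical two-measure family on a three-point space (chosen purely for illustration): it represents a sublinear expectation as a maximum of linear expectations and checks sub-additivity, positive homogeneity, the domination ${\cal E}\leq\mathbb{E}$, and the duality $\mathbb{V}(A)+v(A^{c})=1$.

```python
# Toy finite sample space and a family of probability measures (an illustrative
# assumption; any finite family of measures behaves the same way).
omega = [0, 1, 2]
measures = [
    {0: 0.2, 1: 0.3, 2: 0.5},
    {0: 0.5, 1: 0.3, 2: 0.2},
]

def E_lin(X, P):
    """Linear expectation of X under one measure P."""
    return sum(P[w] * X[w] for w in omega)

def E_sub(X):
    """Sublinear expectation: maximum over the family (Lemma 2.1)."""
    return max(E_lin(X, P) for P in measures)

def E_conj(X):
    """Conjugate expectation E[X] := -E_sub[-X]."""
    return -E_sub({w: -X[w] for w in omega})

def V_upper(A):
    """Upper capacity V(A) = E_sub[1_A]."""
    return E_sub({w: 1.0 if w in A else 0.0 for w in omega})

def v_lower(A):
    """Lower capacity v(A) = E_conj[1_A]."""
    return E_conj({w: 1.0 if w in A else 0.0 for w in omega})

X = {0: 1.0, 1: -2.0, 2: 3.0}
Y = {0: 0.5, 1: -1.0, 2: 2.0}

# (c) sub-additivity in the stated form E[X] - E[Y] <= E[X - Y]
diff = {w: X[w] - Y[w] for w in omega}
assert E_sub(X) - E_sub(Y) <= E_sub(diff) + 1e-12

# (d) positive homogeneity
lam = 2.5
assert abs(E_sub({w: lam * X[w] for w in omega}) - lam * E_sub(X)) < 1e-12

# conjugate expectation is dominated by the sublinear expectation
assert E_conj(X) <= E_sub(X) + 1e-12

# capacity duality V(A) + v(A^c) = 1
A = {0, 2}
Ac = set(omega) - A
assert abs(V_upper(A) + v_lower(Ac) - 1.0) < 1e-12
```

All the checks are exact up to floating-point rounding, since each expectation is a finite maximum of finite sums.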
DEFINITION 2.2. A set function $V$: ${\cal F}\rightarrow[0,1]$ is called an upper
continuous capacity if it satisfies
(1) $V(\emptyset)=0,V(\Omega)=1$.
(2) $V(A)\leq V(B),$ whenever $A\subset B$ and $A,B\in{\cal F}$.
(3) $V(A_{n})\downarrow V(A)$, if $A_{n}\downarrow A$, where $A_{n},A\in{\cal
F}$.
ASSUMPTION A. Throughout this paper, we assume that $\mathbb{E}$ is a
sublinear expectation and that $\mathbb{V}$ is the upper continuous capacity generated
by the sublinear expectation $\mathbb{E}$.
The following is the notion of IID random variables under sublinear
expectations introduced by Peng [5-9].
DEFINITION 2.3. Independence: Suppose that $Y_{1},Y_{2},\cdots,Y_{n}$ is a
sequence of random variables such that $Y_{i}\in\mathcal{H}.$ The random variable
$Y_{n}$ is said to be independent of $X:=(Y_{1},\cdots,Y_{n-1})$ under
$\mathbb{E}$, if for each measurable function $\varphi$ on $\mathbb{R}^{n}$ with
$\varphi(X,Y_{n})\in\mathcal{H}$ and $\varphi(x,Y_{n})\in\mathcal{H}$ for each
$x\in\mathbb{R}^{n-1},$ we have
$\mathbb{E}[\varphi(X,Y_{n})]=\mathbb{E}[\overline{\varphi}(X)],$
where $\overline{\varphi}(x):=\mathbb{E}[\varphi(x,Y_{n})]$ and
$\overline{\varphi}(X)\in\mathcal{H}$.
Identical distribution: Random variables $X$ and $Y$ are said to be
identically distributed, denoted by $X\overset{d}{=}Y$, if for each $\varphi$
such that $\varphi(X),\;\varphi(Y)\in\mathcal{H}$,
$\mathbb{E}[\varphi(X)]=\mathbb{E}[\varphi(Y)].$
Sequence of IID random variables: A sequence
$\\{X_{i}\\}_{i=1}^{\infty}$ of random variables is said to be IID, if
$X_{i}\overset{d}{=}X_{1}$ and $X_{i+1}$ is independent of
$Y:=(X_{1},\cdots,X_{i})$ for each $i\geq 1.$
The following lemma shows the relation between Peng’s independence and
pairwise independence in Maccheroni and Marinacci [3] and Marinacci [4].
LEMMA 2.2 (see Chen [1]). Suppose that $X,Y\in\mathcal{H}$ are two random
variables, $\mathbb{E}$ is a sublinear expectation and $(\mathbb{V},v)$ is
the pair of capacities generated by $\mathbb{E}.$ If the random variable $X$ is
independent of $Y$ under $\mathbb{E}$, then $X$ is also pairwise independent
of $Y$ under the capacities $\mathbb{V}$ and $v$, i.e., for all sets $D$,
$G\in{\cal B}(\mathbb{R}),$
$V(X\in D,Y\in G)=V(X\in D)V(Y\in G)$
holds for both capacities $V=\mathbb{V}$ and $V=v$.
The Borel-Cantelli lemma remains true for capacities under suitable assumptions.
LEMMA 2.3 (see Chen [1]). Let $\\{A_{n},n\geq 1\\}$ be a sequence of events in
${\cal F}$ and $(\mathbb{V},v)$ be a pair of capacities generated by sublinear
expectation $\mathbb{E}$.
(1) If $\sum\limits_{n=1}^{\infty}\mathbb{V}(A_{n})<\infty,$ then
$\mathbb{V}\left(\bigcap\limits_{n=1}^{\infty}\bigcup\limits_{i=n}^{\infty}A_{i}\right)=0.$
(2) Suppose that $\\{A_{n},n\geq 1\\}$ are pairwise independent with respect
to $v$, i.e.,
$v\left(\bigcap\limits_{i=1}^{\infty}A_{i}^{c}\right)=\prod_{i=1}^{\infty}v\left(A_{i}^{c}\right).$
If $\sum\limits_{n=1}^{\infty}{\mathbb{V}}(A_{n})=\infty$, then
$\mathbb{V}\left(\bigcap\limits_{n=1}^{\infty}\bigcup\limits_{i=n}^{\infty}A_{i}\right)=1.$
DEFINITION 2.4 (Maximal distribution) (see Peng [8, 9]). Let
$C_{b,Lip}(\mathbb{R})$ denote the space of bounded and Lipschitz continuous
functions. A random variable $\eta$ on a sublinear expectation space
$(\Omega,{\cal H},\mathbb{E})$ is said to be maximally distributed if
$\mathbb{E}[\varphi(\eta)]=\sup\limits_{\underline{\mu}\leq
y\leq\overline{\mu}}\varphi(y),\ \ \ \forall\varphi\in C_{b,Lip}(\mathbb{R}),$
where $\overline{\mu}:=\mathbb{E}[\eta]$ and $\underline{\mu}:={\cal
E}[\eta]$.
REMARK 2.2 (see Peng [8, 9]). Let $\eta$ be maximally distributed with
$\overline{\mu}=\mathbb{E}[\eta]$ and $\underline{\mu}={\cal E}[\eta]$. The
distribution of $\eta$ is characterized by the following parabolic PDE:
$\partial_{t}u-g(\partial_{x}u)=0,\quad u(0,x)=\varphi(x),$
where $u(t,x):=\mathbb{E}[\varphi(x+t\eta)],$
$(t,x)\in[0,\infty)\times\mathbb{R}$,
$g(x):=\overline{\mu}x^{+}-\underline{\mu}x^{-}$ and $x^{+}:=\max\\{x,0\\}$,
$x^{-}:=(-x)^{+}$.
With the notion of IID under sublinear expectations, Peng established a law of large
numbers under sublinear expectations (see Theorem 5.1 in Peng [8]).
LEMMA 2.4 (Law of large numbers under sublinear expectations). Let
$\\{X_{i}\\}_{i=1}^{\infty}$ be a sequence of IID random variables with finite
means $\overline{\mu}=\mathbb{E}[X_{1}],\;\;\underline{\mu}={\cal E}[X_{1}].$
Suppose $\mathbb{E}[|X_{1}|^{2}]<\infty$. Then for any continuous function
$\varphi$ of linear growth,
$\mathbb{E}\left[\varphi\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\right)\right]\to\sup_{\underline{\mu}\leq
y\leq\overline{\mu}}\varphi(y),\;\hbox{ as}\;n\to\infty.$
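To see why the limit in Lemma 2.4 is a supremum over the whole interval $[\underline{\mu},\overline{\mu}]$, one can run a toy Monte Carlo experiment. The sketch below is an illustrative assumption, not part of Peng's framework: it models mean-uncertain observations $X_{i}=\theta_{i}+Z_{i}$ with standard normal noise and means $\theta_{i}$ ranging over $[\underline{\mu},\overline{\mu}]=[0,1]$. Since the means enter the sample average only through their own average $\bar{m}$, the supremum over the family reduces to a one-dimensional search, and the value tends to $\sup_{\underline{\mu}\leq y\leq\overline{\mu}}\varphi(y)$.

```python
import random
import math

random.seed(0)

mu_lo, mu_hi = 0.0, 1.0          # toy mean interval [under_mu, over_mu]
phi = lambda y: -(y - 0.3) ** 2  # continuous test function; sup on [0, 1] is phi(0.3) = 0

n = 10_000       # sample size
n_mc = 2_000     # Monte Carlo repetitions per candidate mean

def E_phi_given_mean(mbar):
    """Estimate E[phi(mbar + average of n standard normals)] by Monte Carlo."""
    total = 0.0
    for _ in range(n_mc):
        zbar = random.gauss(0.0, 1.0 / math.sqrt(n))  # law of (1/n) sum Z_i
        total += phi(mbar + zbar)
    return total / n_mc

# Sublinear expectation = sup over admissible average means in [mu_lo, mu_hi];
# a grid search stands in for the exact supremum.
grid = [mu_lo + k * (mu_hi - mu_lo) / 50 for k in range(51)]
sub_exp = max(E_phi_given_mean(m) for m in grid)

sup_phi = max(phi(y) for y in grid)  # = 0, attained at y = 0.3

# The two quantities agree up to an O(1/n) bias plus Monte Carlo error.
assert abs(sub_exp - sup_phi) < 0.05
```

Note that restricting to the two constant-mean product measures would only produce $\max\{\varphi(\underline{\mu}),\varphi(\overline{\mu})\}$; the full interval appears because time-varying means are admissible.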
The following lemma is a strong law of large numbers for capacities, which can
be found in Chen [1].
LEMMA 2.5 (Strong law of large numbers for capacities). Let
$\\{X_{i}\\}_{i=1}^{\infty}$ be a sequence of IID random variables for
sublinear expectation $\mathbb{E}$. Suppose $\mathbb{E}[|X_{1}|^{2}]<\infty$.
Set $\overline{\mu}:=\mathbb{E}[X_{1}],$ $\underline{\mu}:={\cal E}[X_{1}]$
and $S_{n}:=\sum\limits_{i=1}^{n}X_{i}.$ Then
(I)
$v\left(\underline{\mu}\leq\liminf\limits_{n\to\infty}S_{n}/n\leq\limsup\limits_{n\to\infty}S_{n}/n\leq\overline{\mu}\right)=1.$
(II)
$\mathbb{V}\left(\limsup\limits_{n\to\infty}S_{n}/n=\overline{\mu}\right)=1,\quad\mathbb{V}\left(\liminf\limits_{n\to\infty}S_{n}/n=\underline{\mu}\right)=1.$
(III) $\forall b\in[\underline{\mu},\overline{\mu}]$,
$\mathbb{V}\left(\liminf\limits_{n\to\infty}|S_{n}/n-b|=0\right)=1.$
## 3 Main Results
3.1 General law of large numbers under sublinear expectations
THEOREM 3.1 (General law of large numbers under sublinear expectations). Let
$\\{X_{i}\\}_{i=1}^{\infty}$ be a sequence of random variables in a sublinear
expectation space $(\Omega,{\cal H},\mathbb{E})$ satisfying the following conditions:
(i) each $X_{i+1}$ is independent of $(X_{1},\cdots,X_{i})$, for
$i=1,2,\cdots$;
(ii) $\mathbb{E}[X_{i}]=\overline{\mu}_{i}$, ${\cal
E}[X_{i}]=\underline{\mu_{i}}$, where
$-\infty<\underline{\mu_{i}}\leq\overline{\mu}_{i}<\infty$;
(iii) there are two constants ${\overline{\mu}}$ and ${\underline{\mu}}$ such
that
$\lim\limits_{n\rightarrow\infty}\frac{1}{n}\sum\limits_{i=1}^{n}|\underline{\mu_{i}}-\underline{\mu}|=0,\
\ \
\lim\limits_{n\rightarrow\infty}\frac{1}{n}\sum\limits_{i=1}^{n}|\overline{\mu_{i}}-\overline{\mu}|=0;$
(iv) $\sup\limits_{i\geq 1}\mathbb{E}[|X_{i}|^{2}]<\infty$. Then for any
continuous function $\varphi$ of linear growth,
$\mathbb{E}\left[\varphi\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\right)\right]\to\sup_{\underline{\mu}\leq
y\leq\overline{\mu}}\varphi(y),\;\hbox{ as}\;n\to\infty.$
PROOF. The main idea comes from Theorem 3.1 of Li and Shi [2]. First we prove the
case where $\varphi$ is a bounded and Lipschitz continuous function. For a
small but fixed $h>0$, let $V$ be the unique viscosity solution of the
following equation
$\partial_{t}V+g(\partial_{x}V)=0,\ \ \ (t,x)\in[0,1+h]\times\mathbb{R},\ \ \
V(1+h,x)=\varphi(x),$ (1)
where $g(x):=\overline{\mu}x^{+}-\underline{\mu}x^{-}$. According to the
definition of maximal distribution, we have
$V(t,x)=\mathbb{E}[\varphi(x+(1+h-t)\eta)].$
Particularly,
$V(h,0)=\mathbb{E}[\varphi(\eta)],\ \ \ V(1+h,x)=\varphi(x).$ (2)
Since (1) is a uniformly parabolic PDE, by the interior regularity of $V$ (see
Wang [10]), we have
$||V||_{C^{1+\alpha/2,1+\alpha}([0,1]\times\mathbb{R})}<\infty,\ \ \ \hbox{for
some}\ \ \ \alpha\in(0,1).$
We set $\delta:=\frac{1}{n}$ and ${S}_{0}:=0$. Then
$\begin{array}[]{lcl}&&V(1,\delta{S}_{n})-V(0,0)=\sum\limits_{i=0}^{n-1}\\{V((i+1)\delta,\delta{S}_{i+1})-V(i\delta,\delta{S}_{i})\\}\\\
&&=\sum\limits_{i=0}^{n-1}\left\\{\left[V((i+1)\delta,\delta{S}_{i+1})-V(i\delta,\delta{S}_{i+1})\right]+\left[V(i\delta,\delta{S}_{i+1})-V(i\delta,\delta{S}_{i})\right]\right\\}\\\
&&=\sum\limits_{i=0}^{n-1}\\{I_{\delta}^{i}+J_{\delta}^{i}\\},\end{array}$
with, by Taylor’s expansion,
$J_{\delta}^{i}=\partial_{t}V(i\delta,\delta{S}_{i})\delta+\partial_{x}V(i\delta,\delta{S}_{i})X_{i+1}\delta,$
$\begin{array}[]{lcl}I_{\delta}^{i}&=&\int_{0}^{1}\left[\partial_{t}V((i+\beta)\delta,\delta{S}_{i+1})-\partial_{t}V(i\delta,\delta{S}_{i+1})\right]{\rm
d}\beta\delta\\\
&+&\left[\partial_{t}V(i\delta,\delta{S}_{i+1})-\partial_{t}V(i\delta,\delta{S}_{i})\right]\delta\\\
&+&\int_{0}^{1}\left[\partial_{x}V(i\delta,\delta{S}_{i}+\beta\delta
X_{i+1})-\partial_{x}V(i\delta,\delta{S}_{i})\right]{\rm d}\beta
X_{i+1}\delta.\end{array}$
Thus
$\mathbb{E}\left[\sum\limits_{i=0}^{n-1}J_{\delta}^{i}\right]+{\cal
E}\left[\sum\limits_{i=0}^{n-1}I_{\delta}^{i}\right]\leq\mathbb{E}[V(1,\delta{S}_{n})]-V(0,0)\leq\mathbb{E}\left[\sum\limits_{i=0}^{n-1}J_{\delta}^{i}\right]+\mathbb{E}\left[\sum\limits_{i=0}^{n-1}I_{\delta}^{i}\right].$
(3)
From (1) and the fact that $X_{i+1}$ is independent of $(X_{1},\cdots,X_{i})$,
it follows that
$\begin{array}[]{lcl}&&\mathbb{E}[J_{\delta}^{i}]=\mathbb{E}\left[\partial_{t}V(i\delta,\delta{S}_{i})\delta+\partial_{x}V(i\delta,\delta{S}_{i})X_{i+1}\delta\right]\\\
&&=\mathbb{E}\left\\{\partial_{t}V(i\delta,\delta{S}_{i})\delta+\delta[(\partial_{x}V(i\delta,\delta{S}_{i}))^{+}\overline{\mu_{i+1}}-(\partial_{x}V(i\delta,\delta{S}_{i}))^{-}\underline{\mu_{i+1}}]\right\\}\\\
&&\leq\mathbb{E}\left\\{\partial_{t}V(i\delta,\delta{S}_{i})\delta+\delta[(\partial_{x}V(i\delta,\delta{S}_{i}))^{+}\overline{\mu}-(\partial_{x}V(i\delta,\delta{S}_{i}))^{-}\underline{\mu}]\right\\}\\\
&&+\delta\mathbb{E}[(\partial_{x}V(i\delta,\delta{S}_{i}))^{+}(\overline{\mu_{i+1}}-\overline{\mu})+(\partial_{x}V(i\delta,\delta{S}_{i}))^{-}(\underline{\mu_{i+1}}-\underline{\mu})]\\\
&&=\delta\mathbb{E}[(\partial_{x}V(i\delta,\delta{S}_{i}))^{+}(\overline{\mu_{i+1}}-\overline{\mu})+(\partial_{x}V(i\delta,\delta{S}_{i}))^{-}(\underline{\mu_{i+1}}-\underline{\mu})]\\\
&&\leq\delta(|\overline{\mu_{i+1}}-\overline{\mu}|+|\underline{\mu_{i+1}}-\underline{\mu}|)\mathbb{E}[|\partial_{x}V(i\delta,\delta{S}_{i})|].\end{array}$
But since both $\partial_{t}V$ and $\partial_{x}V$ are uniformly
$\alpha$-Hölder continuous in $x$ and $\frac{\alpha}{2}$-Hölder continuous in
$t$ on $[0,1]\times\mathbb{R}$, it follows that
$|\partial_{x}V(i\delta,\delta{S}_{i})-\partial_{x}V(0,0)|\leq
C\left(|\delta{S}_{i}|^{\alpha}+|i\delta|^{\frac{\alpha}{2}}\right),$
where $C$ is some positive constant. Moreover,
$\mathbb{E}[|\delta{S}_{i}|^{\alpha}]\leq\mathbb{E}[|\delta{S}_{i}|]+1\leq\sup\limits_{i\geq
1}\mathbb{E}[|X_{i}|]+1.$
Hence, by (iv), there exists a constant $C_{1}>0$ such that
$\mathbb{E}[|\partial_{x}V(i\delta,\delta{S}_{i})|]\leq C_{1}.$
Then we obtain
$\mathbb{E}\left[\sum\limits_{i=0}^{n-1}J_{\delta}^{i}\right]\leq\sum\limits_{i=0}^{n-1}\mathbb{E}\left[J_{\delta}^{i}\right]\leq
C_{1}\frac{1}{n}\sum\limits_{i=0}^{n-1}(|\overline{\mu_{i+1}}-\overline{\mu}|+|\underline{\mu_{i+1}}-\underline{\mu}|).$
From (iii), we have
$\limsup\limits_{n\rightarrow\infty}\mathbb{E}\left[\sum\limits_{i=0}^{n-1}J_{\delta}^{i}\right]\leq
0.$
In a similar manner, we also have
$\mathbb{E}\left[\sum\limits_{i=0}^{n-1}J_{\delta}^{i}\right]\geq-\sum\limits_{i=0}^{n-1}\delta(|\overline{\mu_{i+1}}-\overline{\mu}|+|\underline{\mu_{i+1}}-\underline{\mu}|)\mathbb{E}[|\partial_{x}V(i\delta,\delta{S}_{i})|].$
By (iii), it follows that
$\liminf\limits_{n\rightarrow\infty}\mathbb{E}\left[\sum\limits_{i=0}^{n-1}J_{\delta}^{i}\right]\geq
0.$ So we can claim that
$\lim\limits_{n\rightarrow\infty}\mathbb{E}\left[\sum\limits_{i=0}^{n-1}J_{\delta}^{i}\right]=0.$
(4)
For $I_{\delta}^{i}$, since both $\partial_{t}V$ and $\partial_{x}V$ are
uniformly $\alpha$-Hölder continuous in $x$ and $\frac{\alpha}{2}$-Hölder
continuous in $t$ on $[0,1]\times\mathbb{R}$, we have
$|I_{\delta}^{i}|\leq
C\delta^{1+\alpha/2}(1+|X_{i+1}|^{\alpha}+|X_{i+1}|^{1+\alpha}).$
It follows that
$\mathbb{E}[|I_{\delta}^{i}|]\leq
C\delta^{1+\alpha/2}(1+\mathbb{E}[|X_{i+1}|^{\alpha}]+\mathbb{E}[|X_{i+1}|^{1+\alpha}]).$
Thus
$\begin{array}[]{lcl}&&-C(\frac{1}{n})^{\frac{\alpha}{2}}\frac{1}{n}\sum\limits_{i=0}^{n-1}\left(1+\mathbb{E}\left[|X_{i+1}|^{\alpha}\right]+\mathbb{E}\left[|X_{i+1}|^{1+\alpha}\right]\right)+\mathbb{E}\left[\sum\limits_{i=0}^{n-1}J_{\delta}^{i}\right]\\\
&&\leq\mathbb{E}[V(1,\delta{S}_{n})]-V(0,0)\\\ &&\leq
C(\frac{1}{n})^{\frac{\alpha}{2}}\frac{1}{n}\sum\limits_{i=0}^{n-1}\left(1+\mathbb{E}\left[|X_{i+1}|^{\alpha}\right]+\mathbb{E}\left[|X_{i+1}|^{1+\alpha}\right]\right)+\mathbb{E}\left[\sum\limits_{i=0}^{n-1}J_{\delta}^{i}\right].\end{array}$
(5)
Therefore, by (iv) and (4), letting $n\rightarrow\infty$ in (5), we have
$\lim\limits_{n\rightarrow\infty}\mathbb{E}[V(1,\delta S_{n})]=V(0,0).$ (6)
On the other hand, for each $t$, $t^{{}^{\prime}}\in[0,1+h]$ and
$x\in\mathbb{R}$,
$|V(t,x)-V(t^{{}^{\prime}},x)|\leq C|t-t^{{}^{\prime}}|.$
Thus
$|V(0,0)-V(h,0)|\leq Ch$ (7)
and, by (2),
$\begin{array}[]{lcl}&&|\mathbb{E}[V(1,\delta{S}_{n})]-\mathbb{E}[\varphi(\delta
S_{n})]|\\\
&&=|\mathbb{E}[V(1,{\delta}{S}_{n})]-\mathbb{E}[V(1+h,{\delta}S_{n})]|\leq
Ch.\end{array}$ (8)
It follows from (6)-(8) that
$\limsup\limits_{n\rightarrow\infty}\left|\mathbb{E}\left[\varphi\left(\frac{S_{n}}{{n}}\right)\right]-\mathbb{E}[\varphi(\eta)]\right|\leq
2C{h}.$
Since $h$ can be arbitrarily small, we have
$\lim\limits_{n\rightarrow\infty}\mathbb{E}\left[\varphi\left(\frac{S_{n}}{{n}}\right)\right]=\mathbb{E}\left[\varphi\left(\eta\right)\right]=\sup_{\underline{\mu}\leq
y\leq\overline{\mu}}\varphi(y).$
The rest of the proof of Theorem 3.1 is very similar to that of Lemma 5.5 in Peng
[8], so we omit it.
From Theorem 3.1, we easily obtain the following corollary.
COROLLARY 3.1. Let $\\{X_{i}\\}_{i=1}^{\infty}$ be a sequence of random variables
in a sublinear expectation space $(\Omega,{\cal H},\mathbb{E})$ satisfying the
following conditions:
(i) each $X_{i+1}$ is independent of $(X_{1},\cdots,X_{i})$, for
$i=1,2,\cdots$;
(ii) $\mathbb{E}[X_{i}]=\overline{\mu}_{i}$, ${\cal
E}[X_{i}]=\underline{\mu}_{i}$, where
$-\infty<\underline{\mu}_{i}\leq\overline{\mu}_{i}<\infty$;
(iii) there are two constants ${\overline{\mu}}$ and ${\underline{\mu}}$ such
that $\lim\limits_{i\rightarrow\infty}\underline{\mu}_{i}=\underline{\mu}$,
$\lim\limits_{i\rightarrow\infty}\overline{\mu}_{i}=\overline{\mu};$
(iv) $\sup\limits_{i\geq 1}\mathbb{E}[|X_{i}|^{2}]<\infty$. Then for any
continuous function $\varphi$ of linear growth,
$\mathbb{E}\left[\varphi\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\right)\right]\to\sup_{\underline{\mu}\leq
y\leq\overline{\mu}}\varphi(y),\;\hbox{ as}\;n\to\infty.$
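Corollary 3.1 follows because plain convergence $\underline{\mu}_{i}\to\underline{\mu}$ and $\overline{\mu}_{i}\to\overline{\mu}$ implies the Cesàro-type condition (iii) of Theorem 3.1. A quick numerical check with a hypothetical mean sequence $\mu_{i}=\mu+1/i$ (chosen only for illustration) shows the averaged deviations behaving like $(\log n)/n$:

```python
# Hypothetical mean sequence mu_i = mu + 1/i converging to mu = 0.5;
# the Cesaro averages (1/n) * sum |mu_i - mu| equal H_n / n ~ (log n)/n -> 0,
# so condition (iii) of Theorem 3.1 holds even though |mu_i - mu| = 1/i != 0.
mu = 0.5

def cesaro(n):
    return sum(abs((mu + 1.0 / i) - mu) for i in range(1, n + 1)) / n

vals = [cesaro(10), cesaro(1_000), cesaro(100_000)]
assert vals[0] > vals[1] > vals[2]   # decreasing toward 0
assert vals[2] < 1e-3                # already below 0.001 at n = 10^5
```

The same Cesàro argument covers mean sequences that converge while individual deviations never vanish, which is exactly the extra generality of Theorem 3.1 over the IID case.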
3.2 General strong law of large numbers for capacities induced by sublinear
expectations
THEOREM 3.2 (General strong law of large numbers for capacities). Under the
conditions of Theorem 3.1, we have
(I)
$v\left(\underline{\mu}\leq\liminf\limits_{n\to\infty}\frac{\sum_{i=1}^{n}X_{i}}{n}\leq\limsup\limits_{n\to\infty}\frac{\sum_{i=1}^{n}X_{i}}{n}\leq\overline{\mu}\right)=1.$
(II) $\forall b\in[\underline{\mu},\overline{\mu}]$,
$\mathbb{V}\left(\liminf\limits_{n\to\infty}\left|\frac{\sum_{i=1}^{n}X_{i}}{n}-b\right|=0\right)=1.$
(III)
$\mathbb{V}\left(\limsup\limits_{n\to\infty}\frac{\sum_{i=1}^{n}X_{i}}{n}=\overline{\mu}\right)=1,\quad\mathbb{V}\left(\liminf\limits_{n\to\infty}\frac{\sum_{i=1}^{n}X_{i}}{n}=\underline{\mu}\right)=1.$
In order to prove Theorem 3.2, we need the following lemma.
LEMMA 3.1 (see Chen [1]). In a sublinear expectation space $(\Omega,{\cal
H},\mathbb{E})$, let $\\{X_{i}\\}_{i=1}^{\infty}$ be a sequence of independent
random variables such that $\sup\limits_{i\geq
1}\mathbb{E}[|X_{i}|^{2}]<\infty$. Suppose that there exists a constant $C>0$
such that
$|X_{n}-\mathbb{E}[X_{n}]|\leq C\frac{n}{\log(1+n)},\quad n=1,2,\cdots.$
Then there exists a sufficiently large number $m>1$ such that
$\sup_{n\geq
1}\mathbb{E}\left[\exp\left(\frac{m\log(1+n)}{n}S_{n}\right)\right]<\infty,$
where $S_{n}:=\sum\limits_{i=1}^{n}[X_{i}-\mathbb{E}[X_{i}]].$
PROOF OF THEOREM 3.2. First, it is easy to show that (I) is equivalent to the
conjunction of
$\mathbb{V}\left(\limsup\limits_{n\to\infty}\frac{\sum_{i=1}^{n}X_{i}}{n}>\overline{\mu}\right)=0,$
(9)
$\mathbb{V}\left(\liminf\limits_{n\to\infty}\frac{\sum_{i=1}^{n}X_{i}}{n}<\underline{\mu}\right)=0.$
(10)
Indeed, write
$A:=\left\\{\limsup\limits_{n\to\infty}\frac{\sum_{i=1}^{n}X_{i}}{n}>\overline{\mu}\right\\},$
$B:=\left\\{\liminf\limits_{n\to\infty}\frac{\sum_{i=1}^{n}X_{i}}{n}<\underline{\mu}\right\\},$
the equivalence can be proved from the inequality
$\max\\{\mathbb{V}(A),\mathbb{V}(B)\\}\leq\mathbb{V}\left(A\bigcup
B\right)\leq\mathbb{V}(A)+\mathbb{V}(B).$
We divide the proofs of (9) and (10) into two steps.
Step 1. Assume that there exists a constant $C>0$ such that
$|X_{n}-\overline{\mu}_{n}|\leq\frac{Cn}{\log(1+n)}$ for $n\geq 1.$ To prove
(9), we shall show that for any $\varepsilon>0,$
$\mathbb{V}\left(\bigcap_{n=1}^{\infty}\bigcup_{k=n}^{\infty}\left\\{\frac{\sum_{i=1}^{k}(X_{i}-\overline{\mu}_{i})}{k}\geq\varepsilon\right\\}\right)=0.$
(11)
Indeed, by Lemma 3.1, for any $\varepsilon>0,$ let us choose $m>1/\varepsilon$
such that
$\sup_{n\geq
1}\mathbb{E}\left[\exp\left(\frac{m\log(1+n)}{n}\sum_{i=1}^{n}(X_{i}-\overline{\mu}_{i})\right)\right]<\infty.$
By Chebyshev’s inequality,
$\begin{array}[]{lcl}\mathbb{V}\left(\frac{\sum_{i=1}^{n}(X_{i}-\overline{\mu}_{i})}{n}\geq\varepsilon\right)&=&\mathbb{V}\left(\frac{m\log(1+n)}{n}\sum\limits_{i=1}^{n}(X_{i}-\overline{\mu}_{i})\geq\varepsilon
m\log(1+n)\right)\\\ &\leq&{\rm e}^{-\varepsilon
m\log(1+n)}\mathbb{E}\left[\exp\left(\frac{m\log(1+n)}{n}\sum\limits_{i=1}^{n}(X_{i}-\overline{\mu}_{i})\right)\right]\\\
&\leq&\frac{1}{(1+n)^{\varepsilon m}}\sup\limits_{n\geq
1}\mathbb{E}\left[\exp\left(\frac{m\log(1+n)}{n}\sum\limits_{i=1}^{n}(X_{i}-\overline{\mu}_{i})\right)\right].\end{array}$
Since $\varepsilon m>1$ and $\sup\limits_{n\geq
1}\mathbb{E}\left[\exp\left(\frac{m\log(1+n)}{n}\sum\limits_{i=1}^{n}(X_{i}-\overline{\mu}_{i})\right)\right]<\infty,$
it follows from the convergence of
$\sum\limits_{n=1}^{\infty}\frac{1}{(1+n)^{\varepsilon m}}$ that
$\sum_{n=1}^{\infty}\mathbb{V}\left(\frac{\sum_{i=1}^{n}(X_{i}-\overline{\mu}_{i})}{n}\geq\varepsilon\right)<\infty.$
Using the first Borel-Cantelli Lemma, we have
$\mathbb{V}\left(\limsup\limits_{n\to\infty}\frac{\sum_{i=1}^{n}(X_{i}-\overline{\mu}_{i})}{n}\geq\varepsilon\right)=0\quad\forall\varepsilon>0,$
which implies
$\mathbb{V}\left(\limsup\limits_{n\to\infty}\frac{\sum_{i=1}^{n}X_{i}}{n}>\overline{\mu}\right)=0.$
Also
$v\left(\limsup\limits_{n\to\infty}\frac{\sum_{i=1}^{n}X_{i}}{n}\leq\overline{\mu}\right)=1.$
Similarly, considering the sequence $\\{-X_{i}\\}_{i=1}^{\infty}$ and applying
Step 1, we obtain
$\mathbb{V}\left(\limsup\limits_{n\to\infty}\frac{-\sum_{i=1}^{n}(X_{i}-\underline{\mu_{i}})}{n}>0\right)=0.$
Hence,
$\mathbb{V}\left(\liminf\limits_{n\to\infty}\frac{\sum_{i=1}^{n}X_{i}}{n}<\underline{\mu}\right)=0.$
Also
$v\left(\liminf\limits_{n\to\infty}\frac{\sum_{i=1}^{n}X_{i}}{n}\geq\underline{\mu}\right)=1.$
Step 2. Write
$\overline{X}_{n}:=(X_{n}-\overline{\mu}_{n})I_{\left\\{|X_{n}-\overline{\mu}_{n}|\leq\frac{Cn}{\log(1+n)}\right\\}}-\mathbb{E}\left[(X_{n}-\overline{\mu}_{n})I_{\left\\{|X_{n}-\overline{\mu}_{n}|\leq\frac{Cn}{\log(1+n)}\right\\}}\right]+\overline{\mu}_{n}.$
It is easy to check that $\\{\overline{X}_{i}\\}_{i=1}^{\infty}$ satisfies the
assumptions in Lemma 3.1. Indeed, obviously for each $n\geq 1,$
$|\overline{X}_{n}-\overline{\mu}_{n}|\leq\frac{2Cn}{\log(1+n)}.$
On the other hand, for each $n\geq 1,$ it is easy to check that
$|\overline{X}_{n}-\overline{\mu}_{n}|\leq|X_{n}-\overline{\mu}_{n}|+\mathbb{E}[|X_{n}-\overline{\mu}_{n}|].$
Then, by (iv),
$\mathbb{E}[|\overline{X}_{n}-\overline{\mu}_{n}|^{2}]\leq
4\left(\mathbb{E}[|X_{n}-\overline{\mu}_{n}|^{2}]+(\mathbb{E}[|X_{n}-\overline{\mu}_{n}|])^{2}\right)<\infty.$
Set
$\overline{S}_{n}:=\sum\limits_{i=1}^{n}(\overline{X}_{i}-\overline{\mu}_{i}),$
immediately,
$\frac{1}{n}\sum\limits_{i=1}^{n}(X_{i}-\overline{\mu}_{i})\leq\frac{1}{n}\overline{S}_{n}+\frac{1}{n}\sum_{i=1}^{n}|X_{i}-\overline{\mu}_{i}|I_{\left\\{|X_{i}-\overline{\mu}_{i}|>\frac{Ci}{\log(1+i)}\right\\}}+\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[|X_{i}-\overline{\mu}_{i}|I_{\left\\{|X_{i}-\overline{\mu}_{i}|>\frac{Ci}{\log(1+i)}\right\\}}\right].$
(12)
Since $\sup\limits_{i\geq 1}\mathbb{E}[|X_{i}|^{2}]<\infty$, we have
$\begin{array}[]{lcl}\sum\limits_{i=1}^{\infty}\frac{\mathbb{E}\left[\left|X_{i}-\overline{\mu}_{i}\right|I_{\left\\{|X_{i}-\overline{\mu}_{i}|>\frac{Ci}{\log(1+i)}\right\\}}\right]}{i}&\leq&\sum\limits_{i=1}^{\infty}\frac{\log(1+i)}{Ci^{2}}\mathbb{E}[|X_{i}-\overline{\mu}_{i}|^{2}]\\\
&\leq&C_{1}\sum\limits_{i=1}^{\infty}\frac{\log(1+i)}{i^{2}}<\infty.\end{array}$
By the Kronecker lemma,
$\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[|X_{i}-\overline{\mu}_{i}|I_{\left\\{|X_{i}-\overline{\mu}_{i}|>\frac{Ci}{\log(1+i)}\right\\}}\right]\to
0.$ (13)
Furthermore, write
$A_{i}:=\left\\{|X_{i}-\overline{\mu}_{i}|>\frac{Ci}{\log(1+i)}\right\\}$ for
$i\geq 1.$ It suffices now to prove that
$\mathbb{V}\left(\bigcap_{n=1}^{\infty}\bigcup_{i=n}^{\infty}A_{i}\right)=0.$
Indeed, by Chebyshev’s inequality,
$\mathbb{V}\left(|X_{i}-\overline{\mu}_{i}|>\frac{Ci}{\log(1+i)}\right)\leq\left(\frac{\log(1+i)}{Ci}\right)^{2}\mathbb{E}[|X_{i}-\overline{\mu}_{i}|^{2}]$
Hence,
$\sum_{i=1}^{\infty}\mathbb{V}\left(|X_{i}-\overline{\mu}_{i}|>\frac{Ci}{\log(1+i)}\right)<\infty$
and by the first Borel-Cantelli Lemma, we have
$\mathbb{V}\left(\bigcap\limits_{n=1}^{\infty}\bigcup\limits_{i=n}^{\infty}A_{i}\right)=0.$
This implies that for each
$\omega\not\in\bigcap\limits_{n=1}^{\infty}\bigcup\limits_{i=n}^{\infty}A_{i},$ the
series
$\sum_{i=1}^{n}\frac{|X_{i}-\overline{\mu}_{i}|I_{\left\\{|X_{i}-\overline{\mu}_{i}|>\frac{Ci}{\log(1+i)}\right\\}}}{i}$
converges as $n\rightarrow\infty$; in particular, it converges almost surely with respect to $\mathbb{V}$.
Applying the Kronecker lemma again,
$\frac{1}{n}\sum_{i=1}^{n}\left(|X_{i}-\overline{\mu}_{i}|I_{\left\\{|X_{i}-\overline{\mu}_{i}|>\frac{Ci}{\log(1+i)}\right\\}}\right)\to
0,\hbox{ a.s}\quad\mathbb{V}.$ (14)
Taking $\limsup$ on both sides of (12), by (13) and (14) we have
$\limsup\limits_{n\to\infty}\frac{\sum_{i=1}^{n}(X_{i}-\overline{\mu}_{i})}{n}\leq\limsup\limits_{n\to\infty}\frac{\overline{S}_{n}}{n},\quad\hbox{a.s.}\quad\mathbb{V}.$
Since $\\{\overline{X}_{n}\\}_{n=1}^{\infty}$ satisfies the assumption of Step
1, by Step 1,
$\mathbb{V}\left(\limsup\limits_{n\to\infty}\frac{\sum_{i=1}^{n}\overline{X}_{i}}{n}>\overline{\mu}\right)=0.$
Also
$v\left(\limsup\limits_{n\to\infty}\frac{\sum_{i=1}^{n}\overline{X}_{i}}{n}\leq\overline{\mu}\right)=1.$
In a similar manner, we can prove
$v\left(\liminf\limits_{n\to\infty}\frac{\sum_{i=1}^{n}X_{i}}{n}\geq\underline{\mu}\right)=1.$
Therefore, the proof of (I) is complete.
To prove (II), denote $S_{n}:=\sum_{i=1}^{n}X_{i}$. If
$\overline{\mu}=\underline{\mu}$, the claim is trivial. Suppose
$\overline{\mu}>\underline{\mu};$ we prove that, for any
$b\in(\underline{\mu},\overline{\mu}),$
$\mathbb{V}\left(\liminf_{n\to\infty}|S_{n}/{n}-b|=0\right)=1.$
To do so, we only need to prove that there exists an increasing subsequence
$\\{n_{k}\\}$ of $\\{n\\}$ such that for any
$b\in(\underline{\mu},\overline{\mu})$ and any $\varepsilon>0,$
$\mathbb{V}\left(\bigcap_{m=1}^{\infty}\bigcup_{k=m}^{\infty}\\{|S_{n_{k}}/{n_{k}}-b|\leq\varepsilon\\}\right)=1.$
(15)
Indeed, for any
$0<\varepsilon\leq\min\\{\overline{\mu}-b,b-\underline{\mu}\\},$ let us choose
$n_{k}=k^{k}$ for $k\geq 1.$
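The particular choice $n_{k}=k^{k}$ is convenient because consecutive indices dominate: $n_{k-1}/n_{k}=(k-1)^{k-1}/k^{k}$ decays roughly like $1/(ek)$, which is exactly what is needed when passing through inequality (16). A quick numerical check of these ratios:

```python
# Ratios n_{k-1} / n_k for the subsequence n_k = k^k; they decay like 1/(e*k),
# so (n_k - n_{k-1}) / n_k -> 1 and n_{k-1} / n_k -> 0 as k grows.
ratios = []
for k in range(2, 11):
    n_prev = (k - 1) ** (k - 1)
    n_curr = k ** k
    ratios.append(n_prev / n_curr)

assert all(later < earlier for earlier, later in zip(ratios, ratios[1:]))
assert ratios[-1] < 0.05  # k = 10: 9^9 / 10^10 is about 0.0387
```

Any subsequence with $n_{k-1}/n_{k}\to 0$ would serve equally well here; $k^{k}$ is just a concrete choice.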
Set $\overline{S}_{n}:=\sum\limits_{i=1}^{n}(X_{i}-b),$ then
$\begin{array}[]{lcl}\mathbb{V}\left(\left|\frac{S_{n_{k}}-S_{n_{k-1}}}{n_{k}-n_{k-1}}-b\right|\leq\varepsilon\right)&=&\mathbb{V}\left(\left|\frac{S_{n_{k}-n_{k-1}}}{n_{k}-n_{k-1}}-b\right|\leq\varepsilon\right)\\\
&=&\mathbb{V}\left(\left|\frac{S_{n_{k}-n_{k-1}}-(n_{k}-n_{k-1})b}{n_{k}-n_{k-1}}\right|\leq\varepsilon\right)\\\
&=&\mathbb{V}\left(\left|\frac{\overline{S}_{n_{k}-n_{k-1}}}{n_{k}-n_{k-1}}\right|\leq\varepsilon\right)\\\
&\geq&\mathbb{E}\left[\phi\left(\frac{\overline{S}_{n_{k}-n_{k-1}}}{n_{k}-n_{k-1}}\right)\right]\end{array}$
where $\phi(x)$ is defined by
$\phi(x)=\left\\{\begin{array}[]{l}1-e^{|x|-\varepsilon},\quad\quad|x|\leq\varepsilon;\\\
0,\quad\quad\quad\quad\quad\;\;|x|>\varepsilon.\end{array}\right.$
Consider the sequence of random variables $\\{X_{i}-b\\}_{i=1}^{\infty}.$
Obviously,
$\mathbb{E}[X_{i}-b]=\overline{\mu}_{i}-b,\quad{\cal
E}[X_{i}-b]=\underline{\mu_{i}}-b.$
Applying Theorem 3.1, we have, as $k\to\infty$,
$\mathbb{E}\left[\phi\left(\frac{\overline{S}_{n_{k}-n_{k-1}}}{n_{k}-n_{k-1}}\right)\right]\to\sup_{\underline{\mu}-b\leq
y\leq\overline{\mu}-b}\phi(y)=\phi(0)=1-{\rm e}^{-\varepsilon}>0.$
Thus
$\sum_{k=1}^{\infty}\mathbb{V}\left(\left|\frac{S_{n_{k}}-S_{n_{k-1}}}{n_{k}-n_{k-1}}-b\right|\leq\varepsilon\right)\geq\sum_{k=1}^{\infty}\mathbb{E}\left[\phi\left(\frac{\overline{S}_{n_{k}-n_{k-1}}}{n_{k}-n_{k-1}}\right)\right]=\infty.$
Note that the random variables $S_{n_{k}}-S_{n_{k-1}}$, $k\geq 1,$ are
independent. Using the second Borel-Cantelli lemma, we have
$\liminf_{k\to\infty}\left|\frac{S_{n_{k}}-S_{n_{k-1}}}{n_{k}-n_{k-1}}-b\right|\leq\varepsilon,\quad\hbox{a.s.}\;\mathbb{V}.$
But
$\left|\frac{S_{n_{k}}}{n_{k}}-b\right|\leq\left|\frac{S_{n_{k}}-S_{n_{k-1}}}{n_{k}-n_{k-1}}-b\right|\cdot\frac{n_{k}-n_{k-1}}{n_{k}}+\left[\frac{|S_{n_{k-1}}|}{n_{k-1}}+|b|\right]\frac{{n_{k-1}}}{n_{k}}.$
(16)
Note the following facts:
$\frac{n_{k}-n_{k-1}}{n_{k}}\to 1,\quad\frac{{n_{k-1}}}{n_{k}}\to
0,\;\hbox{as}\;k\to\infty$
and
$\limsup_{n\to\infty}S_{n}/n\leq\overline{\mu},\quad\limsup_{n\to\infty}(-S_{n})/n\leq-\underline{\mu},$
which implies
$\limsup_{n\to\infty}|S_{n}|/n\leq\max\\{|\overline{\mu}|,|\underline{\mu}|\\}<\infty.$
Hence, from inequality (16), for any $\varepsilon>0,$
$\liminf_{k\to\infty}\left|\frac{S_{n_{k}}}{n_{k}}-b\right|\leq\varepsilon,\hbox{
a.s.}\;\mathbb{V}.$
That is
$\mathbb{V}\left(\liminf_{n\to\infty}|S_{n}/{n}-b|\leq\varepsilon\right)=1.$
Since $\varepsilon$ can be arbitrarily small, we have
$\mathbb{V}\left(\liminf_{n\to\infty}|S_{n}/{n}-b|=0\right)=1.$
Now we prove that
$\mathbb{V}\left(\limsup\limits_{n\to\infty}S_{n}/n=\overline{\mu}\right)=1,\quad\mathbb{V}\left(\liminf\limits_{n\to\infty}S_{n}/n=\underline{\mu}\right)=1.$
Indeed, from
$\mathbb{V}\left(\liminf\limits_{n\rightarrow\infty}\left|\frac{S_{n}}{n}-b\right|=0\right)=1$,
$\forall b\in(\underline{\mu},\overline{\mu})$, we can obtain
$\mathbb{V}\left(\limsup\limits_{n\rightarrow\infty}\frac{S_{n}}{n}\geq
b\right)=1.$
Then
$\mathbb{V}\left(\limsup\limits_{n\rightarrow\infty}\frac{S_{n}}{n}\geq\overline{\mu}\right)=1.$
On the other hand, we have, for each $P\in\hat{\cal P}$,
$P\left(\limsup\limits_{n\rightarrow\infty}\frac{S_{n}}{n}=\overline{\mu}\right)=P\left(\limsup\limits_{n\rightarrow\infty}\frac{S_{n}}{n}\geq\overline{\mu}\right)+P(\limsup\limits_{n\rightarrow\infty}\frac{S_{n}}{n}\leq\overline{\mu})-1.$
But
$P\left(\limsup\limits_{n\rightarrow\infty}\frac{S_{n}}{n}\leq\overline{\mu}\right)\geq
v\left(\underline{\mu}\leq\liminf\limits_{n\rightarrow\infty}\frac{S_{n}}{n}\leq\limsup\limits_{n\rightarrow\infty}\frac{S_{n}}{n}\leq\overline{\mu}\right)=1.$
Thus
$\mathbb{V}\left(\limsup\limits_{n\rightarrow\infty}\frac{S_{n}}{n}=\overline{\mu}\right)=1.$
In a similar manner, we can obtain
$\mathbb{V}\left(\liminf\limits_{n\rightarrow\infty}\frac{S_{n}}{n}=\underline{\mu}\right)=1.$
The proof of Theorem 3.2 is complete.
3.3 General weak law of large numbers for capacities induced by sublinear
expectations
THEOREM 3.3 (General weak law of large numbers for capacities). Under the
conditions of Theorem 3.1, for any $\varepsilon>0$,
$\lim\limits_{n\rightarrow\infty}v\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\in(\underline{\mu}-\varepsilon,\overline{\mu}+\varepsilon)\right)=1.$
(17)
PROOF. To prove (17), we only need to prove that
$\lim\limits_{n\rightarrow\infty}\mathbb{V}\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\leq\underline{\mu}-\varepsilon\right)=0$
and
$\lim\limits_{n\rightarrow\infty}\mathbb{V}\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\geq\overline{\mu}+\varepsilon\right)=0.$
Now we prove
$\lim\limits_{n\rightarrow\infty}\mathbb{V}\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\leq\underline{\mu}-\varepsilon\right)=0.$
For any $0<\delta<\varepsilon$, construct two functions $f$, $g$ such that
$f(x)=1\ \ \hbox{for}\ \ x\leq\underline{\mu}-\varepsilon-\delta,$ $f(x)=0\ \
\hbox{for}\ \ x\geq\underline{\mu}-\varepsilon,$
$f(x)=\frac{1}{\delta}(\underline{\mu}-\varepsilon-x)\ \ \hbox{for}\ \
\underline{\mu}-\varepsilon-\delta<x<\underline{\mu}-\varepsilon$
and
$g(x)=1\ \ \hbox{for}\ \ x\leq\underline{\mu}-\varepsilon,$ $g(x)=0\ \
\hbox{for}\ \ x\geq\underline{\mu}-\varepsilon+\delta,$
$g(x)=\frac{1}{\delta}(\underline{\mu}-\varepsilon-x)+1\ \ \hbox{for}\ \
\underline{\mu}-\varepsilon<x<\underline{\mu}-\varepsilon+\delta.$
Obviously, $f$ and $g\in C_{b,Lip}(\mathbb{R})$. By Theorem 3.1, we have
$\lim\limits_{n\rightarrow\infty}\mathbb{E}\left[f\left(\frac{1}{{n}}\sum\limits_{i=1}^{n}X_{i}\right)\right]=\sup\limits_{\underline{\mu}\leq
y\leq\overline{\mu}}f(y)=0,$
$\lim\limits_{n\rightarrow\infty}\mathbb{E}\left[g\left(\frac{1}{{n}}\sum\limits_{i=1}^{n}X_{i}\right)\right]=\sup\limits_{\underline{\mu}\leq
y\leq\overline{\mu}}g(y)=0.$
Since $f(x)\leq I_{\left\\{x\leq\underline{\mu}-\varepsilon\right\\}}(x)\leq g(x)$ for all $x\in\mathbb{R}$, it follows that
$0=\sup\limits_{\underline{\mu}\leq
y\leq\overline{\mu}}f(y)\leq\liminf\limits_{n\rightarrow\infty}\mathbb{V}\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\leq\underline{\mu}-\varepsilon\right)\leq\limsup\limits_{n\rightarrow\infty}\mathbb{V}\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\leq\underline{\mu}-\varepsilon\right)\leq\sup\limits_{\underline{\mu}\leq
y\leq\overline{\mu}}g(y)=0.$ (18)
Hence
$\lim\limits_{n\rightarrow\infty}\mathbb{V}\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\leq\underline{\mu}-\varepsilon\right)=0.$
In a similar manner, we can obtain
$\lim\limits_{n\rightarrow\infty}\mathbb{V}\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\geq\overline{\mu}+\varepsilon\right)=0.$
The proof of Theorem 3.3 is complete.
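The piecewise-linear cutoff functions $f$ and $g$ used in the proof can be sanity-checked numerically. The sketch below uses illustrative parameters $\underline{\mu}=0$, $\varepsilon=0.5$, $\delta=0.1$ (assumptions for the test only); it verifies the sandwich $f\leq I_{\\{x\leq\underline{\mu}-\varepsilon\\}}\leq g$ and that both functions vanish on $[\underline{\mu},\overline{\mu}]$, which is what forces both limits in (18) to be zero.

```python
# Piecewise-linear cutoffs from the proof of Theorem 3.3 (toy parameters:
# under_mu = 0.0, eps = 0.5, delta = 0.1): f <= 1_{x <= under_mu - eps} <= g,
# both Lipschitz with constant 1/delta, and both vanish at and above under_mu.
under_mu, eps, delta = 0.0, 0.5, 0.1
c = under_mu - eps  # cutoff location

def f(x):
    if x <= c - delta:
        return 1.0
    if x >= c:
        return 0.0
    return (c - x) / delta

def g(x):
    if x <= c:
        return 1.0
    if x >= c + delta:
        return 0.0
    return (c - x) / delta + 1.0

indicator = lambda x: 1.0 if x <= c else 0.0

# check the sandwich f <= indicator <= g on a grid around the cutoff
xs = [c + t * 0.01 for t in range(-30, 31)]
for x in xs:
    assert f(x) <= indicator(x) <= g(x) + 1e-12

# both vanish at under_mu (hence on [under_mu, over_mu]), giving 0 in (18)
assert f(under_mu) == 0.0 and g(under_mu) == 0.0
```

Because $f$ and $g$ are bounded and Lipschitz, Theorem 3.1 applies to both, and the sandwich transfers the limit to the capacity of the event itself.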
## References
* [1] Chen, Z. J., Strong laws of large numbers for capacities, arXiv:1006.0749v1 [math.PR] 3 Jun 2010.
* [2] Li, M. and Shi, Y. F., A general central limit theorem under sublinear expectations, Science China Mathematics, 53, 2010, 1989-1994.
* [3] Maccheroni, F. and Marinacci, M., A strong law of large numbers for capacities, Ann. Probab., 33, 2005, 1171-1178.
* [4] Marinacci, M., Limit laws for non-additive probabilities and their frequentist interpretation, J. Econom. Theory, 84, 1999, 145-195.
* [5] Peng, S. G., G-expectation, G-Brownian motion and related stochastic calculus of Ito type, in: F.E. Benth, et al. (Eds.), Proceedings of the Second Abel Symposium, 2005, Springer-Verlag, 2006, 541-567.
* [6] Peng, S. G., Law of large numbers and central limit theorem under nonlinear expectations, arXiv:math.PR/0702358v1, 13 Feb 2007.
* [7] Peng, S. G., Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation, Stochastic Process. Appl., 118(12), 2008, 2223-2253.
* [8] Peng, S. G., A new central limit theorem under sublinear expectations, arXiv:0803.2656v1 [math.PR] 18 Mar. 2008.
* [9] Peng, S. G., Survey on normal distributions, central limit theorem, Brownian motion and the related stochastic calculus under sublinear expectations, Science in China Series A-Mathematics, 52(7), 2009, 1391-1411.
* [10] Wang, L. H., On the regularity of fully nonlinear parabolic equations II, Comm. Pure Appl. Math., 45, 1992, 141-178.
|
arxiv-papers
| 2011-04-28T05:03:16 |
2024-09-04T02:49:18.453109
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Feng Hu",
"submitter": "Feng Hu Dr.",
"url": "https://arxiv.org/abs/1104.5296"
}
|
1104.5309
|
# A Method for Selecting Sensor Waveforms Based Upon Post-Selection Criteria
for Remote Sensing Applications
John E Gray Code Q-31, Electromagnetic and Sensor Systems Department, Naval
Surface Warfare Center Dahlgren, 18444 FRONTAGE ROAD SUITE 328, DAHLGREN VA
22448-5161 Allen D Parks Code Q-31, Electromagnetic and Sensor Systems
Department, Naval Surface Warfare Center Dahlgren, 18444 FRONTAGE ROAD SUITE
328, DAHLGREN VA 22448-5161
###### Abstract
In previous work, we have argued that measurement using a radar can be viewed
as taking the expected value of an operator. The operator usually represents
some aspect of the characteristics of the object being tracked (such as
Doppler, distance, shape, polarization, etc.) that is measured by the radar
while the expectation is taken with respect to an optimal matched filter
design process based on the waveform broadcast by the radar and a receiver
which is optimized to a specific characteristic of the object being tracked.
With digital technology, it is possible to produce designer waveforms both to
broadcast and to mix with the return signal, so it is possible to determine
the maximum of the expectation of the operator by proper choice of the
received signal. We present a method for selecting the return signal to detect different "target operators" using perturbation theory based on the Matched Filter Principle, and illustrate it with different operators and waveforms.
Electromagnetics, Sensor Waveform, Post-Selection
###### pacs:
13.40.-f, 41.20.Jb, 84.40.-x, 43.60.Vx
###### Contents
1. I Introduction
2. II Operator Approach
3. III Physical Interactions
1. III.1 Multi-dimensional Interaction Operators
2. III.2 Scattering Operators
3. III.3 Single Dimensional Interactions with Signals
4. IV Conclusions
## I Introduction
In the seminal book "Probability and Information Theory with Applications to Radar"Woodward1980, Woodward introduced the ambiguity function as the means to solve the measurement problem of radar. The measurement problem of an
active sensor is to design a waveform to be broadcast by a radar or sonar, to
maximize the receiver response to the signal which has interacted with an
object. The solution proposed by NorthNorth1943 during World War II is the "matched filter", which correlates a known signal template with what is received in a return signal to detect the presence or absence of the template
in the unknown received signal. This is exactly equivalent to convolving the
unknown signal with the complex conjugate of the time-reversed version of the
known signal template; this is called cross-correlation. Therefore, as has
been shown in many textsVanTrees2002a , the matched filter is the optimal
linear filter for maximizing the signal to noise ratio (SNR) in the presence
of additive noise.
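The matched-filter recipe is easy to check numerically. The following sketch (the phase-coded template, delay, and noise level are illustrative assumptions, not taken from the paper) correlates a noisy return with the conjugated, time-reversed template and recovers the delay:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical phase-coded template; length and phases are illustrative.
n = 256
template = np.exp(2j * np.pi * rng.random(n))

# Received signal: the template delayed by 40 samples, buried in noise.
delay = 40
received = np.zeros(n + 100, dtype=complex)
received[delay:delay + n] += template
received += 0.5 * (rng.standard_normal(n + 100) +
                   1j * rng.standard_normal(n + 100))

# Matched filter: convolve with the conjugated, time-reversed template,
# which is exactly cross-correlation with the template.
mf_output = np.convolve(received, np.conj(template[::-1]), mode='valid')

peak = int(np.argmax(np.abs(mf_output)))
print(peak)  # recovers delay = 40
```

With `mode='valid'` the output index equals the candidate delay directly; the template energy relative to the noise power sets the SNR gain described above.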
In radar or sonar, a known signal is sent out and the reflected signal from
the object (which is a function of the distance to the object, the relative
speed of the object and the broadcast frequency of the radar), can be examined
at the radar receiver for the common elements of the out-going signal in the
return signal, which, when optimized, is a multi-dimensional matched filter or ambiguity function. The wideband form of the return signal is $s(\alpha t-\tau)$, where $\tau=\frac{2R}{c}$ is the delay and
$\alpha=\frac{c-v_{R}}{c+v_{R}}=\frac{1-\beta}{1+\beta}.$ (1)
Here $c$ is the speed of propagation and $v_{R}$ is the radial velocity of the
object.
There are two forms of the ambiguity function: the narrowband form, in which the return signal is modeled as a pure delay in time of the broadcast signal, and the more general wideband (WB) form, in which the return signal is modeled as both a dilation and a delay of the broadcast signal. The wideband ambiguity function is
$\chi_{WB}(\omega,\tau)=\int_{-\infty}^{\infty}e^{-it\omega}\,s^{\ast}(t)s_{N,WB}(t)\,dt,$
(2)
where $s^{\ast}(t)$ means complex conjugate of the broadcast signal. The
ambiguity function is used to design radar signals so that they have desirable
properties useful for various kinds of radars (Levanon is a current, up-to-date resourceLevanon2004). We propose a way to think about the ambiguity function that is different from the way Woodward presented it. This approach suggests
the ambiguity function can be thought of as the expectation value of an
operator that is connected to the delay and dilation properties associated
with the Doppler effect Gray2010 . Thus, the sensor measurement problem can be
cast in a more abstract setting, which treats interaction between the waveform
and the target as an operator acting on the waveform. This approach can be
termed the operator approach and it can be viewed as an abstraction of the
quantum mechanical formalism applied to a classical setting. This approach
underlies the time-frequency approach to signal processing that has been
championed by CohenCohen1995 . Using this approach, we examine the operator
viewpoint for both single and multi-dimensional operators acting on a signal
by the interaction process. In particular, we propose that the cross-ambiguity
function for certain operators can be used to amplify the return signals. We
illustrate this for several operators, show under what conditions this
amplification can occur, and discuss how the cross-amplification signal can be
constructed given knowledge of the interaction operator and the broadcast
signal. The result of this approach is to suggest a way for recasting problems
in signal processing when we have sufficient knowledge of the interaction with the broadcast signal.
## II Operator Approach
The notation for the inner product of two signals, $r(t)$ and $s(t)$ that is
used throughout the paper is
$\left\langle
r(t),s(t)\right\rangle=\int_{-\infty}^{\infty}r^{\ast}(t)\,s(t)\;dt,$ (3)
while the Fourier transform, $\mathcal{\hat{F}}$, of a signal $s(t)$ isPapoulis1961
$S(\omega)=\mathcal{\hat{F}}s(t)=\int_{-\infty}^{\infty}e^{-it\omega}\,s(t)\;dt=\left\langle\,e^{it\omega},s(t)\right\rangle,$
(4)
and the inverse Fourier transform, $\mathcal{\hat{F}}^{-1}$, is
$s(t)=\mathcal{\hat{F}}^{-1}S(\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{it\omega}\,S(\omega)d\omega=\left\langle\,e^{-it\omega},S(\omega)\right\rangle.$
A function of time translated by an amount $\tau$ can be written (using the Taylor expansion of the function and the derivative operator $\hat{D}=\frac{d}{dt}$) as
$s(t+\tau)=e^{\tau\hat{D}}s(t)=e^{i\left(-i\tau\frac{d}{dt}\right)}s(t)=e^{i\tau\mathcal{\hat{W}}}s(t).$
(5)
The narrow band (N) ambiguity function $\chi_{N}(\omega,\tau)$ can be recast as
$\chi_{N}(\omega,\tau)=\int_{-\infty}^{\infty}e^{-it\omega}\,s^{\ast}(t)s(t-\tau)\,dt=\left\langle
s(t)e^{it\omega},e^{-i\tau\mathcal{\hat{W}}}s(t)\right\rangle.$ (6)
From the Doppler effect perspective, translation is the action of the frequency operator on the signal, where $\tau$ is the total delay for a signal to travel to an object, be reflected, and return to the receiver. The
expected value associated with observable, $\hat{A},$ for a signal
$s\left(t\right)$ is
$\left\langle\hat{A}\right\rangle=\int\hat{A}\,\left|s\left(t\right)\right|^{2}dt=\int
s^{\ast}\left(t\right)\,\hat{A}s\left(t\right)\,dt=\left\langle
s\left(t\right),\hat{A}s\left(t\right)\right\rangle.$ (7)
Thus, the narrow band ambiguity function can be written using this definition
as
$\chi_{N}(\omega,\tau)=\left\langle
e^{-i\tau\mathcal{\hat{W}}}\right\rangle_{s\left(t\right)}.$
We can thus interpret $e^{\pm i\tau\mathcal{\hat{W}}}$ as a translation
operation acting on function $s(t)$ which moves the time $t\rightarrow
t\pm\tau$. This way of considering measurement in radar is a natural continuation of the viewpoint that started with GaborGabor1953 and was extended by WoodwardWoodward1980 and VakmanVakman1968.
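The expectation form of $\chi_{N}$ can be sanity-checked on a toy discrete grid. In this sketch (the grid size and the Gaussian pulse are illustrative assumptions), the ambiguity surface peaks at the origin, where it equals the pulse energy:

```python
import numpy as np

# Discrete sketch of the narrowband ambiguity function
#   chi_N(omega, tau) = sum_t exp(-i t omega) s*(t) s(t - tau)
# for a Gaussian pulse; the grid and pulse width are illustrative.
n = 128
t = np.arange(-n // 2, n // 2)
s = np.exp(-t**2 / 200.0).astype(complex)

def chi_N(omega, tau):
    shifted = np.roll(s, tau)  # s(t - tau) on the periodic grid
    return np.sum(np.exp(-1j * t * omega) * np.conj(s) * shifted)

energy = np.sum(np.abs(s)**2)
print(abs(chi_N(0.0, 0)), energy)   # equal at the origin
print(abs(chi_N(0.0, 0)) > abs(chi_N(0.3, 10)))  # the origin is the peak
```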
The time operator, $\mathcal{\hat{T}},$ is
$\mathcal{\hat{T}}=-\frac{1}{i}\frac{d}{d\omega},$ (8)
while the frequency operator is
$\mathcal{\hat{W}=}\frac{1}{i}\frac{d}{dt}.$ (9)
It is understood that these operators act on signals and that
$\mathcal{\hat{W}}^{n}s\left(t\right)=\left(\frac{1}{i}\frac{d}{dt}\right)^{n}s\left(t\right).$
(10)
A very useful calculational trick is based on a modification of Parseval's theorem for an unnormalized signal:
$\displaystyle E$ $\displaystyle=\left\langle
s(t),s(t)\right\rangle=\left\langle 1\right\rangle_{s\left(t\right)}$
$\displaystyle=\frac{1}{2\pi}\left\langle\left\langle
S(\omega^{\prime})e^{i\omega^{\prime}t},1\right\rangle,\left\langle
S(\omega)e^{i\omega t},1\right\rangle,1\right\rangle$
$\displaystyle=\left\langle\left\langle
S(\omega^{\prime}),S(\omega)\delta\left(\omega-\omega^{\prime}\right)\right\rangle\right\rangle$
$\displaystyle=\left\langle S(\omega),S(\omega)\right\rangle=\left\langle
1\right\rangle_{S}.$ (11)
Now it follows that the expected value of the frequency of a signal
$S\left(\omega\right)$ can be written as
$\left\langle\omega\right\rangle=\text{
}_{s(t^{\prime})}\left\langle\mathcal{\hat{W}}\right\rangle_{s(t)}.$
From this result, it follows that
$\left\langle\omega^{n}\right\rangle=\text{
}_{s(t^{\prime})}\left\langle\mathcal{\hat{W}}^{n}\right\rangle_{s(t)},$ (12)
which can be proved by induction. If $g$ is an analytic function, it follows that
$\left\langle g\left(\omega\right)\right\rangle=\text{
}_{s(t^{\prime})}\left\langle
g\left(\mathcal{\hat{W}}\right)\right\rangle_{s(t)}.$ (13)
Thus, to calculate the average frequency of a signal, we do not have to compute the Fourier transform. Rather, one simply calculates derivatives of the signal and then integrates.
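This claim can be verified numerically. In the sketch below (a toy grid and a Gaussian pulse with carrier $\omega_{0}=2$; all parameters are illustrative), the mean frequency computed from time-domain derivatives matches the first moment of $|S(\omega)|^{2}$ computed with an FFT:

```python
import numpy as np

# Check that <omega> from <s, W s>, with W = (1/i) d/dt, matches the
# first moment of |S(omega)|^2 from the FFT, as Eqs. (12)-(13) assert.
n = 1024
dt = 0.05
t = (np.arange(n) - n // 2) * dt
omega0 = 2.0
s = np.exp(-t**2) * np.exp(1j * omega0 * t)  # Gaussian with carrier omega0

# Time-domain route: <omega> = Re <s, (1/i) s'> / <s, s>
ds = np.gradient(s, dt)
mean_w_time = np.real(np.sum(np.conj(s) * ds / 1j) / np.sum(np.abs(s)**2))

# Frequency-domain route: first moment of |S(omega)|^2
S = np.fft.fftshift(np.fft.fft(s))
w = np.fft.fftshift(np.fft.fftfreq(n, d=dt)) * 2 * np.pi
mean_w_freq = np.sum(w * np.abs(S)**2) / np.sum(np.abs(S)**2)

print(mean_w_time, mean_w_freq)  # both close to omega0 = 2.0
```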
The frequency translation operator has exactly the same effect:
$e^{i\theta\mathcal{\hat{T}}}S\left(\omega\right)=S\left(\omega-\theta\right).$
(14)
For a complex signal,
$s\left(t\right)=A\left(t\right)e^{i\vartheta\left(t\right)}$,
$\mathcal{\hat{W}}s(t)=\left(\vartheta^{\prime}\left(t\right)-i\frac{A^{\prime}\left(t\right)}{A\left(t\right)}\right)s(t)$
(15)
so
$\left\langle\omega\right\rangle_{S}=\left\langle\left(\vartheta^{\prime}\left(t\right)+i\frac{A^{\prime}\left(t\right)}{A\left(t\right)}\right)A\left(t^{\prime}\right),A\left(t\right)\right\rangle=\left\langle\vartheta^{\prime}\left(t\right)\right\rangle_{A\left(t\right)}$
(16)
since the second term in the integral is a perfect differential. The average
frequency is the derivative of the phase, $\vartheta\left(t\right)$, over the
density over all time. Thus the phase at each time must be instantaneous in
some sense, i.e. $\omega_{i}\left(t\right)$, so we can make the identification
that $\omega_{i}\left(t\right)=\vartheta^{\prime}\left(t\right)$. Similarly,
we can show that
$\left\langle\omega^{2}\right\rangle_{S\left(\omega\right)}=\left\langle\vartheta^{\prime 2}\left(t\right)\right\rangle_{A\left(t\right)}+\left\langle\left(\frac{A^{\prime}\left(t\right)}{A\left(t\right)}\right)^{2}\right\rangle_{A\left(t\right)}.$
(17)
The covariance of a signal might be thought of as the "average time" multiplied by the instantaneous frequency, i.e. $\left\langle t\vartheta^{\prime}\left(t\right)\right\rangle_{s}=\left\langle t\vartheta^{\prime}\left(t\right)\right\rangle_{A\left(t\right)}$. When time and frequency are uncorrelated with each other, it is reasonable to expect that $\left\langle t\vartheta^{\prime}\left(t\right)\right\rangle=\left\langle t\right\rangle\left\langle\omega\right\rangle$, so the difference between the two is a measure of how time is correlated with the instantaneous frequency.
Thus, the covariance of the signal is
$Cov_{t\omega}=\left\langle
t\vartheta^{\prime}\left(t\right)\right\rangle-\left\langle
t\right\rangle\left\langle\omega\right\rangle,$ (18)
while the correlation coefficient, $r$, is
$r=\frac{Cov_{t\omega}}{\sigma_{t}\sigma_{\omega}},$
which is the normalized covariance. Real signals have zero correlation coefficients, as do signals of the form $A\left(t\right)e^{i\omega_{0}t}$ or $S\left(\omega\right)=A\left(\omega\right)e^{i\omega t_{0}}$, whereas signals with complicated phase modulation have a non-zero correlation coefficient.
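As a concrete instance, for a Gaussian-envelope linear chirp $s(t)=A(t)e^{i\beta t^{2}}$ the instantaneous frequency is $\vartheta^{\prime}(t)=2\beta t$, so the covariance reduces to $2\beta$ times the time variance. A discretized sketch (the envelope width, chirp rate $\beta$, and grid are illustrative assumptions):

```python
import numpy as np

# For s(t) = A(t) exp(i beta t^2) we have theta'(t) = 2 beta t, so
# Cov_{t omega} = 2 beta (<t^2> - <t>^2).  Parameters are illustrative.
t = np.linspace(-10, 10, 4001)
dt = t[1] - t[0]
beta = 0.5
A = np.exp(-t**2 / 4)            # Gaussian envelope with <t^2> - <t>^2 = 1
w_inst = 2 * beta * t            # instantaneous frequency theta'(t)
rho = A**2 / np.sum(A**2 * dt)   # density |A(t)|^2 over all time

mean_t = np.sum(t * rho * dt)
mean_w = np.sum(w_inst * rho * dt)
cov = np.sum(t * w_inst * rho * dt) - mean_t * mean_w
print(cov)  # close to 2 * beta * 1 = 1
```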
When dealing with more than one operator acting on a signal, we must be able
to interpret the action of multiple operators such as
$\mathcal{\hat{A}\hat{B}}$ acting upon signals. Here
$\mathcal{\hat{A}\hat{B}}$ is taken to mean that $\mathcal{\hat{B}}$ acts on the signal first, followed by $\mathcal{\hat{A}}$. The commutator of
$\mathcal{\hat{A}}$ and $\mathcal{\hat{B}}$ is
$\left[\mathcal{\hat{A}},\mathcal{\hat{B}}\right]=\mathcal{\hat{A}\hat{B}-\hat{B}\hat{A}}\text{.}$
(19)
For example, the action of the time and frequency commutator on a signal is
$\left[\mathcal{\hat{T}},\mathcal{\hat{W}}\right]s\left(t\right)=\left(\mathcal{\hat{T}\hat{W}-\hat{W}\hat{T}}\right)s\left(t\right)=is\left(t\right).$
(20)
This is analogous to the same result in quantum mechanics where the commutator
of the position and momentum operator is equal to $i$ when $\hbar=1$. The
scale operator $\mathcal{\hat{C}}$ is defined as
$\mathcal{\hat{C}}=\frac{1}{2}\left[\mathcal{\hat{T}},\mathcal{\hat{W}}\right]_{+}=\frac{1}{2i}\left(t\frac{d}{dt}+\frac{d}{dt}t\right).$
(21)
It can also be written as
$\mathcal{\hat{C}}=\mathcal{\hat{T}\hat{W}}-\frac{i}{2}.$ (22)
$\mathcal{\hat{C}}$ has the property that it transforms a signal
$s\left(t\right)$ according to
$e^{i\sigma\mathcal{\hat{C}}}s(t)=e^{\sigma/2}s(e^{\sigma}t)$ (23)
for a scaling parameter $\sigma$. Thus, the wideband ambiguity function can be
written as
$\chi_{WB}(\omega,\tau)=\sqrt{\alpha}\left\langle\,e^{-i\alpha\mathcal{\hat{C}}}e^{-i\tau\mathcal{\hat{W}}}\right\rangle_{s(t)},$
(24)
the expected value of the scaling and translation operators for the signal $s(t)e^{-it\pi f}$. Maximizing this expectation is equivalent to maximizing the signal to noise ratio (SNR) at the receiver. We explore what physical interactions, expressed in terms of operators, can be maximized.
## III Physical Interactions
While the primary scatterer produces the usual Doppler velocity and delay
which is equivalent to the range, the operator viewpoint may hold some promise
for finding interactions between the radar signal and the target that extend
beyond considerations of position and velocity related criteria. Additional
scatters can induce secondary characteristics into the return signal, such as
micro-Doppler, which can be incorporated into the design of a receiver to
maximize the possibility for detecting these types of secondary target induced
characteristics. In addition to a scalar signal, higher dimensional waveform
interactions can be considered as well, such as how the polarization of
materials affects the waveform. The symmetric form of the cross ambiguity function (CAF) is defined as
$\chi_{q,s}(\omega,\tau)=\int_{-\infty}^{\infty}e^{-it\omega}\,q^{\ast}(t+\frac{\tau}{2})s(t-\frac{\tau}{2})\,dt,$ (25)
where $s(t)$ is the transmitted signal, $q(t)$ is the correlation signal, and $\tau$ is the delay parameter. This is the traditional form of the CAF. Instead of this form, a new type of CAF is proposed here, based on quantum mechanics.
Any signal can be expressed as a complex vector. A new approach to signal
amplification is presented here based on work by Aharonov on amplification of
the measurement of some operators in quantum phenomena Aharonov2005 . Since
any quantity that involves the usage of expected values of complex signals can
be expressed in the same mathematical form as the quantum mechanical approach
to signal amplification, the Aharonov approach suggests a potential candidate
for the signal amplification that is similar to a CFA. The classical
equivalent to this is what we choose to call cross correlation signal
amplification. The definition of the cross correlation amplification of an
observable $\hat{A}$ by the waveforms $\left|\Psi_{i}\right\rangle$ and
$\left|\Psi_{f}\right\rangle$ is:
${}_{f}\left\langle\mathcal{\hat{A}}_{cross}\right\rangle_{i}=\frac{\left\langle\Psi_{f}|\hat{A}|\Psi_{i}\right\rangle}{\left\langle\Psi_{f}|\Psi_{i}\right\rangle}$
(26)
where both $\left|\Psi_{i}\right\rangle$ and $\left|\Psi_{f}\right\rangle$ are
normalized. Now, the obvious question is how does the cross correlation
measurement of an observable
${}_{f}\left\langle\mathcal{\hat{A}}_{cross}\right\rangle_{i}$ differ from
that of a normal observable $\hat{A}$?
Note that $\left|\left\langle\Psi_{f}|\Psi_{i}\right\rangle\right|^{2}\leq\left\langle\Psi_{i}|\Psi_{i}\right\rangle\left\langle\Psi_{f}|\Psi_{f}\right\rangle=1$ by the Cauchy-Schwarz inequality, so $\left|\left\langle\Psi_{f}|\Psi_{i}\right\rangle\right|\leq 1$. Thus,
$\frac{1}{\left|\left\langle\Psi_{f}|\Psi_{i}\right\rangle\right|}\geq 1,$
and the effect of the denominator is to "magnify" the numerator, provided there is no counterbalancing effect. Note that if $\hat{A}\left|\Psi_{i}\right\rangle=\lambda_{\hat{A}}\left|\Psi_{i}\right\rangle$, then
${}_{f}\left\langle\mathcal{\hat{A}}_{cross}\right\rangle_{i}=\frac{\left\langle\Psi_{f}|\hat{A}|\Psi_{i}\right\rangle}{\left\langle\Psi_{f}|\Psi_{i}\right\rangle}=\lambda_{\hat{A}},$
so there is no effect. When this cancellation does not occur, there can be a magnification, in some sense, of the measurement of an operator. For an
electromagnetic wave, the operator interactions can be treated as either two
by two or four by four matrices. We consider only the two dimensional case.
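A minimal numerical sketch of the cross-correlation amplification of Eq. (26), assuming real two-component waveforms parametrized by illustrative angles: the ordinary expectation of $\hat{\sigma}_{1}$ is bounded by its eigenvalues, but the cross value blows up as the overlap $\left\langle\Psi_{f}|\Psi_{i}\right\rangle$ shrinks.

```python
import numpy as np

# Cross-correlation amplification of Eq. (26) for normalized
# two-component waveforms; the angles below are illustrative.
def cross_expectation(psi_f, psi_i, A):
    return (np.conj(psi_f) @ (A @ psi_i)) / (np.conj(psi_f) @ psi_i)

def state(theta):
    return np.array([np.cos(theta), np.sin(theta)], dtype=complex)

sigma1 = np.array([[1, 0], [0, -1]], dtype=complex)

# The ordinary expectation of sigma_1 lies between its eigenvalues -1 and 1...
psi_i = state(-np.pi / 4)
ordinary = np.real(np.conj(psi_i) @ (sigma1 @ psi_i))

# ...but the cross value blows up as the overlap <psi_f|psi_i> shrinks.
psi_f = state(np.pi / 4 - 0.01)        # nearly orthogonal post-selection
amplified = np.real(cross_expectation(psi_f, psi_i, sigma1))
print(ordinary, amplified)             # ordinary ~ 0, |amplified| >> 1
```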
### III.1 Multi-dimensional Interaction Operators
Thus, the signal can be assumed to be of the form:
$\left|\Psi_{i}\left(t\right)\right\rangle=\left[\begin{array}[]{c}E_{1}^{i}\left(t\right)\\\
E_{2}^{i}\left(t\right)\end{array}\right],$ (27)
and the cross correlation signal is:
$\left|\Psi_{f}\left(t\right)\right\rangle=\left[\begin{array}[]{c}E_{1}^{f}\left(t\right)\\\
E_{2}^{f}\left(t\right)\end{array}\right],$ (28)
where the $E$’s can be real or complex. An interaction with a scattering
object can be thought of as a matrix, $\hat{M}_{S}$, which acts on
$\left|\Psi_{i}\left(t\right)\right\rangle$ to give a return signal
$\left|\Psi_{R}\left(t\right)\right\rangle$, so
$\left|\Psi_{R}\left(t\right)\right\rangle=\hat{M}_{S}\left|\Psi_{i}\left(t\right)\right\rangle.$
(29)
The cross correlation measurement amplification of operator $\hat{M}_{S}$ is
${}_{f}\left\langle\mathcal{M}_{cross}\right\rangle_{i}\;\mathcal{=}\frac{\left\langle\Psi_{f}\left(t\right)|\Psi_{R}\left(t\right)\right\rangle}{\left\langle\Psi_{f}\left(t\right)|\Psi_{i}\left(t\right)\right\rangle}.$
(weak)
This example of amplification, which is analogous to spin systems in quantum
mechanics, applies to polarimetric radars. Consider the four polarization
matrices:
$\hat{\sigma}_{0}=\left[\begin{array}[]{cc}1&0\\\ 0&1\end{array}\right],\text{
\ \ }\hat{\sigma}_{1}=\left[\begin{array}[]{cc}1&0\\\ 0&-1\end{array}\right],$
(30) $\hat{\sigma}_{2}=\left[\begin{array}[]{cc}0&1\\\
1&0\end{array}\right],\text{
}\hat{\sigma}_{3}=\left[\begin{array}[]{cc}0&-i\\\
i&0\end{array}\right]\text{.\ }$ (31)
The first operator, $\hat{\sigma}_{0}$, acting on
$\left|\Psi_{i}\left(t\right)\right\rangle$ is the identity, so it is equivalent to the previous no-amplification case. Now, if the waveforms are normalized,
$\left|E_{1}^{i}\left(t\right)\right|^{2}+\left|E_{2}^{i}\left(t\right)\right|^{2}=1$
and
$\left|E_{1}^{f}\left(t\right)\right|^{2}+\left|E_{2}^{f}\left(t\right)\right|^{2}=1$,
we may parametrize them by angles $\theta$ and $\theta^{\prime}$ with
$\tan\theta=\frac{E_{2}^{i}\left(t\right)}{E_{1}^{i}\left(t\right)}\text{ and }\tan\theta^{\prime}=\frac{E_{2}^{f}\left(t\right)}{E_{1}^{f}\left(t\right)},$
thus, we have
$\tan\theta\tan\theta^{\prime}=\frac{E_{2}^{i}\left(t\right)E_{2}^{f}\left(t\right)}{E_{1}^{i}\left(t\right)E_{1}^{f}\left(t\right)}.$
Note that we have treated the amplitudes as real, so the angles are real; this is not necessary, since complex angles are possible. Introducing a complex angle would add a second, imaginary term that affects only the imaginary component. This possibility will be discussed in a future paper.
Now,
$\hat{\sigma}_{1}\left|\Psi_{i}\left(t\right)\right\rangle=\left[\begin{array}[]{c}E_{1}^{i}\left(t\right)\\\
-E_{2}^{i}\left(t\right)\end{array}\right]$
so
$\displaystyle\frac{\left\langle\Psi_{f}\left(t\right)\right|\hat{\sigma}_{1}\left|\Psi_{i}\left(t\right)\right\rangle}{\left\langle\Psi_{f}\left(t\right)|\Psi_{i}\left(t\right)\right\rangle}$
$\displaystyle=\frac{E_{1}^{f}\left(t\right)E_{1}^{i}\left(t\right)-E_{2}^{i}\left(t\right)E_{2}^{f}\left(t\right)}{E_{1}^{f}\left(t\right)E_{1}^{i}\left(t\right)+E_{2}^{i}\left(t\right)E_{2}^{f}\left(t\right)}$
$\displaystyle=\frac{1-\tan\theta\tan\theta^{\prime}}{1+\tan\theta\tan\theta^{\prime}}.$
(32)
When $\theta\rightarrow-\frac{\pi}{4}$,
$\frac{\left\langle\Psi_{f}\left(t\right)\right|\hat{\sigma}_{1}\left|\Psi_{i}\left(t\right)\right\rangle}{\left\langle\Psi_{f}\left(t\right)|\Psi_{i}\left(t\right)\right\rangle}\rightarrow\frac{1+\tan\theta^{\prime}}{1-\tan\theta^{\prime}}\underset{\theta^{\prime}\rightarrow\frac{\pi}{4}}{\rightarrow}\infty,$
so there can be amplification. In addition, we have
$\hat{\sigma}_{2}\left|\Psi_{i}\left(t\right)\right\rangle=\left[\begin{array}[]{c}E_{2}^{i}\left(t\right)\\\
E_{1}^{i}\left(t\right)\end{array}\right],$
so
$\displaystyle\frac{\left\langle\Psi_{f}\left(t\right)\right|\hat{\sigma}_{2}\left|\Psi_{i}\left(t\right)\right\rangle}{\left\langle\Psi_{f}\left(t\right)|\Psi_{i}\left(t\right)\right\rangle}$
$\displaystyle=\frac{E_{1}^{f}\left(t\right)E_{2}^{i}\left(t\right)+E_{1}^{i}\left(t\right)E_{2}^{f}\left(t\right)}{E_{1}^{f}\left(t\right)E_{1}^{i}\left(t\right)+E_{2}^{i}\left(t\right)E_{2}^{f}\left(t\right)}$
$\displaystyle=\frac{\tan\theta^{\prime}+\tan\theta}{\left(\tan\theta\tan\theta^{\prime}+1\right)}$
$\displaystyle=\frac{\sin\left(\theta+\theta^{\prime}\right)}{\cos\left(\theta-\theta^{\prime}\right)}.$
(33)
When $\theta-\theta^{\prime}\rightarrow\frac{\pi}{2}$,
$\frac{\left\langle\Psi_{f}\left(t\right)\right|\hat{\sigma}_{2}\left|\Psi_{i}\left(t\right)\right\rangle}{\left\langle\Psi_{f}\left(t\right)|\Psi_{i}\left(t\right)\right\rangle}=\underset{\varepsilon\rightarrow
0}{\lim}\frac{\sin\left(2\theta^{\prime}+\varepsilon+\frac{\pi}{2}\right)}{\cos\left(\varepsilon+\frac{\pi}{2}\right)}\rightarrow\infty,$
so amplification is possible. Finally, we have
$\hat{\sigma}_{3}\left|\Psi_{i}\left(t\right)\right\rangle=i\left[\begin{array}[]{c}-E_{2}^{i}\left(t\right)\\\
E_{1}^{i}\left(t\right)\end{array}\right],$
so
$\displaystyle\frac{\left\langle\Psi_{f}\left(t\right)\right|\hat{\sigma}_{3}\left|\Psi_{i}\left(t\right)\right\rangle}{\left\langle\Psi_{f}\left(t\right)|\Psi_{i}\left(t\right)\right\rangle}$
$\displaystyle=i\frac{-E_{1}^{f}\left(t\right)E_{2}^{i}\left(t\right)+E_{1}^{i}\left(t\right)E_{2}^{f}\left(t\right)}{E_{1}^{f}\left(t\right)E_{1}^{i}\left(t\right)+E_{2}^{i}\left(t\right)E_{2}^{f}\left(t\right)}$
$\displaystyle=i\tan\left(\theta^{\prime}-\theta\right),$ (34)
by using the trigonometric identity
$\tan\left(\alpha\pm\beta\right)=\frac{\tan\alpha\pm\tan\beta}{1\mp\tan\alpha\tan\beta}.$
So amplification occurs as
$\left(\theta-\theta^{\prime}\right)\rightarrow\frac{\pi}{2}$. Thus, the non-trivial operators $\hat{\sigma}_{1},\hat{\sigma}_{2},\hat{\sigma}_{3}$ can be amplified under the right conditions on the components of the post-selection waveforms.
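The three closed forms can be checked directly against the defining ratio (the angles below are arbitrary illustrative values; note that the direct computation gives $i\tan(\theta^{\prime}-\theta)$ for $\hat{\sigma}_{3}$):

```python
import numpy as np

# Numerical check of the closed forms in Eqs. (32)-(34) for real
# two-component waveforms parametrized by theta and theta'.
theta, thp = 0.3, 1.1
Ei = np.array([np.cos(theta), np.sin(theta)])
Ef = np.array([np.cos(thp), np.sin(thp)])
den = Ef @ Ei

s1 = np.array([[1, 0], [0, -1]], dtype=complex)
s2 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[0, -1j], [1j, 0]])

v1 = (Ef @ s1 @ Ei) / den
v2 = (Ef @ s2 @ Ei) / den
v3 = (Ef @ s3 @ Ei) / den

tt = np.tan(theta) * np.tan(thp)
print(np.isclose(v1, (1 - tt) / (1 + tt)))
print(np.isclose(v2, np.sin(theta + thp) / np.cos(theta - thp)))
print(np.isclose(v3, 1j * np.tan(thp - theta)))  # i tan(theta' - theta)
```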
There are four additional operators to consider:
$\hat{P}_{11}=\left[\begin{array}[]{cc}1&0\\\ 0&0\end{array}\right],$
$\hat{P}_{12}=\left[\begin{array}[]{cc}0&1\\\ 0&0\end{array}\right],$
$\hat{P}_{21}=\left[\begin{array}[]{cc}0&0\\\ 1&0\end{array}\right],$
and
$\hat{P}_{22}=\left[\begin{array}[]{cc}0&0\\\ 0&1\end{array}\right].$
Now,
$\hat{P}_{11}\left|\Psi_{i}\left(t\right)\right\rangle=\left[\begin{array}[]{c}E_{1}^{i}\left(t\right)\\\
0\end{array}\right]$
so
$\displaystyle\frac{\left\langle\Psi_{f}\left(t\right)\right|\hat{P}_{11}\left|\Psi_{i}\left(t\right)\right\rangle}{\left\langle\Psi_{f}\left(t\right)|\Psi_{i}\left(t\right)\right\rangle}$
$\displaystyle=\frac{E_{1}^{f}\left(t\right)E_{1}^{i}\left(t\right)}{E_{1}^{f}\left(t\right)E_{1}^{i}\left(t\right)+E_{2}^{i}\left(t\right)E_{2}^{f}\left(t\right)}$
$\displaystyle=\frac{1}{\tan\theta\tan\theta^{\prime}+1}.$ (35)
Note that the denominator goes to zero as $\tan\theta\tan\theta^{\prime}\rightarrow-1$, while the numerator remains finite, so amplification is possible for this operator. Also, if $\hat{P}_{11}$ is replaced by $a\hat{P}_{11}$ for a constant $a$, the amplification effect works as well. Now,
$\hat{P}_{12}\left|\Psi_{i}\left(t\right)\right\rangle=\left[\begin{array}[]{c}E_{2}^{i}\left(t\right)\\\
0\end{array}\right]$
so
$\displaystyle\frac{\left\langle\Psi_{f}\left(t\right)\right|\hat{P}_{12}\left|\Psi_{i}\left(t\right)\right\rangle}{\left\langle\Psi_{f}\left(t\right)|\Psi_{i}\left(t\right)\right\rangle}$
$\displaystyle=\frac{E_{1}^{f}\left(t\right)E_{2}^{i}\left(t\right)}{E_{1}^{f}\left(t\right)E_{1}^{i}\left(t\right)+E_{2}^{i}\left(t\right)E_{2}^{f}\left(t\right)}$
$\displaystyle=\frac{\tan\theta}{\tan\theta\tan\theta^{\prime}+1}$ (36)
Note that the denominator goes to zero as $\tan\theta\tan\theta^{\prime}\rightarrow-1$, while the numerator remains finite, so amplification is possible for this operator. Also, if $\hat{P}_{12}$ is replaced by $a\hat{P}_{12}$ for a constant $a$, the amplification effect works as well. Now,
$\hat{P}_{21}\left|\Psi_{i}\left(t\right)\right\rangle=\left[\begin{array}[]{c}0\\\
E_{1}^{i}\left(t\right)\end{array}\right]$
so
$\displaystyle\frac{\left\langle\Psi_{f}\left(t\right)\right|\hat{P}_{21}\left|\Psi_{i}\left(t\right)\right\rangle}{\left\langle\Psi_{f}\left(t\right)|\Psi_{i}\left(t\right)\right\rangle}$
$\displaystyle=\frac{E_{2}^{f}\left(t\right)E_{1}^{i}\left(t\right)}{E_{1}^{f}\left(t\right)E_{1}^{i}\left(t\right)+E_{2}^{i}\left(t\right)E_{2}^{f}\left(t\right)}$
$\displaystyle=\frac{\tan\theta^{\prime}}{\tan\theta\tan\theta^{\prime}+1}.$
(37)
Note that the denominator goes to zero as $\tan\theta\tan\theta^{\prime}\rightarrow-1$, while the numerator remains finite, so amplification is possible for this operator. Also, if $\hat{P}_{21}$ is replaced by $a\hat{P}_{21}$ for a constant $a$, the amplification effect works as well. Now,
$\hat{P}_{22}\left|\Psi_{i}\left(t\right)\right\rangle=\left[\begin{array}[]{c}0\\\
E_{2}^{i}\left(t\right)\end{array}\right]$
so
$\displaystyle\frac{\left\langle\Psi_{f}\left(t\right)\right|\hat{P}_{22}\left|\Psi_{i}\left(t\right)\right\rangle}{\left\langle\Psi_{f}\left(t\right)|\Psi_{i}\left(t\right)\right\rangle}$
$\displaystyle=\frac{E_{2}^{f}\left(t\right)E_{2}^{i}\left(t\right)}{E_{1}^{f}\left(t\right)E_{1}^{i}\left(t\right)+E_{2}^{i}\left(t\right)E_{2}^{f}\left(t\right)}$
$\displaystyle=\frac{\tan\theta\tan\theta^{\prime}}{\tan\theta\tan\theta^{\prime}+1}.$
(38)
Note that the denominator goes to zero as $\tan\theta\tan\theta^{\prime}\rightarrow-1$, while the numerator remains finite, so amplification is possible for this operator. Also, if $\hat{P}_{22}$ is replaced by $a\hat{P}_{22}$, the amplification effect works as well.
Note that we have provided necessary conditions under which these operators can be amplified, but not sufficient ones. Sufficiency requires exhibiting specific waveforms, or classes of waveforms, whose components satisfy the angle conditions that produce amplification. In addition, noise has to be brought into the analysis.
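Since Eqs. (35)-(38) share the denominator $\tan\theta\tan\theta^{\prime}+1$, all four projector cross values blow up together near the condition $\tan\theta\tan\theta^{\prime}\rightarrow-1$, i.e. $\theta^{\prime}\rightarrow\theta-\frac{\pi}{2}$. A quick numerical sketch (the angles are illustrative):

```python
import numpy as np

# All four projector cross values blow up together as
# tan(theta)tan(theta') approaches -1, i.e. theta' -> theta - pi/2.
theta = 0.9
thp = theta - np.pi / 2 + 0.01         # just off the blow-up condition
tt = np.tan(theta) * np.tan(thp)       # close to -1

Ei = np.array([np.cos(theta), np.sin(theta)])
Ef = np.array([np.cos(thp), np.sin(thp)])
den = Ef @ Ei

projectors = [np.array(m, dtype=float) for m in
              ([[1, 0], [0, 0]], [[0, 1], [0, 0]],
               [[0, 0], [1, 0]], [[0, 0], [0, 1]])]
vals = [(Ef @ P @ Ei) / den for P in projectors]
print(tt, [round(v, 1) for v in vals])  # every |value| is large
```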
### III.2 Scattering Operators
The scattering operators for six specific structures are examined from the viewpoint of amplification of operators. These scattering operators are special cases (two-dimensional matrices) of the more general operators (four-dimensional matrices) found in CollettCollett2009 .
1. 1.
For a sphere, a plane, or triangular corner reflector oriented horizontally,
the scattering matrix is:
$S\left(h,r\right)=\left[\begin{array}[]{cc}1&0\\\
0&1\end{array}\right]=\hat{\sigma}_{0}.$ (39)
Since this is the identity, there is no amplification effect. For a sphere, a
plane, or triangular corner reflector vertically polarized, the scattering
matrix is:
$\hat{S}\left(v,r\right)=\left[\begin{array}[]{cc}0&i\\\
i&0\end{array}\right]=i\hat{\sigma}_{2},$ (40)
so it can be amplified. (Note $h$ stands for horizontal polarization and $v$
stands for vertical polarization.)
2. 2.
For a dipole oriented along the vertical axis:
$\displaystyle\hat{S}\left(h,r\right)$
$\displaystyle=\left[\begin{array}[]{cc}1&0\\\
0&0\end{array}\right]=\hat{P}_{11},$ (43) and
$\displaystyle\hat{S}\left(v,r\right)$
$\displaystyle=\frac{1}{2}\left[\begin{array}[]{cc}1&-i\\\
-i&1\end{array}\right]=-\frac{i}{2}\hat{\sigma}_{2}+\frac{1}{2}\hat{\sigma}_{0}.$
(46)
$\hat{S}\left(h,r\right)$ can be amplified, while the first term of
$\hat{S}\left(v,r\right)$ can be amplified.
3. 3.
For a dipole oriented at the angle $\alpha$ from the positive horizontal axis:
$\displaystyle\hat{S}\left(h,r\right)$
$\displaystyle=\left[\begin{array}[]{cc}\cos^{2}\alpha&\frac{1}{2}\sin
2\alpha\\\ \frac{1}{2}\sin
2\alpha&\sin^{2}\alpha\end{array}\right]=\frac{1}{2}\sin
2\alpha\hat{\sigma}_{2}+\cos^{2}\alpha\hat{P}_{11}+\sin^{2}\alpha\hat{P}_{22},$
(49) and $\displaystyle\hat{S}\left(v,r\right)$
$\displaystyle=\frac{1}{2}\left[\begin{array}[]{cc}e^{i2\alpha}&-i\\\
-i&e^{-i2\alpha}\end{array}\right]=\frac{-i}{2}\hat{\sigma}_{2}+e^{i2\alpha}\hat{P}_{11}+e^{-i2\alpha}\hat{P}_{22}.$
(52)
Since $\hat{\sigma}_{2}$ and $\alpha\hat{P}_{11}+\beta\hat{P}_{22}\neq\hat{\sigma}_{0}$ can be individually amplified, so can these scattering matrices. Note that
$\alpha\hat{P}_{11}+\beta\hat{P}_{22}=\Xi=\left[\begin{array}[]{cc}\alpha&0\\\ 0&\beta\end{array}\right],$ (53)
and when $\Xi=\Xi^{{\dagger}}$, the combination $\alpha\hat{P}_{11}+\beta\hat{P}_{22}$ is Hermitian.
4. 4.
For a dihedral corner reflector oriented along the horizontal axis:
$\displaystyle\hat{S}\left(h,r\right)$
$\displaystyle=\left[\begin{array}[]{cc}1&0\\\
0&-1\end{array}\right]=\hat{\sigma}_{1},$ (56) and
$\displaystyle\hat{S}\left(v,r\right)$
$\displaystyle=\left[\begin{array}[]{cc}1&0\\\
0&1\end{array}\right]=\hat{\sigma}_{0}.$ (59)
The first operator can be amplified and the second cannot.
5. 5.
For a right helix oriented at an angle $\alpha$ from the positive horizontal
axis:
$\displaystyle\hat{S}\left(h,r\right)$
$\displaystyle=\frac{e^{-i2\alpha}}{2}\left[\begin{array}[]{cc}1&-i\\\
-i&1\end{array}\right]=\frac{e^{-i2\alpha}}{2}\left[\hat{\sigma}_{0}-i\hat{\sigma}_{2}\right],$
(62) and $\displaystyle\hat{S}\left(v,r\right)$
$\displaystyle=\left[\begin{array}[]{cc}0&0\\\
0&e^{-i2\alpha}\end{array}\right]=e^{-i2\alpha}\hat{P}_{22}.$ (65)
Clearly the matrix $\hat{S}\left(v,r\right)$ can be amplified. For $\hat{S}\left(h,r\right)$, although $\hat{\sigma}_{0}$ is not amplified, the component $\frac{e^{-i2\alpha}}{2}\left[-i\hat{\sigma}_{2}\right]$ is amplified relative to it.
6. 6.
For a left helix oriented at an angle $\alpha$ from the positive horizontal
axis:
$\hat{S}\left(h,v\right)=\frac{e^{-i2\alpha}}{2}\left[\begin{array}[]{cc}1&i\\\
i&-1\end{array}\right]=\frac{e^{-i2\alpha}}{2}\hat{\sigma}_{1}+\frac{ie^{-i2\alpha}}{2}\hat{\sigma}_{2}.$
(66)
Clearly this operator can be amplified.
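The Pauli decompositions quoted above can be generated mechanically: since $\mathrm{Tr}(\hat{\sigma}_{j}\hat{\sigma}_{k})=2\delta_{jk}$, the coefficients are $c_{k}=\mathrm{Tr}(\hat{\sigma}_{k}\hat{S})/2$. A sketch for the left-helix matrix of Eq. (66) (the value of $\alpha$ is illustrative):

```python
import numpy as np

# Expand a 2x2 scattering matrix in {sigma_0, ..., sigma_3} using the
# trace orthogonality Tr(sigma_j sigma_k) = 2 delta_jk.
sig = [np.eye(2, dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]])]

alpha = 0.4
phase = np.exp(-2j * alpha)
S_left = (phase / 2) * np.array([[1, 1j], [1j, -1]])  # Eq. (66)

coeffs = [np.trace(s @ S_left) / 2 for s in sig]
recon = sum(c * s for c, s in zip(coeffs, sig))

print(np.allclose(recon, S_left))             # the expansion reproduces S
print(np.isclose(coeffs[1], phase / 2))       # sigma_1 coefficient
print(np.isclose(coeffs[2], 1j * phase / 2))  # sigma_2 coefficient
```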
### III.3 Single Dimensional Interactions with Signals
The goal of receiver design is to maximize the response of a receiver with
respect to the return signal $s_{R}\left(t\right)$. The functional form is
$s_{R}\left(t\right)=s(at+\tau)$ where $\tau$ is the (delay) time it takes the
signal to reach the target and return to the receiver, and $a$ is the dilation of
the time axis due to the motion of the object. This is accomplished by taking
the inner product of $s_{R}\left(t\right)$ with $s^{\ast}(t)$ and integrating,
so we are computing the Fourier transform of the product
$\,s^{\ast}(t)s(at+\tau)$:
$\frac{A}{\sqrt{\pi}}\left\langle\,s(t)e^{it\omega},s(at\pm\tau)\right\rangle=\frac{A^{\prime}}{\sqrt{\pi}}\left\langle\,e^{ia\mathcal{\hat{C}}}e^{i\tau\mathcal{\hat{W}}}\right\rangle,$
(67)
which is the expected value of the operators for scale
$e^{ia\mathcal{\hat{C}}}$ and the operator for time shift
$e^{i\tau\mathcal{\hat{W}}}$. Trying to maximize the reception SNR has led to
the ambiguity function which can be interpreted as the expected value of two
specific operators for a given signal $s(t)$.
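As a concrete illustration (not from the paper), the inner product above can be evaluated numerically. This minimal sketch, assuming a Gaussian pulse and simple Riemann-sum quadrature, shows that the dilation-and-delay response peaks at the matched parameters $a=1$, $\tau=0$:

```python
import numpy as np

def wideband_ambiguity(s, t, a, tau):
    """Inner product <s(t), sqrt(a) s(a t + tau)> on a sampling grid.

    `s` is a callable giving the complex waveform; the sqrt(a) factor
    keeps the dilated copy at the same energy as the original."""
    dt = t[1] - t[0]
    return np.sum(np.conj(s(t)) * np.sqrt(a) * s(a * t + tau)) * dt

# Gaussian pulse: sweeping dilation and delay peaks at a = 1, tau = 0.
s = lambda t: np.exp(-t**2)
t = np.linspace(-10, 10, 4001)
peak = abs(wideband_ambiguity(s, t, 1.0, 0.0))   # equals the pulse energy
off = abs(wideband_ambiguity(s, t, 1.3, 0.5))
assert off < peak
```

By the Cauchy-Schwarz inequality the response is bounded by the pulse energy, with equality only when the dilated, delayed copy coincides with the original.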
The non-uniform Doppler effect can be used to illustrate this operator
viewpoint. The effect of non-uniform Doppler on the radar waveform can be
determined by the application of the relativistic boundary conditions to the
D’Alembert solution to the wave equationGray2003 . The scattered waveform in
terms of the incident waveform becomes
$g(\tau)\simeq f\left(\tau-\frac{2r(\tau)}{c}\right).$ (68)
For a dynamic system characterized by a single parameter $\alpha$, a
dynamic variable $u$ evolves along a path in configuration space; the
configuration of the system traces out a curve parametrized by $\alpha$. Consider the
commutator equation
$\frac{du}{d\alpha}=\left[u,\hat{G}\right].$ (69)
Here, $\hat{G}$ generates the trajectory $u=u(\alpha)$, and $\alpha$ can be
viewed as a geometrical parameter. Expanding $u(\alpha)$ in a Taylor
seriesSouza1990 shows that the generator equation can be used to
replace the dynamics with the operator equationJordan2004
$u(\alpha)=u_{0}+\alpha\left.\left[u,\hat{G}\right]\right|_{\alpha=0}+\frac{\alpha^{2}}{2!}\left.\left[\left[u,\hat{G}\right],\hat{G}\right]\right|_{\alpha=0}+...\text{
}=\exp\left(\alpha\hat{G}\right)\left.u(\alpha)\right|_{\alpha=0}\text{.}$
(70)
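The expansion (70) can be checked symbolically for the simplest generator, $\hat{G}=d/dt$, which generates translation in $t$. The following sketch (an illustration, not from the paper) sums the operator series and compares it with the shifted function:

```python
import sympy as sp

t, alpha = sp.symbols('t alpha')
u = sp.exp(-t**2)          # any smooth "dynamical variable"

# Partial sums of exp(alpha * d/dt) acting on u, evaluated at t = 0,
# mirroring u(alpha) = sum_n alpha^n/n! (d^n u/dt^n)|_{t=0}.
N = 14
series = sum(alpha**n / sp.factorial(n) * sp.diff(u, t, n).subs(t, 0)
             for n in range(N))

# The generator G = d/dt translates the argument: the series at alpha
# should reproduce u evaluated at t = alpha.
approx = float(series.subs(alpha, sp.Rational(1, 2)))
exact = float(u.subs(t, sp.Rational(1, 2)))
assert abs(approx - exact) < 1e-6
```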
For physical systems the generator of dynamics is time, so any function of
time can be thought of as being "generated" by an operator $\hat{G}$ acting on
$u\left(t\right)$. Operator methods show how to "generate" any function of a
parameter in this wayFernandez2002 . For a given $r(\tau)$, we
can assume it is generated by an equation such as
$r(\tau)=\exp\left(k\hat{G}\right)\left.x(\tau)\right|_{\tau=0}$, so
$f\left(\tau-\frac{2r(\tau)}{c}\right)=f(\tau-\alpha\exp\left(\hat{G}\right)\left.r(\tau)\right|_{\tau=0})=\exp\left(\alpha\mathcal{\hat{H}}\right)s\left(\tau\right),$
(71)
where $\mathcal{\hat{H}}$ depends on the specifics of the interaction. For
example, $\mathcal{\hat{H}}$ would be a comb operator in the frequency domain
for a periodic function. In this case, we are estimating the expected value
$\left\langle\exp\left(\tau\mathcal{\hat{H}}\right)\right\rangle$ at the
receiver. Since any scalar interaction on the waveform can be thought of as
the action of an operator on the broadcast waveform, a more general ambiguity
function can always be defined as
$\chi_{\hat{O}}(\omega,\tau)=\frac{A}{\sqrt{\pi}}\int_{-\infty}^{\infty}e^{-it\omega}\,s^{\ast}(t)\exp\left(\tau\mathcal{\hat{H}}\right)s(t)dt=\left\langle
e^{-it\omega}\,s^{\ast}(t),\exp\left(\tau\mathcal{\hat{H}}\right)s(t)\right\rangle.$
(72)
For the remainder of the discussion, we assume the signal is not normalized.
The typical signal processing application is to minimize the effect of the
noise $\tilde{n}$ so as to maximize the signal-to-noise ratio (SNR) for a
received signal $\tilde{y}$. In order to understand how to do this, one uses a
linear model for the combination of signal plus noise,
$\tilde{y}=s\left(t\right)+\tilde{n}$. The response $g(t)$ of a system with
impulse response $h(t)$ to the input $s(t)$ is, at a time $t_{0}$,
$g(t_{0})=\int_{-\infty}^{\infty}s(t)h(t-t_{0})dt=\frac{1}{2\pi}\int_{-\infty}^{\infty}S\left(\omega\right)e^{i\omega
t_{0}}H\left(\omega\right)d\omega.$
Here we wish to determine the maximum value of $g(t_{0})$; this allows us to
maximize the SNR subject to one of several integral constraints specific to
the problem being considered. The SNR depends on the mean-squared
constraint under consideration: it could be based on the energy spectrum
$\left|S\left(\omega\right)\right|^{2}$, it could be based on the constrained
energy spectrum
$\left|S\left(\omega\right)\right|^{2}\left|R(\omega)\right|^{2}$, it could be
based on multiple constraints such as higher order moments of the energy
spectrum, or it could be based on amplitude constraints. Each constraint leads
to a different choice for the system response function $H\left(\omega\right)$.
If we have a specified energy
$E=\left\langle
s(t),s(t)\right\rangle=\frac{1}{2\pi}\int_{-\infty}^{\infty}\left|S\left(\omega\right)\right|^{2}d\omega,$
(73)
then by the Cauchy-Schwarz inequality
$\displaystyle\left|\int_{-\infty}^{\infty}S\left(\omega\right)e^{i\omega
t_{0}}H\left(\omega\right)d\omega\right|^{2}$
$\displaystyle\leq\int_{-\infty}^{\infty}\left|S\left(\omega\right)\right|^{2}d\omega\int_{-\infty}^{\infty}\left|e^{i\omega
t_{0}}H\left(\omega\right)\right|^{2}d\omega$
$\displaystyle=2\pi E\int_{-\infty}^{\infty}\left|e^{i\omega
t_{0}}H\left(\omega\right)\right|^{2}d\omega.$ (74)
The inequality becomes an equality only if
$S\left(\omega\right)=ke^{i\omega t_{0}}H^{\ast}\left(\omega\right),$ (75)
so the maximum value for $g(t_{0})$ is obtained by the choice
$s(t)=kh^{\ast}(t_{0}-t)$ (76)
since $H^{\ast}\left(\omega\right)\longleftrightarrow h^{\ast}(-t)$ and $k$ is
an arbitrary constant. For a linear system
$\tilde{y}\left(t\right)=s\left(t\right)+\tilde{n}\left(t\right)$ with an
impulse response $h(t)$, the output $\tilde{u}(t)$ is
$\tilde{u}(t)=\tilde{y}\left(t\right)\ast
h(t)=\tilde{u}_{s}(t)+\tilde{u}_{n}(t)$ (77)
where $\tilde{u}_{s}(t)=s\left(t\right)\ast h(t)$ and
$\tilde{u}_{n}(t)=\tilde{n}\left(t\right)\ast h(t)$. Now the response to the
signal $s\left(t\right)$ is
$\tilde{u}_{s}(t_{0})=\frac{1}{2\pi}\int_{-\infty}^{\infty}S\left(\omega\right)e^{i\omega
t_{0}}H\left(\omega\right)d\omega,$ (78)
so
$\left|\tilde{u}_{s}(t_{0})\right|^{2}\leq\frac{1}{4\pi^{2}}\int_{-\infty}^{\infty}\left|S\left(\omega\right)\right|^{2}d\omega\int_{-\infty}^{\infty}\left|H\left(\omega\right)\right|^{2}d\omega;$ (79)
this is what Papoulis has called the _Matched Filter Principle_Papoulis1977 .
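A minimal numerical sketch of the Matched Filter Principle (illustrative; the waveform and grid are our assumptions): among filters of equal energy, the matched choice of (76) gives the largest response at $t_{0}$.

```python
import numpy as np

t = np.linspace(-5, 5, 1001)
dt = t[1] - t[0]
s = np.exp(-t**2) * np.exp(1j * 4 * t)       # complex pulse with phase ramp

def response(h, y):
    """g(t0) at t0 = 0: g(0) = integral of y(t) h(t) dt."""
    return abs(np.sum(y * h) * dt)

E_s = np.sum(abs(s)**2) * dt
# Matched choice (Eq. 76 with k = 1, t0 = 0): h(t) = s*(-t) in the
# convolution picture, i.e. correlate with s*(t); response = E_s.
h_matched = np.conj(s)
# Mismatched filter: matches the envelope but ignores the phase ramp,
# normalized to the same filter energy for a fair comparison.
h_mismatched = np.conj(np.exp(-t**2))
h_mismatched *= np.sqrt(E_s / (np.sum(abs(h_mismatched)**2) * dt))

assert response(h_matched, s) > response(h_mismatched, s)
```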
From the operator perspective, the operator acting on the signal should
replace the operator acting on the system response function in this argument,
so
$g(t_{0})=\int_{-\infty}^{\infty}s\left(\tau\right)\exp\left(\alpha\mathcal{\hat{H}}\right)h(\tau-\tau_{0})d\tau=\frac{1}{2\pi}\int_{-\infty}^{\infty}S\left(\omega\right)e^{i\omega\tau_{0}}R\left(\omega\right)d\omega,$
where
$\left\langle
e^{i\omega\tau},\exp\left(\alpha\mathcal{\hat{H}}\right)h(\tau-\tau_{0})\right\rangle=\left\langle
e^{i\omega\tau},\exp\left(\alpha\mathcal{\hat{H}}\right)e^{\tau_{0}\frac{d}{d\tau}}h(\tau)\right\rangle=e^{i\omega\tau_{0}}\left\langle
e^{i\omega\tau},\exp\left(\alpha\mathcal{\hat{H}}\right)h(\tau)\right\rangle=e^{i\omega\tau_{0}}R\left(\omega\right)$
(80)
since the operators commute. When
$\exp\left(\alpha\mathcal{\hat{H}}\right)=\,e^{ia\mathcal{\hat{C}}}e^{i\tau\mathcal{\hat{W}}},$
(81)
the optimum choice is a rescaled and shifted version of the transmitted
signal, with time scale $t_{0}\rightarrow at\pm\tau$: the wideband matched filter.
The Matched Filter Principle is quite general and can be used to introduce a
variety of constraints, which are equivalent to a cost function minimization
approach. For example, if one wanted to maximize the response to the
derivative of the signal, whose energy is $E_{1}$, while requiring the energy
to be normalized, then one has
$E_{1}=\int_{-\infty}^{\infty}\left|s^{\prime}\left(\tau\right)\right|^{2}d\tau=\frac{1}{2\pi}\int_{-\infty}^{\infty}\omega^{2}\left|S\left(\omega\right)\right|^{2}d\omega$
(82)
so $\left|s^{\prime}\left(\tau\right)\right|\leq\sqrt[4]{E_{1}}$ with equality
at time $t_{0}$ if
$s\left(\tau\right)=\sqrt[4]{E_{1}}\exp\left(\frac{\tau-\tau_{0}}{\sqrt{E_{1}}}\right)$.
In general, arbitrary constraints can be considered using this approach.
Suppose we have a signal $\exp\left(\alpha\mathcal{\hat{H}}\right)s\left(\tau\right)$,
where the energy of $s\left(\tau\right)$ is $E$, and we want to maximize the
system response $g\left(t_{0}\right)$ of the system $h\left(t\right)$ subject
to the constraints
$\int_{-\infty}^{\infty}s\left(t\right)\Phi_{i}\left(t\right)dt=\vartheta_{i},$
(83)
where the functions $\Phi_{i}\left(t\right)$ and the constants $\vartheta_{i}$
are given. Then, with the definition
$u_{i}\left(t\right)=\Phi_{i}\left(t\right)-\frac{\vartheta_{i}}{S(0)},$ (84)
the constraint equation becomes
$\int_{-\infty}^{\infty}s\left(t\right)u_{i}\left(t\right)dt=0$ (85)
because $S(0)$ is the area of $s\left(t\right)$. Thus, it follows that the
system response is
$g(\tau_{0})=\int_{-\infty}^{\infty}s\left(\tau\right)\left[\exp\left(\alpha\mathcal{\hat{H}}\right)e^{\tau_{0}\frac{d}{d\tau}}h(\tau)+\sum\limits_{i=1}^{n}\beta_{i}u_{i}\left(\tau\right)\right]d\tau,$
(86)
for arbitrary $\beta_{i}$. Therefore, $g(\tau_{0})$ can be bounded by
$\left|g(\tau_{0})\right|^{2}\leq
E\int_{-\infty}^{\infty}\left|\left[\exp\left(\alpha\mathcal{\hat{H}}\right)e^{\tau_{0}\frac{d}{d\tau}}h(\tau)+\sum\limits_{i=1}^{n}\beta_{i}u_{i}\left(\tau\right)\right]\right|^{2}d\tau.$
(87)
Equality is achieved if
$s\left(\tau\right)=\left[\exp\left(\alpha\mathcal{\hat{H}}\right)e^{\tau_{0}\frac{d}{d\tau}}h^{\ast}(\tau)+\sum\limits_{i=1}^{n}\beta_{i}^{\ast}u_{i}^{\ast}\left(\tau\right)\right].$
(88)
This gives a method for choosing the correlation waveform to achieve maximum
response for a given set of constraints.
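The orthogonality step (84)-(85) is easy to verify numerically. In this sketch (the example waveform and constraint function are our assumptions), a signal satisfying an integral constraint is automatically orthogonal to the shifted constraint function $u_i$:

```python
import numpy as np

t = np.linspace(-5, 5, 2001)
dt = t[1] - t[0]
s = np.exp(-t**2)                 # real example waveform
Phi = t**2                        # an example constraint function Phi_i
S0 = np.sum(s) * dt               # S(0) = area of s(t)

theta = np.sum(s * Phi) * dt      # the constraint value this s satisfies
u = Phi - theta / S0              # Eq. (84)
assert abs(np.sum(s * u) * dt) < 1e-10   # Eq. (85): orthogonality
```

The cancellation is exact by construction: the two terms of $\int s\,u_i\,dt$ are $\vartheta_i$ and $(\vartheta_i/S(0))\,S(0)$.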
## IV Conclusions
The operator method is a much richer way to look at the radar measurement
problem because of its ability to produce a wide variety of distributions
associated with the information contained in a signal. In particular, it is
possible to put the ambiguity function in a wider context as part of a general
theory of measurement. There is a much greater freedom of description of the
same physical situation, which suggests that we can find information present in
waveforms that a waveform designer would not think to look for. This approach
to incorporating quantum mechanical ideas has recently been championed by
BaraniukBaraniuk1995 Baraniuk1998 , who extended the Hermitian
operator approach of quantum mechanics to unitary operators in signal
processing. The specifics of the type of operators matter relative to the
physics of the interaction of the target with the waveform, so this may be
important for future extensions of this work.
Acknowledgement: This work was supported by NSWCDD In-House Laboratory
Independent Research (ILIR) Program.
## References
* (1) Y. Aharonov and D. Rohrlich, _Quantum Paradoxes: Quantum Theory for the Perplexed_, Wiley-VCH, 2005.
* (2) R. G. Baraniuk, "Unitary Equivalence: A New Twist on Signal Processing", _IEEE Transactions on Signal Processing_, Vol. 43, No. 10, Oct. 1995.
* (3) R. G. Baraniuk, "Beyond Time-Frequency Analysis: Energy Densities in One and Many Dimensions", _IEEE Transactions on Signal Processing_, Vol. 46, No. 9, Sept. 1998.
* (4) L. Cohen, _Time-Frequency Analysis_, Prentice-Hall, 1995.
* (5) L. Cohen, "The Scale Representation", _IEEE Transactions on Signal Processing_, Vol. 41, No. 12, Dec. 1993.
* (6) E. Collett, _Field Guide to Polarization_, SPIE Press, 2009.
* (7) F. M. Fernandez, "Operator Methods in Classical Mechanics", _Am. J. Phys._, 70 (9), Sept. 2002.
* (8) D. Gabor, "Communication Theory and Physics", _Transactions of the IRE Professional Group on Information Theory_, Vol. 1, Issue 1, Feb. 1953.
* (9) J. E. Gray and S. R. Addison, "Effect of Non-uniform Motion on the Doppler Spectrum of Scattered Continuous Waveforms", _IEE Proceedings - Radar, Sonar and Navigation_, Vol. 150, Issue 4, Aug. 2003.
* (10) J. E. Gray, "An Interpretation of Woodward's Ambiguity Function and Its Generalization", 2010 IEEE International Radar Conference, May 10-14, Washington DC, USA (invited paper).
* (11) T. F. Jordan, "Steppingstones in Hamiltonian Dynamics", _Am. J. Phys._, 72 (8), Aug. 2004.
* (12) N. Levanon and E. Mozeson, _Radar Signals_, Wiley-IEEE Press, 2004.
* (13) D. O. North, "An Analysis of the Factors Which Determine Signal/Noise Discrimination in Pulsed Carrier Systems", RCA Labs., Princeton, NJ, Rep. PTR-6C, 1943.
* (14) A. Papoulis, _Signal Analysis_, McGraw-Hill, 1977.
* (15) A. Papoulis and S. U. Pillai, _Probability, Random Variables, and Stochastic Processes_, 4th Ed., McGraw-Hill, New York, 2002.
* (16) A. Papoulis, _The Fourier Integral and Its Applications_, McGraw-Hill, 1961.
* (17) C. F. de Souza and M. M. Gandelman, "An Algebraic Approach for Solving Mechanical Problems", _Am. J. Phys._, 58 (5), May 1990.
* (18) D. E. Vakman, _Sophisticated Signals and the Uncertainty Principle in Radar_, Springer-Verlag, New York, 1968.
* (19) H. L. Van Trees, _Detection, Estimation, and Modulation Theory, Part I_, Wiley-Interscience, reprint edition, 2002.
* (20) P. M. Woodward, _Probability and Information Theory with Applications to Radar_, Artech House, 1980.
arXiv:1104.5309, John E Gray and Allen D Parks (Public Domain, submitted 2011-04-28)

1104.5337
# Complex connections on conformal Kähler manifolds with Norden metric
Marta Teofilova
###### Abstract.
An eight-parametric family of complex connections on a class of complex
manifolds with Norden metric is introduced. The form of the curvature tensor with
respect to each of these connections is obtained. The conformal group of the
considered connections is studied and some conformal invariants are obtained.
_2010 Mathematics Subject Classification_ : 53C15, 53C50.
_Keywords_ : complex connection, complex manifold, Norden metric.
## 1\. Introduction
Almost complex manifolds with Norden metric were first studied by A. P. Norden
[9] and are introduced in [4] as generalized $B$-manifolds. A classification
of these manifolds with respect to the covariant derivative of the almost
complex structure is obtained in [1] and two equivalent classifications are
given in [2, 3].
An important problem in the geometry of almost complex manifolds with Norden
metric is the study of linear connections preserving the almost complex
structure, or preserving both the structure and the metric. The first ones are
called almost complex connections, and the second ones are known as natural
connections. A special type of natural connection is the canonical one. In [2]
it is proved that on an almost complex manifold with Norden metric there
exists a unique canonical connection. The canonical connection (called also
the $B$-connection) and its conformal group on a conformal Kähler manifold
with Norden metric are studied in [3].
In [11] we have obtained a two-parametric family of complex connections on a
conformal Kähler manifold with Norden metric and have proved that the
curvature tensors corresponding to these connections coincide with the
curvature tensor of the canonical connection.
In the present work we continue our research on complex connections on complex
manifolds with Norden metric by focusing our attention on the class of the
conformal Kähler manifolds, i.e. manifolds which are conformally equivalent to
Kähler manifolds with Norden metric. We introduce an eight-parametric family of
complex connections on such manifolds and consider their curvature properties.
We also study the conformal group of these connections and obtain some
conformal invariants. In the last section we give an example of a four-
dimensional conformal Kähler manifold with Norden metric, on which the
considered complex connections are flat.
## 2\. Preliminaries
Let $(M,J,g)$ be a $2n$-dimensional almost complex manifold with Norden
metric, i.e. $J$ is an almost complex structure and $g$ is a pseudo-Riemannian
metric on $M$ such that
$J^{2}x=-x,\qquad g(Jx,Jy)=-g(x,y)$ (2.1)
for all differentiable vector fields $x$, $y$ on $M$, i.e.
$x,y\in\mathfrak{X}(M)$.
The associated metric $\widetilde{g}$ of $g$ is given by
$\widetilde{g}(x,y)=g(x,Jy)$ and is a Norden metric, too. Both metrics are
necessarily neutral, i.e. of signature $(n,n)$.
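For readers who want a concrete instance, a minimal numerical check (the particular $J$ and $g$ below are our illustrative choices, not from the paper) confirms the defining identities (2.1) in the smallest dimension, and that the associated metric is again a Norden metric:

```python
import numpy as np

# Smallest example (n = 1): J is the standard complex structure on R^2
# and g a neutral metric of signature (1, 1).
J = np.array([[0., -1.], [1., 0.]])
g = np.array([[1., 0.], [0., -1.]])

assert np.allclose(J @ J, -np.eye(2))          # J^2 x = -x
assert np.allclose(J.T @ g @ J, -g)            # g(Jx, Jy) = -g(x, y)

# The associated metric g~(x, y) = g(x, Jy) has matrix gJ; it is
# symmetric and satisfies the Norden condition as well.
gt = g @ J
assert np.allclose(gt, gt.T)
assert np.allclose(J.T @ gt @ J, -gt)
```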
If $\nabla$ is the Levi-Civita connection of the metric $g$, the fundamental
tensor field $F$ of type $(0,3)$ on $M$ is defined by
$F(x,y,z)=g\left((\nabla_{x}J)y,z\right)$ (2.2)
and has the following symmetries
$F(x,y,z)=F(x,z,y)=F(x,Jy,Jz).$ (2.3)
Let $\left\\{e_{i}\right\\}$ ($i=1,2,\ldots,2n$) be an arbitrary basis of
$T_{p}M$ at a point $p$ of $M$. The components of the inverse matrix of $g$
are denoted by $g^{ij}$ with respect to the basis $\left\\{e_{i}\right\\}$.
The Lie 1-forms $\theta$ and $\theta^{\ast}$ associated with $F$, and the Lie
vector $\Omega$ corresponding to $\theta$, are defined, respectively, by
$\theta(x)=g^{ij}F(e_{i},e_{j},x),\qquad\theta^{\ast}=\theta\circ
J,\qquad\theta(x)=g(x,\Omega).$ (2.4)
The Nijenhuis tensor field $N$ for $J$ is given by [7]
$N(x,y)=[Jx,Jy]-[x,y]-J[Jx,y]-J[x,Jy].$
It is known [8] that the almost complex structure is complex if and only if it
is integrable, i.e. iff $N=0$.
A classification of the almost complex manifolds with Norden metric is
introduced in [1], where eight classes of these manifolds are characterized
according to the properties of $F$. The three basic classes $\mathcal{W}_{i}$
($i=1,2,3$) are given by
$\bullet$ the class $\mathcal{W}_{1}$:
$\begin{array}[]{l}F(x,y,z)=\frac{1}{2n}\left[g(x,y)\theta(z)+g(x,Jy)\theta(Jz)\right.\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\quad\qquad\qquad\quad\left.+g(x,z)\theta(y)+g(x,Jz)\theta(Jy)\right];\end{array}$
(2.5)
$\bullet$ the class $\mathcal{W}_{2}$ of the _special complex manifolds with
Norden metric_ :
$F(x,y,Jz)+F(y,z,Jx)+F(z,x,Jy)=0,\quad\theta=0\quad\Leftrightarrow\quad
N=0,\quad\theta=0;$ (2.6)
$\bullet$ the class $\mathcal{W}_{3}$ of the _quasi-Kähler manifolds with
Norden metric_ :
$F(x,y,z)+F(y,z,x)+F(z,x,y)=0.$ (2.7)
The special class $\mathcal{W}_{0}$ of the _Kähler manifolds with Norden
metric_ is characterized by the condition $F=0$ ($\nabla J=0$) and is
contained in each one of the other classes.
Let $R$ be the curvature tensor of $\nabla$, i.e.
$R(x,y)z=\nabla_{x}\nabla_{y}z-\nabla_{y}\nabla_{x}z-\nabla_{\left[x,y\right]}z$.
The corresponding (0,4)-type tensor is defined by
$R(x,y,z,u)=g\left(R(x,y)z,u\right)$.
A tensor $L$ of type (0,4) is called a _curvature-like_ tensor if it has the
properties of $R$, i.e.
$\begin{array}[]{l}L(x,y,z,u)=-L(y,x,z,u)=-L(x,y,u,z),\vskip 6.0pt plus 2.0pt
minus 2.0pt\\\ L(x,y,z,u)+L(y,z,x,u)+L(z,x,y,u)=0.\end{array}$ (2.8)
The Ricci tensor $\rho(L)$ and the scalar curvatures $\tau(L)$ and
$\tau^{\ast}(L)$ of $L$ are defined by:
$\begin{array}[]{c}\rho(L)(y,z)=g^{ij}L(e_{i},y,z,e_{j}),\vskip 6.0pt plus
2.0pt minus 2.0pt\\\
\tau(L)=g^{ij}\rho(L)(e_{i},e_{j}),\quad\tau^{\ast}(L)=g^{ij}\rho(L)(e_{i},Je_{j}).\end{array}$
(2.9)
A curvature-like tensor $L$ is called a _Kähler tensor_ if
$L(x,y,Jz,Ju)=-L(x,y,z,u).$ (2.10)
Let $S$ be a tensor of type (0,2). We consider the following tensors [3]:
$\begin{array}[]{l}\psi_{1}(S)(x,y,z,u)=g(y,z)S(x,u)-g(x,z)S(y,u)\vskip 3.0pt
plus 1.0pt minus 1.0pt\\\
\phantom{\psi_{1}(S)(x,y,z,u)}+g(x,u)S(y,z)-g(y,u)S(x,z),\vskip 6.0pt plus
2.0pt minus 2.0pt\\\ \psi_{2}(S)(x,y,z,u)=g(y,Jz)S(x,Ju)-g(x,Jz)S(y,Ju)\vskip
3.0pt plus 1.0pt minus 1.0pt\\\
\phantom{\psi_{1}(S)(x,y,z,u)}+g(x,Ju)S(y,Jz)-g(y,Ju)S(x,Jz),\vskip 6.0pt plus
2.0pt minus 2.0pt\\\
\pi_{1}=\frac{1}{2}\psi_{1}(g),\qquad\pi_{2}=\frac{1}{2}\psi_{2}(g),\qquad\pi_{3}=-\psi_{1}(\widetilde{g})=\psi_{2}(\widetilde{g}).\end{array}$
(2.11)
The tensor $\psi_{1}(S)$ is curvature-like if $S$ is symmetric, and the tensor
$\psi_{2}(S)$ is curvature-like if $S$ is symmetric and hybrid with respect to
$J$, i.e. $S(x,Jy)=S(y,Jx)$. In the last case the tensor
$\\{\psi_{1}-\psi_{2}\\}(S)$ is Kählerian. The tensors $\pi_{1}-\pi_{2}$ and
$\pi_{3}$ are also Kählerian.
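The Kähler property (2.10) of $\pi_{1}-\pi_{2}$ and $\pi_{3}$ can be verified numerically from the definitions (2.11). The concrete $J$ and $g$ below are illustrative choices (a two-dimensional Norden structure), not from the paper:

```python
import numpy as np

J = np.array([[0., -1.], [1., 0.]])            # J^2 = -id
g = np.array([[1., 0.], [0., -1.]])            # g(Jx, Jy) = -g(x, y)
gt = g @ J                                     # associated metric g~

def psi1(S):
    # psi_1(S)(x, y, z, u) as a (0,4) array indexed [x, y, z, u]
    return (np.einsum('yz,xu->xyzu', g, S) - np.einsum('xz,yu->xyzu', g, S)
          + np.einsum('xu,yz->xyzu', g, S) - np.einsum('yu,xz->xyzu', g, S))

def psi2(S):
    SJ = S @ J                                 # S(x, Jy)
    return (np.einsum('yz,xu->xyzu', gt, SJ) - np.einsum('xz,yu->xyzu', gt, SJ)
          + np.einsum('xu,yz->xyzu', gt, SJ) - np.einsum('yu,xz->xyzu', gt, SJ))

pi1, pi2, pi3 = 0.5 * psi1(g), 0.5 * psi2(g), -psi1(gt)

def kaehler_defect(L):
    # L(x, y, Jz, Ju) + L(x, y, z, u); identically zero iff L satisfies (2.10)
    return np.einsum('xyab,az,bu->xyzu', L, J, J) + L

assert np.allclose(kaehler_defect(pi1 - pi2), 0)
assert np.allclose(kaehler_defect(pi3), 0)
```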
The usual conformal transformation of the Norden metric $g$ (conformal
transformation of type I [3]) is defined by
$\overline{g}=e^{2u}g,$ (2.12)
where $u$ is a pluriharmonic function, i.e. the 1-form $du\circ J$ is closed.
A $\mathcal{W}_{1}$-manifold with closed 1-forms $\theta$ and $\theta^{\ast}$
(i.e. $\mathrm{d}\theta=\mathrm{d}\theta^{\ast}=0$) is called a _conformal
Kähler manifold with Norden metric_. Necessary and sufficient conditions for a
$\mathcal{W}_{1}$-manifold to be conformal Kählerian are:
$(\nabla_{x}\theta)y=(\nabla_{y}\theta)x,\qquad(\nabla_{x}\theta)Jy=(\nabla_{y}\theta)Jx.$
(2.13)
The subclass of these manifolds is denoted by $\mathcal{W}_{1}^{\hskip
0.72229pt0}$.
It is proved [3] that a $\mathcal{W}_{1}^{\hskip 0.72229pt0}$-manifold is
conformally equivalent to a Kähler manifold with Norden metric by the
transformation (2.12).
It is known that on a pseudo-Riemannian manifold $M$ ($\dim M=2n\geq 4$) the
conformal invariant Weyl tensor has the form
$W(R)=R-\frac{1}{2(n-1)}\big{\\{}\psi_{1}(\rho)-\frac{\tau}{2n-1}\pi_{1}\big{\\}}.$
(2.14)
Let $L$ be a Kähler curvature-like tensor on an almost complex manifold with
Norden metric $(M,J,g)$, $\dim M=2n\geq 6$. Then the Bochner tensor $B(L)$ for
$L$ is defined by [3]:
$\begin{array}[]{l}B(L)=L-\frac{1}{2(n-2)}\big{\\{}\psi_{1}-\psi_{2}\big{\\}}\big{(}\rho(L)\big{)}\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{B(L)}+\frac{1}{4(n-1)(n-2)}\big{\\{}\tau(L)\big{(}\pi_{1}-\pi_{2}\big{)}+\tau^{\ast}(L)\pi_{3}\big{\\}}.\end{array}$
(2.15)
## 3\. Complex Connections on $\mathcal{W}_{1}$-manifolds
###### Definition 3.1 ([7]).
_A linear connection $\nabla^{\prime}$ on an almost complex manifold $(M,J)$
is said to be_ almost complex _if $\nabla^{\prime}J=0$._
We introduce an eight-parametric family of complex connections in the
following
###### Theorem 3.1.
On a $\mathcal{W}_{1}$-manifold with Norden metric there exists an eight-
parametric family of complex connections $\nabla^{\prime}$ defined by
$\begin{array}[]{l}\nabla_{x}^{\prime}y=\nabla_{x}y+Q(x,y),\end{array}$ (3.1)
where the deformation tensor $Q(x,y)$ is given by
$\begin{array}[]{l}Q(x,y)=\frac{1}{2n}\left[\theta(Jy)x-g(x,y)J\Omega\right]\vskip
6.0pt plus 2.0pt minus 2.0pt\\\ \hskip
4.69772pt\phantom{Q(x,y)}+\frac{1}{n}\left\\{\lambda_{1}\theta(x)y+\lambda_{2}\theta(x)Jy+\lambda_{3}\theta(Jx)y+\lambda_{4}\theta(Jx)Jy\right.\vskip
6.0pt plus 2.0pt minus 2.0pt\\\ \phantom{Q(x,y)}\hskip
4.33601pt+\lambda_{5}\left[\theta(y)x-\theta(Jy)Jx\right]+\lambda_{6}\left[\theta(y)Jx+\theta(Jy)x\right]\vskip
6.0pt plus 2.0pt minus 2.0pt\\\ \left.\hskip
4.33601pt\phantom{Q(x,y)}+\lambda_{7}\left[g(x,y)\Omega-g(x,Jy)J\Omega\right]+\lambda_{8}\left[g(x,Jy)\Omega+g(x,y)J\Omega\right]\right\\},\end{array}$
(3.2)
$\lambda_{i}\in\mathbb{R}$, $i=1,2,...,8$.
###### Proof.
By (2.5), (3.1) and (3.2) we verify that
$(\nabla^{\prime}_{x}J)y=\nabla^{\prime}_{x}Jy-J\nabla^{\prime}_{x}y=0$, and
hence the connections $\nabla^{\prime}$ are complex for any
$\lambda_{i}\in\mathbb{R}$, $i=1,2,...,8$. ∎
Let us remark that the two-parametric family of complex connections obtained
for $\lambda_{1}=\lambda_{4}$, $\lambda_{3}=-\lambda_{2}$,
$\lambda_{5}=\lambda_{7}=0$, $\lambda_{8}=-\lambda_{6}=\frac{1}{4}$, is
studied in [11].
Let $T^{\prime}$ be the torsion tensor of $\nabla^{\prime}$, i.e.
$T^{\prime}(x,y)=\nabla^{\prime}_{x}y-\nabla^{\prime}_{y}x-[x,y]$. Taking into
account that the Levi-Civita connection $\nabla$ is symmetric, we have
$T^{\prime}(x,y)=Q(x,y)-Q(y,x)$. Then by (3.2) we obtain
$\begin{array}[]{l}T^{\prime}(x,y)=\frac{1}{n}\left\\{\left(\lambda_{1}-\lambda_{5}\right)\left[\theta(x)y-\theta(y)x\right]+\left(\lambda_{2}-\lambda_{6}\right)\left[\theta(x)Jy-\theta(y)Jx\right]\right.\vskip
6.0pt plus 2.0pt minus 2.0pt\\\ \qquad\quad\hskip
1.4457pt\left.+\left(\lambda_{3}-\lambda_{6}-\frac{1}{2}\right)\left[\theta(Jx)y-\theta(Jy)x\right]+\left(\lambda_{4}+\lambda_{5}\right)\left[\theta(Jx)Jy-\theta(Jy)Jx\right]\right\\}.\end{array}$
(3.3)
It is easy to verify the following
$\underset{x,y,z}{\mathfrak{S}}T^{\prime}(x,y,z)=\underset{x,y,z}{\mathfrak{S}}T^{\prime}(Jx,Jy,z)=\underset{x,y,z}{\mathfrak{S}}T^{\prime}(x,y,Jz)=0,$
(3.4)
where $\mathfrak{S}$ is the cyclic sum by the arguments $x,y,z$.
Next, we obtain necessary and sufficient conditions for the complex
connections $\nabla^{\prime}$ to be symmetric (i.e. $T^{\prime}=0$).
###### Theorem 3.2.
The complex connections $\nabla^{\prime}$ defined by (3.1) and (3.2) are
symmetric on a $\mathcal{W}_{1}$-manifold with Norden metric if and only if
$\lambda_{1}=-\lambda_{4}=\lambda_{5}$,
$\lambda_{2}=\lambda_{3}-\frac{1}{2}=\lambda_{6}$.
Then, by putting $\lambda_{1}=-\lambda_{4}=\lambda_{5}=\mu_{1}$,
$\lambda_{2}=\lambda_{6}=\lambda_{3}-\frac{1}{2}=\mu_{2}$,
$\lambda_{7}=\mu_{3}$, $\lambda_{8}=\mu_{4}$ in (3.2) we obtain a four-
parametric family of complex symmetric connections $\nabla^{\prime\prime}$ on
a $\mathcal{W}_{1}$-manifold which are defined by
$\begin{array}[]{l}\nabla^{\prime\prime}_{x}y=\nabla_{x}y+\frac{1}{2n}\left[\theta(Jx)y+\theta(Jy)x-g(x,y)J\Omega\right]\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{\nabla^{\prime}_{x}y}+\frac{1}{n}\left\\{\mu_{1}\left[\theta(x)y+\theta(y)x-\theta(Jx)Jy-\theta(Jy)Jx\right]\right.\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{\nabla^{\prime}_{x}y}+\mu_{2}\left[\theta(Jx)y+\theta(Jy)x+\theta(x)Jy+\theta(y)Jx\right]\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{\nabla^{\prime}_{x}y}+\left.\mu_{3}\left[g(x,y)\Omega-g(x,Jy)J\Omega\right]+\mu_{4}\left[g(x,Jy)\Omega+g(x,y)J\Omega\right]\right\\}.\end{array}$
(3.5)
The well-known Yano connection [12, 13] on a $\mathcal{W}_{1}$-manifold is
obtained from (3.5) for $\mu_{1}=\mu_{3}=0$, $\mu_{4}=-\mu_{2}=\frac{1}{4}$.
###### Definition 3.2 ([2]).
_A linear connection $\nabla^{\prime}$ on an almost complex manifold with
Norden metric $(M,J,g)$ is said to be_ natural _if
$\nabla^{\prime}J=\nabla^{\prime}g=0$ (or equivalently,
$\nabla^{\prime}g=\nabla^{\prime}\widetilde{g}=0$)._
From (3.1) and (3.2) we compute the covariant derivatives of $g$ and
$\widetilde{g}$ with respect to the complex connections $\nabla^{\prime}$ as
follows
$\begin{array}[]{l}\left(\nabla^{\prime}_{x}g\right)(y,z)=-Q(x,y,z)-Q(x,z,y)\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
=-\frac{1}{n}\left\\{2\left[\lambda_{1}\theta(x)g(y,z)+\lambda_{2}\theta(x)g(y,Jz)+\lambda_{3}\theta(Jx)g(y,z)\right.\right.\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\left.+\lambda_{4}\theta(Jx)g(y,Jz)\right]+(\lambda_{5}+\lambda_{7})\left[\theta(y)g(x,z)+\theta(z)g(x,y)\right.\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\left.-\theta(Jy)g(x,Jz)-\theta(Jz)g(x,Jy)\right]+(\lambda_{6}+\lambda_{8})\left[\theta(y)g(x,Jz)\right.\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\left.+\theta(z)g(x,Jy)+\theta(Jy)g(x,z)+\theta(Jz)g(x,y)\right]\left.\right\\},\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\left(\nabla^{\prime}_{x}\widetilde{g}\right)(y,z)=-Q(x,y,Jz)-Q(x,Jz,y).\end{array}$
(3.6)
Then, by (3.6) we get the following
###### Theorem 3.3.
The complex connections $\nabla^{\prime}$ defined by (3.1) and (3.2) are
natural on a $\mathcal{W}_{1}$-manifold if and only if
$\lambda_{1}=\lambda_{2}=\lambda_{3}=\lambda_{4}=0$,
$\lambda_{7}=-\lambda_{5}$, $\lambda_{8}=-\lambda_{6}$.
If we put $\lambda_{8}=-\lambda_{6}=s$, $\lambda_{7}=-\lambda_{5}=t$,
$\lambda_{i}=0$, $i=1,2,3,4$, in (3.2), we obtain a two-parametric family of
natural connections $\nabla^{\prime\prime\prime}$ defined by
$\begin{array}[]{l}\nabla^{\prime\prime\prime}_{x}y=\nabla_{x}y+\frac{1-2s}{2n}\left[\theta(Jy)x-g(x,y)J\Omega\right]+\frac{1}{n}\left\\{s\left[g(x,Jy)\Omega-\theta(y)Jx\right]\right.\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{\nabla^{\prime}_{x}y}\left.+t\left[g(x,y)\Omega-g(x,Jy)J\Omega-\theta(y)x+\theta(Jy)Jx\right]\right\\}.\end{array}$
(3.7)
The well-known canonical connection [2] (or $B$-connection [3]) on a
$\mathcal{W}_{1}$-manifold with Norden metric is obtained from (3.7) for
$s=\frac{1}{4}$, $t=0$.
We give a summary of the obtained results in the following table:
Connection type | Symbol | Parameters
---|---|---
$\begin{array}[]{l}\text{Complex}\end{array}$ | $\nabla^{\prime}$ | $\lambda_{i}\in\mathbb{R}$, $i=1,2,...,8$.
$\begin{array}[]{l}\text{Complex}\vskip 3.0pt plus 1.0pt minus 1.0pt\\\ \text{symmetric}\end{array}$ | $\nabla^{\prime\prime}$ | $\begin{array}[]{c}\mu_{i},\hskip 2.168pti=1,2,3,4,\vskip 3.0pt plus 1.0pt minus 1.0pt\\\ \mu_{1}=\lambda_{1}=-\lambda_{4}=\lambda_{5},\hskip 2.168pt\mu_{2}=\lambda_{2}=\lambda_{6}=\lambda_{3}-\frac{1}{2},\vskip 3.0pt plus 1.0pt minus 1.0pt\\\ \mu_{3}=\lambda_{7},\hskip 2.168pt\mu_{4}=\lambda_{8}\end{array}$
$\begin{array}[]{l}\text{Natural}\end{array}$ | $\nabla^{\prime\prime\prime}$ | $\begin{array}[]{c}s,t,\vskip 3.0pt plus 1.0pt minus 1.0pt\\\ s=\lambda_{8}=-\lambda_{6},\hskip 2.168ptt=\lambda_{7}=-\lambda_{5},\vskip 3.0pt plus 1.0pt minus 1.0pt\\\ \lambda_{i}=0,\hskip 2.168pti=1,2,3,4.\end{array}$
Our next aim is to study the curvature properties of the complex connections
$\nabla^{\prime}$. Let us first consider the natural connection $\nabla^{0}$
obtained from (3.7) for $s=t=0$, i.e.
$\nabla^{0}_{x}y=\nabla_{x}y+\frac{1}{2n}\left[\theta(Jy)x-g(x,y)J\Omega\right].$
(3.8)
This connection is a semi-symmetric metric connection, i.e. a connection of
the form $\nabla_{x}y+\omega(y)x-g(x,y)U$, where $\omega$ is a 1-form and $U$
is the corresponding vector of $\omega$, i.e. $\omega(x)=g(x,U)$. Semi-
symmetric metric connections are introduced in [5] and studied in [6, 14]. The
form of the curvature tensor of an arbitrary connection of this type is
obtained in [14]. The geometry of such connections on almost complex manifolds
with Norden metric is considered in [10].
Let us denote by $R^{0}$ the curvature tensor of $\nabla^{0}$, i.e.
$R^{0}(x,y)z=\nabla^{0}_{x}\nabla^{0}_{y}z-\nabla^{0}_{y}\nabla^{0}_{x}z-\nabla^{0}_{[x,y]}z$.
The corresponding tensor of type (0,4) is defined by
$R^{0}(x,y,z,u)=g(R^{0}(x,y)z,u)$. According to [14], the following is valid.
###### Proposition 3.1.
On a $\mathcal{W}_{1}$-manifold with closed 1-form $\theta^{\ast}$ the Kähler
curvature tensor $R^{0}$ of $\nabla^{0}$ defined by (3.8) has the form
$R^{0}=R-\frac{1}{2n}\psi_{1}(P),$ (3.9)
where
$P(x,y)=\left(\nabla_{x}\theta\right)Jy+\frac{1}{2n}\theta(x)\theta(y)+\frac{\theta(\Omega)}{4n}g(x,y)+\frac{\theta(J\Omega)}{2n}g(x,Jy).$
(3.10)
Since the Weyl tensor $W(\psi_{1}(S))=0$ for any symmetric
(0,2)-tensor $S$, from (3.9) and (3.10) we conclude that
$W(R^{0})=W(R).$ (3.11)
Thus, the last equality implies
###### Proposition 3.2.
Let $(M,J,g)$ be a $\mathcal{W}_{1}$-manifold with closed 1-form
$\theta^{\ast}$, and $\nabla^{0}$ be the natural connection defined by (3.8).
Then, the Weyl tensor is invariant by the transformation
$\nabla\rightarrow\nabla^{0}$.
Further in this section we study the curvature properties of the complex
connections $\nabla^{\prime}$ defined by (3.1) and (3.2). Let us denote by
$R^{\prime}$ the curvature tensors corresponding to these connections.
If a linear connection $\nabla^{\prime}$ and the Levi-Civita connection
$\nabla$ are related by an equation of the form (3.1), then, because of
$\nabla g=0$, their curvature tensors of type (0,4) satisfy
$\begin{array}[]{l}g(R^{\prime}(x,y)z,u)=R(x,y,z,u)+(\nabla_{x}Q)(y,z,u)-(\nabla_{y}Q)(x,z,u)\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{g(R^{\prime}(x,y)z,u)}+Q(x,Q(y,z),u)-Q(y,Q(x,z),u),\end{array}$
(3.12)
where $Q(x,y,z)=g(Q(x,y),z)$.
Then, by (3.1), (3.2), (3.9), (3.10), (3.12) we obtain the relation between
$R^{\prime}$ and $R^{0}$ as follows
$\begin{array}[]{l}R^{\prime}(x,y,z,u)=R^{0}(x,y,z,u)+g(y,z)A_{1}(x,u)-g(x,z)A_{1}(y,u)\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
+g(x,u)A_{2}(y,z)-g(y,u)A_{2}(x,z)-g(y,Jz)A_{1}(x,Ju)\vskip 6.0pt plus 2.0pt
minus 2.0pt\\\ +g(x,Jz)A_{1}(y,Ju)-g(x,Ju)A_{2}(y,Jz)+g(y,Ju)A_{2}(x,Jz)\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
+\left[\frac{\lambda_{5}\lambda_{7}-\lambda_{6}\lambda_{8}}{n^{2}}\theta(\Omega)+\frac{\lambda_{7}-\lambda_{5}+2(\lambda_{5}\lambda_{8}+\lambda_{6}\lambda_{7})}{2n^{2}}\theta(J\Omega)\right]\\{\pi_{1}-\pi_{2}\\}(x,y,z,u)\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
-\left[\frac{\lambda_{5}\lambda_{8}+\lambda_{6}\lambda_{7}}{n^{2}}\theta(\Omega)-\frac{\lambda_{6}-\lambda_{8}+2(\lambda_{5}\lambda_{7}-\lambda_{6}\lambda_{8})}{2n^{2}}\theta(J\Omega)\right]\pi_{3}(x,y,z,u),\end{array}$
(3.13)
where
$\begin{array}[]{l}A_{1}(x,y)=\frac{\lambda_{7}}{n}\left\\{\left(\nabla_{x}\theta\right)y+\frac{\lambda_{7}}{n}[\theta(x)\theta(y)-\theta(Jx)\theta(Jy)]\right\\}\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{A_{1}(x,y)}+\frac{\lambda_{8}}{n}\left\\{\left(\nabla_{x}\theta\right)Jy+\frac{1-2\lambda_{8}}{2n}[\theta(x)\theta(y)-\theta(Jx)\theta(Jy)]\right\\}\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{A_{1}(x,y)}+\frac{\lambda_{7}(4\lambda_{8}-1)}{2n^{2}}[\theta(x)\theta(Jy)+\theta(Jx)\theta(y)],\vskip
12.0pt plus 4.0pt minus 4.0pt\\\
A_{2}(x,y)=-\frac{\lambda_{5}}{n}\left\\{\left(\nabla_{x}\theta\right)y-\frac{\lambda_{5}}{n}[\theta(x)\theta(y)-\theta(Jx)\theta(Jy)]\right\\}\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{A_{2}(x,y)}-\frac{\lambda_{6}}{n}\left\\{\left(\nabla_{x}\theta\right)Jy+\frac{1+2\lambda_{6}}{2n}[\theta(x)\theta(y)-\theta(Jx)\theta(Jy)]\right\\}\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{A_{2}(x,y)}+\frac{\lambda_{5}(4\lambda_{6}+1)}{2n^{2}}[\theta(x)\theta(Jy)+\theta(Jx)\theta(y)].\end{array}$
(3.14)
We are interested in necessary and sufficient conditions for $R^{\prime}$ to
be a Kähler curvature-like tensor, i.e. to satisfy (2.8) and (2.10). From
(2.11), (3.13) and (3.14) it follows that $R^{\prime}$ is Kählerian if and
only if $A_{1}(x,y)=A_{2}(x,y)$. Hence we obtain
###### Theorem 3.4.
Let $(M,J,g)$ be a conformal Kähler manifold with Norden metric, and
$\nabla^{\prime}$ be the complex connection defined by (3.1) and (3.2). Then,
$R^{\prime}$ is a Kähler curvature-like tensor if and only if
$\lambda_{7}=-\lambda_{5}$ and $\lambda_{8}=-\lambda_{6}$. In this case, from
(3.1) and (3.2) we obtain a six-parametric family of complex connections
$\nabla^{\prime}$ whose curvature tensors $R^{\prime}$ have the form
$\begin{array}[]{l}R^{\prime}=R^{0}+\frac{\lambda_{7}}{n}\left\\{\psi_{1}-\psi_{2}\right\\}(S_{1})+\frac{\lambda_{8}}{n}\left\\{\psi_{1}-\psi_{2}\right\\}(S_{2})\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{R^{\prime}}+\frac{\lambda_{7}(4\lambda_{8}-1)}{2n^{2}}\left\\{\psi_{1}-\psi_{2}\right\\}(S_{3})+\frac{\lambda_{7}(1-2\lambda_{8})\theta(J\Omega)}{n^{2}}\left\\{\pi_{1}-\pi_{2}\right\\}\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{R^{\prime}}+\frac{2\lambda_{7}\lambda_{8}\theta(\Omega)}{n^{2}}\pi_{3},\end{array}$
(3.15)
where $R^{0}$ is given by (3.9), (3.10), and
$\begin{array}[]{l}S_{1}(x,y)=\left(\nabla_{x}\theta\right)y+\frac{\lambda_{7}}{n}[\theta(x)\theta(y)-\theta(Jx)\theta(Jy)]-\frac{\lambda_{7}\theta(\Omega)}{2n}g(x,y)\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{S_{1}(x,y)}+\frac{\lambda_{7}\theta(J\Omega)}{2n}g(x,Jy),\vskip 6.0pt
plus 2.0pt minus 2.0pt\\\
S_{2}(x,y)=\left(\nabla_{x}\theta\right)Jy+\frac{1-2\lambda_{8}}{2n}[\theta(x)\theta(y)-\theta(Jx)\theta(Jy)]\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{S_{2}(x,y)}+\frac{\lambda_{8}\theta(\Omega)}{2n}g(x,y)+\frac{(1-\lambda_{8})\theta(J\Omega)}{2n}g(x,Jy),\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
S_{3}(x,y)=\theta(x)\theta(Jy)+\theta(Jx)\theta(y).\end{array}$ (3.16)
From (3.15), (3.16) and Theorem 3.4 we get
###### Corollary 3.1.
Let $(M,J,g)$ be a conformal Kähler manifold with Norden metric and
$\nabla^{\prime}$ be the eight-parametric family of complex connections
defined by (3.1) and (3.2). Then $R^{\prime}=R^{0}$ if and only if
$\lambda_{i}=0$ for $i=5,6,7,8$.
Let us remark that by putting $\lambda_{i}=0$ for $i=1,2,5,6,7,8$ in (3.2) we
obtain a two-parametric family of complex connections whose Kähler curvature
tensors coincide with $R^{0}$ on a $\mathcal{W}_{1}$-manifold with closed
1-form $\theta^{\ast}$.
Proposition 3.2 and Corollary 3.1 imply
###### Corollary 3.2.
On a conformal Kähler manifold with Norden metric the Weyl tensor is invariant
under the transformation of the Levi-Civita connection into any of the complex
connections $\nabla^{\prime}$ defined by (3.1) and (3.2) for $\lambda_{i}=0$,
$i=5,6,7,8$.
Since the Bochner tensor of $\\{\psi_{1}-\psi_{2}\\}(S)$ vanishes, i.e.
$B\left(\\{\psi_{1}-\psi_{2}\\}(S)\right)=0$ whenever $S$ is symmetric and
hybrid with respect to $J$, from Theorem 3.4 and (2.11) it follows
$B(R^{\prime})=B(R^{0}).$ (3.17)
In this way we have proved the following
###### Theorem 3.5.
Let $(M,J,g)$ be a conformal Kähler manifold with Norden metric, $R^{\prime}$
be the curvature tensor of $\nabla^{\prime}$ defined by (3.1) and (3.2) for
$\lambda_{7}=-\lambda_{5}$, $\lambda_{8}=-\lambda_{6}$, and $R^{0}$ be the
curvature tensor of $\nabla^{0}$ given by (3.8). Then the Bochner tensor is
invariant under the transformation $\nabla^{0}\rightarrow\nabla^{\prime}$.
## 4\. Conformal transformations of complex connections
In this section we study usual conformal transformations of the complex
connections $\nabla^{\prime}$ defined in the previous section.
Let $(M,J,g)$ and $(M,J,\bar{g})$ be conformally equivalent almost complex
manifolds with Norden metric by the transformation (2.12). It is known that
the Levi-Civita connections $\nabla$ and $\overline{\nabla}$ of $g$ and
$\overline{g}$, respectively, are related as follows
$\overline{\nabla}_{x}y=\nabla_{x}y+\sigma(x)y+\sigma(y)x-g(x,y)\Theta,$ (4.1)
where $\sigma(x)=du(x)$ and $\Theta=\textrm{grad}\hskip 1.4457pt\sigma$, i.e.
$\sigma(x)=g(x,\Theta)$. Let $\overline{\theta}$ be the Lie 1-form of
$(M,J,\overline{g})$. Then by (2.4) and (4.1) we obtain
$\overline{\theta}=\theta+2n\big{(}\sigma\circ
J\big{)},\qquad\quad\overline{\Omega}=e^{-2u}\big{(}\Omega+2nJ\Theta\big{)}.$
(4.2)
The following holds.
###### Lemma 4.1.
Let $(M,J,g)$ be an almost complex manifold with Norden metric and
$(M,J,\overline{g})$ be its conformally equivalent manifold by the
transformation (2.12). Then, the curvature tensors $R$ and $\overline{R}$ of
$\nabla$ and $\overline{\nabla}$, respectively, are related as follows
$\overline{R}=e^{2u}\big{\\{}R-\psi_{1}\big{(}V\big{)}-\pi_{1}\sigma\big{(}\Theta\big{)}\big{\\}},$
(4.3)
where $V(x,y)=\big{(}\nabla_{x}\sigma\big{)}y-\sigma(x)\sigma(y)$.
Let us first study the conformal group of the natural connection $\nabla^{0}$
given by (3.8). Equalities (3.8) and (4.1) imply that its conformal group is
defined analytically by
$\overline{\nabla}^{\hskip 1.4457pt0}_{x}y=\nabla^{0}_{x}y+\sigma(x)y.$ (4.4)
It is known that if two linear connections are related by an equation of the
form (4.4), where $\sigma$ is a 1-form, then their curvature tensors coincide
if and only if $\sigma$ is closed. Hence, the following is valid.
###### Theorem 4.1.
Let $(M,J,g)$ be a $\mathcal{W}_{1}$-manifold with closed 1-form
$\theta^{\ast}$. Then the curvature tensor $R^{0}$ of $\nabla^{0}$ is
conformally invariant, i.e.
$\overline{R}^{0}=e^{2u}R^{0}.$ (4.5)
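The closedness criterion invoked before Theorem 4.1 is a standard computation, sketched here for completeness: if two linear connections are related by $\overline{\nabla}_{x}y=\nabla_{x}y+\sigma(x)y$ for a 1-form $\sigma$, then

```latex
\begin{aligned}
\overline{R}(x,y)z
 &=\overline{\nabla}_{x}\overline{\nabla}_{y}z
  -\overline{\nabla}_{y}\overline{\nabla}_{x}z
  -\overline{\nabla}_{[x,y]}z\\
 &=R(x,y)z+\bigl(x\sigma(y)-y\sigma(x)-\sigma([x,y])\bigr)z
  =R(x,y)z+\mathrm{d}\sigma(x,y)\,z,
\end{aligned}
```

since the terms $\sigma(x)\nabla_{y}z$ and $\sigma(x)\sigma(y)z$ cancel in pairs after antisymmetrization in $x$ and $y$. Hence the curvature tensors coincide exactly when $\mathrm{d}\sigma=0$; in (4.4) this holds automatically because $\sigma=\mathrm{d}u$ is exact.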
Further in this section let $(M,J,g)$ be a conformal Kähler manifold with
Norden metric. Then $(M,J,\overline{g})$ is a Kähler manifold and thus
$\overline{\theta}=0$. From (4.2) it follows $\sigma=\frac{1}{2n}(\theta\circ
J)$. Then, from (3.1) and (3.2) we get $\overline{\nabla}^{\hskip
1.4457pt\prime}=\overline{\nabla}$ and hence
$\overline{R}^{\prime}=\overline{R}$ for all $\lambda_{i}\in\mathbb{R}$,
$i=1,2,...,8$. In particular, $\overline{R}^{\prime}=\overline{R}^{0}$. Then,
Theorem 3.5 and (3.17) imply
###### Theorem 4.2.
On a conformal Kähler manifold with Norden metric the Bochner curvature tensor
of the complex connections $\nabla^{\prime}$ defined by (3.1) and (3.2) with
the conditions $\lambda_{7}=-\lambda_{5}$ and $\lambda_{8}=-\lambda_{6}$ is
conformally invariant under the transformation (2.12), i.e.
$B(\overline{R}^{\prime})=e^{2u}B(R^{\prime}).$ (4.6)
Let us remark that the conformal invariance of the Bochner tensor of the
canonical connection on a conformal Kähler manifold with Norden metric is
proved in [3].
From Theorem 4.1 and Corollary 3.1 we obtain
###### Corollary 4.1.
Let $(M,J,g)$ be a conformal Kähler manifold with Norden metric and
$\nabla^{\prime}$ be a complex connection defined by (3.1) and (3.2). If
$\lambda_{i}=0$ for $i=5,6,7,8$, then the curvature tensor of
$\nabla^{\prime}$ is conformally invariant under the transformation (2.12).
## 5\. An Example
Let $G$ be a real connected four-dimensional Lie group, and $\mathfrak{g}$ be
its corresponding Lie algebra. If $\\{e_{1},e_{2},e_{3},e_{4}\\}$ is a basis
of $\mathfrak{g}$, we equip $G$ with a left-invariant almost complex structure
$J$ by
$Je_{1}=e_{3},\qquad Je_{2}=e_{4},\qquad Je_{3}=-e_{1},\qquad Je_{4}=-e_{2}.$
(5.1)
We also define a left-invariant pseudo-Riemannian metric $g$ on $G$ by
$\begin{array}[]{l}g(e_{1},e_{1})=g(e_{2},e_{2})=-g(e_{3},e_{3})=-g(e_{4},e_{4})=1,\vskip
6.0pt plus 2.0pt minus 2.0pt\\\ g(e_{i},e_{j})=0,\quad i\neq j,\quad
i,j=1,2,3,4.\end{array}$ (5.2)
Then, because of (2.1), (5.1) and (5.2), $(G,J,g)$ is an almost complex
manifold with Norden metric.
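As a quick numerical sanity check (ours, not part of the paper), one can verify on the basis that (5.1) and (5.2) indeed satisfy the defining conditions $J^{2}=-\mathrm{id}$ and $g(Jx,Jy)=-g(x,y)$:

```python
# A quick check (ours, not from the paper): the matrices of J and g from (5.1)
# and (5.2) satisfy J^2 = -Id and the Norden condition g(Jx, Jy) = -g(x, y).

J = [[0, 0, -1, 0], [0, 0, 0, -1],   # column j holds the coordinates of J e_{j+1}
     [1, 0, 0, 0], [0, 1, 0, 0]]
eps = [1, 1, -1, -1]                 # g = diag(1, 1, -1, -1), cf. (5.2)

def apply(A, v):
    return [sum(A[r][k] * v[k] for k in range(4)) for r in range(4)]

def g(u, v):
    return sum(e * a * b for e, a, b in zip(eps, u, v))

def basis(i):
    u = [0] * 4
    u[i] = 1
    return u
```

Both identities hold exactly on integer coordinates, so no floating-point tolerance is needed.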
Further, let the Lie algebra $\mathfrak{g}$ be defined by the following
commutator relations
$\begin{array}[]{l}[e_{1},e_{2}]=[e_{3},e_{4}]=0,\vskip 6.0pt plus 2.0pt minus
2.0pt\\\
[e_{1},e_{4}]=[e_{2},e_{3}]=\lambda(e_{1}+e_{4})+\mu(e_{2}-e_{3}),\vskip 6.0pt
plus 2.0pt minus 2.0pt\\\
[e_{1},e_{3}]=-[e_{2},e_{4}]=\mu(e_{1}+e_{4})-\lambda(e_{2}-e_{3}),\end{array}$
(5.3)
where $\lambda,\mu\in\mathbb{R}$.
The well-known Koszul formula for the Levi-Civita connection of $g$ on $G$,
i.e. the equality
$2g(\nabla_{e_{i}}e_{j},e_{k})=g([e_{i},e_{j}],e_{k})+g([e_{k},e_{i}],e_{j})+g([e_{k},e_{j}],e_{i}),$
(5.4)
and (5.2) imply the following essential non-zero components of the Levi-Civita
connection:
$\begin{array}[]{ll}\nabla_{e_{1}}e_{1}=\nabla_{e_{2}}e_{2}=\mu e_{3}+\lambda
e_{4},&\nabla_{e_{3}}e_{3}=\nabla_{e_{4}}e_{4}=-\lambda e_{1}+\mu e_{2},\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\nabla_{e_{1}}e_{3}=\mu(e_{1}+e_{4}),&\nabla_{e_{1}}e_{4}=\lambda e_{1}-\mu
e_{3},\vskip 6.0pt plus 2.0pt minus 2.0pt\\\ \nabla_{e_{2}}e_{3}=\mu
e_{2}+\lambda e_{4},&\nabla_{e_{2}}e_{4}=\lambda(e_{2}-e_{3}).\end{array}$
(5.5)
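The passage from (5.4) to (5.5) can be reproduced mechanically. The sketch below (ours; sample values $\lambda=2$, $\mu=3$, helper names are assumptions) evaluates the Koszul formula over the basis:

```python
# A sketch (ours) of the computation (5.4): evaluate the Koszul formula for the
# Lie algebra (5.3) and the metric (5.2), with sample values lam = 2, mu = 3.
# Vectors are coordinate lists with respect to the basis {e_1, e_2, e_3, e_4}.

lam, mu = 2.0, 3.0
eps = [1, 1, -1, -1]                       # g = diag(1, 1, -1, -1), cf. (5.2)

def basis(i):
    v = [0.0] * 4
    v[i] = 1.0
    return v

def g_form(u, v):
    return sum(e * a * b for e, a, b in zip(eps, u, v))

# Commutators (5.3): br[i][j] holds the coordinates of [e_{i+1}, e_{j+1}].
v1 = [lam, mu, -mu, lam]                   # lam*(e1 + e4) + mu*(e2 - e3)
v2 = [mu, -lam, lam, mu]                   # mu*(e1 + e4) - lam*(e2 - e3)
br = [[[0.0] * 4 for _ in range(4)] for _ in range(4)]
br[0][3] = br[1][2] = v1
br[0][2] = v2
br[1][3] = [-x for x in v2]
for i in range(4):
    for j in range(i):
        br[i][j] = [-x for x in br[j][i]]  # antisymmetry of the bracket

def nabla(i, j):
    """Coordinates of nabla_{e_{i+1}} e_{j+1} via the Koszul formula (5.4)."""
    comps = []
    for k in range(4):
        val = 0.5 * (g_form(br[i][j], basis(k))
                     + g_form(br[k][i], basis(j))
                     + g_form(br[k][j], basis(i)))
        comps.append(eps[k] * val)         # raise the index: g^{-1} = diag(eps)
    return comps
```

For example, `nabla(0, 0)` returns `[0.0, 0.0, 3.0, 2.0]`, i.e. $\mu e_{3}+\lambda e_{4}$; note that the formula yields $\nabla_{e_{2}}e_{3}=\mu e_{2}+\lambda e_{4}$, consistent with metric compatibility $g(\nabla_{e_{2}}e_{3},e_{3})=0$.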
Then, by (2.2), (2.3) and (5.5) we compute the following essential non-zero
components $F_{ijk}=F(e_{i},e_{j},e_{k})$ of $F$:
$\begin{array}[]{l}F_{111}=F_{422}=2\mu,\quad F_{222}=-F_{311}=2\lambda,\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
F_{112}=-F_{214}=F_{314}=-F_{412}=\lambda,\quad
F_{212}=-F_{114}=F_{312}=-F_{414}=\mu.\end{array}$ (5.6)
Having in mind (2.4) and (5.6), the components $\theta_{i}=\theta(e_{i})$ and
$\theta_{i}^{\ast}=\theta^{\ast}(e_{i})$ of the 1-forms $\theta$ and
$\theta^{\ast}$, respectively, are:
$\theta_{2}=\theta_{3}=\theta^{\ast}_{1}=-\theta^{\ast}_{4}=4\lambda,\qquad\theta_{1}=-\theta_{4}=-\theta^{\ast}_{2}=-\theta^{\ast}_{3}=4\mu.$
(5.7)
By (2.4) and (5.7) we compute
$\begin{array}[]{c}\Omega=4\mu(e_{1}+e_{4})+4\lambda(e_{2}-e_{3}),\qquad
J\Omega=4\lambda(e_{1}+e_{4})-4\mu(e_{2}-e_{3}),\vskip 6.0pt plus 2.0pt minus
2.0pt\\\ \theta(\Omega)=\theta(J\Omega)=0.\end{array}$ (5.8)
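The data (5.6)–(5.8) can be double-checked by computing $F$, $\theta$, $\theta^{\ast}$ and $\Omega$ directly from (5.3) via the Koszul formula, using $F(x,y,z)=g\left(\left(\nabla_{x}J\right)y,z\right)$. A self-contained numerical sketch (ours; sample values $\lambda=2$, $\mu=3$, helper names are assumptions):

```python
# A sketch (ours): recompute F, theta, theta^* and Omega for the example,
# using F(x,y,z) = g((nabla_x J)y, z) with nabla from the Koszul formula (5.4);
# sample values lam = 2, mu = 3.

lam, mu = 2.0, 3.0
eps = [1, 1, -1, -1]                                  # g = diag(1, 1, -1, -1)
Jmat = [[0, 0, -1, 0], [0, 0, 0, -1],                 # column j holds J e_{j+1}
        [1, 0, 0, 0], [0, 1, 0, 0]]

def basis(i):
    v = [0.0] * 4
    v[i] = 1.0
    return v

def g_form(u, v):
    return sum(e * a * b for e, a, b in zip(eps, u, v))

def J(v):
    return [sum(Jmat[r][k] * v[k] for k in range(4)) for r in range(4)]

v1 = [lam, mu, -mu, lam]                              # commutators (5.3)
v2 = [mu, -lam, lam, mu]
br = [[[0.0] * 4 for _ in range(4)] for _ in range(4)]
br[0][3] = br[1][2] = v1
br[0][2] = v2
br[1][3] = [-x for x in v2]
for i in range(4):
    for j in range(i):
        br[i][j] = [-x for x in br[j][i]]

def nabla_e(i, j):                                    # Koszul formula (5.4)
    return [eps[k] * 0.5 * (g_form(br[i][j], basis(k))
                            + g_form(br[k][i], basis(j))
                            + g_form(br[k][j], basis(i))) for k in range(4)]

def nabla(i, v):                                      # nabla_{e_{i+1}} of a constant field
    out = [0.0] * 4
    for m in range(4):
        for k in range(4):
            out[k] += v[m] * nabla_e(i, m)[k]
    return out

def F(i, j, k):                                       # F_{ijk} = g((nabla_{e_i} J) e_j, e_k)
    dJ = [a - b for a, b in zip(nabla(i, J(basis(j))), J(nabla_e(i, j)))]
    return g_form(dJ, basis(k))

theta = [sum(eps[i] * F(i, i, m) for i in range(4)) for m in range(4)]
Omega = [eps[m] * theta[m] for m in range(4)]         # theta(x) = g(x, Omega)
theta_of = lambda v: sum(theta[m] * v[m] for m in range(4))
theta_star = [theta_of(J(basis(m))) for m in range(4)]  # theta o J reproduces (5.7)
```

The computed values reproduce $F_{111}=2\mu$, $F_{222}=2\lambda$, the 1-form components (5.7) (for this example $\theta^{\ast}$ coincides with $\theta\circ J$), and $\theta(\Omega)=\theta(J\Omega)=0$ of (5.8).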
By the characteristic condition (2.5) and equalities (5.6), (5.7) we prove
that the manifold $(G,J,g)$ with Lie algebra $\mathfrak{g}$ defined by (5.3)
belongs to the basic class $\mathcal{W}_{1}$. Moreover, by (5.5) and (5.7) it
follows that the conditions (2.13) hold and thus
###### Proposition 5.1.
The manifold $(G,J,g)$ defined by (5.1), (5.2) and (5.3) is a conformal Kähler
manifold with Norden metric.
According to (3.8), (5.2), (5.5) and (5.7) the components of the natural
connection $\nabla^{0}$ are given by
$\begin{array}[]{ll}\nabla_{e_{1}}^{0}e_{1}=-\nabla_{e_{4}}^{0}e_{1}=\mu
e_{2},&\nabla_{e_{2}}^{0}e_{1}=\nabla_{e_{3}}^{0}e_{1}=\lambda e_{2},\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\nabla_{e_{1}}^{0}e_{2}=-\nabla_{e_{4}}^{0}e_{2}=-\mu
e_{1},&\nabla_{e_{2}}^{0}e_{2}=\nabla_{e_{3}}^{0}e_{2}=-\lambda e_{1},\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\nabla_{e_{1}}^{0}e_{3}=-\nabla_{e_{4}}^{0}e_{3}=\mu
e_{4},&\nabla_{e_{2}}^{0}e_{3}=\nabla_{e_{3}}^{0}e_{3}=\lambda e_{4},\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\nabla_{e_{1}}^{0}e_{4}=-\nabla_{e_{4}}^{0}e_{4}=-\mu
e_{3},&\nabla_{e_{2}}^{0}e_{4}=\nabla_{e_{3}}^{0}e_{4}=-\lambda
e_{3}.\end{array}$ (5.9)
By (5.9) we obtain $R^{0}=0$. Then, by (3.9) and (3.10) the curvature tensor
$R$ of $(G,J,g)$ has the form
$R=\frac{1}{4}\psi_{1}(A),\qquad
A(x,y)=(\nabla_{x}\theta)Jy+\frac{1}{4}\theta(x)\theta(y).$ (5.10)
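The claim $R^{0}=0$ can be confirmed numerically: since all fields involved are left-invariant, $R^{0}(e_{i},e_{j})e_{k}$ is computable from the table (5.9) and the commutators (5.3) alone. A sketch (ours; the compact encoding of (5.9) as $\nabla^{0}_{e_{i}}=c_{i}K$ is our reading of the table; sample values $\lambda=2$, $\mu=3$):

```python
# A sketch (ours): reading (5.9) as nabla^0_{e_i} = c_i * K on left-invariant
# fields, with c = (mu, lam, lam, -mu) and K: e1 -> e2, e2 -> -e1, e3 -> e4,
# e4 -> -e3, we evaluate the curvature of nabla^0 on the basis and confirm
# R^0 = 0 (sample values lam = 2, mu = 3).

lam, mu = 2.0, 3.0
c = [mu, lam, lam, -mu]
K = [[0, -1, 0, 0], [1, 0, 0, 0],          # column j holds K e_{j+1}
     [0, 0, 0, -1], [0, 0, 1, 0]]

def basis(i):
    v = [0.0] * 4
    v[i] = 1.0
    return v

def mat_vec(A, v):
    return [sum(A[r][k] * v[k] for k in range(4)) for r in range(4)]

def nabla0(v, w):                          # tensorial in v; w a constant field
    coef = sum(c[m] * v[m] for m in range(4))
    return [coef * x for x in mat_vec(K, w)]

v1 = [lam, mu, -mu, lam]                   # commutators (5.3)
v2 = [mu, -lam, lam, mu]
br = [[[0.0] * 4 for _ in range(4)] for _ in range(4)]
br[0][3] = br[1][2] = v1
br[0][2] = v2
br[1][3] = [-x for x in v2]
for i in range(4):
    for j in range(i):
        br[i][j] = [-x for x in br[j][i]]

def R0(i, j, k):                           # R^0(e_i, e_j) e_k on constant fields
    a = nabla0(basis(i), nabla0(basis(j), basis(k)))
    b = nabla0(basis(j), nabla0(basis(i), basis(k)))
    d = nabla0(br[i][j], basis(k))
    return [x - y - z for x, y, z in zip(a, b, d)]
```

The second-order terms cancel because $\nabla^{0}_{e_{i}}\nabla^{0}_{e_{j}}=c_{i}c_{j}K^{2}$ is symmetric in $i,j$, so $R^{0}=0$ reduces to $c([e_{i},e_{j}])=0$, which indeed holds for the brackets (5.3).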
Moreover, having in mind (3.16), (5.5), (5.7) and (5.8), we compute
$S_{1}=S_{2}=S_{3}=0$. Hence, the curvature tensors $R^{\prime}$ of the complex
connections $\nabla^{\prime}$, given by (3.15), satisfy $R^{\prime}=0$.
###### Proposition 5.2.
The complex connections $\nabla^{\prime}$ defined by (3.1) and (3.2) are flat
on $(G,J,g)$.
## References
* [1] G. Ganchev, A. Borisov, _Note on the almost complex manifolds with a Norden metric_ , Compt. Rend. Acad. Bulg. Sci. 39(5) (1986), 31–34.
* [2] G. Ganchev, V. Mihova, _Canonical connection and the canonical conformal group on an almost complex manifold with $B$-metric_, Ann. Univ. Sofia Fac. Math. Inform., 81(1) (1987), 195–206.
* [3] G. Ganchev, K. Gribachev, V. Mihova, $B$_-connections and their conformal invariants on conformally Kähler manifolds with $B$-metric_, Publ. Inst. Math. (Beograd) (N.S.) 42(56) (1987), 107–121.
* [4] K. Gribachev, D. Mekerov, G. Djelepov, _Generalized_ B _-manifolds_ , Compt. Rend. Acad. Bulg. Sci. 38(3) (1985), 299–302.
* [5] H. A. Hayden, _Subspaces of a space with torsion_ , Proc. London Math. Soc. 34 (1932), 27–50.
* [6] T. Imai, _Notes on semi-symmetric metric connections_ , Tensor N.S. 24 (1972), 293–296.
* [7] S. Kobayashi, K. Nomizu, _Foundations of differential geometry_ vol. 1, 2, Intersc. Publ., New York, 1963, 1969.
* [8] A. Newlander, L. Nirenberg, _Complex analytic coordinates in almost complex manifolds_ , Ann. Math. 65 (1957), 391–404.
* [9] A. P. Norden, _On a class of four-dimensional A-spaces_ , Russian Math. (Izv VUZ) 17(4) (1960), 145–157.
* [10] S. D. Singh, A. K. Pandey, _Semi-symmetric metric connections in an almost Norden metric manifold_ , Acta Cienc. Indica Math. 27(1) (2001), 43–54.
* [11] M. Teofilova, _Almost complex connections on almost complex manifolds with Norden metric_ , In: Trends in Differential Geometry, Complex Analysis and Mathematical Physics, eds. K. Sekigawa, V. Gerdjikov and S. Dimiev, World Sci. Publ., Singapore (2009), 231–240.
* [12] K. Yano, _Affine connections in an almost product space_ , Kodai Math. Semin. Rep. 11(1) (1959), 1–24.
* [13] K. Yano, _Differential geometry on complex and almost complex spaces_ , Pure and Applied Math. vol. 49, Pergamon Press Book, New York, 1965.
* [14] K. Yano, _On semi-symmetric metric connection_ , Rev. Roumaine Math. Pures Appl. 15 (1970), 1579–1586.
_Marta Teofilova
Department of Geometry
Faculty of Mathematics and Informatics
University of Plovdiv
236 Bulgaria Blvd.
4003 Plovdiv, Bulgaria
e-mail:_ `marta@uni-plovdiv.bg`
This work is partially supported by the Fund for Scientific Research of the University of Plovdiv, Bulgaria, Project RS09-FMI-003.
# Linear Connections on Normal Almost Contact Manifolds with Norden Metric
Marta Teofilova
###### Abstract
Families of linear connections are constructed on almost contact manifolds
with Norden metric. An analogous connection to the symmetric Yano connection
is obtained on a normal almost contact manifold with Norden metric and closed
structural 1-form. The curvature properties of this connection are studied on
two basic classes of normal almost contact manifolds with Norden metric.
MSC (2010): 53C15, 53C50, 53B30.
_Keywords_ : almost contact manifold, Norden metric, $B$-metric, linear
connection.
## Introduction
The geometry of the almost contact manifolds with Norden metric ($B$-metric)
is a natural extension of the geometry of the almost complex manifolds with
Norden metric [2] in the case of an odd dimension. A classification of the
almost contact manifolds with Norden metric consisting of eleven basic classes
is introduced in [3].
An important problem in the geometry of the manifolds equipped with an
additional tensor structure and a compatible metric is the study of linear
connections preserving the structures of the manifold. One such connection on
an almost contact manifold with Norden metric, namely the canonical
connection, is considered in [1, 4].
In the present work we construct families of linear connections on an almost
contact manifold with Norden metric, which preserve by covariant
differentiation some or all of the structural tensors of the manifold. We
obtain a symmetric connection on a normal almost contact manifold with Norden
metric, which can be considered as an analogue to the well-known Yano
connection [6, 7].
## 1 Preliminaries
Let $(M,\varphi,\xi,\eta,g)$ be a $(2n+1)$-dimensional _almost contact
manifold with Norden metric_ , i.e. $(\varphi,\xi,\eta)$ is an _almost contact
structure_ : $\varphi$ is an endomorphism of the tangent bundle of $M$, $\xi$
is a vector field, $\eta$ is its dual 1-form, and $g$ is a pseudo-Riemannian
metric, called a _Norden metric_ (or a $B$-_metric_), such that
(1.1) $\begin{array}[]{l}\varphi^{2}x=-x+\eta(x)\xi,\qquad\eta(\xi)=1,\vskip
6.0pt plus 2.0pt minus 2.0pt\\\ g(\varphi x,\varphi
y)=-g(x,y)+\eta(x)\eta(y)\end{array}$
for all $x,y\in\mathfrak{X}(M)$.
From (1.1) it follows $\varphi\xi=0$, $\eta\circ\varphi=0$,
$\eta(x)=g(x,\xi)$, $g(\varphi x,y)=g(x,\varphi y)$.
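These consequences can be checked directly from (1.1); for instance,

```latex
\begin{aligned}
&\varphi^{2}\xi=-\xi+\eta(\xi)\xi=0;\\
&0=\varphi(\varphi^{2}\xi)=\varphi^{2}(\varphi\xi)=-\varphi\xi+\eta(\varphi\xi)\xi
 \ \Rightarrow\ \varphi\xi=\eta(\varphi\xi)\xi
 \ \Rightarrow\ 0=\varphi(\varphi\xi)=\eta(\varphi\xi)^{2}\xi
 \ \Rightarrow\ \varphi\xi=0;\\
&\varphi^{3}x=\varphi(\varphi^{2}x)=-\varphi x
 \quad\text{vs.}\quad
 \varphi^{3}x=\varphi^{2}(\varphi x)=-\varphi x+\eta(\varphi x)\xi
 \ \Rightarrow\ \eta\circ\varphi=0;\\
&0=g(\varphi x,\varphi\xi)=-g(x,\xi)+\eta(x)\eta(\xi)
 \ \Rightarrow\ \eta(x)=g(x,\xi).
\end{aligned}
```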
The associated metric $\tilde{g}$ of $g$ is defined by
$\tilde{g}(x,y)=g(x,\varphi y)+\eta(x)\eta(y)$ and is a Norden metric, too.
Both metrics are necessarily of signature $(n+1,n)$.
Further, $x,y,z,u$ will stand for arbitrary vector fields in
$\mathfrak{X}(M)$.
Let $\nabla$ be the Levi-Civita connection of $g$. The fundamental tensor $F$
is defined by
(1.2) $F(x,y,z)=g\left((\nabla_{x}\varphi)y,z\right)$
and has the following properties
(1.3) $\begin{array}[]{l}F(x,y,z)=F(x,z,y),\vskip 6.0pt plus 2.0pt minus
2.0pt\\\ F(x,\varphi y,\varphi
z)=F(x,y,z)-F(x,\xi,z)\eta(y)-F(x,y,\xi)\eta(z).\end{array}$
Equalities (1.3) and $\varphi\xi=0$ imply $F(x,\xi,\xi)=0$.
Let $\\{e_{i},\xi\\}$, ($i=1,2,...,2n$) be a basis of the tangent space
$T_{p}M$ at an arbitrary point $p$ of $M$, and $g^{ij}$ be the components of
the inverse matrix of $(g_{ij})$ with respect to $\\{e_{i},\xi\\}$.
The following 1-forms are associated with $F$:
$\theta(x)=g^{ij}F(e_{i},e_{j},x),\qquad\theta^{\ast}(x)=g^{ij}F(e_{i},\varphi
e_{j},x),\qquad\omega(x)=F(\xi,\xi,x).$
A classification of the almost contact manifolds with Norden metric is
introduced in [3]. Eleven basic classes $\mathcal{F}_{i}$ ($i=1,2,...,11$) are
characterized there according to the properties of $F$.
The Nijenhuis tensor $N$ of the almost contact structure $(\varphi,\xi,\eta)$
is defined in [5] by $N(x,y)=[\varphi,\varphi](x,y)+d\eta(x,y)\xi$, i.e.
$N(x,y)=\varphi^{2}[x,y]+[\varphi x,\varphi y]-\varphi[\varphi
x,y]-\varphi[x,\varphi y]+(\nabla_{x}\eta)y.\xi-(\nabla_{y}\eta)x.\xi$. The
almost contact structure is said to be integrable if $N=0$. In this case the
almost contact manifold is called _normal_ [5].
In terms of the covariant derivatives of $\varphi$ and $\eta$ the tensor $N$
is expressed by $N(x,y)=(\nabla_{\varphi x}\varphi)y-(\nabla_{\varphi
y}\varphi)x-\varphi(\nabla_{x}\varphi)y+\varphi(\nabla_{y}\varphi)x+(\nabla_{x}\eta)y.\xi-(\nabla_{y}\eta)x.\xi$,
where $(\nabla_{x}\eta)y=F(x,\varphi y,\xi)$. Then, according to (1.2), the
corresponding tensor of type (0,3), defined by $N(x,y,z)=g(N(x,y),z)$, has the
form
(1.4) $\begin{array}[]{l}N(x,y,z)=F(\varphi x,y,z)-F(\varphi
y,x,z)-F(x,y,\varphi z)\vskip 6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{N(x,y,z)}+F(y,x,\varphi z)+F(x,\varphi y,\xi)\eta(z)-F(y,\varphi
x,\xi)\eta(z).\end{array}$
The condition $N=0$ and (1.4) imply
(1.5) $F(x,y,\xi)=F(y,x,\xi),\qquad\omega=0.$
The 1-form $\eta$ is said to be closed if $\mathrm{d}\eta=0$, i.e. if
$(\nabla_{x}\eta)y=(\nabla_{y}\eta)x$. The class of the almost contact
manifolds with Norden metric satisfying the conditions $N=0$ and
$\mathrm{d}\eta=0$ is
$\mathcal{F}_{1}\oplus\mathcal{F}_{2}\oplus\mathcal{F}_{4}\oplus\mathcal{F}_{5}\oplus\mathcal{F}_{6}$
[3].
Analogously to [2], we define an associated tensor $\widetilde{N}$ of $N$ by
(1.6) $\begin{array}[]{l}\widetilde{N}(x,y,z)=F(\varphi x,y,z)+F(\varphi
y,x,z)-F(x,y,\varphi z)\vskip 6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{N^{\ast}(x,y,z)}-F(y,x,\varphi z)+F(x,\varphi
y,\xi)\eta(z)+F(y,\varphi x,\xi)\eta(z).\end{array}$
From $\widetilde{N}=0$ it follows $F(\varphi x,\varphi y,\xi)=F(y,x,\xi)$,
$\omega=0$. The class with $\widetilde{N}=0$ is
$\mathcal{F}_{3}\oplus\mathcal{F}_{7}$ [3].
The curvature tensor $R$ of $\nabla$ is defined as usual by
$R(x,y)z=\nabla_{x}\nabla_{y}z-\nabla_{y}\nabla_{x}z-\nabla_{[x,y]}z,$
and its corresponding tensor of type (0,4) is given by
$R(x,y,z,u)=g(R(x,y)z,u)$.
A tensor $L$ of type (0,4) is said to be curvature-like if it has the
properties of $R$, i.e. $L(x,y,z,u)=-L(y,x,z,u)=-L(x,y,u,z)$ and
$\underset{x,y,z}{\mathfrak{S}}L(x,y,z,u)=0$ (first Bianchi identity), where
$\mathfrak{S}$ is the cyclic sum by three arguments.
A curvature-like tensor $L$ is called a $\varphi$-Kähler-type tensor if
$L(x,y,\varphi z,\varphi u)=-L(x,y,z,u)$.
## 2 Connections on almost contact manifolds
with Norden metric
Let $\nabla^{\prime}$ be a linear connection with deformation tensor $Q$, i.e.
$\nabla^{\prime}_{x}y=\nabla_{x}y+Q(x,y)$. If we denote
$Q(x,y,z)=g(Q(x,y),z)$, then
(2.1) $g(\nabla^{\prime}_{x}y-\nabla_{x}y,z)=Q(x,y,z).$
###### Definition 2.1.
_A linear connection $\nabla^{\prime}$ on an almost contact manifold is called
an_ almost $\varphi$-connection _if $\varphi$ is parallel with respect to
$\nabla^{\prime}$, i.e. if $\nabla^{\prime}\varphi=0$._
Because of (1.2), equality (2.1) and $\nabla^{\prime}\varphi=0$ imply the
following condition for $Q$:
(2.2) $F(x,y,z)=Q(x,y,\varphi z)-Q(x,\varphi y,z).$
###### Theorem 2.1.
On an almost contact manifold with Norden metric there exists a 10-parametric
family of almost $\varphi$-connections $\nabla^{\prime}$ of the form (2.1)
with deformation tensor $Q$ given by
(2.3) $\begin{array}[]{l}Q(x,y,z)=\frac{1}{2}\\{F(x,\varphi y,z)+F(x,\varphi
y,\xi)\eta(z)\\}-F(x,\varphi z,\xi)\eta(y)\vskip 6.0pt plus 2.0pt minus
2.0pt\\\ +t_{1}\\{F(y,x,z)+F(\varphi y,\varphi
x,z)-F(y,x,\xi)\eta(z)-F(\varphi y,\varphi x,\xi)\eta(z)\vskip 6.0pt plus
2.0pt minus 2.0pt\\\
-F(y,z,\xi)\eta(x)-F(\xi,x,z)\eta(y)+\eta(x)\eta(y)\omega(z)\\}\vskip 6.0pt
plus 2.0pt minus 2.0pt\\\ +t_{2}\\{F(z,x,y)+F(\varphi z,\varphi
x,y)-F(z,x,\xi)\eta(y)-F(\varphi z,\varphi x,\xi)\eta(y)\vskip 6.0pt plus
2.0pt minus 2.0pt\\\
-F(z,y,\xi)\eta(x)-F(\xi,x,y)\eta(z)+\eta(x)\eta(z)\omega(y)\\}\vskip 6.0pt
plus 2.0pt minus 2.0pt\\\ +t_{3}\\{F(y,\varphi x,z)-F(\varphi
y,x,z)-F(y,\varphi x,\xi)\eta(z)+F(\varphi y,x,\xi)\eta(z)\vskip 6.0pt plus
2.0pt minus 2.0pt\\\ -F(y,\varphi z,\xi)\eta(x)-F(\xi,\varphi
x,z)\eta(y)+\eta(x)\eta(y)\omega(\varphi z)\\}\vskip 6.0pt plus 2.0pt minus
2.0pt\\\ +t_{4}\\{F(z,\varphi x,y)-F(\varphi z,x,y)-F(z,\varphi
x,\xi)\eta(y)+F(\varphi z,x,\xi)\eta(y)\vskip 6.0pt plus 2.0pt minus 2.0pt\\\
-F(z,\varphi y,\xi)\eta(x)-F(\xi,\varphi
x,y)\eta(z)+\eta(x)\eta(z)\omega(\varphi y)\\}\vskip 6.0pt plus 2.0pt minus
2.0pt\\\ +t_{5}\\{F(\varphi y,z,\xi)+F(y,\varphi z,\xi)-\eta(y)\omega(\varphi
z)\\}\eta(x)\vskip 6.0pt plus 2.0pt minus 2.0pt\\\ +t_{6}\\{F(\varphi
z,y,\xi)+F(z,\varphi y,\xi)-\omega(\varphi y)\eta(z)\\}\eta(x)\vskip 6.0pt
plus 2.0pt minus 2.0pt\\\ +t_{7}\\{F(\varphi y,\varphi
z,\xi)-F(y,z,\xi)+\eta(y)\omega(z)\\}\eta(x)\vskip 6.0pt plus 2.0pt minus
2.0pt\\\ +t_{8}\\{F(\varphi z,\varphi
y,\xi)-F(z,y,\xi)+\omega(y)\eta(z)\\}\eta(x)\vskip 6.0pt plus 2.0pt minus
2.0pt\\\ +t_{9}\omega(x)\eta(y)\eta(z)+t_{10}\omega(\varphi
x)\eta(y)\eta(z),\qquad t_{i}\in\mathbb{R},\quad i=1,2,...,10.\end{array}$
###### Proof.
The proof of the statement follows from (2.2), (2.3) and (1.3) by direct
verification that $\nabla^{\prime}\varphi=0$ for all $t_{i}$. ∎
Let $N=\mathrm{d}\eta=0$. Then, by (1.4) and (1.5), from Theorem 2.1 we obtain
###### Corollary 2.1.
Let $(M,\varphi,\xi,\eta,g)$ be a
$\mathcal{F}_{1}\oplus\mathcal{F}_{2}\oplus\mathcal{F}_{4}\oplus\mathcal{F}_{5}\oplus\mathcal{F}_{6}$-manifold.
Then, the deformation tensor $Q$ of the almost $\varphi$-connections
$\nabla^{\prime}$ defined by (2.1) and (2.3) has the form
(2.4) $\begin{array}[]{l}Q(x,y,z)=\frac{1}{2}\\{F(x,\varphi y,z)+F(x,\varphi
y,\xi)\eta(z)\\}-F(x,\varphi z,\xi)\eta(y)\vskip 6.0pt plus 2.0pt minus
2.0pt\\\ +s_{1}\\{F(y,x,z)+F(\varphi y,\varphi x,z)\\}+s_{2}\\{F(y,\varphi
x,z)-F(\varphi y,x,z)\\}\vskip 6.0pt plus 2.0pt minus 2.0pt\\\
+s_{3}F(y,\varphi z,\xi)\eta(x)+s_{4}F(y,z,\xi)\eta(x),\end{array}$
where $s_{1}=t_{1}+t_{2}$, $s_{2}=t_{3}+t_{4}$,
$s_{3}=2(t_{5}+t_{6})-t_{3}-t_{4}$, $s_{4}=-t_{1}-t_{2}-2(t_{7}+t_{8})$.
###### Definition 2.2.
_A linear connection $\nabla^{\prime}$ is said to be_ almost contact _if the
almost contact structure $(\varphi,\xi,\eta)$ is parallel with respect to it,
i.e. if $\nabla^{\prime}\varphi=\nabla^{\prime}\xi=\nabla^{\prime}\eta=0$._
Then, in addition to the condition (2.2), for the deformation tensor $Q$ of an
almost contact connection given by (2.1) we also have
(2.5) $F(x,\varphi y,\xi)=Q(x,y,\xi)=-Q(x,\xi,y).$
###### Definition 2.3.
_A linear connection on an almost contact manifold with Norden metric
$(M,\varphi,\xi,\eta,g)$ is said to be_ natural _if
$\nabla^{\prime}\varphi=\nabla^{\prime}\eta=\nabla^{\prime}g=0$._
The condition $\nabla^{\prime}g=0$ and (2.1) yield
(2.6) $Q(x,y,z)=-Q(x,z,y).$
From (1.6) and (2.3) we compute
(2.7) $\begin{array}[]{l}Q(x,y,z)+Q(x,z,y)=\vskip 6.0pt plus 2.0pt minus
2.0pt\\\ =(t_{1}+t_{2})\\{\widetilde{N}(y,z,\varphi
x)-\widetilde{N}(\xi,y,\varphi x)\eta(z)-\widetilde{N}(\xi,z,\varphi
x)\eta(y)\\}\vskip 6.0pt plus 2.0pt minus 2.0pt\\\ \hskip
2.8903pt-(t_{3}+t_{4})\\{\widetilde{N}(y,z,x)-\widetilde{N}(\xi,y,x)\eta(z)-\widetilde{N}(\xi,z,x)\eta(y)\\}\vskip
6.0pt plus 2.0pt minus 2.0pt\\\ \hskip
2.8903pt-(t_{5}+t_{6})\\{\widetilde{N}(\varphi^{2}z,y,\xi)+\widetilde{N}(z,\xi,\xi)\eta(y)\\}\eta(x)\vskip
6.0pt plus 2.0pt minus 2.0pt\\\ \hskip
2.8903pt+(t_{7}+t_{8})\\{\widetilde{N}(\varphi z,y,\xi)-\widetilde{N}(\varphi
z,\xi,\xi)\eta(y)\\}\eta(x)\vskip 6.0pt plus 2.0pt minus 2.0pt\\\ \hskip
2.8903pt+2\\{t_{9}\omega(x)+t_{10}\omega(\varphi
x)\\}\eta(y)\eta(z).\end{array}$
By (2.1), (2.3), (2.5), (2.6) and (2.7) we prove the following
###### Proposition 2.1.
Let $(M,\varphi,\xi,\eta,g)$ be an almost contact manifold with Norden metric,
and let $\nabla^{\prime}$ be the 10-parametric family of almost
$\varphi$-connections defined by (2.1) and (2.3). Then
(i)
$\nabla^{\prime}$ are almost contact iff
$t_{1}+t_{2}-t_{9}=t_{3}+t_{4}-t_{10}=0$;
(ii)
$\nabla^{\prime}$ are natural iff
$t_{1}+t_{2}=t_{3}+t_{4}=t_{5}+t_{6}=t_{7}+t_{8}=t_{9}=t_{10}=0$.
Taking into account (1.4), Theorem 2.1 and Proposition 2.1, and
putting $p_{1}=t_{1}=-t_{2}$, $p_{2}=t_{3}=-t_{4}$, $p_{3}=t_{5}=-t_{6}$,
$p_{4}=t_{7}=-t_{8}$, we obtain
###### Theorem 2.2.
On an almost contact manifold with Norden metric there exists a 4-parametric
family of natural connections $\nabla^{\prime\prime}$ defined by
$\begin{array}[]{l}g(\nabla^{\prime\prime}_{x}y-\nabla_{x}y,z)=\frac{1}{2}\\{F(x,\varphi
y,z)+F(x,\varphi y,\xi)\eta(z)\\}-F(x,\varphi z,\xi)\eta(y)\vskip 6.0pt plus
2.0pt minus 2.0pt\\\
\phantom{g(\nabla^{\prime\prime}_{x}y-\nabla_{x}y,z)}+p_{1}\\{N(y,z,\varphi
x)+N(\xi,y,\varphi x)\eta(z)+N(z,\xi,\varphi x)\eta(y)\\}\vskip 6.0pt plus
2.0pt minus 2.0pt\\\
\phantom{g(\nabla^{\prime\prime}_{x}y-\nabla_{x}y,z)}+p_{2}\\{N(z,y,x)+N(y,\xi,x)\eta(z)+N(\xi,z,x)\eta(y)\\}\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{g(\nabla^{\prime\prime}_{x}y-\nabla_{x}y,z)}+p_{3}\\{N(\varphi^{2}z,y,\xi)+N(z,\xi,\xi)\eta(y)\\}\eta(x)\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{g(\nabla^{\prime\prime}_{x}y-\nabla_{x}y,z)}+p_{4}\\{N(y,\varphi
z,\xi)+N(\varphi z,\xi,\xi)\eta(y)\\}\eta(x).\end{array}$
Since $N=0$ on a normal almost contact manifold with Norden metric, on such
manifolds the family $\nabla^{\prime\prime}$ reduces to a unique natural
connection
(2.8)
$\begin{array}[]{l}\nabla^{\prime\prime}_{x}y=\nabla_{x}y+\frac{1}{2}\\{(\nabla_{x}\varphi)\varphi
y+(\nabla_{x}\eta)y.\xi\\}-\nabla_{x}\xi.\eta(y),\end{array}$
which is the well-known canonical connection [1].
Because of Proposition 2.1, (2.7) and the condition $\widetilde{N}=0$, the
connections $\nabla^{\prime}$ given by (2.1) and (2.3) are natural on a
$\mathcal{F}_{3}\oplus\mathcal{F}_{7}$-manifold iff $t_{1}=-t_{2}$ and
$t_{3}=-t_{4}$.
Let $(M,\varphi,\xi,\eta,g)$ be in the class
$\mathcal{F}_{1}\oplus\mathcal{F}_{2}\oplus\mathcal{F}_{4}\oplus\mathcal{F}_{5}\oplus\mathcal{F}_{6}$.
Then, $N=\mathrm{d}\eta=0$ and hence the torsion tensor $T$ of the
4-parametric family of almost $\varphi$-connections $\nabla^{\prime}$ defined
by (2.1) and (2.4) has the form
(2.9) $\begin{array}[]{l}T(x,y,z)=s_{1}\left\\{F(y,x,z)-F(x,y,z)+F(\varphi
y,\varphi x,z)\right.\vskip 6.0pt plus 2.0pt minus 2.0pt\\\
\left.\phantom{T(x,y,z)}-F(\varphi x,\varphi
y,z)\right\\}+\frac{1-4s_{2}}{2}\\{F(x,\varphi y,z)-F(y,\varphi x,z)\\}\vskip
6.0pt plus 2.0pt minus 2.0pt\\\
\phantom{T(x,y,z)}-s_{4}\\{F(x,z,\xi)\eta(y)-F(y,z,\xi)\eta(x)\\}\vskip 6.0pt
plus 2.0pt minus 2.0pt\\\ \phantom{T(x,y,z)}-(1-s_{2}+s_{3})\\{F(x,\varphi
z,\xi)\eta(y)-F(y,\varphi z,\xi)\eta(x)\\}.\end{array}$
Then, from (2.9) we derive that $T=0$ if and only if $s_{1}=s_{4}=0$,
$s_{2}=\frac{1}{4}$, $s_{3}=-\frac{3}{4}$. In this way we prove
###### Theorem 2.3.
On a
$\mathcal{F}_{1}\oplus\mathcal{F}_{2}\oplus\mathcal{F}_{4}\oplus\mathcal{F}_{5}\oplus\mathcal{F}_{6}$-manifold
there exists a symmetric almost $\varphi$-connection $\nabla^{\prime}$ defined
by
(2.10)
$\begin{array}[]{l}\nabla_{x}^{\prime}y=\nabla_{x}y+\frac{1}{4}\left\\{2(\nabla_{x}\varphi)\varphi
y+(\nabla_{y}\varphi)\varphi x-(\nabla_{\varphi y}\varphi)x\right.\vskip 6.0pt
plus 2.0pt minus 2.0pt\\\
\left.\phantom{\nabla_{x}^{*}y}+2(\nabla_{x}\eta)y.\xi-3\eta(x).\nabla_{y}\xi-4\eta(y).\nabla_{x}\xi\right\\}.\end{array}$
Let us remark that the connection (2.10) can be considered as an analogue of
the well-known Yano connection [6, 7] on a normal almost contact manifold
with Norden metric and closed 1-form $\eta$.
## 3 Connections on $\mathcal{F}_{4}\oplus\mathcal{F}_{5}$-manifolds
In this section we study the curvature properties of the 4-parametric family
of almost $\varphi$-connections $\nabla^{\prime}$ given by (2.1) and (2.4) on
two of the basic classes of normal almost contact manifolds with Norden metric,
namely the classes $\mathcal{F}_{4}$ and $\mathcal{F}_{5}$. These classes are
defined in [3] by the following characteristic conditions for $F$,
respectively:
(3.1)
$\begin{array}[]{l}\mathcal{F}_{4}:F(x,y,z)=-\frac{\theta(\xi)}{2n}\\{g(\varphi
x,\varphi y)\eta(z)+g(\varphi x,\varphi z)\eta(y)\\},\end{array}$ (3.2)
$\begin{array}[]{l}\mathcal{F}_{5}:F(x,y,z)=-\frac{\theta^{\ast}(\xi)}{2n}\\{g(\varphi
x,y)\eta(z)+g(\varphi x,z)\eta(y)\\}.\end{array}$
The subclasses of $\mathcal{F}_{4}$ and $\mathcal{F}_{5}$ with closed 1-form
$\theta$ and $\theta^{\ast}$, respectively, are denoted by
$\mathcal{F}_{4}^{0}$ and $\mathcal{F}_{5}^{0}$. Then, it is easy to prove
that on a $\mathcal{F}_{4}^{0}\oplus\mathcal{F}_{5}^{0}$-manifold the following identities hold:
(3.3) $x\theta(\xi)=\xi\theta(\xi)\eta(x),\qquad
x\theta^{\ast}(\xi)=\xi\theta^{\ast}(\xi)\eta(x).$
Taking into consideration (3.1) and (3.2), from (2.1) and (2.4) we obtain
###### Proposition 3.1.
Let $(M,\varphi,\xi,\eta,g)$ be a
$\mathcal{F}_{4}\oplus\mathcal{F}_{5}$-manifold. Then, the connections
$\nabla^{\prime}$ defined by (2.1) and (2.4) are given by
(3.4)
$\begin{array}[]{l}\nabla_{x}^{\prime}y=\nabla_{x}y+\frac{\theta(\xi)}{2n}\{g(x,\varphi y)\xi-\eta(y)\varphi x\}+\frac{\theta^{\ast}(\xi)}{2n}\{g(x,y)\xi-\eta(y)x\}\\ \phantom{\nabla_{x}^{\prime}y=}+\frac{\lambda\theta(\xi)+\mu\theta^{\ast}(\xi)}{2n}\{\eta(x)y-\eta(x)\eta(y)\xi\}+\frac{\mu\theta(\xi)-\lambda\theta^{\ast}(\xi)}{2n}\eta(x)\varphi y,\end{array}$
where $\lambda=s_{1}+s_{4}$, $\mu=s_{3}-s_{2}$.
The Yano-type connection (2.10) is obtained from (3.4) for $\lambda=0$,
$\mu=-1$.
Let us denote by $R^{\prime}$ the curvature tensor of $\nabla^{\prime}$, i.e.
$R^{\prime}(x,y)z=\nabla^{\prime}_{x}\nabla^{\prime}_{y}z-\nabla^{\prime}_{y}\nabla^{\prime}_{x}z-\nabla^{\prime}_{[x,y]}z$.
The corresponding tensor of type (0,4) with respect to $g$ is defined by
$R^{\prime}(x,y,z,u)=g(R^{\prime}(x,y)z,u)$. Then, the following holds
###### Proposition 3.2.
Let $(M,\varphi,\xi,\eta,g)$ be an
$\mathcal{F}_{4}^{0}\oplus\mathcal{F}_{5}^{0}$-manifold, and let $\nabla^{\prime}$
be the 2-parametric family of almost $\varphi$-connections defined by (3.4). Then,
the curvature tensor $R^{\prime}$ of an arbitrary connection in the family
(3.4) is of $\varphi$-Kähler-type and has the form
(3.5)
$\begin{array}[]{l}R^{\prime}=R+\frac{\xi\theta(\xi)}{2n}\pi_{5}+\frac{\xi\theta^{\ast}(\xi)}{2n}\pi_{4}+\frac{\theta(\xi)^{2}}{4n^{2}}\{\pi_{2}-\pi_{4}\}\\ \phantom{R^{\prime}=}+\frac{\theta^{\ast}(\xi)^{2}}{4n^{2}}\pi_{1}-\frac{\theta(\xi)\theta^{\ast}(\xi)}{4n^{2}}\{\pi_{3}-\pi_{5}\},\end{array}$
where the curvature-like tensors $\pi_{i}$ ($i=1,2,3,4,5$) are defined by [4]:
(3.6) $\begin{array}[]{l}\pi_{1}(x,y,z,u)=g(y,z)g(x,u)-g(x,z)g(y,u),\\ \pi_{2}(x,y,z,u)=g(y,\varphi z)g(x,\varphi u)-g(x,\varphi z)g(y,\varphi u),\\ \pi_{3}(x,y,z,u)=-g(y,z)g(x,\varphi u)+g(x,z)g(y,\varphi u)-g(x,u)g(y,\varphi z)+g(y,u)g(x,\varphi z),\\ \pi_{4}(x,y,z,u)=g(y,z)\eta(x)\eta(u)-g(x,z)\eta(y)\eta(u)+g(x,u)\eta(y)\eta(z)-g(y,u)\eta(x)\eta(z),\\ \pi_{5}(x,y,z,u)=g(y,\varphi z)\eta(x)\eta(u)-g(x,\varphi z)\eta(y)\eta(u)+g(x,\varphi u)\eta(y)\eta(z)-g(y,\varphi u)\eta(x)\eta(z).\end{array}$
###### Proof.
It is known that the curvature tensors of two linear connections related by an
equation of the form (2.1) satisfy
(3.7)
$\begin{array}[]{l}g(R^{\prime}(x,y)z,u)=R(x,y,z,u)+(\nabla_{x}Q)(y,z,u)-(\nabla_{y}Q)(x,z,u)\\ \phantom{g(R^{\prime}(x,y)z,u)=}+Q(x,Q(y,z),u)-Q(y,Q(x,z),u).\end{array}$
Then, (3.5) follows from (3.7), (3.4), (3.3) and (3.6) by straightforward
computation. ∎
Let us remark that (3.5) is obtained in [4] for the curvature tensor of the canonical connection (2.8) on an $\mathcal{F}_{4}^{0}\oplus\mathcal{F}_{5}^{0}$-manifold, i.e. the connection (3.4) for $\lambda=\mu=0$.
## References
* [1] V. Alexiev, G. Ganchev, _Canonical connection on a conformal almost contact metric manifold_ , Ann. Univ. Sofia-Math. 81 (1987) 29–38.
* [2] G. Ganchev, A. Borisov, _Note on the almost complex manifolds with a Norden metric_ , Compt. Rend. Acad. Bulg. Sci. 39(5) (1986), 31–34.
* [3] G. Ganchev, V. Mihova, K. Gribachev, _Almost contact manifolds with B-metric_ , Math. Balk. N.S. 7(3-4) (1993), 261–276.
* [4] M. Manev, _On the curvature properties of almost contact $B$-metric hypersurfaces of Kaehlerian manifolds with B-metric_, Plovdiv Univ. Sci. Works – Math. 33(3) (2001), 61–72.
* [5] S. Sasaki, _Almost-contact manifolds_ , Lecture Notes Math., Inst. Tôhoku Univ. 1-3, 1965, 1967, 1968.
* [6] K. Yano, _Affine connections in an almost product space_ , Kodai Math. Semin. Rep. 11(1) (1959), 1–24.
* [7] K. Yano, _Differential geometry on complex and almost complex spaces_ , Pure and Applied Math. vol. 49, Pergamon Press Book, New York, 1965.
_Marta Teofilova_
---
_University of Plovdiv_
_Faculty of Mathematics and Informatics_
_236 Bulgaria Blvd._
_4003 Plovdiv, Bulgaria_
e-mail: marta@uni-plovdiv.bg
(arXiv:1104.5342, submitted 2011-04-28 by Marta Teofilova, license: Public Domain)
11footnotetext: This work is partially supported by The Fund for Scientific
Research of the University of Plovdiv, Bulgaria, Project RS09-FMI-003.
# On a Class of Almost Contact Manifolds
with Norden Metric
Marta Teofilova
###### Abstract
Certain curvature properties and scalar invariants of the manifolds belonging
to one of the main classes of almost contact manifolds with Norden metric are
considered. An example illustrating the obtained results is given and studied.
MSC (2010): 53C15, 53C50, 53B30.
_Keywords_ : almost contact manifold, Norden metric, $B$-metric, isotropic Kähler manifold.
## Introduction
The geometry of the almost contact manifolds with Norden metric ($B$-metric)
is a natural extension of the geometry of the almost complex manifolds with
Norden metric ($B$-metric) in the odd dimensional case.
Almost contact manifolds with Norden metric are introduced in [1]. Eleven
basic classes of these manifolds are characterized there according to the
properties of the covariant derivatives of the almost contact structure.
In this work we focus our attention on one of the basic classes of almost contact
manifolds with Norden metric, namely the class $\mathcal{F}_{11}$. We study
some curvature properties and relations between certain scalar invariants of
the manifolds belonging to this class. In the last section we illustrate the
obtained results by constructing and studying an example of an
$\mathcal{F}_{11}$-manifold on a Lie group.
## 1 Preliminaries
Let $M$ be a $(2n+1)$-dimensional smooth manifold, and let
$(\varphi,\xi,\eta)$ be an almost contact structure on $M$, i.e. $\varphi$ is
an endomorphism of the tangent bundle of $M$, $\xi$ is a vector field, and
$\eta$ is its dual 1-form such that
(1.1) $\varphi^{2}=-\mathrm{Id}+\eta\otimes\xi,\qquad\eta(\xi)=1.$
Then, $(M,\varphi,\xi,\eta)$ is called an _almost contact manifold_.
We equip $(M,\varphi,\xi,\eta)$ with a compatible pseudo-Riemannian metric $g$
satisfying
(1.2) $g(\varphi x,\varphi y)=-g(x,y)+\eta(x)\eta(y)$
for arbitrary $x$, $y$ in the Lie algebra $\mathfrak{X}(M)$ of the smooth
vector fields on $M$. Then, $g$ is called a _Norden metric_ ($B$-_metric_),
and $(M,\varphi,\xi,\eta,g)$ is called an _almost contact manifold with Norden
metric_.
From (1.1) and (1.2) it follows that $\varphi\xi=0$, $\eta\circ\varphi=0$,
$\eta(x)=g(x,\xi)$, $g(\varphi x,y)=g(x,\varphi y)$.
The associated metric $\tilde{g}$ of $g$ is defined by
$\tilde{g}(x,y)=g(x,\varphi y)+\eta(x)\eta(y)$ and is a Norden metric, too.
Both metrics are necessarily of signature $(n+1,n)$.
Further, $x,y,z,u$ will stand for arbitrary vector fields in
$\mathfrak{X}(M)$.
Let $\nabla$ be the Levi-Civita connection of $g$. The fundamental tensor $F$
of type (0,3) is defined by
(1.3) $F(x,y,z)=g\left((\nabla_{x}\varphi)y,z\right)$
and has the properties
(1.4) $\begin{array}[]{l}F(x,y,z)=F(x,z,y),\\ F(x,\varphi y,\varphi z)=F(x,y,z)-F(x,\xi,z)\eta(y)-F(x,y,\xi)\eta(z).\end{array}$
From the last equation and $\varphi\xi=0$ it follows $F(x,\xi,\xi)=0$.
Let $\\{e_{i},\xi\\}$ ($i=1,2,...,2n$) be a basis of the tangent space
$T_{p}M$ at an arbitrary point $p$ of $M$, and $g^{ij}$ be the components of
the inverse matrix of $(g_{ij})$ with respect to $\\{e_{i},\xi\\}$. The
following 1-forms are associated with $F$:
(1.5)
$\begin{array}[]{ll}\theta(x)=g^{ij}F(e_{i},e_{j},x),&\theta^{\ast}(x)=g^{ij}F(e_{i},\varphi e_{j},x),\\ \omega(x)=F(\xi,\xi,x),&\omega^{\ast}=\omega\circ\varphi.\end{array}$
We denote by $\Omega$ the vector field corresponding to $\omega$, i.e.
$\omega(x)=g(x,\Omega)$.
The Nijenhuis tensor $N$ of the almost contact structure $(\varphi,\xi,\eta)$
is defined by [6] $N(x,y)=[\varphi,\varphi](x,y)+d\eta(x,y)\xi$, i.e.
$N(x,y)=\varphi^{2}[x,y]+[\varphi x,\varphi y]-\varphi[\varphi x,y]-\varphi[x,\varphi y]+(\nabla_{x}\eta)y.\xi-(\nabla_{y}\eta)x.\xi.$
In terms of the covariant derivatives of $\varphi$ and $\eta$ the tensor $N$
is expressed as follows
(1.6) $\begin{array}[]{l}N(x,y)=(\nabla_{\varphi x}\varphi)y-(\nabla_{\varphi y}\varphi)x-\varphi(\nabla_{x}\varphi)y+\varphi(\nabla_{y}\varphi)x\\ \phantom{N(x,y)=}+(\nabla_{x}\eta)y.\xi-(\nabla_{y}\eta)x.\xi,\end{array}$
where $(\nabla_{x}\eta)y=F(x,\varphi y,\xi)$.
The almost contact structure is said to be integrable if $N=0$. In this case
the almost contact manifold is called _normal_ [6].
A classification of the almost contact manifolds with Norden metric is
introduced in [1]. This classification consists of eleven basic classes
$\mathcal{F}_{i}$ ($i=1,2,...,11$) characterized according to the properties
of $F$. The special class $\mathcal{F}_{0}$ of the $\varphi$-Kähler-type
almost contact manifolds with Norden metric is given by the condition $F=0$
($\nabla\varphi=\nabla\xi=\nabla\eta=0$). The classes for which $F$ is
expressed explicitly by the other structural tensors are called _main
classes_.
In the present work we focus our attention on one of the main classes of these
manifolds, namely the class $\mathcal{F}_{11}$, which is defined by the
characteristic condition [1]
(1.7) $F(x,y,z)=\eta(x)\\{\eta(y)\omega(z)+\eta(z)\omega(y)\\}.$
By (1.5) and (1.7) we get that on an $\mathcal{F}_{11}$-manifold
$\theta=\omega$, $\theta^{\ast}=0$. We also have
(1.8) $(\nabla_{x}\omega^{\ast})y=(\nabla_{x}\omega)\varphi
y+\eta(x)\eta(y)\omega(\Omega).$
The 1-forms $\omega$ and $\omega^{\ast}$ are said to be closed if
$\text{d}\omega=\text{d}\omega^{\ast}=0$. Since $\nabla$ is symmetric,
necessary and sufficient conditions for $\omega$ and $\omega^{\ast}$ to be
closed are
(1.9) $(\nabla_{x}\omega)y=(\nabla_{y}\omega)x,\qquad(\nabla_{x}\omega)\varphi
y=(\nabla_{y}\omega)\varphi x.$
The curvature tensor $R$ of $\nabla$ is defined as usually by
(1.10) $R(x,y)z=\nabla_{x}\nabla_{y}z-\nabla_{y}\nabla_{x}z-\nabla_{[x,y]}z,$
and its corresponding tensor of type (0,4) is given by
$R(x,y,z,u)=g(R(x,y)z,u)$. The Ricci tensor $\rho$ and the scalar curvatures
$\tau$ and $\tau^{\ast}$ are defined by, respectively
(1.11)
$\rho(y,z)=g^{ij}R(e_{i},y,z,e_{j}),\qquad\tau=g^{ij}\rho(e_{i},e_{j}),\qquad\tau^{\ast}=g^{ij}\rho(e_{i},\varphi
e_{j}).$
The tensor $R$ is said to be of $\varphi$-Kähler-type if
(1.12) $R(x,y,\varphi z,\varphi u)=-R(x,y,z,u).$
Let $\alpha=\\{x,y\\}$ be a non-degenerate 2-section spanned by the vectors
$x,y\in T_{p}M$, $p\in M$. The sectional curvature of $\alpha$ is defined by
(1.13) $k(\alpha;p)=\frac{R(x,y,y,x)}{\pi_{1}(x,y,y,x)},$
where $\pi_{1}(x,y,z,u)=g(y,z)g(x,u)-g(x,z)g(y,u)$.
In [5] the following special sections in $T_{p}M$ are introduced: a
$\xi$-section if $\alpha=\\{x,\xi\\}$, a $\varphi$-holomorphic section if
$\varphi\alpha=\alpha$ and a totally real section if
$\varphi\alpha\perp\alpha$ with respect to $g$.
The square norms of $\nabla\varphi$, $\nabla\eta$ and $\nabla\xi$ are defined
by, respectively [3]:
(1.14)
$\begin{array}[]{l}||\nabla\varphi||^{2}=g^{ij}g^{ks}g\left((\nabla_{e_{i}}\varphi)e_{k},(\nabla_{e_{j}}\varphi)e_{s}\right),\\ ||\nabla\eta||^{2}=||\nabla\xi||^{2}=g^{ij}g^{ks}(\nabla_{e_{i}}\eta)e_{k}(\nabla_{e_{j}}\eta)e_{s}.\end{array}$
We introduce the notion of an isotropic Kähler-type almost contact manifold
with Norden metric analogously to [2].
###### Definition 1.1.
_An almost contact manifold with Norden metric is called_ isotropic Kählerian _if $||\nabla\varphi||^{2}=||\nabla\eta||^{2}=0$ (equivalently, $||\nabla\varphi||^{2}=||\nabla\xi||^{2}=0$)._
## 2 Curvature properties of $\mathcal{F}_{11}$-manifolds
In this section we obtain relations between certain scalar invariants on
$\mathcal{F}_{11}$-manifolds with Norden metric and give necessary and
sufficient conditions for such manifolds to be isotropic Kählerian.
First, with the help of (1.3), (1.4), (1.6), (1.7), (1.14) and direct computation we
obtain
###### Proposition 2.1.
On an $\mathcal{F}_{11}$-manifold the following identity holds:
(2.1) $||\nabla\varphi||^{2}=-||N||^{2}=-2||\nabla\eta||^{2}=2\omega(\Omega).$
Then, (2.1) and Definition 1.1 yield
###### Corollary 2.1.
On a $\mathcal{F}_{11}$-manifold the following conditions are equivalent _:_
* _(i)_
the manifold is isotropic Kählerian _;_
* _(ii)_
the vector $\Omega$ is isotropic, i.e. $\omega(\Omega)=0$_;_
* _(iii)_
the Nijenhuis tensor $N$ is isotropic.
It is known that the almost contact structure satisfies the Ricci identity,
i.e.
(2.2)
$\begin{array}[]{l}(\nabla_{x}\nabla_{y}\varphi)z-(\nabla_{y}\nabla_{x}\varphi)z=R(x,y)\varphi z-\varphi R(x,y)z,\\ (\nabla_{x}\nabla_{y}\eta)z-(\nabla_{y}\nabla_{x}\eta)z=-\eta(R(x,y)z).\end{array}$
Then, taking into account the definitions of $\varphi$, $F$, and $\nabla g=0$,
the equalities (2.2) imply
(2.3) $\begin{array}[]{c}(\nabla_{x}F)(y,z,\varphi u)-(\nabla_{y}F)(x,z,\varphi u)=R(x,y,z,u)\\ +R(x,y,\varphi z,\varphi u)-R(x,y,z,\xi)\eta(u),\end{array}$
(2.4)
$(\nabla_{x}F)(y,\varphi z,\xi)-(\nabla_{y}F)(x,\varphi z,\xi)=-R(x,y,z,\xi).$
By (1.3), (2.3) and (2.4) we get
(2.5) $R(x,y,\varphi z,\varphi u)=-R(x,y,z,u)+\psi_{4}(S)(x,y,z,u),$
where the tensor $\psi_{4}(S)$ is defined by [4]
(2.6)
$\begin{array}[]{l}\psi_{4}(S)(x,y,z,u)=\eta(y)\eta(z)S(x,u)-\eta(x)\eta(z)S(y,u)\\ \phantom{\psi_{4}(S)(x,y,z,u)=}+\eta(x)\eta(u)S(y,z)-\eta(y)\eta(u)S(x,z),\end{array}$
and
(2.7) $S(x,y)=(\nabla_{x}\omega)\varphi y-\omega(\varphi x)\omega(\varphi y).$
Then, the following holds
###### Proposition 2.2.
On an $\mathcal{F}_{11}$-manifold we have
$\tau+\tau^{\ast\ast}=2\mathrm{div}(\varphi\Omega)=2\rho(\xi,\xi),$
where $\tau^{\ast\ast}=g^{is}g^{jk}R(e_{i},e_{j},\varphi e_{k},\varphi
e_{s})$.
###### Proof.
The statement follows from (1.11) and (2.5) by straightforward computation. ∎
Having in mind (1.12) and (2.5), we conclude that the curvature tensor on an
$\mathcal{F}_{11}$-manifold is of $\varphi$-Kähler-type if and only if
$\psi_{4}(S)=0$. Because of (2.6) the last condition holds true iff $S=0$.
Then, taking into account (1.8) and (2.7) we prove
###### Proposition 2.3.
The curvature tensor of an $\mathcal{F}_{11}$-manifold with Norden metric is
$\varphi$-Kählerian iff
(2.8)
$(\nabla_{x}\omega^{\ast})y=\eta(x)\eta(y)\omega(\Omega)+\omega^{\ast}(x)\omega^{\ast}(y).$
The condition (2.8) implies $\text{d}\omega^{\ast}=0$, i.e. $\omega^{\ast}$ is
closed.
## 3 An example
In this section we present and study a $(2n+1)$-dimensional example of a
$\mathcal{F}_{11}$-manifold constructed on a Lie group.
Let $G$ be a $(2n+1)$-dimensional real connected Lie group, and $\mathfrak{g}$
be its corresponding Lie algebra. If $\\{x_{0},x_{1},...,x_{2n}\\}$ is a basis
of left-invariant vector fields on $G$, we define a left-invariant almost
contact structure $(\varphi,\xi,\eta)$ by
(3.1) $\begin{array}[]{llll}\varphi x_{i}=x_{i+n},&\varphi x_{i+n}=-x_{i},&\varphi x_{0}=0,&i=1,2,...,n,\\ \xi=x_{0},&\eta(x_{0})=1,&\eta(x_{j})=0,&j=1,2,...,2n.\end{array}$
We also define a left-invariant pseudo-Riemannian metric $g$ on $G$ by
(3.2)
$\begin{array}[]{l}g(x_{0},x_{0})=g(x_{i},x_{i})=-g(x_{i+n},x_{i+n})=1,\quad i=1,2,...,n,\\ g(x_{j},x_{k})=0,\quad j\neq k,\quad j,k=0,1,...,2n.\end{array}$
Then, according to (1.1) and (1.2), $(G,\varphi,\xi,\eta,g)$ is an almost
contact manifold with Norden metric.
Let the Lie algebra $\mathfrak{g}$ of $G$ be given by the following non-zero
commutators
(3.3) $[x_{i},x_{0}]=\lambda_{i}x_{0},\qquad i=1,2,...,2n,$
where $\lambda_{i}\in\mathbb{R}$. Equalities (3.3) determine a $2n$-parametric
family of solvable Lie algebras.
Further, we study the manifold $(G,\varphi,\xi,\eta,g)$ with Lie algebra
$\mathfrak{g}$ defined by (3.3). The well-known Koszul formula for the Levi-Civita connection of $g$ on $G$, i.e. the equality
$2g(\nabla_{x_{i}}x_{j},x_{k})=g([x_{i},x_{j}],x_{k})+g([x_{k},x_{i}],x_{j})+g([x_{k},x_{j}],x_{i}),$
implies the following components of the Levi-Civita connection:
(3.4)
$\begin{array}[]{l}\nabla_{x_{i}}x_{j}=\nabla_{x_{i}}\xi=0,\qquad\nabla_{\xi}x_{i}=-\lambda_{i}\xi,\quad i,j=1,2,...,2n,\\ \nabla_{\xi}\xi=\sum_{k=1}^{n}(\lambda_{k}x_{k}-\lambda_{k+n}x_{k+n}).\end{array}$
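As a numerical sanity check (our own illustration, not part of the paper), the Koszul formula can be evaluated for the brackets (3.3) and the diagonal metric (3.2); the components it produces agree with (3.4). The dimension $n=2$ and the random values of $\lambda_{i}$ are illustrative choices.

```python
import random

# Sanity check: evaluate the Koszul formula for the brackets (3.3) and the
# diagonal metric (3.2), and compare with the stated components (3.4).
n = 2
dim = 2 * n + 1                                  # basis: x_0 = xi, x_1, ..., x_{2n}
lam = [0.0] + [random.uniform(-2.0, 2.0) for _ in range(2 * n)]
eps = [1.0] + [1.0] * n + [-1.0] * n             # g = diag(eps), from (3.2)

def bracket(a, b):
    """[x_a, x_b] as a coefficient vector; (3.3): [x_i, x_0] = lambda_i x_0."""
    v = [0.0] * dim
    if b == 0 and a != 0:
        v[0] = lam[a]
    elif a == 0 and b != 0:
        v[0] = -lam[b]
    return v

def nabla(a, b):
    """Koszul: 2g(nabla_{x_a}x_b, x_k) = g([x_a,x_b],x_k)+g([x_k,x_a],x_b)+g([x_k,x_b],x_a)."""
    return [(bracket(a, b)[k] * eps[k]
             + bracket(k, a)[b] * eps[b]
             + bracket(k, b)[a] * eps[a]) / (2.0 * eps[k]) for k in range(dim)]

# (3.4): nabla_{x_i} x_j = nabla_{x_i} xi = 0 and nabla_xi x_i = -lambda_i xi
for i in range(1, dim):
    assert all(abs(c) < 1e-12 for j in range(dim) for c in nabla(i, j))
    v = nabla(0, i)
    assert abs(v[0] + lam[i]) < 1e-12 and all(abs(c) < 1e-12 for c in v[1:])
# (3.4): nabla_xi xi = sum_k (lambda_k x_k - lambda_{k+n} x_{k+n})
expected = [0.0] + [lam[k] for k in range(1, n + 1)] + [-lam[k + n] for k in range(1, n + 1)]
assert all(abs(a - b) < 1e-12 for a, b in zip(nabla(0, 0), expected))
```

Since the metric is diagonal, the Koszul formula reduces to the componentwise expression above; the same check works for any $n$.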
Then, by (1.3) and (3.4) we obtain the essential non-zero components of $F$:
(3.5) $F(\xi,\xi,x_{i})=\omega(x_{i})=-\lambda_{i+n},\qquad
F(\xi,\xi,x_{i+n})=\omega(x_{i+n})=\lambda_{i},$
for $i=1,2,...,n$. Hence, by (1.7) and (3.5) we have
###### Proposition 3.1.
The almost contact manifold with Norden metric $(G,\varphi,\xi,\eta,g)$
defined by (3.1), (3.2) and (3.3) belongs to the class $\mathcal{F}_{11}$.
Moreover, by (1.9), (3.4) and (3.5) we establish that the considered manifold
has closed 1-forms $\omega$ and $\omega^{\ast}$.
Taking into account (3.4) and (1.10) we obtain the essential non-zero
components of the curvature tensor as follows
(3.6) $R(x_{i},\xi,\xi,x_{j})=-\lambda_{i}\lambda_{j},\quad i,j=1,2,...,2n.$
By (3.6) it follows that $R(x_{i},x_{j},\varphi x_{k},\varphi x_{s})=0$ for
all $i,j,k,s=0,1,...,2n$. Then, according to (2.5) and (2.7) we get
###### Proposition 3.2.
The curvature tensor and the Ricci tensor of the $\mathcal{F}_{11}$-manifold
$(G,\varphi,\xi,\eta,g)$ defined by (3.1), (3.2) and (3.3) have the form,
respectively
$R=\psi_{4}(S),\qquad\rho(x,y)=\eta(x)\eta(y){\rm tr}S+S(x,y),$
where $S$ is defined by (2.7) and ${\rm tr}S=\mathrm{div}(\varphi\Omega)$.
We compute the essential non-zero components of the Ricci tensor as follows
(3.7) $\begin{array}[]{l}\rho(x_{i},x_{j})=-\lambda_{i}\lambda_{j},\quad i,j=1,2,...,2n,\\ \rho(\xi,\xi)=-\sum_{k=1}^{n}\left(\lambda_{k}^{2}-\lambda_{k+n}^{2}\right).\end{array}$
By (1.11) and (3.7) we obtain the curvatures of the considered manifold
(3.8)
$\tau=-2\sum_{k=1}^{n}\left(\lambda_{k}^{2}-\lambda_{k+n}^{2}\right),\qquad\tau^{\ast}=-2\sum_{k=1}^{n}\lambda_{k}\lambda_{k+n}.$
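Continuing the numerical illustration (ours, with the connection components (3.4) taken as given and $n=2$ as a hypothetical choice), the curvature, Ricci and scalar-curvature values in (3.6)–(3.8) can be recomputed directly:

```python
import random

# Reproduce the curvature components (3.6), the Ricci components (3.7) and
# the scalar curvature tau in (3.8), starting only from the connection (3.4).
n = 2
dim = 2 * n + 1                          # index 0 is xi
lam = [0.0] + [random.uniform(-2.0, 2.0) for _ in range(2 * n)]
eps = [1.0] + [1.0] * n + [-1.0] * n     # g = diag(eps)

def nabla_basis(a, b):
    """Components of nabla_{x_a} x_b taken from (3.4)."""
    v = [0.0] * dim
    if a == 0 and b == 0:                # nabla_xi xi
        for k in range(1, n + 1):
            v[k], v[k + n] = lam[k], -lam[k + n]
    elif a == 0:                         # nabla_xi x_b = -lambda_b xi
        v[0] = -lam[b]
    return v                             # all other components vanish

def bracket(a, b):
    v = [0.0] * dim
    if b == 0 and a != 0:
        v[0] = lam[a]
    elif a == 0 and b != 0:
        v[0] = -lam[b]
    return v

def nabla_vec(a, w):                     # nabla_{x_a} of a constant-coefficient field
    out = [0.0] * dim
    for b in range(dim):
        if w[b]:
            nb = nabla_basis(a, b)
            for k in range(dim):
                out[k] += w[b] * nb[k]
    return out

def R(a, b, c, d):                       # R(x_a, x_b, x_c, x_d), (0,4)-type, (1.10)
    term = [p - q for p, q in zip(nabla_vec(a, nabla_basis(b, c)),
                                  nabla_vec(b, nabla_basis(a, c)))]
    br = bracket(a, b)
    for m in range(dim):
        if br[m]:
            nm = nabla_basis(m, c)
            for k in range(dim):
                term[k] -= br[m] * nm[k]
    return term[d] * eps[d]

# (3.6): R(x_i, xi, xi, x_j) = -lambda_i lambda_j
for i in range(1, dim):
    for j in range(1, dim):
        assert abs(R(i, 0, 0, j) + lam[i] * lam[j]) < 1e-9
# (3.7) and (3.8): Ricci tensor and scalar curvature tau
rho = lambda b, c: sum(eps[a] * R(a, b, c, a) for a in range(dim))
assert abs(rho(0, 0) + sum(lam[k]**2 - lam[k + n]**2 for k in range(1, n + 1))) < 1e-9
tau = sum(eps[a] * rho(a, a) for a in range(dim))
assert abs(tau + 2 * sum(lam[k]**2 - lam[k + n]**2 for k in range(1, n + 1))) < 1e-9
```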
Let us consider the characteristic 2-sections $\alpha_{ij}$ spanned by the
vectors $\\{x_{i},x_{j}\\}$: $\xi$-sections $\alpha_{0,i}$ ($i=1,2,...,2n$),
$\varphi$-holomorphic sections $\alpha_{i,i+n}$ ($i=1,2,...,n$), and the rest
are totally real sections. Then, by (1.13), (3.1) and (3.6) it follows
###### Proposition 3.3.
The $\mathcal{F}_{11}$-manifold with Norden metric $(G,\varphi,\xi,\eta,g)$
defined by (3.1), (3.2) and (3.3) has zero totally real and
$\varphi$-holomorphic sectional curvatures, and its $\xi$-sectional curvatures
are given by
$k(\alpha_{0,i})=-\frac{\lambda_{i}^{2}}{g(x_{i},x_{i})},\qquad i=1,2,...,2n.$
By (3.2) and (3.5) we obtain the vector $\Omega$ corresponding to $\omega$ and
its square norm
(3.9)
$\Omega=-\sum_{k=1}^{n}\left(\lambda_{k+n}x_{k}+\lambda_{k}x_{k+n}\right),\qquad\omega(\Omega)=-\sum_{k=1}^{n}\left(\lambda_{k}^{2}-\lambda_{k+n}^{2}\right).$
Then, by (3.9) and Corollary 2.1 we prove
###### Proposition 3.4.
The $\mathcal{F}_{11}$-manifold with Norden metric $(G,\varphi,\xi,\eta,g)$
defined by (3.1), (3.2) and (3.3) is isotropic Kählerian iff the condition
$\sum_{k=1}^{n}\left(\lambda_{k}^{2}-\lambda_{k+n}^{2}\right)=0$ holds.
## References
* [1] G. Ganchev, V. Mihova, K. Gribachev, _Almost contact manifolds with B-metric_ , Math. Balk. N.S. 7(3-4) (1993), 261–276.
* [2] K. Gribachev, M. Manev, D. Mekerov, _A Lie group as a four-dimensional Quasi-Kähler manifold with Norden metric_ , JP Jour. Geom. Topol. 6(1) (2006), 55–68.
* [3] S. Kobayashi, K. Nomizu, _Foundations of differential geometry_ , vol. 1, Wiley, New York, 1963.
* [4] M. Manev, _Contactly conformal transformations of general type of almost contact manifolds with B-metric_ , Math. Balkanica (N.S.) 11(3-4) (1997), 347–357.
* [5] G. Nakova, K. Gribachev, _Submanifolds of some almost contact manifolds with B-metric with codimension two_ , Math. Balkanica (N.S.) 12(1-2) (1998), 93–108.
* [6] S. Sasaki, _Almost-contact manifolds_ , Lecture Notes Math., Inst. Tôhoku Univ. 1-3, 1965, 1967, 1968.
_Marta Teofilova_
---
_University of Plovdiv_
_Faculty of Mathematics and Informatics_
_236 Bulgaria Blvd._
_4003 Plovdiv, Bulgaria_
e-mail: marta@uni-plovdiv.bg
(arXiv:1104.5343, submitted 2011-04-28 by Marta Teofilova, license: Public Domain)
(arXiv:1104.5349)
# Lie groups as four-dimensional special complex manifolds with Norden metric
Marta Teofilova
###### Abstract
An example of a four-dimensional special complex manifold with Norden metric
of constant holomorphic sectional curvature is constructed via a two-
parametric family of solvable Lie algebras. The curvature properties of the
obtained manifold are studied. Necessary and sufficient conditions for the
manifold to be isotropic Kählerian are given.
2000 Mathematics Subject Classification: 53C15, 53C50.
Keywords: almost complex manifold, Norden metric, Lie group, Lie algebra.
## 1 Preliminaries
Let $(M,J,g)$ be a $2n$-dimensional almost complex manifold with Norden
metric, i.e. $J$ is an almost complex structure and $g$ is a metric on $M$
such that:
(1.1) $J^{2}x=-x,\qquad g(Jx,Jy)=-g(x,y),\qquad x,y\in\mathfrak{X}(M).$
The associated metric $\widetilde{g}$ of $g$ on $M$, given by
$\widetilde{g}(x,y)=g(x,Jy)$, is a Norden metric, too. Both metrics are
necessarily neutral, i.e. of signature $(n,n)$.
If $\nabla$ is the Levi-Civita connection of $g$, the tensor field $F$ of type
$(0,3)$ is defined by
(1.2) $F(x,y,z)=g\left((\nabla_{x}J)y,z\right)$
and has the following symmetries
(1.3) $F(x,y,z)=F(x,z,y)=F(x,Jy,Jz).$
Let $\left\\{e_{i}\right\\}$ ($i=1,2,\ldots,2n$) be an arbitrary basis of
$T_{p}M$ at a point $p$ of $M$. The components of the inverse matrix of $g$
are denoted by $g^{ij}$ with respect to the basis $\left\\{e_{i}\right\\}$.
The Lie 1-forms $\theta$ and $\theta^{\ast}$ associated with $F$ are defined
by, respectively
(1.4) $\theta(x)=g^{ij}F(e_{i},e_{j},x),\qquad\theta^{\ast}=\theta\circ J.$
The Nijenhuis tensor field $N$ for $J$ is given by
(1.5) $N(x,y)=[Jx,Jy]-[x,y]-J[Jx,y]-J[x,Jy].$
It is known [4] that the almost complex structure is complex iff it is
integrable, i.e. $N=0$.
A classification of the almost complex manifolds with Norden metric is
introduced in [2], where eight classes of these manifolds are characterized
according to the properties of $F$. The three basic classes:
$\mathcal{W}_{1}$, $\mathcal{W}_{2}$ of _the special complex manifolds with
Norden metric_ and $\mathcal{W}_{3}$ of _the quasi-Kähler manifolds with
Norden metric_ are given as follows:
(1.6)
$\begin{array}[]{l}\mathcal{W}_{1}:F(x,y,z)=\frac{1}{2n}\left[g(x,y)\theta(z)+g(x,z)\theta(y)+g(x,Jy)\theta(Jz)+g(x,Jz)\theta(Jy)\right];\\ \mathcal{W}_{2}:F(x,y,Jz)+F(y,z,Jx)+F(z,x,Jy)=0,\quad\theta=0\ \Leftrightarrow\ N=0,\quad\theta=0;\\ \mathcal{W}_{3}:F(x,y,z)+F(y,z,x)+F(z,x,y)=0.\end{array}$
The class $\mathcal{W}_{0}$ of _the Kähler manifolds with Norden metric_ is
defined by $F=0$ and is contained in each of the other classes.
Let $R$ be the curvature tensor of $\nabla$, i.e.
$R(x,y)z=\nabla_{x}\nabla_{y}z-\nabla_{y}\nabla_{x}z-\nabla_{\left[x,y\right]}z$.
The corresponding (0,4)-type tensor is defined by
$R(x,y,z,u)=g\left(R(x,y)z,u\right)$. The Ricci tensor $\rho$ and the scalar
curvatures $\tau$ and $\tau^{\ast}$ are given by:
(1.7)
$\begin{array}[]{c}\rho(y,z)=g^{ij}R(e_{i},y,z,e_{j}),\qquad\tau=g^{ij}\rho(e_{i},e_{j}),\qquad\tau^{\ast}=g^{ij}\rho(e_{i},Je_{j}).\end{array}$
A tensor of type (0,4) is said to be _curvature-like_ if it has the properties
of $R$. Let $S$ be a symmetric (0,2)-tensor. We consider the following
curvature-like tensors:
(1.8) $\begin{array}[]{l}\psi_{1}(S)(x,y,z,u)=g(y,z)S(x,u)-g(x,z)S(y,u)+g(x,u)S(y,z)-g(y,u)S(x,z),\\ \pi_{1}=\frac{1}{2}\psi_{1}(g),\quad\pi_{2}(x,y,z,u)=g(y,Jz)g(x,Ju)-g(x,Jz)g(y,Ju).\end{array}$
It is known that on a pseudo-Riemannian manifold $M$ ($\dim M=2n\geq 4$) the
conformal invariant Weyl tensor has the form
(1.9)
$W(R)=R-\frac{1}{2(n-1)}\left\{\psi_{1}(\rho)-\frac{\tau}{2n-1}\pi_{1}\right\}.$
Let $\alpha=\left\\{x,y\right\\}$ be a non-degenerate $2$-plane spanned by the
vectors $x,y\in T_{p}M$, $p\in M$. The sectional curvature of $\alpha$ is
given by
(1.10) $k(\alpha;p)=\frac{R(x,y,y,x)}{\pi_{1}(x,y,y,x)}.$
We consider the following basic sectional curvatures in $T_{p}M$ with respect
to the structures $J$ and $g$: _holomorphic sectional curvatures_ if
$J\alpha=\alpha$ and _totally real sectional curvatures_ if
$J\alpha\perp\alpha$ with respect to $g$.
The square norm of $\nabla J$ is defined by $\left\|\nabla
J\right\|^{2}=g^{ij}g^{kl}g\left((\nabla_{e_{i}}J)e_{k},(\nabla_{e_{j}}J)e_{l}\right)$.
Then, by (1.2) we get
(1.11) $\left\|\nabla J\right\|^{2}=g^{ij}g^{kl}g^{pq}F_{ikp}F_{jlq},$
where $F_{ikp}=F(e_{i},e_{k},e_{p})$.
An almost complex manifold with Norden metric satisfying the condition
$\left\|\nabla J\right\|^{2}=0$ is called an _isotropic Kähler manifold with
Norden metric_ [3].
## 2 Almost complex manifolds with Norden metric of constant holomorphic
sectional curvature
In this section we obtain a relation between the vanishing of the holomorphic
sectional curvature and the vanishing of $\left\|\nabla J\right\|^{2}$ on
$\mathcal{W}_{2}$-manifolds and $\mathcal{W}_{3}$-manifolds with Norden
metric.
The following theorem is proved in [1].
Theorem A. ([1]) _An almost complex manifold with Norden metric is of
pointwise constant holomorphic sectional curvature if and only if_
(2.1)
$\begin{array}[]{l}3\{R(x,y,z,u)+R(x,y,Jz,Ju)+R(Jx,Jy,z,u)+R(Jx,Jy,Jz,Ju)\}\\ -R(Jy,Jz,x,u)+R(Jx,Jz,y,u)-R(y,z,Jx,Ju)+R(x,z,Jy,Ju)\\ -R(Jx,z,y,Ju)+R(Jy,z,x,Ju)-R(x,Jz,Jy,u)+R(y,Jz,Jx,u)\\ =8H\{\pi_{1}+\pi_{2}\}\end{array}$
_for some_ $H\in FM$ _and all_ $x,y,z,u\in\mathfrak{X}(M)$. _In this case_
$H(p)$ _is the holomorphic sectional curvature of all holomorphic non-
degenerate 2-planes in_ $T_{p}M$, $p\in M$.
Taking into account (1.7) and (1.8), the total trace of (2.1) implies
(2.2) $H(p)=\frac{1}{4n^{2}}(\tau+\tau^{\ast\ast}),$
where $\tau^{\ast\ast}=g^{il}g^{jk}R(e_{i},e_{j},Je_{k},Je_{l})$.
In [5] we have proved that on a $\mathcal{W}_{2}$-manifold the following holds:
(2.3) $\left\|\nabla J\right\|^{2}=2(\tau+\tau^{\ast\ast}),$
and in [3] it is proved that on a $\mathcal{W}_{3}$-manifold
(2.4) $\left\|\nabla J\right\|^{2}=-2(\tau+\tau^{\ast\ast}).$
Then, by Theorem A, (2.2), (2.3) and (2.4) we obtain
###### Theorem 2.1.
Let $(M,J,g)$ be an almost complex manifold with Norden metric of pointwise
constant holomorphic sectional curvature $H(p)$, $p\in M$. Then
(i)
$\left\|\nabla J\right\|^{2}=8n^{2}H(p)$ if $(M,J,g)\in\mathcal{W}_{2}$_;_
(ii)
$\left\|\nabla J\right\|^{2}=-8n^{2}H(p)$ if $(M,J,g)\in\mathcal{W}_{3}$.
Theorem 2.1 implies
###### Corollary 2.2.
Let $(M,J,g)$ be a $\mathcal{W}_{2}$-manifold or $\mathcal{W}_{3}$-manifold of
pointwise constant holomorphic sectional curvature $H(p)$, $p\in M$. Then,
$(M,J,g)$ is isotropic Kählerian iff $H(p)=0$.
In the next section we construct an example of a $\mathcal{W}_{2}$-manifold of
constant holomorphic sectional curvature.
## 3 Lie groups as four-dimensional $\mathcal{W}_{2}$-manifolds
Let $\mathfrak{g}$ be a real 4-dimensional Lie algebra corresponding to a real connected
Lie group $G$. If $\left\{X_{1},X_{2},X_{3},X_{4}\right\}$ is a basis of
left invariant vector fields on $G$ and $[X_{i},X_{j}]=C_{ij}^{k}X_{k}$
($i,j,k=1,2,3,4$) then the structural constants $C_{ij}^{k}$ satisfy the anti-
commutativity condition $C_{ij}^{k}=-C_{ji}^{k}$ and the Jacobi identity
$C_{ij}^{k}C_{ks}^{l}+C_{js}^{k}C_{ki}^{l}+C_{si}^{k}C_{kj}^{l}=0$.
We define an almost complex structure $J$ and a compatible metric $g$ on $G$
by the conditions, respectively:
(3.1) $JX_{1}=X_{3},\quad JX_{2}=X_{4},\quad JX_{3}=-X_{1},\quad JX_{4}=-X_{2},$
(3.2)
$\begin{array}[]{l}g(X_{1},X_{1})=g(X_{2},X_{2})=-g(X_{3},X_{3})=-g(X_{4},X_{4})=1,\\ g(X_{i},X_{j})=0,\quad i\neq j,\quad i,j=1,2,3,4.\end{array}$
Because of (1.1), (3.1) and (3.2) $g$ is a Norden metric. Thus, $(G,J,g)$ is a
4-dimensional almost complex manifold with Norden metric.
From (3.2) it follows that the Koszul formula for the Levi-Civita connection of $g$ takes
the form
(3.3)
$2g(\nabla_{X_{i}}X_{j},X_{k})=g([X_{i},X_{j}],X_{k})+g([X_{k},X_{i}],X_{j})+g([X_{k},X_{j}],X_{i}).$
Let us denote $F_{ijk}=F(X_{i},X_{j},X_{k})$. Then, by (1.2) and (3.3) we have
(3.4)
$\begin{array}[]{l}2F_{ijk}=g\big([X_{i},JX_{j}]-J[X_{i},X_{j}],X_{k}\big)+g\big(J[X_{k},X_{i}]-[JX_{k},X_{i}],X_{j}\big)\\ \phantom{2F_{ijk}=}+g\big([X_{k},JX_{j}]-[JX_{k},X_{j}],X_{i}\big).\end{array}$
According to (1.6), to construct an example of a $\mathcal{W}_{2}$-manifold we
need to find sufficient conditions for the Nijenhuis tensor $N$ and the Lie
1-form $\theta$ to vanish on $\mathfrak{g}$.
By (1.2), (1.5), (3.2) and (3.4) we compute the essential components
$N_{ij}^{k}$ ($N(X_{i},X_{j})=N_{ij}^{k}X_{k}$) of $N$ and
$\theta_{i}=\theta(X_{i})$ of $\theta$, respectively, as follows:
(3.5)
$\begin{array}[]{l}N_{12}^{1}=C_{34}^{1}-C_{12}^{1}-C_{23}^{3}+C_{14}^{3},\qquad\theta_{1}=2C_{13}^{1}-C_{12}^{4}+C_{14}^{2}+C_{23}^{2}-C_{34}^{4},\\ N_{12}^{2}=C_{34}^{2}-C_{12}^{2}-C_{23}^{4}+C_{14}^{4},\qquad\theta_{2}=2C_{24}^{2}+C_{12}^{3}+C_{14}^{1}+C_{23}^{1}+C_{34}^{3},\\ N_{12}^{3}=C_{34}^{3}-C_{12}^{3}+C_{23}^{1}-C_{14}^{1},\qquad\theta_{3}=2C_{13}^{3}+C_{12}^{2}+C_{14}^{4}+C_{23}^{4}+C_{34}^{2},\\ N_{12}^{4}=C_{34}^{4}-C_{12}^{4}+C_{23}^{2}-C_{14}^{2},\qquad\theta_{4}=2C_{24}^{4}-C_{12}^{1}+C_{14}^{3}+C_{23}^{3}-C_{34}^{1}.\end{array}$
Then, (1.6) and (3.5) imply
###### Theorem 3.1.
Let $(G,J,g)$ be a 4-dimensional almost complex manifold with Norden metric
defined by (3.1) and (3.2). Then, $(G,J,g)$ is a $\mathcal{W}_{2}$-manifold
iff the Lie algebra $\mathfrak{g}$ of $G$ satisfies the conditions _:_
(3.6)
$\begin{array}[]{l}C_{13}^{1}=C_{12}^{4}-C_{23}^{2}=C_{34}^{4}-C_{14}^{2},\qquad C_{13}^{3}=-\left(C_{12}^{2}+C_{23}^{4}\right)=-\left(C_{14}^{4}+C_{34}^{2}\right),\\ C_{24}^{4}=C_{12}^{1}-C_{14}^{3}=C_{34}^{1}-C_{23}^{3},\qquad C_{24}^{2}=-\left(C_{12}^{3}+C_{14}^{1}\right)=-\left(C_{23}^{1}+C_{34}^{3}\right),\end{array}$
where $C_{ij}^{k}$ ($i,j,k=1,2,3,4$) satisfy the Jacobi identity.
One solution to (3.6) and the Jacobi identity is the 2-parametric family of
solvable Lie algebras $\mathfrak{g}$ given by
(3.7) $\mathfrak{g}:\begin{array}[]{l}[X_{1},X_{2}]=\lambda X_{1}-\lambda X_{2},\qquad[X_{2},X_{3}]=\mu X_{1}+\lambda X_{4},\\ [X_{1},X_{3}]=\mu X_{2}+\lambda X_{4},\qquad[X_{2},X_{4}]=\mu X_{1}+\lambda X_{3},\\ [X_{1},X_{4}]=\mu X_{2}+\lambda X_{3},\qquad[X_{3},X_{4}]=-\mu X_{3}+\mu X_{4},\qquad\lambda,\mu\in\mathbb{R}.\end{array}$
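That (3.7) indeed defines a Lie algebra can be double-checked numerically; the sketch below (our own, for illustrative random $\lambda,\mu$) verifies the Jacobi identity on all basis triples, with antisymmetry holding by construction.

```python
import itertools
import random

lam, mu = random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0)

# structure constants: C[i][j] = coefficient vector of [X_{i+1}, X_{j+1}], (3.7)
C = [[[0.0] * 4 for _ in range(4)] for _ in range(4)]
def setbr(i, j, v):
    C[i][j] = [float(c) for c in v]
    C[j][i] = [-float(c) for c in v]
setbr(0, 1, [lam, -lam, 0, 0])        # [X1,X2] =  lam X1 - lam X2
setbr(0, 2, [0, mu, 0, lam])          # [X1,X3] =  mu X2 + lam X4
setbr(0, 3, [0, mu, lam, 0])          # [X1,X4] =  mu X2 + lam X3
setbr(1, 2, [mu, 0, 0, lam])          # [X2,X3] =  mu X1 + lam X4
setbr(1, 3, [mu, 0, lam, 0])          # [X2,X4] =  mu X1 + lam X3
setbr(2, 3, [0, 0, -mu, mu])          # [X3,X4] = -mu X3 + mu X4

def br(u, v):                          # bilinear extension to coefficient vectors
    out = [0.0] * 4
    for i in range(4):
        for j in range(4):
            if u[i] and v[j]:
                for k in range(4):
                    out[k] += u[i] * v[j] * C[i][j][k]
    return out

def basis(i):
    e = [0.0] * 4
    e[i] = 1.0
    return e

# Jacobi identity on every basis triple
for i, j, k in itertools.permutations(range(4), 3):
    x, y, z = basis(i), basis(j), basis(k)
    jac = [a + b + c for a, b, c in zip(br(x, br(y, z)),
                                        br(y, br(z, x)),
                                        br(z, br(x, y)))]
    assert all(abs(c) < 1e-9 for c in jac)
```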
Let us study the curvature properties of the $\mathcal{W}_{2}$-manifold
$(G,J,g)$, where the Lie algebra $\mathfrak{g}$ of $G$ is defined by (3.7).
By (3.2), (3.3) and (3.7) we obtain the components of the Levi-Civita
connection:
(3.8) $\begin{array}[]{ll}\nabla_{X_{1}}X_{2}=\lambda X_{1}+\mu(X_{3}+X_{4}),&\quad\nabla_{X_{2}}X_{1}=\lambda X_{2}+\mu(X_{3}+X_{4}),\\ \nabla_{X_{3}}X_{4}=-\lambda(X_{1}+X_{2})-\mu X_{3},&\quad\nabla_{X_{4}}X_{3}=-\lambda(X_{1}+X_{2})-\mu X_{4},\\ \nabla_{X_{1}}X_{1}=-\lambda X_{2},\quad\nabla_{X_{2}}X_{2}=-\lambda X_{1},&\quad\nabla_{X_{3}}X_{3}=\mu X_{4},\quad\nabla_{X_{4}}X_{4}=\mu X_{3},\\ \nabla_{X_{1}}X_{3}=\nabla_{X_{1}}X_{4}=\mu X_{2},&\quad\nabla_{X_{2}}X_{3}=\nabla_{X_{2}}X_{4}=\mu X_{1},\\ \nabla_{X_{3}}X_{1}=\nabla_{X_{3}}X_{2}=-\lambda X_{4},&\quad\nabla_{X_{4}}X_{1}=\nabla_{X_{4}}X_{2}=-\lambda X_{3}.\end{array}$
Taking into account (3.4) and (3.7) we compute the essential non-zero
components of $F$:
(3.9)
$\begin{array}[]{l}F_{114}=-F_{214}=F_{312}=\frac{1}{2}F_{322}=\frac{1}{2}F_{411}=F_{412}=-\lambda,\\\
F_{112}=\frac{1}{2}F_{122}=\frac{1}{2}F_{211}=F_{212}=-F_{314}=F_{414}=\mu.\end{array}$
The other non-zero components of $F$ are obtained from (1.3).
By (1.11) and (3.9) for the square norm of $\nabla J$ we get
(3.10) $\left\|\nabla J\right\|^{2}=-32(\lambda^{2}-\mu^{2}).$
Further, we obtain the essential non-zero components
$R_{ijks}=R(X_{i},X_{j},X_{k},X_{s})$ of the curvature tensor $R$ as follows:
(3.11)
$\begin{array}[]{l}-\frac{1}{2}R_{1221}=-R_{1341}=-R_{2342}=R_{3123}=\frac{1}{2}R_{3443}=R_{4124}=\lambda^{2}+\mu^{2},\\\
R_{1331}=R_{1441}=R_{2332}=R_{2442}=-R_{1324}=-R_{1423}=\lambda^{2}-\mu^{2},\\\
R_{1231}=R_{1241}=R_{2132}=R_{2142}=-R_{3143}=-R_{3243}=-R_{4134}=-R_{4234}=2\lambda\mu.\end{array}$
Then, by (1.7) and (3.11) we get the components $\rho_{ij}=\rho(X_{i},X_{j})$
of the Ricci tensor and the values of the scalar curvatures $\tau$ and
$\tau^{\ast}$:
(3.12)
$\begin{array}[]{l}\rho_{11}=\rho_{22}=-4\lambda^{2},\qquad\rho_{33}=\rho_{44}=-4\mu^{2},\\\
\rho_{12}=\rho_{34}=-2(\lambda^{2}+\mu^{2}),\qquad\rho_{13}=\rho_{14}=\rho_{23}=\rho_{24}=4\lambda\mu,\\\
\tau=-8(\lambda^{2}-\mu^{2}),\qquad\tau^{\ast}=16\lambda\mu.\end{array}$
Let us consider the characteristic 2-planes $\alpha_{ij}$ spanned by the basis vectors $\{X_{i},X_{j}\}$: the totally real 2-planes $\alpha_{12}$, $\alpha_{14}$, $\alpha_{23}$, $\alpha_{34}$, and the holomorphic 2-planes $\alpha_{13}$, $\alpha_{24}$. By (1.10) and (3.11), for the sectional curvatures of the holomorphic 2-planes we obtain
(3.13) $k(\alpha_{13})=k(\alpha_{24})=-(\lambda^{2}-\mu^{2}).$
Then the following theorem is valid.
###### Theorem 3.2.
The manifold $(G,J,g)$ is of constant holomorphic sectional curvature.
Using (1.9), (3.11) and (3.12) for the essential non-zero components
$W_{ijks}=W(X_{i},X_{j},X_{k},X_{s})$ of the Weyl tensor $W$ we get:
(3.14)
$\begin{array}[]{l}\frac{1}{2}W_{1221}=W_{1331}=W_{1441}=W_{2332}=W_{2442}=\frac{1}{2}W_{3443}\\\
=-\frac{1}{3}W_{1324}=-\frac{1}{3}W_{1423}=\frac{1}{3}(\lambda^{2}-\mu^{2}).\end{array}$
Finally, by (1.9), (3.10), (3.12), (3.13) and (3.14) we establish
###### Theorem 3.3.
The following conditions are equivalent _:_
(i)
$(G,J,g)$ is isotropic Kählerian _;_
(ii)
$|\lambda|=|\mu|$_;_
(iii)
$\tau=0$_;_
(iv)
$(G,J,g)$ is of zero holomorphic sectional curvature _;_
(v)
the Weyl tensor vanishes_;_
(vi)
$R=\frac{1}{2}\psi_{1}(\rho)$.
## References
* [1] G. Djelepov, K. Gribachev, _Generalized $B$-manifolds of constant holomorphic sectional curvature_, Plovdiv Univ. Sci. Works – Math. 23(1) (1985), 125–131.
* [2] G. Ganchev, A. Borisov, _Note on the almost complex manifolds with a Norden metric_ , Compt. Rend. Acad. Bulg. Sci. 39(5) (1986), 31–34.
* [3] D. Mekerov, M. Manev, _On the geometry of Quasi-Kähler manifolds with Norden metric_ , Nihonkai Math. J. 16(2) (2005), 89–93.
* [4] A. Newlander, L. Nirenberg, _Complex analytic coordinates in almost complex manifolds_ , Ann. Math. 65 (1957), 391–404.
* [5] M. Teofilova, _Lie groups as four-dimensional conformal Kähler manifolds with Norden metric_ , In:Topics of Contemporary Differential Geometry, Complex Analysis and Mathematical Physics, eds. S. Dimiev and K. Sekigawa, World Sci. Publ., Hackensack, NJ (2007), 319–326.
Faculty of Mathematics and Informatics,
University of Plovdiv,
236 Bulgaria Blvd., Plovdiv 4003, Bulgaria.
e-mail: marta@uni-plovdiv.bg
# Application of the gradient method to Hartree-Fock-Bogoliubov theory
L.M. Robledo Departamento de Fisica Teorica, Universidad Autonoma de Madrid, E-28049 Madrid, Spain G.F. Bertsch Institute for Nuclear Theory and Dept. of Physics, University of Washington, Seattle, Washington
###### Abstract
A computer code is presented for solving the equations of Hartree-Fock-
Bogoliubov (HFB) theory by the gradient method, motivated by the need for
efficient and robust codes to calculate the configurations required by
extensions of HFB such as the generator coordinate method. The code is
organized with a separation between the parts that are specific to the details
of the Hamiltonian and the parts that are generic to the gradient method. This
permits total flexibility in choosing the symmetries to be imposed on the HFB
solutions. The code solves for both even and odd particle number ground
states, the choice determined by the input data stream. Application is made to
the nuclei in the $sd$-shell using the USDB shell-model Hamiltonian.
## I Introduction
An important goal of nuclear structure theory is to develop the computational
tools for a systematic description of nuclei across the chart of the nuclides.
There is hardly any alternative to self-consistent mean-field (SCMF) for the
starting point of a global theory, but the SCMF has to be extended by the
generator coordinate method (GCM) or other means to calculate spectroscopic
observables. There is a need for computational tools to carry out the SCMF
efficiently in the presence of the multiple constraints to be used for the
GCM. Besides particle number, quantities that may be constrained include
moments of the density, angular momentum, and in the Hartree-Fock-Bogoliubov
(HFB) theory, characteristics of the anomalous densities.
The gradient method described by Ring and Schuck (RS , Section 7.3.3) is very
suitable for this purpose: it is robust and easily deals with multiple
constraints. However, the actual computational aspects of the method as
applied to HFB have not been well documented in the literature. This is in
contrast to methods based on diagonalizing the HFB matrix eigenvalue equation.
Here there are several codes available in the literature, e.g. po97 ; bo05 ;
be05 ; do05 ; st05 . Other, less used, methods to solve the HFB equation with
multiple constraints can be found in the literature; for example the method
described in Ref. eg95 is close in spirit to the one presented here. We note
also that the computational issues for using the gradient method in nuclear
Hartree-Fock theory have been discussed in detail in Ref. re82 . That paper
also contains references to related techniques such as the imaginary time step
method.
Here we will describe an implementation of the gradient algorithm for HFB
following the iterative method used by Robledo and collaborators wa02 . The
code presented here, hfb_shell, is available as supplementary material to this
article (see Appendix). The code has separated out the parts that are basic to
the gradient method and the parts that are specific to the details of the
Hamiltonian. As an example, the code here contains a module for application to
the $sd$-shell with a shell-model Hamiltonian containing one-body and two-body
terms. There is a long-term motivation for this application as well. The
$sd$-shell could be a good testing ground for the extensions of SCMF such as
the GCM and approximations derived from GCM. Since one has a Hamiltonian for
the $sd$-shell that describes the structure very well, one could test the
approximations to introduce correlations, such as projection, the random-phase
approximation, etc and compare them with the exact results from the Shell
Model. Preliminary results along this line are discussed in ro08 ; ma11 . As a
first step in this program, one needs a robust SCMF code that treats shell-
model Hamiltonians. Extensions to other shell model configuration spaces are
straightforward and only limited by the availability of computational
resources.
The code described here is more general than earlier published codes in that
it can treat even or odd systems equally well. The formalism for the extension
to odd systems and to a statistical density matrix will be presented elsewhere
Robledo-Bertsch . We also mention that the present code (with a different
Hamiltonian module) has already been applied to investigate neutron-proton
pairing in heavy nuclei ge11 .
## II Summary of the gradient method
The fundamental numerical problem to be addressed is the minimization of a
one- plus two-body Hamiltonian under the set of Bogoliubov transformations in
a finite-dimensional Fock space. We remind the reader of the most essential
equations, using the notation of Ring and Schuck RS . The basic variables are
the $U$ and $V$ matrices defining the Bogoliubov transformation. The main
physical variables are the one-body matrices for the density $\rho$ and the
anomalous density $\kappa$, given by
$\rho=V^{*}V^{t};\,\,\,\,\,\,\kappa=V^{*}U^{t}.$ (1)
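For a concrete feel for Eq. (1), the densities can be evaluated for a toy real canonical configuration (a single two-dimensional block with illustrative occupation amplitudes, not taken from the paper's code):

```python
import numpy as np

# toy canonical configuration: one pair of time-reversed orbitals (illustrative values)
u, v = 0.8, 0.6                             # u^2 + v^2 = 1
U = u * np.eye(2)
V = v * np.array([[0., 1.], [-1., 0.]])     # antisymmetric pairing structure

# Bogoliubov unitarity conditions for a valid U, V set
assert np.allclose(U.T @ U + V.T @ V, np.eye(2))
assert np.allclose(U.T @ V + V.T @ U, np.zeros((2, 2)))

# Eq. (1): rho = V* V^t, kappa = V* U^t  (all matrices real here)
rho = V @ V.T
kappa = V @ U.T

assert np.allclose(rho, v**2 * np.eye(2))   # each orbital occupied with probability v^2
assert np.allclose(kappa, -kappa.T)         # anomalous density is skew-symmetric
```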
The Hamiltonian may be defined in the Fock-space representation as
$\hat{H}=\sum_{12}\varepsilon_{12}c^{\dagger}_{1}c_{2}+{1\over
4}\sum_{1234}v_{1234}c^{\dagger}_{1}c^{\dagger}_{2}c_{4}c_{3}.$ (2)
The expectation value of the Hamiltonian under a Bogoliubov transformation of
the vacuum is given by
$H^{00}\equiv\langle\hat{H}\rangle={\rm Tr}(\varepsilon\rho+\hbox{${1\over
2}$}\Gamma\rho-\hbox{${1\over 2}$}\Delta\kappa^{*}),$ (3)
in terms of the fields for the ordinary potential $\Gamma$ and the pairing
potential $\Delta$. These are defined as
$\Gamma_{12}=\sum_{34}v_{1423}\rho_{34};\,\,\,\,\Delta_{12}=\hbox{${1\over
2}$}\sum_{34}v_{1234}\kappa_{34}.$ (4)
The gradient method makes extensive use of the quasiparticle representation
for operators related to the ordinary and anomalous densities. For a single-
particle operator $\hat{F}=\sum_{ij}F_{ij}c^{\dagger}_{i}c_{j}$ we write
$\sum_{ij}F_{ij}c^{\dagger}_{i}c_{j}\equiv c^{\dagger}Fc=F^{00}+\beta^{\dagger}F^{11}\beta+\hbox{${1\over 2}$}\left(\beta F^{02}\beta+\beta^{\dagger}F^{20}\beta^{\dagger}\right).$ (5)
where $\beta,\beta^{\dagger}$ are quasiparticle annihilation and creation
operators. The gradients will be constructed from the skew-symmetric matrix
$F^{20}$, which for a normal one-body operator is given by
$F^{20}=U^{\dagger}FV^{*}-V^{\dagger}F^{t}U^{*}.$ (6)
The corresponding representation for an operator $\hat{G}$ of the anomalous
density is
$\hbox{${1\over
2}$}(c^{\dagger}Gc^{\dagger}-cG^{*}c)=G^{00}+\beta^{\dagger}G^{11}\beta+\hbox{${1\over
2}$}(\beta^{\dagger}G^{20}\beta^{\dagger}+\beta G^{02}\beta)$ (7)
The skew-symmetric matrix $G^{20}$ is given by
$G^{20}=U^{\dagger}GU^{*}-V^{\dagger}G^{*}V^{*}.$ (8)
Two operators that are particularly useful to characterize the HFB states are
the axial quadrupole operator $Q_{Q}$ and the number fluctuation operator
$\Delta N^{2}$. We define $Q_{Q}$ as
$Q_{Q}=2z^{2}-x^{2}-y^{2};$ (9)
its expectation value distinguishes spherical and deformed minima. The number
fluctuation is an indicator of the strength of pairing condensates and is zero
in the absence of a condensate. It depends on the two-body operator
$\hat{N}^{2}$, but like the Hamiltonian can be expressed in terms of one-body
densities. We define it as
$\Delta
N^{2}\equiv\langle\hat{N}^{2}\rangle-\langle\hat{N}\rangle^{2}=\frac{1}{2}{\rm
Tr}\left(N^{20}N^{02}\right)=2{\rm Tr}\left(\rho(1-\rho)\right)=-2{\rm
Tr}\left(\kappa^{*}\kappa\right).$ (10)
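The equivalence of the one-body expressions in Eq. (10) is easy to check numerically for a simple real configuration (the same hypothetical canonical $u,v$ form used for illustration above; values are ours):

```python
import numpy as np

u, v = 0.8, 0.6
U = u * np.eye(2)
V = v * np.array([[0., 1.], [-1., 0.]])
rho, kappa = V @ V.T, V @ U.T               # Eq. (1), real matrices

# Eq. (10): both density expressions for the number fluctuation agree
dN2_rho   = 2.0 * np.trace(rho @ (np.eye(2) - rho))
dN2_kappa = -2.0 * np.trace(kappa.conj() @ kappa)
assert np.isclose(dN2_rho, dN2_kappa)
assert np.isclose(dN2_rho, 4 * u**2 * v**2)  # analytic value for this configuration
```

The fluctuation vanishes, as stated in the text, when $u$ or $v$ goes to zero, i.e. in the absence of a condensate.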
The full expansion of the Hamiltonian in the quasiparticle basis is given in
Eqs. (E.20-E.25) of RS . Here we will mainly need $H^{20}$, given by
$H^{20}=h^{20}+\Delta^{20}=U^{\dagger}hV^{*}-V^{\dagger}h^{t}U^{*}-V^{\dagger}\Delta^{*}V^{*}+U^{\dagger}\Delta
U^{*}.$ (11)
where $h=\epsilon+\Gamma$. Starting from any HFB configuration $U,V$ one can
construct a new configuration $U^{\prime},V^{\prime}$ by the generalized
Thouless transformation. The transformation is defined by a skew-symmetric
matrix $Z$ having the same dimensions as $U,V$. One often assumes that the
transformation preserves one or more symmetries such as parity or axial
rotational symmetry. Then the $U,V$ matrices are block diagonal and $Z$ has
the same block structure. Otherwise the elements of $Z$ are arbitrary and can
be real or complex. The transformation is given by
$U^{\prime}=(U+V^{*}Z^{*})(1-ZZ^{*})^{-1/2}=U+V^{*}Z^{*}+{\cal O}(Z^{2})$ (12)
$V^{\prime}=(V+U^{*}Z^{*})(1-ZZ^{*})^{-1/2}=V+U^{*}Z^{*}+{\cal O}(Z^{2}).$
The last factor, $(1-ZZ^{*})^{-1/2}$, ensures that the transformed set
$U^{\prime},V^{\prime}$ satisfies the required unitarity conditions for the
Bogoliubov transformation. We now ask how the expectation value of some
bilinear operator $\hat{Q}$ changes when the Thouless transformation is
applied. The result is very simple, to linear order in $Z$:
$Q_{new}^{00}=Q^{00}-\frac{1}{2}({\rm Tr}(Q^{20}Z^{*})+\textrm{h.c.})+{\cal
O}(Z^{2}).$ (13)
The same formula applies to the Hamiltonian as well,
$H_{new}^{00}=H^{00}-\frac{1}{2}({\rm Tr}(H^{20}Z^{*})+\textrm{h.c.})+{\cal
O}(Z^{2}).$ (14)
From these formulas it is apparent that the derivative of the expectation
value with respect to the variables $z_{ij}^{*}$ in $Z^{*}$ is (here the derivative is taken with respect to the entries of the skew-symmetric $Z^{*}$, i.e. $z_{ji}^{*}=-z_{ij}^{*}$, and $z_{ij}$, $z_{ij}^{*}$ are treated as independent variables)
${\partial\over\partial z_{ij}^{*}}Q^{00}=Q^{20}_{ij}.$ (15)
With a formula for the gradient of the quantity to be minimized, we have many
numerical tools at our disposal to carry out the minimization.
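The Thouless update of Eq. (12), including the restoring factor $(1-ZZ^{*})^{-1/2}$, can be sketched as follows for real matrices (a two-dimensional toy block with illustrative $u$, $v$, $Z$ values; not the paper's actual code):

```python
import numpy as np

def thouless(U, V, Z):
    """Eq. (12) for real U, V and real skew-symmetric Z."""
    S = np.eye(len(Z)) - Z @ Z          # 1 - Z Z* (Z real); symmetric positive definite
    w, Q = np.linalg.eigh(S)
    S_inv_half = Q @ np.diag(w**-0.5) @ Q.T   # S^{-1/2} via eigendecomposition
    return (U + V @ Z) @ S_inv_half, (V + U @ Z) @ S_inv_half

u, v = 0.8, 0.6
U = u * np.eye(2)
V = v * np.array([[0., 1.], [-1., 0.]])
Z = np.array([[0., 0.3], [-0.3, 0.]])   # arbitrary skew-symmetric parameter matrix

U2, V2 = thouless(U, V, Z)
# the transformed set still satisfies the Bogoliubov unitarity conditions exactly
assert np.allclose(U2.T @ U2 + V2.T @ V2, np.eye(2))
assert np.allclose(U2.T @ V2 + V2.T @ U2, 0.0)
```

Dropping the $S^{-1/2}$ factor reproduces the first-order form $U+V^{*}Z^{*}$, but then the unitarity conditions hold only to ${\cal O}(Z^{2})$.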
It is quite straightforward to introduce constraining fields in the
minimization process. As seen in Eq. (13) the transformation $Z$ will not
change the expectation value of $\hat{Q}$ to linear order provided ${\rm
Tr}{(Q^{20}Z^{*})}+\textrm{h.c.}=0$. Thus, one can change the configuration
without affecting the constraint (to linear order) by projecting $Z$ to
$Z_{c}$ as $Z_{c}=Z-\lambda Q^{20}$ with $\lambda=\frac{1}{2}({\rm
Tr}(Q^{20}Z^{*})+\textrm{h.c.})/{\rm Tr}(Q^{20}Q^{20\,*})$. With multiple
constraints, the projection has the form
$Z_{c}=Z-\sum_{\alpha}\lambda_{\alpha}Q^{20}_{\alpha}.$ (16)
The parameters $\lambda_{\alpha}$ are determined by solving the system of
linear equations,
$\sum_{\alpha}M_{\alpha\beta}\lambda_{\alpha}=\frac{1}{2}({\rm
Tr}(Q^{20}_{\beta}Z^{*})+\textrm{h.c.})$ (17)
where $M_{\alpha\beta}={\rm Tr}(Q^{20}_{\alpha}Q^{20\,*}_{\beta})$. Since we
want to minimize the energy, an obvious choice for the unprojected $Z$ is the
gradient of the Hamiltonian $H^{20}$. In this case the constraining parameters
$\lambda_{\alpha}$ are identical to the Lagrange multipliers in the usual HFB
equations. We will use the notation $H_{c}$ for the constrained Hamiltonian
$H_{c}=H-\sum_{\alpha}\lambda_{\alpha}Q_{\alpha}.$ (18)
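Eqs. (16)-(17) amount to a small linear solve. A minimal real-matrix sketch (random skew-symmetric matrices standing in for the actual $Q^{20}_{\alpha}$ and the gradient; dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_skew(n):
    A = rng.standard_normal((n, n))
    return A - A.T

n, n_c = 6, 2
Z = random_skew(n)                          # unprojected direction, e.g. H_c^20
Qs = [random_skew(n) for _ in range(n_c)]   # gradients Q^20_alpha of the constraints

# Eq. (17): M_ab = Tr(Q_a Q_b*); for real matrices the rhs reduces to Tr(Q_b Z)
M = np.array([[np.trace(Qa @ Qb) for Qa in Qs] for Qb in Qs])
rhs = np.array([np.trace(Qb @ Z) for Qb in Qs])
lam = np.linalg.solve(M, rhs)

# Eq. (16): projected direction leaves each constraint unchanged to linear order
Zc = Z - sum(l * Q for l, Q in zip(lam, Qs))
for Qb in Qs:
    assert np.isclose(np.trace(Qb @ Zc), 0.0)
```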
### II.1 Numerical aspects of the minimization
The most obvious way to apply the gradient method is to take the direction for
the change from Eq. (16,17), and take the length of the step as an adjustable
numerical parameter. We will call this the fixed gradient (FG) method. It is
implemented in the program as
$Z_{\eta}=\eta H_{c}^{20}.$ (19)
Typically the starting $U,V$ configuration will not satisfy the constraints,
and the $Z$ transformations must also bring the expectation values of the
operators to their target values $q_{\alpha}$. The error vector $\delta
q_{\alpha}$ to be reduced to zero is given by
$\delta q_{\alpha}=Q^{00}_{\alpha}-q_{\alpha}.$ (20)
We apply Eq. (13) to first order to obtain the desired transformation
$Z_{\delta q}$,
$Z_{\delta q}=-\sum_{\alpha\beta}M^{-1}_{\alpha\beta}\delta
q_{\alpha}Q^{20}_{\beta}.$ (21)
With these elements in hand, a new configuration is computed using the
transformation
$Z=Z_{c}+Z_{\delta q}.$ (22)
This process is continued until some criterion for convergence is achieved. We
shall measure the convergence by the norm of the gradient $|H_{c}^{20}|$. This
is calculated as
$|H_{c}^{20}|=\left({\rm Tr}[H_{c}^{20}(H_{c}^{20})^{\dagger}]\right)^{1/2}.$
(23)
An example using this method as given is shown in Fig. 1.
Figure 1: Number of iterations required for convergence using Eq. (19) and fixed $\eta$. At the point $\eta=0.12$ MeV-1 and beyond, the iteration process is unstable. The converged solutions and their energies are the same for all values of $\eta$ shown in the plot. The system is 24Mg with three constraints, $N$, $Z$, and $\langle Q_{Q}\rangle=10$ $\hbar/m\omega_{0}$. The convergence criterion is $|H^{20}_{c}|<1.0\times 10^{-2}$ MeV. See Section VII.2 for further details.
The parameter $\eta$ is fixed to some value and the iterations are carried out
until convergence or some upper limit is reached. The required number of
iterations varies roughly inversely with $\eta$, up to some point where the
process is unable to find a minimum in a reasonable number of iterations.
There are a number of ways to speed up the iteration process. If the
constraints are satisfied, the parameter $\eta$ can be increased considerably.
Fig. 2 shows the change in $H^{00}_{c}$ from one iteration cycle as a function
of $\eta$ using $Z_{c}$ to update.
Figure 2: Single-step energy change as a function of $\eta$ in Eq. (19). The
configuration that was updated is the 10th iteration step of the system in
Fig. 1.
For small values of $\eta$, the change in constrained energy is given by the Taylor expansion Eq. (14), $\Delta H^{00}_{c}\approx-\eta\,{\rm Tr}\left(H_{c}^{20\,*}H_{c}^{20}\right)$. This function is shown as the
straight line in the Figure. The actual change is shown by the black circles.
One sees that $\eta$ could be doubled or tripled from the maximum value
permitted in Fig. 1. However, the constraints and other aspects of the new
$U,V$ become degraded so that such steps are not permissible for many
iterations re82 . Still, one can take advantage of the possible improvement by
choosing $\eta$ at each iteration taking account of the relevant information
from the previous iteration. This can be extracted from the ratio
$r=\frac{\Delta H^{00}_{c}}{-\eta\,{\rm Tr}\left(H_{c}^{20\,*}H_{c}^{20}\right)}$ (24)
which is close to one for too-small $\eta$ values and close to $\frac{1}{2}$
at the value corresponding to the steepest-descent minimum. We call such
methods variable gradient. We note that updates with $Z_{\delta q}$ alone are
relatively quick because there is no need to evaluate matrix elements of the
Hamiltonian. These considerations are implemented in the code of Ref. wa02 by
interspersing cycles of iteration by $Z_{\delta q}$ alone among the cycles
with updates by Eq. (22).
Another way to improve the efficiency of the iteration process is to divide
the elements of $H^{20}_{c}$ by preconditioning factors $p_{ij}$,
$(Z_{c})_{ij}=\eta{(H_{c}^{20})_{ij}\over p_{ij}}.$ (25)
The choice of the preconditioner is motivated by Newton’s method to find zeros
of a function (here $H_{c}^{20}$) based on knowledge of its derivative. This
could be accessible from the second-order term in Eq. (14), but unfortunately
it cannot be easily computed as it involves the HFB stability matrix. However
a reasonable approximation to it can be obtained from $H_{c}^{11}$, the one-
quasiparticle Hamiltonian that, when in diagonal form, is the dominant
component of the diagonal of the stability matrix. One first transforms $U,V$
to a basis that diagonalizes $H_{c}^{11}$. Call the eigenvalues of the matrix
$E_{i}$ and the transformation to diagonalize it $C$. The $U,V$ are
transformed to $U^{\prime},V^{\prime}$ in the diagonal quasiparticle basis by
$U^{\prime}=UC;\,\,\,\,\,V^{\prime}=VC$ (26)
In the new basis the preconditioner is given by
$p_{ij}=\max(E_{i}+E_{j},E_{min})$ (27)
where $E_{min}$ is a numerical parameter of the order of 1-2 MeV. The main
effect of the preconditioner is to damp away those components of the gradient
with high curvatures (i.e. second derivatives) which correspond to two-
quasiparticle excitations with large excitation energies. This is very
important for Hamiltonians that have a large range of single-particle
energies, such as the ones derived from commonly used nuclear energy density
functionals such as Skyrme and Gogny.
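The preconditioning of Eqs. (25)-(27) is a purely element-wise operation once the quasiparticle energies are known. A sketch with illustrative energies (not taken from any calculation in the paper):

```python
import numpy as np

# quasiparticle energies E_i after diagonalizing H_c^11 (illustrative values, MeV)
E = np.array([0.8, 2.5, 6.0, 14.0])
E_min = 1.5                                    # numerical floor, Eq. (27)

P = np.maximum(E[:, None] + E[None, :], E_min) # p_ij = max(E_i + E_j, E_min)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H20 = A - A.T                                  # skew-symmetric gradient stand-in
eta = 0.7
Zc = eta * H20 / P                             # Eq. (25): element-wise division

# P is symmetric, so the preconditioned step stays skew-symmetric, and the
# components coupling to high-energy two-quasiparticle excitations are damped most
assert np.allclose(Zc, -Zc.T)
```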
In Table I we show the number of iterations required to reach convergence for
a case calculated in Table II, to be described below.
Method | $\eta$ | $\eta_{min}$ | $\eta_{max}$ | $I_{conv}$
---|---|---|---|---
fixed gradient | 0.10 MeV-1 | | | 140
variable gradient | | 0.08 MeV-1 | 0.3 MeV-1 | 65
fixed pr. | 0.7 | | | 72
variable pr. | | 0.7 | 2.0 | 34
Table 1: Number of iterations to convergence $I_{conv}$ with various
treatments of the update. Eq. (19) with fixed and variable gradients is used
for the top two lines and the preconditioned gradients Eq. (25) are used for
the lower two lines. The system is 21Ne as calculated in the first entry of Table II.
We see that there is a gain of more than a factor of 3 between the naive
steepest descent and the preconditioned gradient with a variable $\eta$.
Similar ideas have been used in a HF context in re82 ; um85 with similar
speedups.
## III Odd-A nuclei
As discussed by Ring and Schuck RS , each $U,V$ set can be characterized by its
number parity, either even or odd. This means that when the wave function is
constructed and states of definite particle number are projected out, the
nonzero components will have either all even or all odd particle number.
Another important fact is that the generalized Thouless transformation does
not change the number parity of the Bogoliubov transformation. Thus, if we
start from a $U,V$ set of odd number parity, the final converged configuration
will only have components of odd nucleon number.
In contrast, in the matrix-diagonalization method of solving the HFB equations, the higher energy of the odd-A configurations requires some modification to the Hamiltonian or to the iteration process. A common solution is to add additional constraining fields so that the odd-A system has lower energy ba73 ; be09 . Typically the external field to be added breaks time reversal symmetry
in some way. But then one can no longer assert that a true minimum has been
found, because the extra constraints can affect the configuration. The
gradient method does not have this shortcoming. If the space of odd-number
parity Bogoliubov transformations is adequately sampled, it will find the
global minimum of the odd-A configurations. Moreover, with the gradient method
one does not need to modify the computer code to treat odd-$A$ systems. Only
the initial $U,V$ set is different for the two cases.
We note that $H^{11}_{c}$ has negative quasiparticle eigenenergies in the odd
number-parity space, assuming that the true minimum of the HFB functional is
an even number-parity configuration.
## IV Other special cases
The variational minimum might not be directly reachable by the generalized
Thouless transformation, but it is always a limit of a succession of
transformations. This is the case if the condensate vanishes at the minimum
while the starting configuration has a finite condensate. This does not cause
any practical difficulties except for reducing the rate of convergence. Still,
in such cases it is more direct to start with a $U,V$ configuration of the
pure Hartree-Fock form. It is not possible to use the gradient method in the
other direction, to go to a minimum having a finite condensate from a starting
$U,V$ of Hartree-Fock form, as explained below.
## V Imposed symmetries
The $U,V$ matrices have a dimension of the size of the Fock space of nucleon
orbitals and in principle can be dense matrices. However, one often imposes
symmetries on the wave function by assuming that the $U,V$ have a block
structure with all elements zero outside the blocks. For example, most codes
assume separate blocks for neutrons and protons. This is well-justified when
there is a significant difference in neutron and proton numbers but in general
it is better to allow them to mix. Other quantum numbers that are commonly
imposed on the orbital wave functions are parity and axial symmetry. There are
only a few exceptional nuclei that have HFB ground states breaking these
symmetries. For parity, examples are the Ra and Th nuclei. Concerning
axial symmetry, a global study of even-even nuclei with the Gogny functional
de10 found only three cases of nonaxial HFB minima among 1712 nuclei.
The number of orthogonal minima that can be easily calculated in the gradient
method depends on the assumed block structure. In the even number-parity space
there is just one global minimum. But in the odd number-parity space the
number parity of each block is conserved in the iteration process, so there
will be one state for each block. For example, states of different $K$-quantum
number may be calculated by imposing a block structure that enforces axial symmetry. Thus for odd-A nuclei, the quasiparticle can be in any of the
$K$-blocks, giving a spectrum of states with $K$ specified by the block.
A more subtle form of imposed symmetry is that contained in the starting $U,V$ configuration. The energy $H^{00}$ is essentially a quadratic
function of symmetry-breaking densities because the products of densities in
the functional must respect the symmetries of the Hamiltonian. If these
components are zero in the initial configuration, the energy is stationary at
that point and there is no gradient to generate nonzero field values. The
typical cases are quadrupole deformation in the ordinary density and any form
of anomalous densities. Fortunately, it is very easy to avoid unwanted
symmetries in the starting $U,V$ as discussed below.
## VI The code hfb_shell
The code hfb_shell presented in this paper is described in more detail in the
Appendix. The main point we want to emphasize about the code is that it is
organized in modules that separate out the functions that are independent of
the Hamiltonian from those that are specific to it. Also, the block structure
is specified only by the code input, and can easily be changed. The examples
we show are for the $sd$-shell using the USDB Hamiltonian br06 . Since that
Hamiltonian is specified by the fitted numerical values of the 3 single-
particle energies and the 63 $JT$-coupled two-particle interaction energies,
it does not have any symmetries beyond those demanded by the physics. In
particular, the HFB fields obtained with it should provide a realistic
description of aspects such as the time-odd fields, that are difficult to
assess with the commonly used energy functionals such as those in the Skyrme
family.
### VI.1 Application to the $sd$-shell
The $sd$ shell-model space has a dimension of 24 and the principal matrices
$U,V,Z,...$ have the same dimension. In the application presented here, we
assume axial symmetry which splits the matrices in blocks of dimension 12, 8
and 4 for $m$-quantum numbers $\pm\frac{1}{2}$, $\pm\frac{3}{2}$, and
$\pm\frac{5}{2}$ respectively. Neutron and proton orbitals are in the same
blocks, so the basis is sufficiently general to exhibit neutron-proton
pairing, if that is energetically favorable. We also assume that the matrices
are real.
We often start with a $U,V$ configuration of canonical form, namely $U$
diagonal, $U_{ij}=u\delta_{ij}$. The nonzero entries of the $V$ are all equal
to $\pm v=\pm\sqrt{1-u^{2}}$, and are in positions corresponding to pairing in
the neutron-neutron channel and the proton-proton channel. We arbitrarily take
$u=0.8$ and $v=0.6$ for the starting configuration $U_{0},V_{0}$. This may be
modified in a number of ways before it is used as a starting configuration in
the gradient minimization. When calculating a nucleus for which $N$ or $Z$ is
zero or 12, it is more efficient to use $U,V$ matrices that have those
orbitals empty or completely filled in the starting configuration. This is
carried out by changing $u,v$ to zero or one for the appropriate orbitals. The
particle number of that species is then fixed and is not constrained in the
gradient search.
For odd-number parity configurations, the $U,V$ is changed in the usual way by
interchanging a column in the $U$ matrix with the corresponding column in $V$.
The space that will be searched in the gradient method then depends on the
block where the interchange was made. In principle it does not depend on which
column of the block was changed. However, there is some subtlety in making use of this independence, which will be discussed below.
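The column interchange producing an odd number-parity starting configuration can be sketched as follows (a two-dimensional toy block, not the actual 24-dimensional $sd$-shell matrices):

```python
import numpy as np

u, v = 0.8, 0.6
U = u * np.eye(2)
V = v * np.array([[0., 1.], [-1., 0.]])

# interchange column 0 of U with the corresponding column of V,
# switching the configuration to odd number parity
Uo, Vo = U.copy(), V.copy()
Uo[:, 0], Vo[:, 0] = V[:, 0].copy(), U[:, 0].copy()

# the Bogoliubov unitarity conditions still hold after the interchange
assert np.allclose(Uo.T @ Uo + Vo.T @ Vo, np.eye(2))
assert np.allclose(Uo.T @ Vo + Vo.T @ Uo, 0.0)
```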
We may also apply a random $Z$ transformation to the starting configurations.
Since all the entries in the upper triangle of the $Z$ matrix are independent,
we can populate them with random numbers. This seems to be a good way to break
unwanted symmetries in the starting configuration that would be preserved by
the gradient update. We denote by $U_{r},V_{r}$ the configuration generated
from $U_{0},V_{0}$ by a randomly generated $Z$.
In principle one could also start from the $U,V$ configuration of the vacuum:
$U=1,V=0$. We have tried this and found, as might be expected, that the
proportion of false minima is larger than is obtained with $U_{0},V_{0}$.
## VII Three examples
In this section we will describe the HFB calculations for three nuclei, 32Mg,
24Mg, and 21Ne. The first one is typical of a spherical nucleus that exhibits
identical-particle pairing. The second is a well-deformed nucleus. The third
illustrates the method for an odd-A system.
For calculating matrix elements of the quadrupole operator $Q_{Q}$, we will
treat the single-particle wave functions as harmonic oscillator functions of
frequency $\omega_{0}$, and report the quadrupole moments in units of
$\hbar/m\omega_{0}$.
### VII.1 32Mg
The nucleus 32Mg ($(N,Z)=(12,4)$ in the $sd$-shell) behaves as expected of a
semimagic nucleus in HFB. Note that we do not include in our
deformation properties of this nucleus mo95 ; ro02 . We calculate the HFB
ground state in two ways, illustrating the role of the starting configuration.
The first is to use a randomized $U_{r},V_{r}$ configuration, constraining the particle numbers to the above values. Another way is to start with a
prolate configuration similar to $U_{0},V_{0}$ for the protons and with all
the neutron orbitals filled. In that case, only the proton number is
constrained. Both iteration sets converge to the same minimum, a spherical
configuration having a strong proton pairing condensate. The output
characteristics are $E_{HFB}=-135.641$ MeV, $Q_{Q}^{00}=0.00$ and $\Delta
Z^{2}=2.93$. The zero value for $Q_{Q}^{00}$ shows that the configuration is
spherical, and the nonzero value for $\Delta Z^{2}$ shows that protons are in
a condensate. Next we calculate the condensation energy, defined as the
difference between $E_{HFB}$ and the Hartree-Fock minimum $E_{HF}$. The
easiest way to find the HF minimum is to repeat the calculation with an
additional constraint that forces the condensate to zero. This is done by
adding a $G$-type operator that is sensitive to the presence of a condensate.
Carrying this out, we find a minimum at $E_{HF}=-134.460$ MeV and
$Q_{Q}^{00}=5.08$. The extracted correlation energy is $E_{HF}-E_{HFB}=1.18$
MeV, which is much smaller than what one would obtain with schematic
Hamiltonians fitted to the pairing gap. It is also interesting to extract the
quasiparticle energies, since they provide the BCS measure of the odd-even
mass differences. These are obtained by diagonalizing $H_{c}^{11}$. The
results for the HFB ground state range from 1.5 to 9 MeV, with the lowest
giving the BCS estimate of the pairing gap.
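The condensation energy defined above is just the difference of the two minima; as a quick numerical check with the values quoted in the text:

```python
# Minima quoted in the text for 32Mg (MeV):
E_HFB = -135.641   # HFB minimum, proton pairing condensate present
E_HF = -134.460    # HF minimum, condensate constrained to zero

E_corr = E_HF - E_HFB                 # condensation (correlation) energy
print(f"E_corr = {E_corr:.2f} MeV")   # E_corr = 1.18 MeV
```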
### VII.2 24Mg
The next nucleus we consider, 24Mg with $N=4$ and $Z=4$, is strongly deformed
in the HFB ground state. We find that the converged minimum has a quadrupole
moment $\langle Q_{Q}\rangle=12.8$, close to the maximum allowed in the space.
More surprisingly, the pairing condensate vanishes at the HFB convergence. We
now make a set of constrained calculations to display the energy as a function
of quadrupole moment. The starting configuration is generated by applying a
random transformation to $U_{0},V_{0}$. The gradient code carries out the
iterations with the constraints $N=4$, $Z=4$, and the chosen value of $Q$. The
convergence of the constraints to their target values is very rapid, using the
update in Eq. (21). This is illustrated in Fig. 3, showing the deviation from
the target values as a function of iteration number in one of the cases
($Q=10$).
Figure 3: Error in constrained quantities as a function of iteration number
for the $\eta=0.1$ run of the 24Mg iterations in Fig. 1. Quantities
constrained are: $N$, open circles; $Z$, filled squares; and $Q_{Q}$, filled
circles.
On the other hand, the convergence to the minimum of the
HFB energy can be slow, using a fixed-$\eta$ update with Eq. (19). The
calculations were carried out setting the convergence criterion
$|H_{c}^{20}|<0.01$ MeV. Fig. 4 shows the number of iterations required to
reach convergence for the various deformations.
Figure 4: Number of iterations required to convergence for the calculated
configurations on the deformation energy curve Fig. 5.
They range from $\sim 40$ to $\sim 250$. In a number of cases, the iterations seem
to be approaching convergence, but the system is actually in a long valley,
and eventually a lower minimum is found. It may also happen that the gradient
method finds a local minimum that is not the global one. Perhaps 10% of the
runs end at a false minimum. This can often be recognized when carrying out
constrained calculations for a range of constraint values, as it gives rise to
discontinuities in the energy curves. The only systematic way we have to deal
with the false minima is to run the searches with different randomly generated
starting configurations, and select the case that gives the lowest energy. The
resulting deformation plot combining two runs is shown in Fig. 5.
Figure 5: HFB energies as a function of deformation, using the $Q_{Q}$
quadrupole constraint. The nucleus is 24Mg, $N=Z=4$ in the $sd$-shell.
The global minimum is at a large prolate deformation as mentioned earlier.
There is also a secondary minimum at a large oblate deformation. For all
deformations, the ordinary neutron-neutron and proton-proton pairing
condensates are small or vanish.
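The multi-start strategy just described can be sketched generically. The minimizer and energy surface below are toy stand-ins (a 1-D surface with a false local minimum), not the actual HFB functional or gradient code:

```python
import random

def find_global_minimum(minimize, random_start, n_starts=5, seed=0):
    """Run a local minimizer from several randomized starting points
    and keep the lowest-energy result."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_starts):
        energy, config = minimize(random_start(rng))
        if best is None or energy < best[0]:
            best = (energy, config)
    return best

# Toy 1-D energy surface: global minimum at x = -1 (f = 0) and a
# false local minimum near x = 1.8 (f ~ 4.2).
def f(x):
    return (x - 2)**2 * (x + 1)**2 + 0.5 * (x + 1)**2

def df(x):
    return (x + 1) * (4*x**2 - 10*x + 5)

def toy_minimize(x0, eta=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        x -= eta * df(x)   # fixed-step gradient descent
    return f(x), x

energy, x = find_global_minimum(toy_minimize, lambda rng: rng.uniform(-4, 4))
# a single unlucky start can end in the false minimum near x = 1.8;
# the multi-start search recovers the global minimum near x = -1
```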
### VII.3 21Ne
The next nucleus we discuss, 21Ne with $(N,Z)_{sd}=(3,2)$, illustrates how the
gradient method makes use of the conserved number parity to find the minimum
of odd-A systems. We start with the $U_{0},V_{0}$ configuration, and convert
it to an odd-number parity configuration by exchanging two columns in the
$m=\pm\frac{1}{2}$ block. There are 6 possible columns with $m=+\frac{1}{2}$
that can be exchanged. The results for the converged energies are shown in the
top row of Table 2. All of the neutron exchanges give the same final energy,
$-40.837$ MeV. However, the energy is different for proton exchanges. The
reason is that the starting configurations do not mix neutrons and protons,
and for reasons discussed earlier the corresponding gradients are zero. This
unwanted symmetry can be broken by making a random transformation of the
initial configuration. The results are shown in the second row. Now all the
energies are equal, showing that the minimum can be accessed from any column
exchange. Interestingly, the energy is lower than in the previous set of
minimizations. This shows that there is a significant neutron-proton mixing in
the condensate for 21Ne.
$U,V$ | $d^{n}_{5/2,1/2}$ | $d^{n}_{3/2,1/2}$ | $s^{n}_{1/2,1/2}$ | $d^{p}_{5/2,1/2}$ | $d^{p}_{3/2,1/2}$ | $s^{p}_{1/2,1/2}$
---|---|---|---|---|---|---
$U_{0},V_{0}$ | -40.837 | -40.837 | -40.837 | -40.215 | -40.176 | -40.176
$U_{r},V_{r}$ | -41.715 | -41.715 | -41.715 | -41.715 | -41.715 | -41.715
Table 2: HFB energies of 21Ne, with different starting configurations. For the
top row, the starting configuration is $U_{0},V_{0}$ with the indicated column
in the $m=\pm\frac{1}{2}$ block interchanged. The second row starts from a
randomized configuration $U_{r},V_{r}$ as discussed in Sect. VI.1.
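The column exchange used above amounts to swapping the $k$-th column of $U$ with the $k$-th column of $V$, which occupies one quasiparticle and flips the number parity of the configuration. A minimal numpy sketch; the matrix sizes and the exchanged index are illustrative only, not taken from the code:

```python
import numpy as np

def occupy_quasiparticle(U, V, k):
    """Swap column k of U with column k of V: this occupies one
    quasiparticle and flips the number parity of the configuration."""
    U, V = U.copy(), V.copy()
    U[:, k], V[:, k] = V[:, k].copy(), U[:, k].copy()
    return U, V

# Toy 4-orbital vacuum (all levels empty): exchanging column 0
# puts one particle in orbital 0.
U0, V0 = np.eye(4), np.zeros((4, 4))
U1, V1 = occupy_quasiparticle(U0, V0, 0)
```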
## Acknowledgments
The authors thank A. Gezerlis and P. Ring for discussions, T. Lesinski and J.
Dobaczewski for comments on the manuscript, and M. Forbes for comments on the
code. This work (GFB) was supported in part by the U.S. Department of Energy
under Grant DE-FG02-00ER41132, and by the National Science Foundation under
Grant PHY-0835543. The work of LMR was supported by MICINN (Spain) under
grants Nos. FPA2009-08958, and FIS2009-07277, as well as by Consolider-Ingenio
2010 Programs CPAN CSD2007-00042 and MULTIDARK CSD2009-00064.
## References
* (1) P. Ring and P. Schuck, The nuclear many-body problem, (Springer, 1980).
* (2) P.-G. Reinhard and R.Y. Cusson, Nucl. Phys. A378 418 (1982).
* (3) K.T.R. Davies, H. Flocard, S. Krieger, and M.S. Weiss, Nucl. Phys. A342 111 (1980).
* (4) P. Bonche, H. Flocard, and P.-H. Heenen, Comput. Phys. Commun. 171 49 (2005).
* (5) J. Dobaczewski, and P. Olbratowski, Comput. Phys. Commun. 167 214 (2005).
* (6) K. Bennaceur and J. Dobaczewski, Comput. Phys. Commun. 168 96 (2005).
* (7) W. Pöschl, D. Vretenar, A. Rummel, and P. Ring, Comput. Phys. Commun. 101 75 (1997).
* (8) M. Stoitsov, et al., Comput. Phys. Commun. 167 43 (2005).
* (9) J.L. Egido, J. Lessing, V. Martin, and L.M. Robledo, Nucl. Phys. A594 70 (1995).
* (10) M. Warda, J.L. Egido, L.M. Robledo, and K. Pomorski, Phys. Rev. C 66 014310 (2002).
* (11) I. Maqbool, J.A. Seikh, P.A. Ganai, and P. Ring, J. Phys. G: Nucl. Part. Phys. 38 045101 (2011).
* (12) R. Rodríguez-Guzmán, Y. Alhassid, and G.F. Bertsch, Phys. Rev. C 77 064308 (2008).
* (13) L.M. Robledo and G.F. Bertsch, in preparation.
* (14) A. Gezerlis, G.F. Bertsch, and L. Luo, arXiv:1103.5793 (2011).
* (15) A.S. Umar, et al. Phys. Rev. C32 172 (1985).
* (16) G.F. Bertsch, J. Dobaczewski, W. Nazarewicz, and J. Pei, Phys. Rev. A 79 043662 (2009).
* (17) B. Banerjee, P. Ring, and H.J. Mang, Nucl. Phys. A 215 266 (1973).
* (18) J.-P. Delaroche, et al., Phys. Rev. C 81 014303 (2010).
* (19) B.A. Brown and W.A. Richter, Phys. Rev. C 74 034315 (2006).
* (20) T. Motobayashi, et al., Phys. Lett. B346 9 (1995).
* (21) R. Rodríguez-Guzmán, J.L. Egido, and L.M. Robledo, Nucl. Phys. A709 201 (2002).
* (22) K.J. Millman and M. Aivazis, Comp. Sci. Eng. 13 9 (2011).
## Appendix: explanation of the code
The code hfb_shell that accompanies this article implements the gradient
method discussed in the text. (The code may be downloaded from
http://www.phys.washington.edu/users/bertsch/hfb-shell.21.tar until it has
been published in a journal repository.) The code is written in Python and
requires the Python numerical library numpy to run (see py11 and accompanying
papers for a description of Python in a scientific environment). The main
program is the file hfb.py. It first carries out the initialization using
information from the primary input data file that in turn contains links to
other needed data files. There are three of these, one for the Hamiltonian
parameters, one for the correspondence between orbitals and rows of the $U,V$
matrices, including the assumed block structure, and one for the input $U,V$
configuration. The input data format is explained in the readme.txt of the
code distribution.
Following initialization, the program enters the iteration loop, calling the
various functions used to carry out the iteration. The loop terminates when
either a maximum number of iterations itmax is reached or the convergence
parameter $|H^{20}_{c}|$ goes below a set value converge.
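The termination logic just described can be sketched as a generic driver; the function names below are schematic stand-ins, not the actual hfb.py API:

```python
def iterate(step, gradient_norm, itmax=500, converge=0.01):
    """Apply `step` until the convergence parameter drops below
    `converge` or `itmax` iterations are reached."""
    for it in range(itmax):
        step()
        if gradient_norm() < converge:
            return it + 1, True    # converged
    return itmax, False            # iteration limit reached

# Toy stand-in for one gradient update: shrink |H20| geometrically.
state = {"h20": 1.0}
n_iter, converged = iterate(lambda: state.update(h20=state["h20"] * 0.9),
                            lambda: state["h20"])
```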
The function calls that are specific to the $sd$-shell application are
collected in the module sd_specific.py. The tasks carried out by these
functions include:
* •
initialization of matrix sizes and block structures
* •
setting up the matrices representing single-particle operators in the shell-
model basis.
* •
calculation of the fields $\Gamma,\Delta$ from the densities $\rho,\kappa$.
This function makes use of a table of interaction matrix elements $v_{ijkl}$
that are read in from a file. The present distribution of the code only
provides the Hamiltonian data for the USDB interaction br06 .
The functions that are generic to the gradient method are collected in the
module hfb_utilities.py. Many of these functions are defined by equations in
the text; the correspondence is given in Table 3.
Function call | Equation in text
---|---
rho_kappa | (1)
F20 | (6)
G20 | (8)
H20 | (11)
H00 | (3)
Ztransform | (12)
Table 3: Python functions in hfb_utilities.py corresponding to equations in
the text.
The output of hfb.py reports the expectation values of the Hamiltonian and the
single-particle operators $N,Z$ and $Q_{Q}$ at each iteration step, together
with the convergence parameter $|H^{20}_{c}|$. After the final iteration, the
values of the constraining parameters
$\lambda_{\alpha}$ and the number fluctuations $\Delta N^{2},\Delta Z^{2}$ are reported.
The final $U,V$ configuration is written to the file uv.out. Thus additional
iterations can be performed simply by specifying uv.out as the new input file.
In addition, there is a set of functions collected in the module hfb_tools.py.
These are useful for making input $U,V$ configurations and for analyzing the
output $U,V$ configuration, but are not needed to run hfb.py. For example, a
randomizing transformation can be applied to a $U,V$ configuration by the
function randomize. Another useful function is canonical, used to extract the
eigenvalues of the $\rho$ operator needed for the canonical representation.
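The canonical occupations can be obtained from a $U,V$ configuration by diagonalizing the density matrix $\rho=V^{*}V^{T}$. A small numpy sketch with a toy BCS-like configuration; this is an illustration, not the actual canonical function:

```python
import numpy as np

def canonical_occupations(U, V):
    """Eigenvalues of the density matrix rho = V* V^T; these are the
    canonical occupation numbers v_k^2, returned in ascending order."""
    rho = np.conj(V) @ V.T
    return np.linalg.eigvalsh(rho)

# Toy BCS-like configuration: two levels with v^2 = 0.9 and 0.2.
v2 = np.array([0.9, 0.2])
U = np.diag(np.sqrt(1.0 - v2))
V = np.diag(np.sqrt(v2))
occ = canonical_occupations(U, V)   # approximately [0.2, 0.9]
```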
|
arxiv-papers
| 2011-04-28T17:14:53 |
2024-09-04T02:49:18.486535
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "L.M. Robledo and G.F. Bertsch",
"submitter": "George F. Bertsch",
"url": "https://arxiv.org/abs/1104.5453"
}
|
1104.5596
|
# Graph and depth of a monomial squarefree ideal
Dorin Popescu, Institute of Mathematics "Simion Stoilow",
University of Bucharest, P.O.Box 1-764, Bucharest 014700, Romania
dorin.popescu@imar.ro
###### Abstract.
Let $I$ be a monomial squarefree ideal of a polynomial ring $S$ over a field
$K$ such that the sum of any three distinct minimal prime ideals of $I$ is
the maximal ideal of $S$, or more generally a constant ideal. We associate to
$I$ a graph on $[s]$, $s=|\operatorname{Min}S/I|$, on which we may read the
depth of $I$. In particular, $\operatorname{depth}_{S}\ I$ does not depend on
char $K$. We also show that $I$ satisfies Stanley's Conjecture.
Key words : Monomial Ideals, Join Graphs, Size, Depth, Stanley Depth.
2000 Mathematics Subject Classification: Primary 13C15, Secondary 13F20,
05E40, 13F55, 05C25.
The support from the CNCSIS grant PN II-542/2009 of the Romanian Ministry of
Education, Research and Innovation is gratefully acknowledged.
## Introduction
Let $S=K[x_{1},\ldots,x_{n}]$, $n\in{\bf N}$ be a polynomial ring over a field
$K$, and $I\subset S$ a monomial squarefree ideal with minimal prime ideals
$P_{1},\ldots,P_{s}$ (here we study only the monomial squarefree ideals).
Following [4], the size of $I$ is the number $v+(n-h)-1$, where $h$ is the height
of $\sum_{j=1}^{s}P_{j}$ and $v$ is the minimal number $e$ for which there
exist integers $i_{1}<i_{2}<\cdots<i_{e}$ such that
$\sum_{k=1}^{e}P_{i_{k}}=\sum_{j=1}^{s}P_{j}$. Similarly, we defined in [8]
the bigsize of $I$, which is the number $t+(n-h)-1$, where $t$ is the minimal
number $e$ such that for all integers $i_{1}<i_{2}<\cdots<i_{e}$ it holds
$\sum_{k=1}^{e}P_{i_{k}}=\sum_{j=1}^{s}P_{j}$. Clearly $bigsize(I)\geq
size(I)$. Lyubeznik [4] showed that $\operatorname{depth}I\geq
1+\operatorname{size}I$.
Let $I\subset S$ be a monomial ideal of $S$, $u\in I$ a monomial and $uK[Z]$,
$Z\subset\\{x_{1},\ldots,x_{n}\\}$ the linear $K$-subspace of $I$ of all
elements $uf$, $f\in K[Z]$. A presentation of $I$ as a finite direct sum of
such spaces ${\mathcal{D}}:\ I=\bigoplus_{i=1}^{r}u_{i}K[Z_{i}]$ is called a
Stanley decomposition of $I$. Set
$\operatorname{sdepth}(\mathcal{D})=\operatorname{min}\\{|Z_{i}|:i=1,\ldots,r\\}$
and
$\operatorname{sdepth}\ I:=\operatorname{max}\\{\operatorname{sdepth}\
({\mathcal{D}}):\;{\mathcal{D}}\;\text{is a Stanley decomposition of}\;I\\}.$
Stanley's Conjecture [11] says that $\operatorname{sdepth}\
I\geq\operatorname{depth}\ I$. This conjecture holds for arbitrary monomial
squarefree ideals if $n\leq 5$ by [7] (see especially the arXiv version), or
for intersections of four monomial prime ideals by [5], [8]. In the case of
non squarefree monomial ideals $J$ an important inequality is
$\operatorname{sdepth}J\leq\operatorname{sdepth}\sqrt{J}$ (see [3, Theorem
2.1]). Similarly to Lyubeznik’s result, it holds $\operatorname{sdepth}I\geq
1+\operatorname{size}I$ by [2, Theorem 3.1]. If $bigsize(I)=size(I)$ then $I$
satisfies Stanley's Conjecture by [2, Theorems 1.2, 3.1].
The purpose of this paper is to study the case when $bigsize(I)=2$,
$size(I)=1$. In the case $\sum_{j=1}^{s}P_{j}=m=(x_{1},\ldots,x_{n})$, we
associate to $I$ a graph $\Gamma$ on $[s]$ in which $\\{ij\\}$ is an edge if
and only if $P_{i}+P_{j}=m$. We express the depth of $I$ in terms of the
properties of $\Gamma$ and of $q(I)=\operatorname{min}\\{\dim
S/(P_{i}+P_{j}):j\not=i,P_{i}+P_{j}\not=m\\}.$ We note that [8, Lemmas 3.2,
3.2] say, in particular, that $\operatorname{depth}_{S}\ I=2$ if and only if
$\Gamma$ is a join graph. Our Corollary 2.8 says that if $q(I)>1$ then
$\operatorname{depth}_{S}\ I=2+q(I)$ if and only if $\Gamma$ is a so-called
concatenation of several graphs on two vertices having no edges. Thus, knowing
$q(I)$, $\operatorname{depth}_{S}\ I$ can be read off $\Gamma$ (see Corollary
2.9). It follows that for a monomial squarefree ideal $I\subset S$ such that
the sum of any three distinct minimal prime ideals of $I$ is a constant
ideal (for example $m$), $\operatorname{depth}_{S}\ I$ does not depend on char
$K$ (see Theorem 2.10) and Stanley's Conjecture holds (see Theorem 3.5).
It is well known that $\operatorname{depth}_{S}\ I$ depends on the
characteristic of $K$ if $bigsize(I)=3$, $size(I)=2$ (see our Remark 2.11), so
it is very likely that this case is much harder for proving Stanley's
Conjecture. Several people asked whether there exist examples where the special
Stanley decomposition of [5], [8], or the splitting variables in the
terminology of [2], do not help in proving Stanley's Conjecture because there
exists no good main prime ideal. Our Example 3.3 is such an example.
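The graph $\Gamma$ and the join-graph condition can be checked mechanically; a brute-force Python sketch with primes as sets of variable indices (the function names are our own):

```python
from itertools import combinations

def gamma_edges(primes, m):
    """Edges {i,j} (1-based vertices) with P_i + P_j = m."""
    return {frozenset((i + 1, j + 1))
            for i, j in combinations(range(len(primes)), 2)
            if primes[i] | primes[j] == m}

def is_join_graph(edges, s):
    """Brute force: is there a split of [1..s] into two nonempty parts
    with every cross pair an edge (i.e. the graph is a join)?"""
    verts = range(1, s + 1)
    for c in range(1, s):
        for part in combinations(verts, c):
            rest = [v for v in verts if v not in part]
            if all(frozenset((a, b)) in edges for a in part for b in rest):
                return True
    return False

# P_1=(x_1,x_2), P_2=(x_3,x_4), P_3=(x_1,x_3) in K[x_1,...,x_4]:
# only P_1+P_2 is the maximal ideal, and Gamma is not a join graph.
edges = gamma_edges([{1, 2}, {3, 4}, {1, 3}], {1, 2, 3, 4})
```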
## 1\. Depth two and three
Let $S=K[x_{1},\ldots,x_{n}]$, $n\in{\bf N}$ be a polynomial ring over a field
$K$ and ${\tilde{S}}=K[x_{1},\ldots,x_{n-1}]\subset S$. We start by recalling
the following two lemmas from [7].
###### Lemma 1.1.
Let $I,J\subset{\tilde{S}}$, $I\subset J$, $I\not=J$ be two monomial ideals,
$T=(I+x_{n}J)S$ such that
1. (1)
$\operatorname{depth}_{{\tilde{S}}}\
{\tilde{S}}/I\not=\operatorname{depth}_{S}\ S/T-1,$
2. (2)
$\operatorname{sdepth}_{\tilde{S}}\ I\geq\operatorname{depth}_{\tilde{S}}\ I,$
$\operatorname{sdepth}_{\tilde{S}}\ J\geq\operatorname{depth}_{\tilde{S}}\ J.$
Then $\operatorname{sdepth}_{S}\ T\geq\operatorname{depth}_{S}\ T.$
###### Lemma 1.2.
Let $I,J\subset{\tilde{S}}$, $I\subset J$, $I\not=J$ be two monomial ideals,
$T=(I+x_{n}J)S$ such that
1. (1)
$\operatorname{depth}_{{\tilde{S}}}\ {\tilde{S}}/I=\operatorname{depth}_{S}\
S/T-1,$
2. (2)
$\operatorname{sdepth}_{{\tilde{S}}}\ I\geq\operatorname{depth}_{{\tilde{S}}}\
I,$
3. (3)
$\operatorname{sdepth}_{{\tilde{S}}}\
J/I\geq\operatorname{depth}_{{\tilde{S}}}\ J/I.$
Then $\operatorname{sdepth}_{S}\ T\geq\operatorname{depth}_{S}\ T.$
The above lemmas allow us to show Stanley’s Conjecture in a special case.
###### Proposition 1.3.
Let $T\subset S$ be a monomial squarefree ideal. If $S/T$ is Cohen-Macaulay of
dimension $2$ then $\operatorname{sdepth}_{S}\ T\geq\operatorname{depth}_{S}\
T.$
###### Proof.
We use induction on $n$, case $n\leq 5$ being given in [7]. Suppose $n>5$.
Then $T$ has the form $T=I+x_{n}J$ for two monomial squarefree ideals
$I,J\subset{\tilde{S}}$, in fact $I=T\cap{\tilde{S}}$,
$J=(T:x_{n})\cap{\tilde{S}}$. Note that $\dim{\tilde{S}}/I=\dim
S/(T,x_{n})\leq 2$ and $\dim S/JS=\dim((x_{n})+T)/T\leq 2$ and so
$\operatorname{depth}_{{\tilde{S}}}\ {\tilde{S}}/I\leq 2$,
$\operatorname{depth}_{{\tilde{S}}}\ {\tilde{S}}/J\leq 1$. If
$\operatorname{depth}_{{\tilde{S}}}\ {\tilde{S}}/I=2$ then
$\operatorname{sdepth}_{{\tilde{S}}}\ I\geq\operatorname{depth}_{{\tilde{S}}}\
I$ by induction hypothesis. If $\operatorname{depth}_{{\tilde{S}}}\
{\tilde{S}}/I=1$ (by [10, Proposition 1.2]
$\operatorname{depth}_{{\tilde{S}}}\ {\tilde{S}}/I>0$) then
$\operatorname{depth}_{{\tilde{S}}}\
I=2=1+size(I)\leq\operatorname{sdepth}_{{\tilde{S}}}\ I$ by [2, Theorem 3.1]
and similarly for $J$. As $\dim\ J/I\leq\dim\ {\tilde{S}}/I\leq\dim\ S/T=2$ we
have $\operatorname{sdepth}_{{\tilde{S}}}\
J/I\geq\operatorname{depth}_{{\tilde{S}}}\ J/I$ by [6]. Now the result is a
consequence of Lemmas 1.1 and 1.2 if $I\not=J$; otherwise $T=IS$ and we may
apply [2, Lemma 3.6].
Let $I=\cap_{i=1}^{s}P_{i}$, $s\geq 3$ be the intersection of the minimal
monomial prime ideals of $S/I$. Assume that $\Sigma_{i=1}^{s}P_{i}=m$ and the
bigsize of $I$ is two. Set
$q=q(I)=\operatorname{min}\\{\dim
S/(P_{i}+P_{j}):j\not=i,P_{i}+P_{j}\not=m\\}.$
We will need the following two lemmas from [8].
###### Lemma 1.4.
If $P_{1}+P_{2}\not=m$ and $P_{k}+P_{e}=m$ for all $k,e>2$, $k\not=e$ then
1. (1)
$\operatorname{depth}_{S}S/I\in\\{1,2,1+q\\}$,
2. (2)
$\operatorname{depth}_{S}S/I=1$ if and only if there exists $j>2$ such that
$P_{1}+P_{j}=m=P_{2}+P_{j}$,
3. (3)
$\operatorname{depth}_{S}S/I>2$ if and only if $q>1$ and each $j>2$ satisfies
either
$P_{1}+P_{j}\not=m=P_{2}+P_{j},\ \mbox{or}$ $P_{2}+P_{j}\not=m=P_{1}+P_{j},$
4. (4)
$\operatorname{depth}_{S}S/I=2$ if and only if the following conditions hold:
1. (a)
each $j>2$ satisfies either $P_{1}+P_{j}\not=m$ or $P_{2}+P_{j}\not=m,$
2. (b)
$q=1$ or there exists a $k>2$ such that
$P_{1}+P_{k}\not=m\not=P_{2}+P_{k},$
5. (5)
$\operatorname{sdepth}_{S}I\geq\operatorname{depth}_{S}I$.
###### Lemma 1.5.
Suppose that whenever there exist $i\not=j$ in $[s]$ such that
$P_{i}+P_{j}\not=m$ there exist also $k\not=e$ in $[s]\setminus\\{i,j\\}$ such
that $P_{k}+P_{e}\not=m$ (that is the complementary case of the above lemma).
Then
1. (1)
$\operatorname{depth}_{S}S/I\in\\{1,2,1+q\\}$.
2. (2)
$\operatorname{depth}_{S}S/I=1$ if and only if after a renumbering of
$(P_{i})$ there exists $1\leq c<s$ such that $P_{i}+P_{j}=m$ for each $c<j\leq
s$ and $1\leq i\leq c$.
These two lemmas allow us to show the following useful proposition.
###### Proposition 1.6.
Suppose that $P_{1}=(x_{1},\ldots,x_{r})$, $1\leq r<n$,
$S^{\prime}=K[x_{r+1},\ldots,x_{n}]$ and $P_{1}+P_{2}\not=m\not=P_{1}+P_{3}$,
$P_{2}+P_{3}\not=m$. Then $depth_{S}\ S/I\leq 2$, in particular
$\operatorname{sdepth}_{S^{\prime}}(P_{2}\cap P_{3}\cap S^{\prime})\geq
2\geq\operatorname{depth}_{S}\ S/I.$
###### Proof.
Apply induction on $s$; the cases $s=3,4$ follow from [5], [8]. Suppose that
$s>4$. Set $E=S/(P_{1}\cap P_{3}\cap\ldots\cap P_{s})\oplus S/(P_{1}\cap
P_{2}\cap P_{4}\cap\ldots\cap P_{s})$ and
$F=S/(P_{1}\cap(P_{2}+P_{3})\cap P_{4}\cap\ldots\cap P_{s})$. Note that if
$P_{i}\subset P_{2}+P_{3}$ for some $i\not=2,3$ then
$P_{2}+P_{3}=P_{i}+P_{2}+P_{3}=m$ because bigsize of $I$ is two.
Contradiction! Thus the bigsize of $F$ is one and so
$\operatorname{depth}_{S}S/F=1$ by [8]. From the following exact sequence
$0\rightarrow S/I\rightarrow E\rightarrow F\rightarrow 0$
we get $\operatorname{depth}_{S}S/I=2$ if $\operatorname{depth}_{S}E>1$.
Otherwise, suppose that $G=S/(P_{1}\cap P_{2}\cap P_{4}\cap\ldots\cap P_{s})$
has depth one. Then after renumbering $(P_{i})$ we may suppose that there
exists $c\not=3$, $1\leq c<s$ such that $P_{i}+P_{j}=m$ for all $1\leq i\leq
c$, $c<j\leq s$, $i,j\not=3$ (see Lemmas 1.4, 1.5). In fact we may renumber
only $(P_{e})_{e>3}$ and take $c>3$ because $P_{1}+P_{2}\not=m$. Set
$M=S/P_{1}\cap\ldots\cap P_{c}$ and $N=M\oplus S/P_{3}\cap P_{c+1}\cap\ldots\cap
P_{s}$. In the following exact sequence
$0\rightarrow S/I\rightarrow N\rightarrow S/P_{3}\rightarrow 0$
all three modules have depth $\leq\operatorname{depth}_{S}S/P_{3}$. By the
Depth Lemma [12] it follows that
$\operatorname{depth}_{S}S/I=\operatorname{depth}_{S}N$ and so
$\operatorname{depth}_{S}S/I\leq\operatorname{depth}_{S}M$. Applying the
induction hypothesis we get $\operatorname{depth}_{S}M\leq 2$, that is
$\operatorname{depth}_{S}S/I\leq 2$. Finally, by [9] we have
$\operatorname{sdepth}_{S^{\prime}}(P_{2}\cap P_{3}\cap
S^{\prime})\geq\operatorname{depth}_{S^{\prime}}(P_{2}\cap P_{3}\cap
S^{\prime})=1+\operatorname{depth}_{S^{\prime}}S^{\prime}/(P_{2}\cap P_{3}\cap
S^{\prime})=$ $1+\operatorname{depth}_{S}S/(P_{1}+P_{2})\cap(P_{1}+P_{3})=2$
because $P_{1}+P_{2}+P_{3}=m$.
###### Corollary 1.7.
Suppose that $bigsize(I)=size(I)\leq 2$. Then $\operatorname{depth}_{S}\ I$
does not depend on the characteristic of $K$.
###### Proof.
If $bigsize(I)=size(I)=1$ then $\operatorname{depth}_{S}\ I=2$ by [8,
Corollary 1.6] and so does not depend on the characteristic of $K$. If
$bigsize(I)=size(I)=2$ then $\operatorname{depth}_{S}\ I\leq 3$ by the above
proposition and so $\operatorname{depth}_{S}\ I=3$ by [4] independently of
char $K$.
###### Theorem 1.8.
If $\operatorname{depth}_{S}\ I\leq 3$ then $\operatorname{sdepth}_{S}\
I\geq\operatorname{depth}_{S}\ I$.
###### Proof.
By [2] we have $\operatorname{sdepth}_{S}\ I\geq 1+size(I)\geq 2$ and it is
enough to consider the case $\operatorname{depth}_{S}\ I=3$, that is
$\operatorname{depth}_{S}\ S/I=2$. If $\dim\ S/I=2$ then we may apply
Proposition 1.3, otherwise we may suppose that $\dim\ S/P_{i}\geq 3$ for an
$i$, let us say $i=1$. We may suppose that $P_{1}=(x_{1},\ldots,x_{r})$ for
some $r<n$, thus $n\geq r+3$. Set $S^{\prime\prime}=K[x_{1},\ldots,x_{r}]$,
$S^{\prime}=K[x_{r+1},\ldots,x_{n}]$.
Applying [8, Theorem 1.5] for ${\mathcal{F}}$ containing some
$\tau_{j}=\\{j\\}$ and $\tau_{jk}=\\{j,k\\}$, $1<j,k\leq s$, $j\not=k$, we get
$\operatorname{sdepth}_{S}I\geq\operatorname{min}\\{A_{0},\\{A_{\tau_{j}}\\}_{\tau_{j}\in{\mathcal{F}}},\\{A_{\tau_{jk}}\\}_{\tau_{jk}\in{\mathcal{F}}}\\}$
for $A_{0}=\operatorname{sdepth}_{S}(I\cap S^{\prime\prime})S$ if $I\cap
S^{\prime\prime}\not=0$ or $A_{0}=n$ otherwise, and
$A_{\tau}\geq\operatorname{sdepth}_{S_{\tau}}J_{\tau}+\operatorname{sdepth}_{S^{\prime}}L_{\tau},$
where $J_{\tau}=\cap_{e\not\in\tau}P_{e}\cap S_{\tau}\not=0$,
$S_{\tau}=K[\\{x_{u}:x_{u}\in
S^{\prime\prime},x_{u}\not\in\Sigma_{e\in\tau}P_{e}\\}]$,
$L_{\tau}=\cap_{e\in\tau}(P_{e}\cap S^{\prime})\not=0$. If $P_{1}+P_{j}\not=m$
then
$A_{\tau_{j}}\geq\operatorname{sdepth}_{S_{\tau_{j}}}J_{\tau_{j}}+\operatorname{sdepth}_{S^{\prime}}(P_{j}\cap
S^{\prime})\geq 1+\dim
S/(P_{1}+P_{j})+\lceil\frac{\operatorname{height}(P_{j}\cap
S^{\prime})}{2}\rceil,$
where $\lceil a\rceil$, $a\in{\bf Q}$ denotes the smallest integer $\geq a$.
Thus $A_{\tau_{j}}\geq 3=\operatorname{depth}_{S}I$. If $P_{1}+P_{j}=m$ then
$P_{j}\cap S^{\prime}$ is the maximal ideal of $S^{\prime}$ and we have
$A_{\tau_{j}}\geq 1+\lceil\frac{\operatorname{height}(P_{j}\cap
S^{\prime})}{2}\rceil\geq 1+\lceil\frac{3}{2}\rceil\geq
3=\operatorname{depth}_{S}I.$
If $P_{1}+P_{j}\not=m\not=P_{1}+P_{k}$, $P_{j}+P_{k}\not=m$ then
$A_{\tau_{jk}}\geq\operatorname{sdepth}_{S_{\tau_{jk}}}J_{\tau_{jk}}+\operatorname{sdepth}_{S^{\prime}}L_{\tau_{jk}}\geq
1+\operatorname{sdepth}_{S^{\prime}}(P_{j}\cap P_{k}\cap
S^{\prime})\geq\operatorname{depth}_{S}I,$
by Proposition 1.6. If $P_{1}+P_{j}=m\not=P_{1}+P_{k}$, $P_{j}+P_{k}\not=m$
then
$\operatorname{sdepth}_{S^{\prime}}L_{\tau_{jk}}\geq\operatorname{depth}_{S^{\prime}}L_{\tau_{jk}}=1+\dim
S/(P_{1}+P_{k})\geq 1+q.$
Thus $A_{\tau_{jk}}\geq 2+q\geq 3=\operatorname{depth}_{S}I$. If
$P_{1}+P_{j}=m=P_{1}+P_{k}$, $P_{j}+P_{k}\not=m$ then $L_{\tau_{jk}}$ is the
maximal ideal of $S^{\prime}$ and we get $A_{\tau_{jk}}\geq
1+\lceil\frac{3}{2}\rceil\geq 3=\operatorname{depth}_{S}I.$ If $I\cap
S^{\prime\prime}\not=0$ then $A_{0}=\operatorname{sdepth}_{S}(I\cap
S^{\prime\prime})S\geq 1+n-r\geq\operatorname{depth}_{S}I$. Hence
$\operatorname{sdepth}_{S}I\geq\operatorname{depth}_{S}I$.
## 2\. Graph of a monomial squarefree ideal
Let $I=\cap_{i=1}^{s}P_{i}$, $s\geq 3$ be the intersection of the minimal
monomial prime ideals of $S/I$. Assume that $\Sigma_{i=1}^{s}P_{i}=m$ and the
bigsize of $I$ is two. We may suppose that $P_{1}=(x_{1},\ldots,x_{r})$ for
some $r<n$ and set
$q=q(I)=\operatorname{min}\\{\dim
S/(P_{i}+P_{j}):j\not=i,P_{i}+P_{j}\not=m\\}.$
Thus $q\leq n-r.$ Set $S^{\prime\prime}=K[x_{1},\ldots,x_{r}]$,
$S^{\prime}=K[x_{r+1},\ldots,x_{n}]$.
###### Definition 2.1.
Let $\Gamma$ be the simple graph on $[s]$ in which $\\{ij\\}$ is an edge (we
write $\\{ij\\}\in E(\Gamma)$) if and only if $P_{i}+P_{j}=m$. We call
$\Gamma$ the graph associated to $I$. $\Gamma$ has the triangle property if
there exists $i\in[s]$ such that for all $j,k\in[s]$ with
$\\{ij\\},\\{ik\\}\in E(\Gamma)$ it follows that $\\{jk\\}\in E(\Gamma)$. In
fact the triangle property says that it is possible to find a "good" main
prime in the terminology of [8, Example 4.3], which we briefly recall next.
###### Example 2.2.
Let $n=10$, $P_{1}=(x_{1},\ldots,x_{7})$, $P_{2}=(x_{3},\ldots,x_{8})$,
$P_{3}=(x_{1},\ldots,x_{4},x_{8},\ldots,x_{10})$,
$P_{4}=(x_{1},x_{2},x_{5},x_{8},x_{9},x_{10})$, $P_{5}=(x_{5},\ldots,x_{10})$
and $I=\cap_{i=1}^{5}P_{i}$. Then $q(I)=2$, and $\operatorname{depth}_{S}I=4$.
The graph associated to $I$ on [5] as above has edges
$E(\Gamma)=\\{\\{13\\},\\{15\\},\\{35\\},\\{14\\},\\{23\\},\\{24\\}\\}$
and has the triangle property, but only $\\{5\\}$ is a "good" vertex, that is,
for all $j,k\in[4]$ with $\\{j5\\},\\{k5\\}\in E(\Gamma)$ it follows that
$\\{jk\\}\in E(\Gamma)$. The picture of $\Gamma$ is shown below.
[Figure: the graph $\Gamma$ on the vertices $1,\ldots,5$, with edges $\\{13\\},\\{15\\},\\{35\\},\\{14\\},\\{23\\},\\{24\\}$.]
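The edge list and the claim that only $\\{5\\}$ is a good vertex can be verified by brute force (a self-contained check, with the primes of this example as sets of variable indices):

```python
from itertools import combinations

P = {1: set(range(1, 8)),          # (x_1,...,x_7)
     2: set(range(3, 9)),          # (x_3,...,x_8)
     3: {1, 2, 3, 4, 8, 9, 10},
     4: {1, 2, 5, 8, 9, 10},
     5: set(range(5, 11))}         # (x_5,...,x_10)
m = set(range(1, 11))

E = {frozenset(pair) for pair in combinations(P, 2)
     if P[pair[0]] | P[pair[1]] == m}

def good(i):
    """Vertex i is 'good' if its neighbors are pairwise joined."""
    nbrs = [j for j in P if j != i and frozenset((i, j)) in E]
    return all(frozenset((j, k)) in E for j, k in combinations(nbrs, 2))

assert E == {frozenset(e) for e in
             [(1, 3), (1, 5), (3, 5), (1, 4), (2, 3), (2, 4)]}
assert [i for i in P if good(i)] == [5]
```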
###### Proposition 2.3.
If the bigsize of $I$ is two and $\Gamma=\Gamma(I)$ has the triangle property
then $\operatorname{sdepth}_{S}I\geq\operatorname{depth}_{S}I$.
###### Proof.
Renumbering $(P_{i})$ we may suppose that $i=1$, that is for all $j,k\in[s]$
with $\\{1j\\},\\{1k\\}\in E(\Gamma)$ it follows $\\{jk\\}\in E(\Gamma)$ by
the triangle property. We largely repeat the proof of Theorem 1.8.
Applying [8, Theorem 1.5] for ${\mathcal{F}}$ containing some
$\tau_{j}=\\{j\\}$ and $\tau_{jk}=\\{j,k\\}$, $1<j,k\leq s$, $j\not=k$, we get
$\operatorname{sdepth}_{S}I\geq\operatorname{min}\\{A_{0},\\{A_{\tau_{j}}\\}_{\tau_{j}\in{\mathcal{F}}},\\{A_{\tau_{jk}}\\}_{\tau_{jk}\in{\mathcal{F}}}\\}$.
Note that the bigsize of $J_{\tau}$ is $\leq 1$ (similarly $L_{\tau}$),
$\tau\in{\mathcal{F}}$ and so
$\operatorname{sdepth}_{S_{\tau}}J_{\tau}\geq\operatorname{depth}_{S_{\tau}}J_{\tau}$
by [8, Corollary 1.6]. If $P_{1}+P_{j}\not=m$ then
$A_{\tau_{j}}\geq\operatorname{sdepth}_{S_{\tau_{j}}}J_{\tau_{j}}+\operatorname{sdepth}_{S^{\prime}}(P_{j}\cap
S^{\prime})\geq 1+\dim
S/(P_{1}+P_{j})+\lceil\frac{\operatorname{height}(P_{j}\cap
S^{\prime})}{2}\rceil.$
Thus $A_{\tau_{j}}\geq 2+q\geq\operatorname{depth}_{S}I$ by Lemmas 1.4, 1.5.
If $P_{1}+P_{j}=m$ but there exists $e\not=j$ such that $P_{e}+P_{j}\not=m$,
then
$\operatorname{sdepth}_{S_{\tau_{j}}}J_{\tau_{j}}\geq\operatorname{depth}_{S_{\tau_{j}}}J_{\tau_{j}}=1+\operatorname{depth}_{S}S/(\cap_{u\not=j}(P_{u}+P_{j}))\geq
1+q$ and so again $A_{\tau_{j}}\geq 2+q\geq\operatorname{depth}_{S}I$. If
$P_{e}+P_{j}=m$ for all $e\not=j$ then $\operatorname{depth}_{S}I=2$ by [8,
Lemma 1.2] and clearly $A_{\tau_{j}}\geq\operatorname{depth}_{S}I$.
Now note that if $P_{1}+P_{j}\not=m\not=P_{1}+P_{k}$, $P_{j}+P_{k}\not=m$ then
$A_{\tau_{jk}}\geq\operatorname{sdepth}_{S_{\tau_{jk}}}J_{\tau_{jk}}+\operatorname{sdepth}_{S^{\prime}}L_{\tau_{jk}}\geq
1+\operatorname{sdepth}_{S^{\prime}}(P_{j}\cap P_{k}\cap
S^{\prime})\geq\operatorname{depth}_{S}I$
by Proposition 1.6. If $P_{1}+P_{j}=m$, $P_{j}+P_{k}\not=m$ then
$P_{1}+P_{k}\not=m$ by the triangle property and
$\operatorname{sdepth}_{S^{\prime}}L_{\tau_{jk}}\geq\operatorname{depth}_{S^{\prime}}L_{\tau_{jk}}=1+\dim
S/(P_{1}+P_{k})\geq 1+q$. Thus $A_{\tau_{jk}}\geq
2+q\geq\operatorname{depth}_{S}I$. If $I\cap S^{\prime\prime}\not=0$ then as
in the proof of Theorem 1.8 $A_{0}\geq\operatorname{depth}_{S}I$. Hence
$\operatorname{sdepth}_{S}I\geq\operatorname{depth}_{S}I$.
###### Definition 2.4.
The graph $\Gamma$ is a join graph if it is a join of two of its subgraphs,
that is after a renumbering of the vertices there exists $1\leq c<s$ such that
$\\{ij\\}\in E(\Gamma)$ for all $1\leq i\leq c$, $c<j\leq s$. Thus in Lemmas
1.4, 1.5 one may say that $\operatorname{depth}_{S}\ S/I=1$ if and only if the
associated graph of $I$ is a join graph. Let $\Gamma_{1}$, $\Gamma_{2}$ be
graphs on $[r]$, respectively $\\{r,r+1,\ldots,s\\}$ for some integers
$1<r\leq s-2$. Let $\Gamma$ be the graph on $[s]$ given by
$E(\Gamma)=E(\Gamma_{1})\cup E(\Gamma_{2})\cup\\{\\{ij\\}:1\leq i<r,r\leq
j\leq s\\}.$ We call $\Gamma$ the graph given by concatenation of
$\Gamma_{1}$, $\Gamma_{2}$ in the vertex $\\{r\\}$.
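The concatenation construction can be sketched in Python, with edge sets represented as sets of frozensets (an illustration only):

```python
def concatenate(E1, E2, r, s):
    """Edge set of the concatenation in vertex r of a graph on [1..r]
    (edges E1) and a graph on [r..s] (edges E2): every pair {i,j}
    with 1 <= i < r and r <= j <= s becomes an edge."""
    cross = {frozenset((i, j)) for i in range(1, r)
             for j in range(r, s + 1)}
    return E1 | E2 | cross

# Example: Gamma_1 edgeless on {1,2}, Gamma_2 edgeless on {2,3,4};
# the concatenation in {2} adds the cross edges {1,2},{1,3},{1,4}.
E = concatenate(set(), set(), 2, 4)
```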
###### Lemma 2.5.
Let $I=\cap_{i=1}^{s}P_{i}$ be the intersection of the minimal monomial prime
ideals of $S/I$, $I_{1}=\cap_{i=1}^{r}P_{i}$, $I_{2}=\cap_{i=r}^{s}P_{i}$
and $\Gamma$, $\Gamma_{1}$, $\Gamma_{2}$ be the graphs associated to $I$,
respectively $I_{1}$, $I_{2}$ as in the previous section. Suppose that
$\Sigma_{i=1}^{s}P_{i}=m$, $\Gamma$ is the concatenation of $\Gamma_{1}$,
$\Gamma_{2}$ in $\\{r\\}$ and $bigsize(I)=2$. Then
$\operatorname{depth}_{S}I=\operatorname{min}\\{\operatorname{depth}_{S}I_{1},\operatorname{depth}_{S}I_{2}\\}.$
###### Proof.
We consider the following exact sequence
$0\rightarrow S/I\rightarrow S/I_{1}\oplus S/I_{2}\rightarrow
S/(I_{1}+I_{2})\rightarrow 0.$
Since $P_{i}+P_{j}=m$ for all $1\leq i<r$, $r<j\leq s$ we get
$I_{1}+I_{2}=P_{r}$. But
$\operatorname{depth}_{S}S/I,\operatorname{depth}_{S}S/I_{i}\leq\operatorname{depth}_{S}S/P_{r}$
for $i=1,2$, and by the Depth Lemma [12] we get
$\operatorname{depth}_{S}S/I=\operatorname{min}\\{\operatorname{depth}_{S}S/I_{1},\operatorname{depth}_{S}S/I_{2}\\}.$
###### Remark 2.6.
Let $I=\cap_{i=1}^{3}P_{i}$ be the intersection of the minimal monomial prime
ideals of $S/I$. Suppose that $P_{1}+P_{2}\not=m\not=P_{1}+P_{3}$ and
$P_{2}+P_{3}=m$. Let $I_{1}=P_{1}\cap P_{2}$, $I_{2}=P_{1}\cap P_{3}$ and
$\Gamma$, $\Gamma_{1}$, $\Gamma_{2}$ be the graphs associated to $I$,
respectively $I_{1}$, $I_{2}$. We have $E(\Gamma_{1})=E(\Gamma_{2})=\emptyset$
and $E(\Gamma)=\\{\\{23\\}\\}$. Then $\Gamma$ is the concatenation of
$\Gamma_{1}$, $\Gamma_{2}$ in $\\{1\\}$ and
$\operatorname{depth}_{S}\ I=\operatorname{min}\\{\operatorname{depth}_{S}\
I_{1},\operatorname{depth}_{S}\ I_{2}\\}=2+\operatorname{min}\\{\dim\
S/(P_{1}+P_{2}),\dim\ S/(P_{1}+P_{3})\\}$
by the above lemma. This is the formula found in [5, Proposition 2.1].
###### Proposition 2.7.
Let $I=\cap_{i=1}^{s}P_{i}$ be the intersection of the minimal monomial prime
ideals of $S/I$, and $\Gamma$ be the graph associated to $I$. Suppose that
$\Sigma_{i=1}^{s}P_{i}=m$, $bigsize(I)=2$ and $\operatorname{depth}_{S}I>3$.
Then after renumbering $(P_{i})$ there exists $1<r\leq s-2$ such that $\Gamma$
is the concatenation in $\\{r\\}$ of the graphs $\Gamma_{1}$, $\Gamma_{2}$
associated to $I_{1}=\cap_{i=1}^{r}P_{i}$, respectively $I_{2}=\cap_{i=r}^{s}P_{i}$. Moreover, $\operatorname{depth}_{S}\
I_{1},\operatorname{depth}_{S}\ I_{2}>3$.
###### Proof.
Since $bigsize(I)=2$ we may suppose that $P_{s-1}+P_{s}\not=m$, that is
$\\{s-1,s\\}\not\in E(\Gamma)$. Consider the following exact sequence
$0\rightarrow S/I\rightarrow S/P_{1}\cap\ldots\cap P_{s-1}\oplus
S/P_{1}\cap\ldots\cap P_{s-2}\cap P_{s}\rightarrow S/P_{1}\cap\ldots\cap
P_{s-2}\cap(P_{s}+P_{s-1})\rightarrow 0.$
As in the proof of Proposition 2.3 we see that $P_{i}\not\subset
P_{s}+P_{s-1}$ for $i<s-1$ because $bigsize(I)=2$. Then
$\operatorname{depth}_{S}S/P_{1}\cap\ldots\cap P_{s-2}\cap(P_{s}+P_{s-1})=1$
using [8, Corollary 1.6]. By the Depth Lemma we get, say,
$\operatorname{depth}_{S}S/P_{1}\cap\ldots\cap P_{s-1}=1$ since
$\operatorname{depth}_{S}S/I>2$. After a renumbering of $(P_{i})_{i<s-1}$
using Lemmas 1.4, 1.5 we may suppose that there exists $1\leq c<s-1$ such that
$P_{i}+P_{j}=m$ for all $1\leq i\leq c$, $c<j<s$. Set $r=c+1$ and renumber
$P_{s}$ by $P_{r}$ and $P_{i}$ by $P_{i+1}$ for $r\leq i<s$. Then
$I_{1}=\cap_{i=1}^{r}P_{i}$ and $I_{2}=\cap_{i=r}^{s}P_{i}$ satisfy our
proposition. The last statement follows by Lemma 2.5.
###### Corollary 2.8.
Let $I=\cap_{i=1}^{s}P_{i}$ be the intersection of the minimal monomial prime
ideals of $S/I$. Suppose that $\Sigma_{i=1}^{s}P_{i}=m$, $bigsize(I)=2$ and
$q(I)>1$. Then $\operatorname{depth}_{S}\ I=2+q(I)>3$ if and only if the graph
associated to $I$ is a concatenation of several graphs on two vertices having
no edges.
###### Proof.
The necessity follows by applying the above proposition recursively, and the
sufficiency by applying Lemma 2.5 recursively.
###### Corollary 2.9.
Let $I=\cap_{i=1}^{s}P_{i}$, $I^{\prime}=\cap_{i=1}^{s}P^{\prime}_{i}$ be the
intersection of the minimal monomial prime ideals of $S/I$, respectively
$S/I^{\prime}$. Suppose that
$\Sigma_{i=1}^{s}P_{i}=m=\Sigma_{i=1}^{s}P^{\prime}_{i}$,
$bigsize(I)=bigsize(I^{\prime})=2$ and $q(I)=q(I^{\prime})$. If the graphs
associated to $I$, respectively $I^{\prime}$ coincide, then
$\operatorname{depth}_{S}\ I=\operatorname{depth}_{S}\ I^{\prime}$.
###### Proof.
By the above corollary $\operatorname{depth}_{S}\ I>3$ and
$\operatorname{depth}_{S}\ I^{\prime}>3$ hold if and only if the graphs
$\Gamma(I)$, $\Gamma(I^{\prime})$ are concatenations of several graphs on two
vertices having no edges. Since $\Gamma(I)=\Gamma(I^{\prime})$ we get that
$\operatorname{depth}_{S}\ I>3$ if and only if $\operatorname{depth}_{S}\
I^{\prime}>3$. But Lemmas 1.4, 1.5 say that in this case
$\operatorname{depth}_{S}\ I=2+q(I)=2+q(I^{\prime})=\operatorname{depth}_{S}\
I^{\prime}$. Note that $\operatorname{depth}_{S}\ I=2$ holds if and only if
$\Gamma(I)=\Gamma(I^{\prime})$ is a join graph, which happens if and only if
$\operatorname{depth}_{S}\ I^{\prime}=2$. Then necessarily
$\operatorname{depth}_{S}\ I=\operatorname{depth}_{S}\ I^{\prime}$ also in the
remaining case $\operatorname{depth}_{S}\ I=3$.
###### Theorem 2.10.
Let $I$ be a monomial squarefree ideal of $S$ such that the sum of every
three different minimal prime ideals of $I$ is the maximal ideal of $S$, or
more generally a constant ideal of $S$. Then the depth of $I$ does not depend
on the characteristic of $K$.
###### Proof.
It is enough to suppose $\Sigma_{i=1}^{s}P_{i}=m$ and $bigsize(I)=2$,
$size(I)=1$ by Corollary 1.7. By Lemmas 1.4, 1.5 (see also Remark 2.4)
$\operatorname{depth}_{S}\ I=2$ if and only if the graph $\Gamma(I)$
associated to $I$ is a join graph which is a combinatorial characterization
and so does not depend on $p=$char $K$. By Corollary 2.8
$\operatorname{depth}_{S}\ I=2+q(I)>3$ if and only if $q(I)>1$ and $\Gamma(I)$
is a concatenation of several graphs on two vertices having no edges, the
exact value of $\operatorname{depth}_{S}\ I$ being given by $q(I)$. Thus again
$\operatorname{depth}_{S}\ I$ does not depend on $p$. Finally,
$\operatorname{depth}_{S}\ I=3$ happens exactly when we are in none of the above
cases, and so it does not depend on $p$ either.
###### Remark 2.11.
The above theorem fails if only the sum of every four minimal prime ideals of
$I$ is required to be the maximal ideal of $S$. [2, Examples 1.3 ii)] says that the Stanley-
Reisner ideal $I$ of the simplicial complex associated to the canonical
triangulation of the real projective plane has $bigsize(I)=3$, $size(I)=2$ and
$\operatorname{depth}_{S}\ I=4$ if char $K\not=2$, otherwise
$\operatorname{depth}_{S}\ I=3$.
## 3\. Stanley’s Conjecture for monomial squarefree ideals of bigsize $2$
The case when $\operatorname{depth}_{S}I>1+bigsize(I)$ is unusually large, and
it is hard to check Stanley’s Conjecture in this case. Next we construct
such examples, where Lemma 2.5 and Proposition 2.7 prove very useful.
###### Example 3.1.
Let $\Gamma_{1}$ be the graph given on $\\{1,2,5\\}$ by
$E(\Gamma_{1})=\\{\\{15\\}\\}$ and $\Gamma_{2}$ be the graph given on
$\\{3,4,5\\}$ by $E(\Gamma_{2})=\\{\\{35\\}\\}$. Suppose that $I_{1}=P_{1}\cap
P_{2}\cap P_{5}$ and $I_{2}=P_{3}\cap P_{4}\cap P_{5}$ are irredundant
intersections of monomial prime ideals of $S$ with $q(I_{1})>1$, $q(I_{2})>1$,
$bigsize(I_{1})=bigsize(I_{2})=2$. Then $\operatorname{depth}_{S}\
I_{1}=2+\operatorname{min}\\{\dim S/(P_{1}+P_{2}),\dim S/(P_{2}+P_{5})\\}\geq
2+q(I_{1})>3$ by [5] (see Remark 2.6), since $q(I_{1})>1$. Similarly, $\Gamma_{2}$
is the graph associated to $I_{2}$ and $\operatorname{depth}_{S}I_{2}>3$. Let
$\Gamma$ be the concatenation in $\\{5\\}$ of $\Gamma_{1}$ and $\Gamma_{2}$.
If $I=I_{1}\cap I_{2}$ is an irredundant intersection of these five prime
ideals and $q(I)>1$, $bigsize(I)=2$ then $\Gamma$ is the graph associated to
$I$ and $\operatorname{depth}_{S}I>3$ by Lemma 2.5. This is the graph from the
Example 2.2.
The above example is not so bad, because this case can be handled by our
Proposition 2.3; that is, there exists a ”good” main prime $P_{5}$. Are there
ideals $I$ for which there exists no ”good” main prime, that is, whose
associated graph does not have the triangle property? Next we construct
such a bad example. First we will see how its graph should look.
###### Example 3.2.
Let $\Gamma_{1}$ be the graph constructed above on $[5]$ and $\Gamma_{2}$ be
the graph given on $\\{1,6\\}$ with $E(\Gamma_{2})=\emptyset$. Let $\Gamma$ be
the concatenation in $\\{1\\}$ of $\Gamma_{1}$ and $\Gamma_{2}$. Below is the
picture of $\Gamma$; clearly it does not satisfy the triangle
property. If we show that $\Gamma$ is the graph associated to a monomial
squarefree ideal $I$ of $S$ with $bigsize(I)=2$ and $q(I)>1$ then we will have
$\operatorname{depth}_{S}I>3$ by Lemma 2.5. This is done in the next example.
(Figure: the graph $\Gamma$ on the vertices $1,\ldots,6$.)
###### Example 3.3.
Let $n=12$, $P_{1}=(x_{1},x_{4},x_{5},x_{6},x_{9},\ldots,x_{12})$,
$P_{2}=(x_{1},x_{4},\ldots,x_{10})$,
$P_{3}=(x_{1},x_{2},x_{3},x_{7},x_{8},\ldots,x_{12})$,
$P_{4}=(x_{1},x_{2},x_{3},x_{6},x_{7},x_{8},x_{11},x_{12})$,
$P_{5}=(x_{1},\ldots,x_{8})$, $P_{6}=(x_{2},\ldots,x_{6},x_{9},\ldots,x_{12})$
and $I=\cap_{i=1}^{6}P_{i}$. We have $P_{1}+P_{4}=P_{1}+P_{5}=P_{1}+P_{3}=$
$P_{2}+P_{3}=P_{2}+P_{4}=P_{2}+P_{6}=P_{3}+P_{5}=P_{3}+P_{6}=P_{4}+P_{6}=P_{5}+P_{6}=m$
and $P_{1}+P_{2}=m\setminus\\{x_{2},x_{3}\\}$,
$P_{1}+P_{6}=m\setminus\\{x_{7},x_{8}\\}$,
$P_{2}+P_{5}=m\setminus\\{x_{11},x_{12}\\}$,
$P_{3}+P_{4}=m\setminus\\{x_{4},x_{5}\\}$,
$P_{4}+P_{5}=m\setminus\\{x_{9},x_{10}\\}$. Clearly, $bigsize(I)=2=q(I)$ and
the graph $\Gamma$ associated to $I$ is the graph constructed in Example 3.2.
We have $\operatorname{depth}_{S}\ I=2+q(I)=4$ by Lemma 2.5. Let
$S^{\prime}=K[x_{2},\ldots,x_{12}]$ and $P^{\prime}_{i}=P_{i}\cap S^{\prime}$,
$I^{\prime}=I\cap S^{\prime}$. We have
$I^{\prime}=\cap_{i=1}^{5}P^{\prime}_{i}$ because $P^{\prime}_{1}\subset
P^{\prime}_{6}$. The graph associated to $I^{\prime}$ is in fact $\Gamma_{1}$
from the above example and has the triangle property. Then by Proposition 2.3
we get $\operatorname{sdepth}_{S^{\prime}}\
I^{\prime}\geq\operatorname{depth}_{S^{\prime}}\ I^{\prime}=2+q(I^{\prime})=4$
because $q(I^{\prime})=q(I)$. Using the decomposition $I=I^{\prime}\oplus
x_{1}(I:x_{1})$ as linear spaces we get
$\operatorname{sdepth}_{S}I\geq\operatorname{min}\\{\operatorname{sdepth}_{S^{\prime}}I^{\prime},\operatorname{sdepth}_{S}(I:x_{1})\\}\geq\operatorname{min}\\{4,\operatorname{sdepth}_{S}P_{6}\\}=$
$\operatorname{min}\\{4,\lceil\frac{9}{2}\rceil+3\\}=4=\operatorname{depth}_{S}\
I.$
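The incidence claims in Example 3.3 are finite set computations, so they can be sanity-checked by a short script (our own illustration; each prime is represented by the index set of its variables, and $m$ by $\\{1,\ldots,12\\}$):

```python
from itertools import combinations

# Transcription of Example 3.3: P_i as the index set of its variables.
P = {
    1: {1, 4, 5, 6, 9, 10, 11, 12},
    2: {1, 4, 5, 6, 7, 8, 9, 10},
    3: {1, 2, 3, 7, 8, 9, 10, 11, 12},
    4: {1, 2, 3, 6, 7, 8, 11, 12},
    5: {1, 2, 3, 4, 5, 6, 7, 8},
    6: {2, 3, 4, 5, 6, 9, 10, 11, 12},
}
m = set(range(1, 13))

# Pairs whose sum is NOT the maximal ideal (exactly the five listed above).
small_pairs = {frozenset(p) for p in combinations(P, 2)
               if P[p[0]] | P[p[1]] != m}

# Every sum of three primes is m, consistent with bigsize(I) = 2.
all_triples_full = all(P[a] | P[b] | P[c] == m for a, b, c in combinations(P, 3))
```

Running this confirms that the exceptional pairs are exactly $\\{1,2\\}$, $\\{1,6\\}$, $\\{2,5\\}$, $\\{3,4\\}$, $\\{4,5\\}$, as stated.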
This gives us the idea for handling such bad examples in what follows.
###### Proposition 3.4.
Let $I=\cap_{i=1}^{s}P_{i}$ be the intersection of the minimal monomial prime
ideals of $S/I$. Suppose that $\Sigma_{i=1}^{s}P_{i}=m$, $bigsize(I)=2$ and
$\operatorname{depth}_{S}\ I>3$. Then $\operatorname{sdepth}_{S}\
I\geq\operatorname{depth}_{S}\ I$.
###### Proof.
Apply induction on $s$. The cases $s\leq 4$ are given in [9], [5], [8]. Assume
that $s>4$ and let $\Gamma$ be the graph of $I$. By Proposition 2.7 we may
suppose after renumbering $(P_{i})$ that there exists $1<r\leq s-2$ such that
$\Gamma$ is the concatenation in $\\{r\\}$ of the graphs $\Gamma_{1}$,
$\Gamma_{2}$ associated to $I_{1}=\cap_{i=1}^{r}P_{i}$, respectively
$I_{2}=\cap_{i=r}^{s}P_{i}$. Note that if $r=2$ or $s-r=2$ then
$bigsize(I_{i})$ may fail to be $2$, but this causes no trouble, since we need
bigsize $2$ only in order to apply Proposition 2.7. From Lemma 2.5 it follows that
$\operatorname{depth}_{S}\ I=\operatorname{min}\\{\operatorname{depth}_{S}\
I_{1},\operatorname{depth}_{S}\ I_{2}\\}$ and so $\operatorname{depth}_{S}\
I_{i}>3$ for $i=1,2$. Note that $P_{r}\setminus P_{j}\subset P_{i}\setminus
P_{j}=m\setminus P_{j}$ for all $1\leq i<r$, $r<j\leq s$. After renumbering
variables we may suppose that $\\{x_{1},\ldots,x_{e}\\}$, $1\leq e<n$, are all the
variables of $\cup_{j=r+1}^{s}(P_{r}\setminus P_{j})$. As we noticed, they are
contained in every $P_{i}$, $1\leq i<r$. Set
$S^{\prime}=K[x_{e+1},\ldots,x_{n}]$, $P^{\prime}_{i}=P_{i}\cap S^{\prime}$,
$I^{\prime}=I\cap S^{\prime}$. Then $P^{\prime}_{r}\subset P^{\prime}_{j}$ for
all $r<j\leq s$ and we get $I^{\prime}=\cap_{i=1}^{r}P^{\prime}_{i}$.
Moreover, since $\\{x_{1},\ldots,x_{e}\\}$ is contained in every $P_{i}$, $1\leq
i\leq r$, we see that the ”relations” between these prime ideals are preserved after
intersection with $S^{\prime}$, and the graph $\Gamma^{\prime}$ of $I^{\prime}$
is in fact $\Gamma_{1}$. Moreover, $q(I^{\prime})=q(I_{1})$ and
$bigsize(I^{\prime})=bigsize(I_{1})$. Then $\operatorname{depth}_{S^{\prime}}\
I^{\prime}=\operatorname{depth}_{S}\ I_{1}$ by Corollary 2.9, the case $r=2$
being trivial. Using the induction hypothesis on $s$ we get
$\operatorname{sdepth}_{S^{\prime}}\
I^{\prime}\geq\operatorname{depth}_{S^{\prime}}\ I^{\prime}$. We have the
decomposition $I=I^{\prime}\oplus((x_{1},\ldots,x_{e})\cap I)$ as linear
spaces and it follows
$\operatorname{sdepth}_{S}\
I\geq\operatorname{min}\\{\operatorname{sdepth}_{S^{\prime}}\
I^{\prime},\operatorname{sdepth}_{S}\ ((x_{1},\ldots,x_{e})\cap I)\\}.$
But $J=(x_{1},\ldots,x_{e})\cap
I=\cap_{i=r+1}^{s}P^{\prime}_{i}\cap(x_{1},\ldots,x_{e})$ because
$(x_{1},\ldots,x_{e})\subset P_{i}$ for $1\leq i\leq r$ and the decomposition
is irredundant since if $(x_{1},\ldots,x_{e})\subset P_{j}$ then $P_{r}\subset
P_{j}$ which is false. Note that $q(J)=q(I_{2})$ and the graph associated to
$J$ coincides with $\Gamma_{2}$. Again by Corollary 2.9
$\operatorname{depth}_{S}\ J=\operatorname{depth}_{S}\ I_{2}>3$, the case
$r=2$ being trivial. Using the induction hypothesis on $s$ we get
$\operatorname{sdepth}_{S}\ J\geq\operatorname{depth}_{S}\ J$ and so
$\operatorname{sdepth}_{S}\
I\geq\operatorname{min}\\{\operatorname{sdepth}_{S^{\prime}}\
I^{\prime},\operatorname{sdepth}_{S}\
J\\}\geq\operatorname{min}\\{\operatorname{depth}_{S}\
I_{1},\operatorname{depth}_{S}\ I_{2}\\}=\operatorname{depth}_{S}\ I.$
###### Theorem 3.5.
Let $I$ be a monomial squarefree ideal of $S$ such that the sum of every three
different minimal prime ideals of $I$ is the maximal ideal of $S$, or more
generally a constant ideal $J$ of $S$. Then $\operatorname{sdepth}_{S}\
I\geq\operatorname{depth}_{S}\ I$.
The proof follows from Theorem 1.8 and the above proposition, the reduction to
the case $J=m$ being given by [1, Lemma 3.6].
## References
* [1] J. Herzog, M. Vladoiu, X. Zheng, How to compute the Stanley depth of a monomial ideal, J. Algebra, 322 (2009), 3151-3169.
* [2] J. Herzog, D. Popescu, M. Vladoiu, Stanley depth and size of a monomial ideal, arXiv:AC/1011.6462v1, 2010, to appear in Proceed. AMS.
* [3] M. Ishaq, Upper bounds for the Stanley depth, to appear in Comm. Algebra, arXiv:AC/1003.3471.
* [4] G. Lyubeznik, On the arithmetic rank of monomial ideals, J. Algebra 112 (1988), 86-89.
* [5] A. Popescu, Special Stanley Decompositions, Bull. Math. Soc. Sc. Math. Roumanie, 53(101), no 4 (2010), arXiv:AC/1008.3680.
* [6] D. Popescu, Stanley depth of multigraded modules, J. Algebra 321(2009), 2782-2797.
* [7] D. Popescu, An inequality between depth and Stanley depth, Bull. Math. Soc. Sc. Math. Roumanie 52(100), (2009), 377-382, arXiv:AC/0905.4597v2.
* [8] D. Popescu, Stanley conjecture on intersections of four monomial prime ideals, arXiv:AC/1009.5646.
* [9] D. Popescu, I. Qureshi, Computing the Stanley depth, J. Algebra, 323 (2010), 2943-2959.
* [10] A. Rauf, Depth and Stanley depth of multigraded modules, Comm. Algebra, 38 (2010), 773-784.
* [11] R.P. Stanley, Linear Diophantine equations and local cohomology, Invent. Math. 68 (1982) 175-193.
* [12] R. H. Villarreal, Monomial Algebras, Marcel Dekker Inc., New York, 2001.
(arXiv:1104.5596, Dorin Popescu, Public Domain)
1104.5625
# Volume growth and the Cheeger Isoperimetric Constant of submanifolds in
manifolds which possess a pole
Vicent Gimeno# Departament de Matemàtiques-INIT, Universitat Jaume I,
Castelló, Spain. gimenov@guest.uji.es and Vicente Palmer Departament de
Matemàtiques-INIT, Universitat Jaume I, Castelló, Spain. palmer@mat.uji.es
###### Abstract.
We obtain an estimate of the Cheeger isoperimetric constant in terms of the
volume growth for a properly immersed submanifold in a Riemannian manifold
which possesses at least one pole and has sectional curvature bounded from above.
###### Key words and phrases:
Cheeger isoperimetric constant, volume growth, submanifold, Chern-Osserman
inequality.
###### 2000 Mathematics Subject Classification:
Primary 53C20, 53C42
# Supported by the Fundació Caixa Castelló-Bancaixa Grants P1.1B2006-34 and
P1.1B2009-14
* Supported by MICINN grant No. MTM2010-21206-C02-02.
## 1\. Introduction
The Cheeger isoperimetric constant $\mathcal{I}_{\infty}(M)$ of a noncompact
Riemannian manifold of dimension $n\geq 2$ is defined as (see [3], [4]):
(1.1)
$\mathcal{I}_{\infty}(M):=\inf\limits_{\Omega}\bigg{\\{}\frac{\operatorname{Vol}(\partial\Omega)}{\operatorname{Vol}(\Omega)}\bigg{\\}}$
where $\Omega$ ranges over open submanifolds of $M$ possessing compact closure
and smooth boundary, $\operatorname{Vol}(\partial\Omega)$ denotes the
$(n-1)$-dimensional volume of the boundary $\partial\Omega$, and
$\operatorname{Vol}(\Omega)$ denotes the $n$-dimensional volume of $\Omega$.
We are going to focus on obtaining sharp upper and lower bounds for the
Cheeger isoperimetric constant $\mathcal{I}_{\infty}(P)$ of a complete
submanifold $P$ with controlled mean curvature, properly immersed in an
ambient manifold $N$ with sectional curvatures bounded from above which
possesses at least one pole.
As a consequence of these upper and lower bounds, and as a first glimpse of
our main theorems (Theorems 3.2 and 3.3 in section §.3), we present the
following results, which constitute the particular case of those theorems
obtained by considering a complete minimal submanifold properly immersed in a
Cartan-Hadamard manifold. (For compact minimal submanifolds of a Riemannian
manifold satisfying other geometric restrictions, we refer to the work [11],
where certain isoperimetric inequalities involving such submanifolds have been
proved.)
###### Theorem A.
Let $P^{m}$ be a complete and minimal submanifold properly immersed in a
Cartan-Hadamard manifold $N$ with sectional curvatures bounded from above as
$K_{N}\leq b\leq 0$, and suppose that
$\operatorname{Sup}_{t>0}(\frac{\operatorname{Vol}(P\cap
B^{N}_{t})}{\operatorname{Vol}(B^{m,b}_{t})})<\infty$, where $B^{N}_{t}$ is
the geodesic $t$-ball in the ambient manifold $N$ and $B^{m,b}_{t}$ denotes
the geodesic $t$-ball in the real space form of constant sectional curvature
$\mathbb{K}^{m}(b)$.
Then
(1.2) $\mathcal{I}_{\infty}(P)\leq(m-1)\sqrt{-b}$
###### Theorem B.
Let $P^{m}$ be a complete and minimal submanifold properly immersed in a
Cartan-Hadamard manifold $N$ with sectional curvatures bounded from above as
$K_{N}\leq b\leq 0$. Then
(1.3) $\mathcal{I}_{\infty}(P)\geq(m-1)\sqrt{-b}$
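The hyperbolic plane itself illustrates why $(m-1)\sqrt{-b}$ is the sharp value: in $\mathbb{K}^{2}(-1)$, geodesic balls have boundary length $2\pi\sinh r$ and area $2\pi(\cosh r-1)$, so their quotient decreases to $1=(m-1)\sqrt{-b}$. A quick numerical check of this classical fact (our own illustration, not part of the proofs):

```python
import math

# Quotient Vol(boundary)/Vol(ball) for geodesic r-balls in the hyperbolic
# plane K^2(-1); it equals coth(r/2) and tends to 1 as r grows.
def ball_quotient(r):
    return (2 * math.pi * math.sinh(r)) / (2 * math.pi * (math.cosh(r) - 1))
```

The quotient is strictly decreasing in $r$ and approaches $1$ from above.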
Lower bounds for $\mathcal{I}_{\infty}(P)$ come from a direct application of
the divergence theorem to the laplacian of the extrinsic distance defined on
the submanifold using the distance in the ambient manifold, following the
arguments of Proposition 3 in [19] and of Theorem 6.4 in [4].
On the other hand, upper bounds have been obtained assuming that the
(extrinsic) volume growth of the submanifold is bounded from above by a finite
quantity. As we shall see in the corollaries, when the submanifold is
minimally immersed in the Euclidean space, or when we are dealing with minimal
surfaces in the Euclidean or the Hyperbolic space, this crucial fact relates
Cheeger’s constant $\mathcal{I}_{\infty}(P)$ with the total extrinsic
curvature of the submanifold $\int_{P}\|B^{P}\|^{m}d\sigma$, in the sense that
finiteness of this total extrinsic curvature implies upper bounds for
Cheeger’s constant, using the results in [1], [5] and [7].
We are going to see that obtaining lower and upper bounds for
$\mathcal{I}_{\infty}(P)$ amounts to obtaining comparisons for the Laplacian
of the extrinsic distance defined on the submanifold; the techniques used to
obtain these bounds are based on the Hessian analysis of this restricted
distance function. When the extrinsic curvature of the submanifold is bounded
(from above or from below), this analysis focuses on the relation, given in
[9], between the Hessian of this function and these (extrinsic) curvature
bounds, providing comparison results for the Hessian and the Laplacian of the
distance function on the submanifold.
As models in these comparisons we use the corresponding values of these
operators computed for the intrinsic distance of a rotationally symmetric
space whose sectional curvatures bound the corresponding curvatures of the
ambient manifold.
### 1.1. Outline of the paper
In section §.2 we present the basic definitions and facts about the extrinsic
distance restricted to a submanifold, and about the rotationally symmetric
spaces used as comparison models. We also present the basic results from the
Hessian comparison theory of the restricted distance function that we are
going to use, finishing the section with a description of the isoperimetric
context where the results hold. Section §.3 is devoted to the statement and
proof of the two main Theorems 3.2 and 3.3, and in the final section §.4 three
corollaries are stated and proved.
## 2\. Preliminaries
### 2.1. The extrinsic distance
We assume throughout the paper that $P^{m}$ is a complete, non-compact,
properly immersed, $m$-dimensional submanifold in a complete Riemannian
manifold $N^{n}$ which possesses at least one pole $o\in N$. Recall that a pole
is a point $o$ such that the exponential map
$\exp_{o}\colon T_{o}N^{n}\to N^{n}$
is a diffeomorphism. For every $x\in N^{n}\setminus\\{o\\}$ we define
$r(x)=r_{o}(x)=\operatorname{dist}_{N}(o,x)$, and this distance is realized by
the length of a unique geodesic from $o$ to $x$, which is the radial geodesic
from $o$. We also denote by $r$ the restriction
$r|_{P}:P\to\mathbb{R}_{+}\cup\\{0\\}$. This restriction is called the
extrinsic distance function from $o$ in $P^{m}$. The gradients of $r$ in $N$
and $P$ are denoted by $\operatorname{\nabla}^{N}r$ and
$\operatorname{\nabla}^{P}r$, respectively. Let us remark that
$\operatorname{\nabla}^{P}r(x)$ is just the tangential component in $P$ of
$\operatorname{\nabla}^{N}r(x)$, for all $x\in P$. Then we have the following
basic relation:
(2.1)
$\nabla^{N}r=\operatorname{\nabla}^{P}r+(\operatorname{\nabla}^{N}r)^{\bot},$
where $(\operatorname{\nabla}^{N}r)^{\bot}(x)=\nabla^{\bot}r(x)$ is
perpendicular to $T_{x}P$ for all $x\in P$.
###### Definition 2.1.
Given a connected and complete submanifold $P^{m}$ properly immersed in a
manifold $N^{n}$ with a pole $o\in N$, we denote the extrinsic metric balls of
radius $t>0$ and center $o\in N$ by $D_{t}(o)$. They are defined as the
intersection
$B^{N}_{t}(o)\cap P=\\{x\in P\colon r(x)<t\\},$
where $B^{N}_{t}(o)$ denotes the open geodesic ball of radius $t$ centered at
the pole $o$ in $N^{n}$.
###### Remark a.
The extrinsic domains $D_{t}(o)$ are precompact sets, (because we assume in
the definition above that the submanifold $P$ is properly immersed), with
smooth boundary $\partial D_{t}(o)$. The assumption on the smoothness of
$\partial D_{t}(o)$ is not restrictive. Indeed, the distance function $r$ is
smooth in $N\setminus\\{o\\}$ since $N$ is assumed to possess a pole $o\in N$.
Hence the restriction $r|_{P}$ is smooth in $P$ and consequently the radii $t$
that produce smooth boundaries $\partial D_{t}(o)$ are dense in $\mathbb{R}$
by Sard’s theorem and the Regular Level Set Theorem.
We now present the curvature restrictions which constitute the geometric
framework of our investigations.
###### Definition 2.2.
Let $o$ be a point in a Riemannian manifold $N$ and let $x\in N-\\{o\\}$. The
sectional curvature $K_{N}(\sigma_{x})$ of the two-plane $\sigma_{x}\in
T_{x}N$ is then called an $o$-radial sectional curvature of $N$ at $x$ if
$\sigma_{x}$ contains the tangent vector to a minimal geodesic from $o$ to
$x$. We denote these curvatures by $K_{o,N}(\sigma_{x})$.
In order to control the mean curvatures $H_{P}(x)$ of $P^{m}$ at distance $r$
from $o$ in $N^{n}$ we introduce the following definition:
###### Definition 2.3.
The $o$-radial mean curvature function for $P$ in $N$ is defined in terms of
the inner product of $H_{P}$ with the $N$-gradient of the distance function
$r(x)$ as follows:
$\mathcal{C}(x)=-\langle\nabla^{N}r(x),H_{P}(x)\rangle\quad{\textrm{for
all}}\quad x\in P\,\,.$
### 2.2. Model Spaces
The model spaces $M^{m}_{w}$ are rotationally symmetric spaces which serve as
comparison controllers for the radial sectional curvatures of the ambient
space $N^{n}$.
###### Definition 2.4 (See [10], [9]).
A $w-$model $M_{w}^{m}$ is a smooth warped product with base
$B^{1}=[\,0,\,R[\,\,\subset\,\mathbb{R}$ (where $\,0<R\leq\infty$ ), fiber
$F^{m-1}=S^{m-1}_{1}$ (i.e. the unit $(m-1)-$sphere with standard metric), and
warping function $w:\,[\,0,\,R[\,\to\mathbb{R}_{+}\cup\\{0\\}\,$ with
$w(0)=0$, $w^{\prime}(0)=1$, and $w(r)>0\,$ for all $\,r>0\,$. The point
$o_{w}=\pi^{-1}(0)$, where $\pi$ denotes the projection onto $B^{1}$, is
called the center point of the model space. If $R=\infty$, then $o_{w}$ is a
pole of $M_{w}^{m}$.
###### Remark b.
The simply connected space forms $\mathbb{K}^{n}(b)$ of constant curvature $b$
can be constructed as $w-$models $\mathbb{K}^{n}(b)=M^{n}_{w_{b}}$ with any
given point as center point using the warping functions
(2.2) $w_{b}(r)=\begin{cases}\frac{1}{\sqrt{b}}\sin(\sqrt{b}\,r)&\text{if
$b>0$}\\\ \phantom{\frac{1}{\sqrt{b}}}r&\text{if $b=0$}\\\
\frac{1}{\sqrt{-b}}\sinh(\sqrt{-b}\,r)&\text{if $b<0$}\quad.\end{cases}$
Note that for $b>0$ the function $w_{b}(r)$ admits a smooth extension to
$r=\pi/\sqrt{b}$. For $\,b\leq 0\,$ any center point is a pole.
###### Remark c.
The sectional curvatures of the model spaces $K_{o_{w},M_{w}}$ in the radial
directions from the center point are determined by the radial function
$K_{o_{w},M_{w}}(\sigma_{x})\,=\,K_{w}(r)\,=\,-\frac{w^{\prime\prime}(r)}{w(r)}$,
(see [9], [10], [16]). Moreover, the mean curvature of the distance sphere of
radius $r$ from the center point is
(2.3) $\eta_{w}(r)=\frac{w^{\prime}(r)}{w(r)}=\frac{d}{dr}\ln(w(r))\quad.$
Hence, the sectional curvature of $\mathbb{K}^{n}(b)$ is given by
$-\frac{w_{b}^{\prime\prime}(r)}{w_{b}(r)}=b$ and the mean curvature of the
geodesic $r-$sphere $S^{w_{b}}_{r}=S^{b,n-1}_{r}$ in the real space form
$\mathbb{K}^{n}(b)$, ’pointed inward’ is (see [17]):
$\eta_{w_{b}}=h_{b}(t)=\left\\{\begin{array}[]{l}\sqrt{b}\cot\sqrt{b}t\,\,\text{
if }\,\,b>0\\\ 1/t\,\,\text{ if }\,\,b=0\\\ \sqrt{-b}\coth\sqrt{-b}t\,\,\text{
if }\,\,b<0\end{array}\right.$
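As a sanity check (our own, using only the definitions above), one can verify numerically that $w_{b}^{\prime}/w_{b}$ reproduces the closed forms $h_{b}$:

```python
import math

# Central-difference check that eta_{w_b}(r) = w_b'(r)/w_b(r) matches the
# closed forms h_b listed above (coth for b = -1, 1/t for b = 0, cot for b = 1).
def eta(w, r, step=1e-6):
    return (w(r + step) - w(r - step)) / (2 * step) / w(r)
```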
In particular, in [14] we introduced, for any given warping function
$\,w(r)\,$, the isoperimetric quotient function $\,q_{w}(r)\,$ for the
corresponding $w-$model space $\,M_{w}^{m}\,$ as follows:
(2.4)
$q_{w}(r)\,=\,\frac{\operatorname{Vol}(B_{r}^{w})}{\operatorname{Vol}(S_{r}^{w})}\,=\,\frac{\int_{0}^{r}\,w^{m-1}(t)\,dt}{w^{m-1}(r)}\quad.$
where $B^{w}_{r}$ and $S^{w}_{r}$ denote respectively the metric $r-$ball and
the metric $r-$sphere in $M^{m}_{w}$.
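The quotient $q_{w}$ is easy to approximate numerically; for instance, $w(r)=r$ (the Euclidean model) gives $q_{w}(r)=r/m$. A small sketch of (2.4) (our own, with the integral handled by the trapezoidal rule):

```python
# Isoperimetric quotient (2.4): q_w(r) = (int_0^r w(t)^{m-1} dt) / w(r)^{m-1},
# with the integral approximated by the trapezoidal rule.
def q_w(w, r, m, n_steps=10000):
    dt = r / n_steps
    integral = sum((w(i * dt) ** (m - 1) + w((i + 1) * dt) ** (m - 1)) / 2 * dt
                   for i in range(n_steps))
    return integral / w(r) ** (m - 1)
```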
### 2.3. Hessian comparison analysis of the extrinsic distance
We are going to give here a corollary of the Hessian comparison Theorem A in
[9], which concerns the bounds for the laplacian of a radial function defined
on the submanifold (see [12] and [18] for detailed computations).
###### Theorem 2.5.
Let $N^{n}$ be a manifold with a pole $o$, let $M_{w}^{m}$ denote a $w-$model
with center $o_{w}$. Let $P^{m}$ be a properly immersed submanifold in $N$.
Then we have the following dual Laplacian inequalities for modified distance
functions $f\circ r:P\longrightarrow\mathbb{R}$:
Suppose that every $o$-radial sectional curvature at $x\in N-\\{o\\}$ is
bounded by the $o_{w}$-radial sectional curvatures in $M_{w}^{m}$ as follows:
(2.5)
$\mathcal{K}(\sigma(x))\,=\,K_{o,N}(\sigma_{x})\leq-\frac{w^{\prime\prime}(r)}{w(r)}\quad.$
Then we have for every smooth function $f(r)$ with $f^{\prime}(r)\leq
0\,\,\textrm{for all}\,\,\,r$, (respectively $f^{\prime}(r)\geq
0\,\,\textrm{for all}\,\,\,r$):
(2.6) $\displaystyle\Delta^{P}(f\circ r)\,\leq(\geq)$
$\displaystyle\left(\,f^{\prime\prime}(r)-f^{\prime}(r)\eta_{w}(r)\,\right)\|\nabla^{P}r\|^{2}$
$\displaystyle+mf^{\prime}(r)\left(\,\eta_{w}(r)+\langle\,\nabla^{N}r,\,H_{P}\,\rangle\,\right)\quad,$
where $H_{P}$ denotes the mean curvature vector of $P$ in $N$.
### 2.4. The Isoperimetric Comparison space
###### Definition 2.6 ( [15]).
Given the smooth functions $w:\mathbb{R}_{+}\longrightarrow\mathbb{R}_{+}$ and
$h:\mathbb{R}_{+}\longrightarrow\mathbb{R}$ with $w(0)=0$, $w^{\prime}(0)=1$
and $-\infty<h(0)<\infty$, the isoperimetric comparison space $M^{m}_{W}$ is
the $W-$model space with base interval $B\,=\,[\,0,R\,]$ and warping function
$W(r)$ defined by the following differential equation:
(2.7) $\frac{W^{\prime}(r)}{W(r)}\,=\,\eta_{w}(r)\,-\,\frac{m}{m-1}h(r)\quad,$
and the following boundary condition:
(2.8) $W^{\prime}(0)=1\quad.$
It is straightforward to see, using equation (2.8), that $W(r)$ is only $0$ at
$r=0$, so $M^{m}_{W}$ has a well defined pole $o_{W}$ at $r=0$. Moreover,
$W(r)>0$ for all $r>0$.
Note that when $h(r)=0\,\,\,{\textrm{for all \,\,}}r$, then
$W(r)=w(r)\,\,\,{\textrm{for all \,}}r$, so $M^{m}_{W}$ becomes a model space
with warping function $w$, $M^{m}_{w}$.
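Integrating (2.7) gives the closed form $W(r)=w(r)\exp\big(-\frac{m}{m-1}\int_{0}^{r}h(t)\,dt\big)$. A numerical sketch of this solution (our own, under the definitions above):

```python
import math

# Solution of (2.7): W(r) = w(r) * exp(-(m/(m-1)) * int_0^r h(t) dt),
# with the integral approximated by the trapezoidal rule.
def W(r, w, h, m, n_steps=10000):
    dt = r / n_steps
    integral = sum((h(i * dt) + h((i + 1) * dt)) / 2 * dt for i in range(n_steps))
    return w(r) * math.exp(-m / (m - 1) * integral)
```

With $h\equiv 0$ this returns $w(r)$, recovering the remark above.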
###### Definition 2.7.
The model space $M_{W}^{m}$ is $w-$balanced from above (with respect to the
intermediary model space $M_{w}^{m}$) iff the following holds for all
$r\in\,[\,0,R\,]$:
(2.9) $\displaystyle\eta_{w}(r)$ $\displaystyle\geq 0$
$\displaystyle\eta^{\prime}_{W}(r)$ $\displaystyle\leq 0\,\,\forall r\,\quad.$
Note that $\eta^{\prime}_{W}(r)\leq 0\,\,\forall r$ is equivalent to the
condition
(2.10) $-(m-1)(\eta^{2}_{w}(r)+K_{w}(r))\,\leq mh^{\prime}(r)\quad.$
###### Definition 2.8.
The model space $M_{W}^{m}\,$ is $w-$balanced from below (with respect to the
intermediary model space $M_{w}^{m}$) iff the following holds for all
$r\in\,[\,0,R\,]$:
(2.11) $q_{W}(r)\left(\eta_{w}(r)-h(r)\right)\,\geq 1/m\quad.$
###### Examples .
We present here a list of examples of isoperimetric comparison spaces and
balance.
1. (1)
Given the functions $w_{b}(r)$ and $h(r)=C\geq\sqrt{-b}\,\,\forall r>0$, let
us consider $\mathbb{K}^{m}(b)=M^{m}_{w_{b}}$ as an intermediary model space
with constant sectional curvature $b<0$. Then, it is straightforward to check
that the model space $M^{m}_{W}$ defined from $w_{b}$ and $h$ as in Definition
2.6 is $w_{b}-$balanced from above, and is not $w_{b}-$balanced from
below.
2. (2)
Let $M^{m}_{w}$ be a model space, with $w(r)=e^{r^{2}}+r-1$. Let us consider
now $h(r)=0\,\,\forall r>0$. In this case, as $h(r)=0$, we have $W(r)=w(r)$, so
the isoperimetric comparison space $M^{m}_{W}$ agrees with its corresponding
intermediary model space $M^{m}_{w}$. Moreover (see [14]),
$q_{w}(r)\eta_{w}(r)\geq\frac{1}{m}\quad,$
so $M^{m}_{w}$ is $w$-balanced from below.
However, it is easy to see that
$\eta_{w}(r)=\frac{2re^{r^{2}}+1}{e^{r^{2}}+r-1}$ is an increasing function
from a given value $r_{0}>0$ on, and hence it does not satisfy the second
inequality in (2.9); thus it is not $w$-balanced from above.
3. (3)
Let $\mathbb{K}^{m}(b)=M^{m}_{w_{b}}$, ($b\leq 0$), be the Euclidean or the
Hyperbolic space, with warping function $w_{b}(r)$. Let us consider
$h(r)=0\,\,\,\forall r$. In this context, these spaces are isoperimetric
comparison spaces with themselves as intermediary spaces, and they satisfy both
balance conditions given in Definitions 2.7 and 2.8 (see [14]).
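The failure of balance from above in Example (2) is easy to see numerically (our own check): $\eta_{w}$ first decreases near $0$ and then increases.

```python
import math

# eta_w(r) = (2r e^{r^2} + 1) / (e^{r^2} + r - 1) for w(r) = e^{r^2} + r - 1,
# as in Example (2); it is not eventually decreasing, so (2.9) fails.
def eta_w(r):
    return (2 * r * math.exp(r ** 2) + 1) / (math.exp(r ** 2) + r - 1)
```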
### 2.5. Comparison Constellations
We now present the precise settings where our main results take place,
introducing the notion of comparison constellations.
###### Definition 2.9.
Let $N^{n}$ denote a Riemannian manifold with a pole $o$ and distance function
$r\,=\,r(x)\,=\,\operatorname{dist}_{N}(o,x)$. Let $P^{m}$ denote a complete
and properly immersed submanifold in $N^{n}$. Suppose the following conditions
are satisfied for all $x\in P^{m}$ with $r(x)\in[\,0,R]\,$:
1. (a)
The $o$-radial sectional curvatures of $N$ are bounded from above by the
$o_{w}$-radial sectional curvatures of the $w-$model space $M_{w}^{m}$:
$\mathcal{K}(\sigma_{x})\,\leq\,-\frac{w^{\prime\prime}(r(x))}{w(r(x))}\quad.$
2. (b)
The $o$-radial mean curvature of $P$ is bounded from above by a smooth radial
function, (the bounding function) $h:\mathbb{R}_{+}\longrightarrow\mathbb{R}$,
($h(0)\in]-\infty,\infty[$):
$\mathcal{C}(x)\leq h(r(x))\quad.$
Let $M_{W}^{m}$ denote the $W$-model with the specific warping function
$W:\pi(M^{m}_{W})\to\mathbb{R}_{+}$ constructed in Definition 2.6 via $w$, and
$h$. Then the triple $\\{N^{n},P^{m},M^{m}_{W}\\}$ is called an isoperimetric
comparison constellation on the interval $[\,0,R]\,$.
###### Examples.
We are going to describe minimal and non-minimal settings.
1. (1)
Minimal submanifolds immersed in an ambient Cartan-Hadamard manifold: let $P$
be a minimal submanifold of a Cartan-Hadamard manifold $N$, with sectional
curvatures bounded above by $b\leq 0$. Let us consider as the bounding
function for the $o$-radial mean curvature of $P$ the function
$h(r)=0\,\,\forall r\geq 0$ and as $w(r)$ the functions $w_{b}(r)$ with $b\leq
0$.
It is straightforward to see that, under these restrictions, $W=w_{b}$ and
hence $M^{m}_{W}=\mathbb{K}^{m}(b)$, so
$\,\\{N^{n},\,P^{m},\,\mathbb{K}^{m}(b)\\}\,$ is an isoperimetric comparison
constellation on the interval $[\,0,R]\,$, for all $R>0$. Here the model space
$M^{m}_{W}=M^{m}_{w_{b}}=\mathbb{K}^{m}(b)$, is $w_{b}$-balanced from above
and from below.
2. (2)
Non-minimal submanifolds immersed in an ambient Cartan-Hadamard manifold. Let
us consider again a Cartan-Hadamard manifold $N$, with sectional curvatures
bounded above by $a\leq 0$. Let $P^{m}$ be a properly immersed submanifold in
$N$ such that
$\mathcal{C}(x)\leq h_{a,b}(r(x))\quad.$
where, fixing $a<b<0$, we define
$h_{a,b}(r)=\frac{m-1}{m}(\eta_{w_{a}}(r)-\eta_{w_{b}}(r))\,\,\forall r>0$.
Then, it is straightforward to check that $W=w_{b}$ and hence,
$M^{m}_{W}=\mathbb{K}^{m}(b)$, so $\\{N^{n},P^{m},M^{m}_{W}\\}$ is an
isoperimetric comparison constellation on the interval $[\,0,R]\,$, for all
$R>0$. Moreover the model space $M^{m}_{W}=M^{m}_{w_{b}}=\mathbb{K}^{m}(b)$ is
$w_{a}$-balanced from above and from below.
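A numerical sanity check for example (2) (a sketch only, assuming the standard space-form warping functions $w_{b}(r)=\sinh(\sqrt{-b}\,r)/\sqrt{-b}$ for $b<0$, so that $\eta_{w_{b}}(r)=\sqrt{-b}\coth(\sqrt{-b}\,r)$) shows that the bounding function $h_{a,b}$ is strictly positive, so these constellations are genuinely non-minimal, and that it tends to $\frac{m-1}{m}(\sqrt{-a}-\sqrt{-b})$ as $r\to\infty$:

```python
import math

def eta_b(b, r):
    # eta_{w_b}(r) = sqrt(-b) * coth(sqrt(-b) * r) for b < 0
    c = math.sqrt(-b)
    return c / math.tanh(c * r)

def h_ab(a, b, m, r):
    # bounding function h_{a,b} = (m-1)/m * (eta_{w_a} - eta_{w_b})
    return (m - 1) / m * (eta_b(a, r) - eta_b(b, r))

a, b, m = -4.0, -1.0, 3
vals = [h_ab(a, b, m, r) for r in (0.25, 0.5, 1.0, 5.0, 20.0)]
limit = (m - 1) / m * (math.sqrt(-a) - math.sqrt(-b))  # here 2/3 * (2 - 1)
```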
## 3\. Main results
Previous to the statement of our main theorems, we find upper bounds for the
isoperimetric quotient defined as the volume of the extrinsic sphere divided
by the volume of the extrinsic ball, in the setting given by the comparison
constellations.
###### Theorem 3.1.
(see [12], [17], [13]) Consider an isoperimetric comparison constellation
$\\{N^{n},P^{m},M^{m}_{W}\\}$. Assume that the isoperimetric comparison space
$\,M^{m}_{W}\,$ is $w$-balanced from below. Then
(3.1) $\frac{\operatorname{Vol}(\partial
D_{t})}{\operatorname{Vol}(D_{t})}\geq\frac{\operatorname{Vol}(S^{W}_{t})}{\operatorname{Vol}(B^{W}_{t})}.$
Furthermore, the function
$f(t)=\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{W}_{t})}$ is
monotone non-decreasing in $t$.
Moreover, if equality holds in (3.1) for some fixed radius $t_{0}>0$, then
$D_{t_{0}}$ is a cone in the ambient space $N^{n}$.
Now, we give the following upper bound for the Cheeger constant of a
submanifold $P$:
###### Theorem 3.2.
Consider an isoperimetric comparison constellation
$\\{N^{n},P^{m},M^{m}_{W}\\}$. Assume that the isoperimetric comparison space
$\,M^{m}_{W}\,$ is $w$-balanced from below. Assume moreover, that
1. (1)
$\operatorname{Sup}_{t>0}(\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{W}_{t})})<\infty$
2. (2)
The limit
$\lim_{t\to\infty}\frac{\operatorname{Vol}(S^{W}_{t})}{\operatorname{Vol}(B^{W}_{t})}$
exists.
Then
(3.2)
$\mathcal{I}_{\infty}(P)\leq\lim_{t\to\infty}\frac{\operatorname{Vol}(S^{W}_{t})}{\operatorname{Vol}(B^{W}_{t})}$
In particular, let $P^{m}$ be a complete and minimal submanifold properly
immersed in a Cartan-Hadamard manifold $N$ with sectional curvatures bounded
from above as $K_{N}\leq b\leq 0$, and suppose that
$\operatorname{Sup}_{t>0}(\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{b,m}_{t})})<\infty$.
Then
(3.3) $\mathcal{I}_{\infty}(P)\leq(m-1)\sqrt{-b}$
###### Proof.
Let us define
(3.4)
$F(t):=\frac{\operatorname{Vol}(D_{t})^{\prime}}{\operatorname{Vol}(D_{t})}-\frac{\operatorname{Vol}(S_{t}^{W})}{\operatorname{Vol}(B_{t}^{W})}=\left[\ln\left(\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{W}_{t})}\right)\right]^{\prime}$
By the co-area formula and Theorem 3.1, $F(t)$ is a nonnegative function.
Moreover, $\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{W}_{t})}$ is
non-decreasing (see [13]).
Integrating between $t_{0}>0$ and $t>t_{0}$:
$\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{W}_{t})}=\frac{\operatorname{Vol}(D_{t_{0}})}{\operatorname{Vol}(B^{W}_{t_{0}})}\,e^{\int_{t_{0}}^{t}F(s)\,ds}$
On the other hand, by hypothesis (1) and the fact that
$\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{W}_{t})}$ is
non-decreasing, we know that
$\lim_{t\to\infty}\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{W}_{t})}=\sup_{t}\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{W}_{t})}<\infty$.
Then, as $F(t)\geq 0\,\,\forall t>0$:
$\int_{t_{0}}^{\infty}F(s)ds<\infty$
and hence there is a monotone increasing sequence $\\{t_{i}\\}_{i=0}^{\infty}$
tending to infinity, such that:
(3.5) $\lim_{i\to\infty}F(t_{i})=0$
Let us consider now the exhaustion $\\{D_{t_{i}}\\}_{i=1}^{\infty}$ of $P$ by
extrinsic balls.
We have that, using equation (1.1),
(3.6) $\mathcal{I}_{\infty}(P)\leq\frac{\operatorname{Vol}(\partial
D_{t_{i}})}{\operatorname{Vol}(D_{t_{i}})}\leq\frac{(\operatorname{Vol}(D_{t_{i}}))^{\prime}}{\operatorname{Vol}(D_{t_{i}})}\,\,\,\,\forall
i$
On the other hand, as $\lim_{i\to\infty}F(t_{i})=0$, then
(3.7)
$\lim_{i\to\infty}\frac{(\operatorname{Vol}(D_{t_{i}}))^{\prime}}{\operatorname{Vol}(D_{t_{i}})}=\lim_{i\to\infty}\frac{\operatorname{Vol}(S^{W}_{t_{i}})}{\operatorname{Vol}(B^{W}_{t_{i}})}$
and therefore
(3.8)
$\mathcal{I}_{\infty}(P)\leq\lim_{i\to\infty}\frac{\operatorname{Vol}(S^{W}_{t_{i}})}{\operatorname{Vol}(B^{W}_{t_{i}})}$
Inequality (3.3) follows immediately, taking into account that, as shown
in the examples above, when $P$ is minimal in a Cartan-Hadamard
manifold, then taking $h(r)=0\,\,\,\,\forall r$ and
$w(r)=w_{b}(r)$ we have that $\\{N^{n},P^{m},\mathbb{K}^{m}(b)\\}$ is a
comparison constellation, with $\mathbb{K}^{m}(b)$ $w_{b}$-balanced from
below.
By hypothesis,
$\operatorname{Sup}_{t>0}(\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{b,m}_{t})})<\infty$.
Moreover,
(3.9)
$\displaystyle\lim_{t\to\infty}\frac{\operatorname{Vol}(S^{W}_{t})}{\operatorname{Vol}(B^{W}_{t})}$
$\displaystyle=\lim_{t\to\infty}\frac{\operatorname{Vol}(S^{0,m-1}_{t})}{\operatorname{Vol}(B^{0,m}_{t})}=0\quad\text{if
$b=0$}$
$\displaystyle\lim_{t\to\infty}\frac{\operatorname{Vol}(S^{W}_{t})}{\operatorname{Vol}(B^{W}_{t})}$
$\displaystyle=\lim_{t\to\infty}\frac{\operatorname{Vol}(S^{b,m-1}_{t})}{\operatorname{Vol}(B^{b,m}_{t})}=(m-1)\sqrt{-b}\quad\text{if
$b<0$}$
so inequality (3.2) applies and yields (3.3). ∎
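The limits in (3.9) can be checked numerically as well. The following sketch (an illustration only) assumes the standard model-space volumes $\operatorname{Vol}(S^{b,m-1}_{t})\propto w_{b}(t)^{m-1}$ and $\operatorname{Vol}(B^{b,m}_{t})\propto\int_{0}^{t}w_{b}(s)^{m-1}\,ds$, with $w_{0}(r)=r$ and $w_{b}(r)=\sinh(\sqrt{-b}\,r)/\sqrt{-b}$:

```python
import math

def w_b(b, r):
    # standard warping functions of the space forms K^m(b), b <= 0
    if b == 0:
        return r
    c = math.sqrt(-b)
    return math.sinh(c * r) / c

def quotient(b, m, t, n=20000):
    # Vol(S_t)/Vol(B_t) = w_b(t)^(m-1) / int_0^t w_b(s)^(m-1) ds, midpoint rule
    h = t / n
    integral = sum(w_b(b, (i + 0.5) * h) ** (m - 1) for i in range(n)) * h
    return w_b(b, t) ** (m - 1) / integral

m = 3
q_flat = quotient(0.0, m, 50.0)   # equals m/t, so it tends to 0 as t grows
q_hyp = quotient(-1.0, m, 20.0)   # tends to (m-1)*sqrt(-b) = 2 here
```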
Now, we have the following result, which is a direct extension of Yau's
classical result (see [19]) to minimal submanifolds, using the same
techniques as in [4]:
###### Theorem 3.3.
Consider an isoperimetric comparison constellation
$\\{N^{n},P^{m},M^{m}_{W}\\}$. Assume that the isoperimetric comparison space
$\,M^{m}_{W}\,$ is $w$-balanced from above. Assume moreover, that the limit
$\lim_{r\to\infty}\frac{W^{\prime}(r)}{W(r)}$ exists.
Then
(3.10)
$\mathcal{I}_{\infty}(P)\geq(m-1)\lim_{r\to\infty}\frac{W^{\prime}(r)}{W(r)}$
In particular, let $P^{m}$ be a complete and minimal submanifold properly
immersed in a Cartan-Hadamard manifold $N$ with sectional curvatures bounded
from above as $K_{N}\leq b\leq 0$.
Then
(3.11) $\mathcal{I}_{\infty}(P)\geq(m-1)\sqrt{-b}$
###### Proof.
From equation (2.7) in Definition 2.6 of the isoperimetric comparison
space, we have:
(3.12) $\displaystyle(m-1)\frac{W^{\prime}(r)}{W(r)}+\eta_{w}(r)$
$\displaystyle=\,m\,\left(\eta_{w}(r)-h(r)\right)$
On the other hand, from Theorem 2.5:
(3.13) $\displaystyle\Delta^{P}r\,\geq$
$\displaystyle\left(m-\|\nabla^{P}r\|^{2}\right)\eta_{w}(r)+m\langle\,\nabla^{N}r,\,H_{P}\,\rangle\geq$
$\displaystyle(m-1)\eta_{w}(r)+m\langle\,\nabla^{N}r,\,H_{P}\,\rangle\geq$
$\displaystyle(m-1)\eta_{w}(r)-m\,h(r)=$ $\displaystyle
m\left(\eta_{w}(r)-h(r)\right)-\eta_{w}(r)$
Then, applying (3.12)
(3.14) $\triangle^{P}r\geq(m-1)\frac{W^{\prime}(r)}{W(r)}$
Now, if we consider a precompact domain $\Omega\subseteq P$ with smooth
closure, then, denoting by $\nu$ its outward unit normal vector field, we have
$\langle\nu,\nabla^{P}r\rangle\leq 1$
so, applying the divergence theorem and taking into account that
$\frac{W^{\prime}(r)}{W(r)}$ is non-increasing,
(3.15) $\displaystyle\operatorname{Vol}(\partial\Omega)$
$\displaystyle\geq\int_{\partial\Omega}\langle\nu,\nabla^{P}r\rangle d\mu$
$\displaystyle=\int_{\Omega}\Delta^{P}r\,d\sigma\geq(m-1)\int_{\Omega}\frac{W^{\prime}(r)}{W(r)}d\sigma$
$\displaystyle\geq(m-1)\lim_{r\to\infty}\frac{W^{\prime}(r)}{W(r)}\operatorname{Vol}(\Omega)$
As
$\frac{\operatorname{Vol}(\partial\Omega)}{\operatorname{Vol}(\Omega)}\geq(m-1)\lim_{r\to\infty}\frac{W^{\prime}(r)}{W(r)}$
for any domain $\Omega$, we have the result.
Inequality (3.11) follows immediately, taking into account that, as in the
proof of Theorem 3.2 and in the examples above, when $P$ is minimal in a
Cartan-Hadamard manifold, then we have that
$\\{N^{n},P^{m},\mathbb{K}^{m}(b)\\}$ is a comparison constellation
($h(r)=0\,\,\,\,\forall r$ and $w(r)=w_{b}(r)$), with the isoperimetric
comparison space used as a model $M^{m}_{W}=\mathbb{K}^{m}(b)$
$w_{b}$-balanced from above. Moreover,
$\lim_{r\to\infty}\frac{W^{\prime}(r)}{W(r)}=\sqrt{-b}$. ∎
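Both properties of $W^{\prime}/W$ used in this proof, namely that it is non-increasing and that $\lim_{r\to\infty}W^{\prime}(r)/W(r)=\sqrt{-b}$ when $W=w_{b}$, can be illustrated numerically. The sketch below assumes $w_{b}(r)=\sinh(\sqrt{-b}\,r)/\sqrt{-b}$ and differentiates $\ln W$ by central finite differences:

```python
import math

def log_w(b, r):
    # log of the assumed warping function w_b(r) = sinh(sqrt(-b) r)/sqrt(-b)
    c = math.sqrt(-b)
    return math.log(math.sinh(c * r) / c)

def eta(b, r, h=1e-6):
    # W'/W = d/dr log W, approximated by a central finite difference
    return (log_w(b, r + h) - log_w(b, r - h)) / (2 * h)

b = -2.0
vals = [eta(b, r) for r in (0.5, 1.0, 2.0, 5.0, 20.0)]
# vals decreases monotonically towards sqrt(-b) = sqrt(2)
```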
## 4\. Applications: Cheeger constant of minimal submanifolds of Cartan-
Hadamard manifolds
### 4.1. Isoperimetric results and Chern-Osserman Inequality
In this subsection we state two results which describe how minimality and
control of the total extrinsic curvature of the submanifold imply, among
other topological consequences, finite volume growth.
###### Theorem 4.1.
(see [1]) Let $P^{m}$ be an oriented, connected and complete minimal
submanifold immersed in the Euclidean space $\mathbb{R}^{n}$. Let us suppose
that $\int_{P}\|B^{P}\|^{m}d\sigma<\infty$, where $B^{P}$ is the second
fundamental form of $P$. Then
1. (1)
$P$ has finite topological type.
2. (2)
$\operatorname{Sup}_{t>0}(\frac{\operatorname{Vol}(\partial
D_{t})}{\operatorname{Vol}(S^{0,m-1}_{t})})<\infty$
3. (3)
$-\chi(P)=\int_{P}\Phi
d\sigma+\lim_{t\to\infty}\frac{\operatorname{Vol}(\partial
D_{t})}{\operatorname{Vol}(S_{t}^{0,m-1})}$
where $\chi(P)$ is the Euler characteristic of $P$ and $\Phi$ is the Gauss-
Bonnet-Chern form on $P$, and $S_{t}^{b,m-1}$ denotes the geodesic $t$-sphere
in $\mathbb{K}^{m}(b)$.
###### Remark d.
Note that, applying inequality (3.1) in Theorem 3.1 to the submanifold $P$ in
above theorem, we conclude that, under the assumptions of Theorem 4.1, we have
the following bound for the volume growth
(4.1)
$\operatorname{Sup}_{t>0}(\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{0,m}_{t})})\leq\operatorname{Sup}_{t>0}(\frac{\operatorname{Vol}(\partial
D_{t})}{\operatorname{Vol}(S^{0,m-1}_{t})})<\infty$
where $B_{t}^{b,m}$ denotes the geodesic $t$-ball in $\mathbb{K}^{m}(b)$.
On the other hand, the Chern-Osserman inequality is satisfied by complete
and minimal surfaces in a simply connected real space form
$\mathbb{K}^{n}(b)$ with constant sectional curvature $b\leq 0$. Namely:
###### Theorem 4.2.
(see [1], [5], [7], [8]) Let $P^{2}$ be a complete minimal surface immersed
in a simply connected real space form with constant sectional curvature $b\leq
0$, $\mathbb{K}^{n}(b)$. Let us suppose that
$\int_{P}\|B^{P}\|^{2}d\sigma<\infty$. Then
1. (1)
$P$ has finite topological type.
2. (2)
$\operatorname{Sup}_{t>0}(\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{b,2}_{t})})<\infty$
3. (3)
$-\chi(P)\leq\frac{\int_{P}\|B^{P}\|^{2}}{4\pi}-\operatorname{Sup}_{t>0}\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B_{t}^{b,2})}$
where $\chi(P)$ is the Euler characteristic of $P$.
### 4.2. The Corollaries
The first Corollary is a direct application of the two main theorems.
###### Corollary 4.3.
Let $P^{m}$ be a complete and minimal submanifold properly immersed in a
Cartan-Hadamard manifold $N$ with sectional curvatures bounded from above as
$K_{N}\leq b\leq 0$. Let us suppose that
$\operatorname{Sup}_{t>0}(\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{b,m}_{t})})<\infty$
Then
(4.2) $\mathcal{I}_{\infty}(P)=(m-1)\sqrt{-b}$
###### Proof.
It is a direct consequence of inequalities (3.3) and (3.11) in Theorem 3.2 and
Theorem 3.3. ∎
The second and the third corollaries are based on Theorems 4.1 and 4.2.
When we consider minimal submanifolds in $\mathbb{R}^{n}$, we have the
following result:
###### Corollary 4.4.
Let $P^{m}$ be a complete and minimal submanifold properly immersed in
$\mathbb{R}^{n}$, with finite total extrinsic curvature
$\int_{P}\|B^{P}\|^{m}d\sigma<\infty$.
Then
(4.3) $\mathcal{I}_{\infty}(P)=0$
###### Proof.
In this case, taking $h(r)=0\,\,\,\forall r$ and $w_{0}(r)=r$, we have that
$\\{\mathbb{R}^{n},P^{m},\mathbb{R}^{m}\\}$ is a comparison constellation
bounded from above, with $\mathbb{R}^{m}$ $w_{0}$-balanced from below. Hence
we apply Theorem 3.1 to obtain
(4.4)
$\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{0,m}_{t})}\leq\frac{\operatorname{Vol}(\partial
D_{t})}{\operatorname{Vol}(S^{0,m-1}_{t})}\,\,\,\,\,\textrm{for
all}\,\,\,t>0\quad.$
Therefore, as the total extrinsic curvature of $P$ is finite, we have,
applying Theorem 4.1, inequality (4.4) and Remark d,
$\operatorname{Sup}_{t>0}(\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{0,m}_{t})})<\infty$
Finally,
$\lim_{t\to\infty}\frac{\operatorname{Vol}(S^{0,m-1}_{t})}{\operatorname{Vol}(B^{0,m}_{t})}=\lim_{t\to\infty}\frac{m}{t}=0$
Hence, applying Theorem 3.2, $\mathcal{I}_{\infty}(P)\leq 0$, so
$\mathcal{I}_{\infty}(P)=0$. ∎
Corollary 4.4 can be extended to complete and minimal surfaces (properly)
immersed in the Hyperbolic space, with finite total extrinsic curvature:
###### Corollary 4.5.
Let $P^{2}$ be a complete and minimal surface immersed in $\mathbb{K}^{n}(b)$
with finite total extrinsic curvature $\int_{P}\|B^{P}\|^{2}d\sigma<\infty$ .
Then
(4.5) $\mathcal{I}_{\infty}(P)=\sqrt{-b}$
###### Proof.
As the total extrinsic curvature of $P$ is finite, we have, applying Theorem
4.2:
$\operatorname{Sup}_{t>0}(\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B^{b,2}_{t})})<\infty$
Then, apply Corollary 4.3 with $m=2$. ∎
## References
* [1] M. T. Anderson, The compactification of a minimal submanifold in Euclidean space by the Gauss map, I.H.E.S. Preprint, 1984.
* [2] I. Chavel, Eigenvalues in Riemannian Geometry, Academic Press (1984).
* [3] I. Chavel, Isoperimetric inequalities. Differential geometric and analytic perspectives, Cambridge Tracts in Mathematics, 145. Cambridge University Press (2001).
* [4] I. Chavel, Riemannian geometry: A modern introduction, Cambridge Tracts in Mathematics, 108. Cambridge University Press (1993).
* [5] Chen Qing, On the volume growth and the topology of complete minimal submanifolds of a Euclidean space, J. Math. Sci. Univ. Tokyo 2 (1995), 657-669.
* [6] Chen Qing, On the area growth of minimal surfaces in $\mathbb{H}^{n}$, Geometriae Dedicata, 75 (1999), 263-273.
* [7] Chen Qing and Cheng Yi, Chern-Osserman inequality for minimal surfaces in $\mathbb{H}^{n}$, Proc. Amer. Math. Soc., 128 (8) (1999), 2445-2450.
* [8] V. Gimeno and V. Palmer, Extrinsic isoperimetry and compactification of minimal surfaces in Euclidean and Hyperbolic spaces, Preprint (2010), arXiv:1011.5380v1.
* [9] R. Greene and S. Wu, Function theory on manifolds which possess a pole, Lecture Notes in Math., 699, Springer Verlag, Berlin (1979).
* [10] A. Grigor’yan, Analytic and geometric background of recurrence and non-explosion of the Brownian motion on Riemannian manifolds, Bull. Amer. Math. Soc. 36 (1999), 135–249.
* [11] D. Hoffman and J. Spruck, Sobolev and Isoperimetric Inequalities for Riemannian Submanifolds, Comm. of Pure and App. Mathematics, 27 (1974), 715-727.
* [12] A. Hurtado, S. Markvorsen and V. Palmer, Torsional rigidity of submanifolds with controlled geometry, Math. Ann. 344 (2009), 511–542.
* [13] S. Markvorsen and V. Palmer, The relative volume growth of minimal submanifolds, Archiv der Mathematik, 79 (2002), 507–514.
* [14] S. Markvorsen and V. Palmer, Torsional rigidity of minimal submanifolds, Proc. London Math. Soc. 93 (2006), 253–272.
* [15] S. Markvorsen and V. Palmer, Extrinsic isoperimetric analysis on submanifolds with curvatures bounded from below, J. Geom. Anal. 20 (2010) 388–421.
* [16] B. O’Neill, Semi-Riemannian Geometry; With Applications to Relativity, Academic Press (1983).
* [17] V. Palmer, Isoperimetric Inequalities for extrinsic balls in minimal submanifolds and their applications, J. London Math. Soc. (2) 60 (1999), 607-616.
* [18] V. Palmer, On deciding whether a submanifold is parabolic or hyperbolic using its mean curvature, Simon Stevin Transactions on Geometry, vol. 1, 131-159, Simon Stevin Institute for Geometry, Tilburg, The Netherlands, 2010.
* [19] S.T. Yau, Isoperimetric constants and the first eigenvalue of a compact Riemannian manifold, Annales Scientifiques de L’E.N.S., 8, num. 4, (1975), 487-507.
[arXiv:1104.5625 — Vicent Gimeno and Vicente Palmer; submitted 2011-04-29; license: Creative Commons Attribution 3.0; https://arxiv.org/abs/1104.5625]

[arXiv:1105.0037]
Affiliations:
* E. Butler (eoin.butler@cern.ch), W. Bertsche, M. Charlton, A.J. Humphries, N. Madsen, D.P. van der Werf, D. Wilding: Department of Physics, Swansea University, Swansea SA2 8PP, United Kingdom
* G.B. Andresen, P.D. Bowe, J.S. Hangst: Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C, Denmark
* M.D. Ashkezari, M.E. Hayden: Department of Physics, Simon Fraser University, Burnaby BC, V5A 1S6, Canada
* M. Baquero-Ruiz, C.C. Bray, S. Chapman, J. Fajans, A. Povilus, C. So, J.S. Wurtele: Department of Physics, University of California, Berkeley, CA 94720-7300, USA
* C.L. Cesar, R. Lambo: Instituto de Física, Universidade Federal do Rio de Janeiro, Rio de Janeiro 21941-972, Brazil
* T. Friesen, M.C. Fujiwara, R. Hydomako, R.I. Thompson: Department of Physics and Astronomy, University of Calgary, Calgary AB, T2N 1N4, Canada
* M.C. Fujiwara, D.R. Gill, L. Kurchaninov, K. Olchanski, A. Olin, J.W. Storey: TRIUMF, 4004 Wesbrook Mall, Vancouver BC, V6T 2A3, Canada
* W.N. Hardy: Department of Physics and Astronomy, University of British Columbia, Vancouver BC, V6T 1Z4, Canada
* R.S. Hayano: Department of Physics, University of Tokyo, Tokyo 113-0033, Japan
* S. Jonsell: Fysikum, Stockholm University, SE-10609, Stockholm, Sweden
* S. Menary: Department of Physics and Astronomy, York University, Toronto, ON, M3J 1P3, Canada
* P. Nolan, P. Pusa: Department of Physics, University of Liverpool, Liverpool L69 7ZE, United Kingdom
* F. Robicheaux: Department of Physics, Auburn University, Auburn, AL 36849-5311, USA
* E. Sarid: Department of Physics, NRCN-Nuclear Research Center Negev, Beer Sheva, IL-84190, Israel
* D.M. Silveira, Y. Yamazaki: Atomic Physics Laboratory, RIKEN Advanced Science Institute, Wako, Saitama 351-0198, Japan; Graduate School of Arts and Sciences, University of Tokyo, Tokyo 153-8902, Japan
# Towards Antihydrogen Trapping and Spectroscopy at ALPHA
E. Butler G.B. Andresen M.D. Ashkezari M. Baquero-Ruiz W. Bertsche P.D.
Bowe C.C. Bray C.L. Cesar S. Chapman M. Charlton J. Fajans T. Friesen
M.C. Fujiwara D.R. Gill J.S. Hangst W.N. Hardy R.S. Hayano M.E. Hayden
A.J. Humphries R. Hydomako S. Jonsell L. Kurchaninov R. Lambo N. Madsen
S. Menary P. Nolan K. Olchanski A. Olin A. Povilus P. Pusa F. Robicheaux
E. Sarid D.M. Silveira C. So J.W. Storey R.I. Thompson D.P. van der Werf
D. Wilding J.S. Wurtele Y. Yamazaki
ALPHA Collaboration
(Received: date / Accepted: date)
###### Abstract
Spectroscopy of antihydrogen has the potential to yield high-precision tests
of the CPT theorem and shed light on the matter-antimatter imbalance in the
Universe. The ALPHA antihydrogen trap at CERN’s Antiproton Decelerator aims to
prepare a sample of antihydrogen atoms confined in an octupole-based Ioffe
trap and to measure the frequency of several atomic transitions. We describe
our techniques to measure the antiproton temperature directly and a new
technique to cool the antiprotons to below 10 K. We also show how our unique position-
sensitive annihilation detector provides us with a highly sensitive method of
identifying antiproton annihilations and effectively rejecting the cosmic-ray
background.
###### Keywords:
antihydrogen, antimatter, CPT, Penning trap, atom trap
††journal: Hyperfine Interactions
## 1 Introduction
Antihydrogen, the bound state of an antiproton and a positron, is the simplest
pure antimatter atomic system. The first cold (non-relativistic) antihydrogen
atoms were synthesised by the ATHENA experiment in 2002 by combining
antiprotons and positrons under cryogenic conditions in a Penning trap
Amoretti _et al._ (2002). The neutral antihydrogen atoms formed were not
confined by the electric and magnetic fields used to hold the antiprotons and
positrons as non-neutral plasmas, but escaped to strike the matter of the
surrounding apparatus and annihilate. Detection of the coincident antiproton
and positron annihilation signals was used to identify antihydrogen in these
experiments. However, before performing high-precision spectroscopy, it is
highly desirable, perhaps even necessary, to confine the antihydrogen in an
atomic trap.
## 2 Atom Trap
Atoms with a permanent magnetic dipole moment $\vec{\mu}$ can be trapped by
exploiting the interaction of the dipole moment with an inhomogeneous magnetic
field. A three-dimensional maximum of magnetic field is not compatible with
Maxwell’s equations, but a minimum is. Thus, only atoms with $\mu$ aligned
antiparallel to the magnetic field (so-called ‘low-field seekers’) can be
trapped.
ALPHA creates a magnetic minimum using a variation of the Ioffe-Pritchard
configuration Pritchard (1983), replacing the transverse quadrupole magnet
with an octupole Bertsche _et al._ (2006). The octupole and the ‘mirror
coils’ that complete the trap are superconducting and are cooled to 4 K by
immersing them in liquid helium. The depth of the magnetic minimum produced is
approximately 0.8 T, equivalent to a trap depth of $0.6~{}\mathrm{K}\times
k_{\mathrm{B}}$ for ground state antihydrogen.
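The quoted trap depth follows from a back-of-the-envelope estimate (a sketch using textbook constants and assuming the magnetic moment of ground-state antihydrogen is approximately one Bohr magneton):

```python
MU_B = 9.274e-24  # Bohr magneton, J/T
K_B = 1.381e-23   # Boltzmann constant, J/K

well_depth_tesla = 0.8  # depth of the magnetic-field minimum in the ALPHA trap
trap_depth_kelvin = MU_B * well_depth_tesla / K_B
# roughly 0.5 K, the same order as the ~0.6 K quoted for ground-state antihydrogen
```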
ALPHA’s scheme to detect trapped antihydrogen is to quickly release trapped
atoms from the atomic trap and detect their annihilation as they strike the
apparatus. Having the antihydrogen atoms escape over a short time minimises
the background from cosmic rays that can mimic antihydrogen annihilations (see
section 5), so the magnet systems have been designed to remove the stored
energy in as short a time as possible. The current has been measured to decay
with a time constant of 9 ms for the octupole and 8.5 ms for the mirror coils.
The atom trap is superimposed on a Penning trap, which is used to confine the
charged particles used in antihydrogen production. The Penning trap electrodes
are also cooled by a liquid helium reservoir and reach a temperature of
approximately 7 K. In the absence of external heating sources, the stored non-
neutral plasmas should come into thermal equilibrium at this temperature.
Introduction of the multipolar transverse magnetic field modifies the
confinement properties of the Penning trap. In the most extreme case, this
manifests as a ‘critical radius’ Fajans, Madsen, and Robicheaux (2008),
outside which particles can be lost from the trap simply because the magnetic
field lines along which the particles move intersect the electrode walls. Even
if particles are not lost, the transverse field results in a higher rate of
plasma diffusion Gilson and Fajans (2003). As the plasma diffuses and expands,
electrostatic potential energy is converted to thermal energy, resulting in a
temperature higher than would be otherwise expected.
ALPHA chose to use an octupole instead of the prototypical quadrupole in its
Ioffe trap to reduce the transverse fields close to the axis of the Penning
trap, where the non-neutral plasmas are stored. Though this choice can
significantly ameliorate the undesirable effects, it does not eliminate them
entirely. Other sources of heating, notably the coupling of the particles to
electronic noise Beck, Fajans, and Malmberg (1996), will also increase the
temperature. This highlights the importance of direct, absolute measurements
of the particle temperature to accurately determine the experimental
conditions.
## 3 Cooling and temperature measurements of antiprotons
The temperature of a plasma can be determined by measuring the distribution of
particles in the tail of a Boltzmann distribution - a technique commonplace
in non-neutral plasma physics Eggleston _et al._ (1992). This measurement has
the advantage of yielding the absolute temperature of the particles without
recourse to supporting measurements (for example, of the density
distribution), unlike measurements of the frequencies of the normal plasma
modes Amoretti _et al._ (2003), which can only give a relative temperature
change. The plasmas typical in ALPHA have densities in the range $10^{6}$ to
$10^{8}~{}\mathrm{cm^{-3}}$, with collision rates high enough to ensure that
the plasma comes to equilibrium in a few seconds. In equilibrium, the energy
of the particles conforms to a Boltzmann distribution.
To measure the temperature, the particles are released from a confining well
by slowly (compared to the axial oscillation frequency) reducing the voltage
on one side of the well. As the well depth is reduced, particles escape
according to their energy; the first (highest-energy) particles to be released
will be drawn from the tail of a Boltzmann distribution. As the dump
progresses, the loss of particles causes redistribution of energy and, at
later times, the measured distribution deviates from the expected Boltzmann
distribution. The escaping particles can be detected using a micro-channel
plate as a charge amplifier, or for antimatter particles, by detecting their
annihilation. The temperature is determined by fitting an exponential curve to
the number of particles released as a function of energy, such as in the
example measurement shown in Fig. 1.
[Figure 1 shows integrated annihilation counts (logarithmic scale) plotted against well depth in volts, with the measured data and the fitted exponential.]
Figure 1: An example temperature measurement of approximately 45,000 antiprotons, after separation from the cooling electrons and with the inhomogeneous trapping fields energised. The straight line shows an exponential fit to determine the temperature, which in this case is $\left(310~{}\pm~{}20\right)~{}\mathrm{K}$.
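The fitting procedure can be illustrated with a toy simulation (hypothetical code, not the ALPHA analysis): draw particle energies from a Boltzmann distribution, count how many have escaped once the well depth has dropped to a given value, and fit a line to the logarithm of those counts, whose slope is $-1/T$:

```python
import math
import random

random.seed(1)
T_TRUE = 300.0  # "true" temperature; energies measured in kelvin (k_B = 1)

# Boltzmann-distributed particle energies: an exponential distribution.
energies = [random.expovariate(1.0 / T_TRUE) for _ in range(200000)]

# counts[k] = particles already released once the well depth reaches depths[k];
# in the tail of the distribution this is proportional to exp(-depth / T).
depths = [1500.0, 1400.0, 1300.0, 1200.0]
counts = [sum(1 for e in energies if e > d) for d in depths]

# Least-squares fit of ln N against well depth: the slope estimates -1/T.
n = len(depths)
x_bar = sum(depths) / n
y = [math.log(c) for c in counts]
y_bar = sum(y) / n
slope = (sum((x - x_bar) * (yi - y_bar) for x, yi in zip(depths, y))
         / sum((x - x_bar) ** 2 for x in depths))
T_fit = -1.0 / slope  # recovers T_TRUE up to statistical noise
```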
The actual process of manipulating the trap potentials can change the
temperature of the particles as the measurement takes place. Particle-in-cell
(PIC) simulations of the measurement process have predicted that the
temperature obtained from the fit is around 15% higher than the initial
temperature for a typical antiproton cloud. For the denser electron and
positron plasmas, the measured temperature can be as much as a factor of two
higher than the initial temperature. We can apply the corrections determined
from these simulations to the measured temperature to find the true
temperature. This temperature diagnostic has been applied to all three
particle species used in ALPHA - antiprotons, positrons and electrons. The
lowest temperatures measured for electron or positron plasmas at
experimentally relevant densities ($10^{6}~\mathrm{cm^{-3}}$ or more) are of
the order of 40 K.
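The exponential fit behind this temperature diagnostic can be illustrated with a short sketch. For a Boltzmann distribution, the number of particles energetic enough to escape a well of depth $U$ scales as $\exp(-U/k_{\mathrm{B}}T)$, so the slope of $\log(\text{counts})$ against escape energy yields the temperature. The data below are synthetic; this is an illustration of the principle, not the ALPHA analysis code.

```python
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant [J/K]
E_CHARGE = 1.602176634e-19  # elementary charge [C]

def fit_temperature(well_depth_V, counts):
    """Fit log(counts) vs. escape energy; for a Boltzmann tail,
    counts ~ exp(-U / k_B T), so the slope is -1/(k_B T)."""
    energy = np.asarray(well_depth_V) * E_CHARGE  # well depth in joules
    slope, _ = np.polyfit(energy, np.log(counts), 1)
    return -1.0 / (slope * K_B)

# Synthetic release curve for a 310 K cloud (illustrative values only)
T_true = 310.0
U = np.linspace(0.01, 0.10, 20)  # well depths in volts
counts = 4.5e4 * np.exp(-U * E_CHARGE / (K_B * T_true))
T_fit = fit_temperature(U, counts)
```

On real data the fitted slope carries the statistical uncertainty quoted with the measurement, e.g. the $(310~\pm~20)$ K of Fig. 1.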
Electrons are used to collisionally cool the antiprotons, which, due to their
larger mass, do not effectively self-cool via synchrotron radiation. Before
mixing the antiprotons with positrons to produce antihydrogen, the electrons
must be removed. If the electrons were allowed to remain, they could
potentially deplete the positron plasma by forming positronium, destroy
antihydrogen atoms through charge exchange, or destabilise the positron plasma
by partially neutralising it.
Electron removal is accomplished through the application of electric field
pulses. These pulses remove the confining potential on one side of the well
holding the antiproton/electron two-component plasma, typically for 100-300
ns. The electrons, moving faster than the antiprotons, escape the well. The
well is restored before the antiprotons can escape, so they remain trapped.
However, the process unavoidably disturbs the antiprotons. The electron
removal process has been the focus of a significant portion of experimental
effort at ALPHA, and the coldest antiproton temperatures obtained have been
around 200-300 K.
## 4 Evaporative Cooling
Antiprotons at a few hundred Kelvin will have a very small probability of
forming low-energy, trappable, antihydrogen atoms. To further cool the
antiprotons, ALPHA has implemented a technique of forced evaporative cooling.
Evaporative cooling is a commonplace technique in neutral particle trapping,
and has been instrumental in the production of Bose-Einstein condensates Davis
_et al._ (1995). However, evaporative cooling has found limited application to
charged particles.
Before evaporative cooling, a cloud of antiprotons, containing 45,000
particles, with a radius of 0.6 mm, density $7.6\times
10^{6}~{}\mathrm{cm^{-3}}$, and initial temperature of
$\left(1040~{}\pm~{}45\right)~{}\mathrm{K}$ was prepared in a 1.5 V deep
potential well. The collision rate between antiprotons was of order 200
$\mathrm{s}^{-1}$, high enough to ensure that the temperatures in the parallel
and perpendicular degrees of freedom had equilibrated before evaporative
cooling commenced.
To perform evaporative cooling, the confining potential on one side of the
well is slowly (with respect to the equilibration rate) lowered. Particles
with kinetic energy higher than the instantaneous well depth escape the trap,
carrying with them energy in excess of the mean thermal energy. The
distribution then evolves towards a Boltzmann distribution with lower
temperature, and the process continues.
Starting with $45,000$ antiprotons at 1040 K, we have obtained temperatures as
low as (9 $\pm$ 4) K with $\left(6\pm 1\right)\%$ of the particles stored in a
10 mV deep well. Measurements of the temperature, number of particles and
transverse size of the clouds were made at a number of points between the most
extreme well depths. The temperatures and number of particles remaining at
each measurement point are shown in Fig. 2.
Figure 2: The temperature (a) and the fraction of the initial number of
particles (b) after evaporative cooling to a series of well depths. The
minimum temperature is (9 $\pm$ 4) K.
The evaporation process can be described using simple rate equations for the
number of particles $N$ and the temperature $T$:
$\frac{\mathrm{d}N}{\mathrm{d}t}=-\frac{N}{\tau_{ev}},$ (1a) $\frac{\mathrm{d}T}{\mathrm{d}t}=-\alpha\frac{T}{\tau_{ev}}.$ (1b)
Here, $\tau_{ev}$ is the characteristic evaporation timescale and $\alpha$ is
the excess energy carried away by an evaporating particle, in multiples of
$k_{\mathrm{B}}T$. At a given time, the distribution of energies can be
thought of as a truncated Boltzmann distribution, characterised by a
temperature $T$, and the well depth $U$. $\tau_{ev}$ is linked to the mean
time between collisions, $\tau_{col}$, as Currell and Fussmann (2005)
$\frac{\tau_{ev}}{\tau_{col}}=\frac{\sqrt{2}}{3}\eta e^{\eta},$ (2)
where $\eta=U/{k_{\mathrm{B}}T}$ is the rescaled well depth. We note the
strong dependence of $\tau_{ev}$ on $\eta$, indicating that this is the
primary factor determining the temperature in a given well. We find values of
$\eta$ between 10 and 20 over the range of our measurements. The value of
$\alpha$ can be calculated using the treatment in reference Ketterle and van
Druten (1995). We have numerically modelled evaporative cooling in our
experiment using these equations and have found very good agreement between
our measurements and the model Andresen _et al._ (2010a).
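The rate-equation model of eqns (1) and (2) can be integrated numerically. The sketch below is not the collaboration's model: the linear ramp profile, the fixed collision time `tau_col` (taken as $\sim$5 ms from the quoted $\sim$200 $\mathrm{s}^{-1}$ collision rate), and the crude choice $\alpha \approx \eta - 1$ are all assumptions for illustration; the paper computes $\alpha$ following Ketterle and van Druten (1995).

```python
import math

K_B = 1.380649e-23          # Boltzmann constant [J/K]
E_CHARGE = 1.602176634e-19  # elementary charge [C]

def forced_evaporation(N0, T0, U0_V, U1_V, tau_col, t_ramp, steps=20000):
    """Euler-integrate eqns (1a, 1b) while the on-axis well depth is
    ramped linearly from U0_V down to U1_V (volts) over t_ramp seconds.
    tau_ev follows eqn (2); alpha = eta - 1 is a rough placeholder."""
    N, T = float(N0), float(T0)
    dt = t_ramp / steps
    for i in range(steps):
        U = (U0_V + (U1_V - U0_V) * i / steps) * E_CHARGE  # depth [J]
        eta = U / (K_B * T)                                # rescaled well depth
        tau_ev = tau_col * (math.sqrt(2.0) / 3.0) * eta * math.exp(eta)
        alpha = eta - 1.0            # assumed excess-energy factor
        N -= dt * N / tau_ev
        T -= dt * alpha * T / tau_ev
    return N, T

# 45,000 antiprotons at 1040 K; 1.5 V well ramped to 10 mV over 100 s
N_f, T_f = forced_evaporation(45000, 1040.0, 1.5, 0.010, 5e-3, 100.0)
```

The model is self-regulating: if $\eta$ falls, $\tau_{ev}$ collapses exponentially and cooling accelerates until $\eta$ recovers, which is why $\eta$ stays in the 10-20 range observed in the measurements.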
Measurements of the transverse density profile were made by ejecting the
particles onto an MCP/phosphor/CCD imaging device Andresen _et al._ (2009).
It was seen that, as evaporation progressed, the cloud radius increased
dramatically - see Fig. 3. We interpret this effect to be due to escape of the
evaporating particles principally from the radial centre of the cloud, and the
conservation of the total canonical angular momentum during the subsequent
redistribution process. Inside the cloud, the space charge reduces the depth
of the confining well. This effect is accentuated closer to the trap axis,
with the result that the well depth close to the axis can be significantly
lower than further away. The evaporation rate is exponentially suppressed at
higher well depths (eqn. 2), so evaporation is confined to a small region
close to the axis, causing the on-axis density to become depleted. This is a
non-equilibrium configuration, and the particles will redistribute to replace
the lost density. In doing so, some particles will move inwards, and to
conserve the canonical angular momentum, some particles must also move to
higher radii O’Neil (1980). Assuming that all loss occurs at $r=0$, the mean
squared radius of the particles, $\left<r^{2}\right>$, will obey the
relationship
$N_{0}\left<r_{0}^{2}\right>=N\left<r^{2}\right>,$ (3)
where $N$ is the number of particles, and the zero subscript indicates the
initial conditions.
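Eqn (3) gives a one-line prediction for the expansion. Taking the quoted initial radius of 0.6 mm as the rms radius (an assumption made here for illustration):

```python
def predicted_rms_radius(r0_rms_mm, N0, N):
    """Eqn (3): N0 <r0^2> = N <r^2>, valid if all loss occurs at r = 0."""
    return r0_rms_mm * (N0 / N) ** 0.5

# Keeping 6% of an initial 45,000-particle cloud of rms radius 0.6 mm
r_final = predicted_rms_radius(0.6, 45000, 0.06 * 45000)  # ~2.4 mm
```

A fourfold expansion of this kind is the behaviour seen in Fig. 3.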
As seen in Fig. 3, this model agrees very well with the measurements. This
radial expansion can be problematic when attempting to prepare low kinetic
energy antiprotons to produce trappable antihydrogen atoms, as the energy
associated with the magnetron motion grows with the distance from the axis,
and the electrostatic potential energy released as the radius expands can
reheat the particles. The effect can be countered somewhat by taking a longer
time to cool the particles, resulting in a higher efficiency and, thus, a
smaller expansion, but we find that the efficiency depends very weakly on the
cooling time.
Figure 3: The measured size
of the antiproton cloud using a MCP/phosphor/CCD device as a function of the
number of particles lost. This is compared to the size predicted from eqn. 3.
Colder antiprotons are of great utility in the effort to produce cold
antihydrogen atoms. Antihydrogen production techniques can be broadly
categorised as ‘static’ - in which a cloud of antiprotons is held stationary
and positrons, perhaps in the form of positronium atoms, are introduced
Charlton (1990), or ‘dynamic’ - where antiprotons are passed through a
positron plasma Gabrielse _et al._ (1988). In the first case, the advantages
of cold antiprotons are obvious, as the lower kinetic energy translates
directly into lower-energy antihydrogen atoms. In the second case, the colder
temperature allows the manipulations used to ‘inject’ the antiprotons into the
positrons to produce much more precisely defined antiproton energies.
Indirectly, this will also permit these schemes to produce more trappable
antihydrogen.
## 5 Annihilation vertex detector
Among the most powerful diagnostic tools available to experiments working with
antimatter are detectors capable of detecting matter-antimatter annihilations.
Antiproton annihilations produce an average of three charged pions, which can
be detected by scintillating material placed around the trap. The passage of a
pion through the scintillator produces photons, which trigger a cascade in a
photo-multiplier tube to produce a voltage pulse. Individual voltage pulses
can be counted to determine the number of annihilations.
A further technique uses a position-sensitive detector to reconstruct the
trajectories of the pions and find the point where the antiproton annihilated
(usually called the ‘vertex’). The ALPHA annihilation vertex detector
comprises sixty double-sided silicon wafers, arranged in three layers in a
cylindrical fashion around the antihydrogen production and trapping region.
Each wafer is divided into 256 strips, oriented in orthogonal directions on
the p- and n- sides. Charged particles passing through the silicon result in
charge deposits, and the intersection of perpendicular strips with charge
above a defined threshold marks the location where a particle passed through the
silicon.
Each module is controlled by a circuit that produces a digital signal when a
charge is detected on the silicon. If a coincidence of modules is satisfied in
a 400 ns time window, the charge profile is ‘read-out’ and digitised for
further analysis. Each readout and associated trigger and timing information
comprises an ‘event’. The pion trajectories are reconstructed by fitting
helices to sets of three hits, one from each layer of the detector. The point
that minimises the distance to the helices is then identified as the
annihilation vertex. An example of an annihilation event is shown in Fig.
4(a).
Figure 4: (a) an example reconstruction of an antihydrogen annihilation and
(b) a cosmic ray event. The diamond indicates the position of the vertex
identified by the reconstruction algorithm, the polygonal structure shows the
locations of the silicon wafers, the dots are the positions of the detected
hits, and the inner circle shows the radius of the Penning trap electrodes.
Also shown are annihilation density distributions associated with antihydrogen
production (c, e) and deliberately induced antiproton loss (d, f). (c) and (d)
are projected along the cylindrical axis, with the inner radius of the
electrodes marked with a white circle, while (e) and (f) show the azimuthal
angle $\phi$ against the axial position $z$.
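A simplified version of the vertex finding can be sketched by replacing the helices with straight lines: the point minimising the summed squared distance to a set of lines solves a small linear system. The helix parametrisation and the detector's actual fitting procedure are beyond this sketch.

```python
import numpy as np

def vertex_from_tracks(points, directions):
    """Least-squares point closest to a set of straight lines.

    Each track i is the line p_i + t * d_i. Minimising the summed squared
    distance to all lines gives the linear system
    sum_i (I - d_i d_i^T) (x - p_i) = 0.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto plane normal to d
        A += P
        b += P @ np.asarray(p, float)
    return np.linalg.solve(A, b)

# Three mutually perpendicular tracks along the coordinate axes: they
# all pass through the origin, so the fitted vertex is (0, 0, 0)
pts = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
dirs = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
v = vertex_from_tracks(pts, dirs)
```

At least two non-parallel tracks are needed for the system matrix to be invertible, which mirrors the three-hit, three-layer requirement of the real reconstruction.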
Examination of the spatial distributions of annihilations can yield much
insight into the physical processes at work. ATHENA established that
antihydrogen production resulted in a characteristic ‘ring’ structure - an
azimuthally smooth distribution concentrated at the radius of the trap
electrodes Fujiwara _et al._ (2004), shown in Fig. 4(c) and (e). In contrast, the
loss of bare antiprotons occurred in spatially well-defined locations, called
‘hot-spots’, examples of which are shown in Fig. 4(d) and (f). This was interpreted
to be due to microscopic imperfections in the trap elements. These produce
electric fields that break the symmetry of the trap and give rise to preferred
locations for charged particle loss. When antihydrogen is produced in a
multipole field, antiprotons generated by ionisation of weakly-bound
antihydrogen also contribute small asymmetries Andresen _et al._ (2010b).
These features are present in Fig. 4(c) and (e).
The vertex detector is also sensitive to charged particles in cosmic rays.
When passing through the detector, they are typically identified as a pair of
almost co-linear tracks (Fig. 4(b)), and can be misidentified as an
annihilation. Cosmic-ray events thus present a background when searching for
the release of trapped antihydrogen.
To develop a method to reject cosmic ray events, while retaining
annihilations, we compared samples of the events using three parameters, shown
in Fig. 5. Cosmic rays have predominantly two tracks, while antiproton
annihilations typically have more. 95% of cosmic events have two or fewer
identified tracks, while 58% of antiproton annihilations have at least three.
A significant number of antiproton annihilations can have only two tracks, so
it is not desirable to reject all these events as background.
Figure 5: Comparison of the distributions of event parameters for antiproton
annihilations (solid line) and cosmic rays (dashed line). Shown are (a) the
number of identified charged particle tracks, (b) the radial coordinate of the
vertex, and the squared residual from a linear fit to the identified positions
for the events with (c) two tracks and (d) more than two tracks. The shaded
regions indicate the range of parameters that are rejected to minimise the
p-value, as discussed in the text.
We determine if the tracks form a straight line by fitting a line to the hits
from each pair of tracks, and calculating the squared residual value. As seen
in Fig. 5(c) and (d), cosmic events have much lower squared residual values
than annihilations. This is to be expected, since particles from cosmic rays
have high momentum and pass through the apparatus and the magnetic field
essentially undeflected, while the particles produced in an annihilation will,
in general, move in all directions. In addition, annihilations occur on the
inner wall of the Penning trap, at a radius of $\sim$2.2 cm, and as shown in
Fig. 5(b), reconstructed annihilation vertices are concentrated here, whereas
cosmic rays pass through at a random radius.
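The squared-residual discriminant can be illustrated in a two-dimensional projection (the detector analysis is three-dimensional; the hit positions below are invented for illustration):

```python
import numpy as np

def squared_residual(xs, ys):
    """Total squared residual of a straight-line fit to hit positions."""
    coeffs = np.polyfit(xs, ys, 1)
    return float(np.sum((np.polyval(coeffs, xs) - ys) ** 2))

# A cosmic ray: hits lie on one straight line, so the residual is ~0
cosmic = squared_residual([-3, -2, -1, 1, 2, 3], [-3, -2, -1, 1, 2, 3])
# An annihilation: hits fan out in different directions -> large residual
annih = squared_residual([-3, -2, -1, 1, 2, 3], [3, 1, 0.5, 0.5, 1, 3])
```

The separation between the two cases is what makes a residual threshold an effective cosmic-ray cut.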
By varying the ranges of parameters for which events are accepted, we could
optimise the annihilation detection strategy. The ‘p-value’ – the probability
that an observed signal is due to statistical fluctuations in the background
Amsler _et al._ (2008) – was minimised by requiring the vertex to lie within
4 cm of the trap axis, and the squared residual value to be at least 2
$\mathrm{cm}^{2}$ or 0.05 $\mathrm{cm}^{2}$ for events with two tracks and
more than two tracks, respectively.
These thresholds reject more than 99% of the cosmic background, reducing the
absolute rate of background events to 22 mHz, while still retaining the
ability to identify $\sim 40\%$ of antiproton annihilations. While this method
effectively removes cosmic rays as a source of concern, other background
processes, including mirror-trapped antiprotons, must also be considered when
searching for trapped antihydrogen. Our cosmic-ray rejection method has been
applied to data taken from the 2009 ALPHA antihydrogen trapping run, and a
full discussion of the results obtained will be made in a forthcoming
publication.
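The significance of a surviving candidate event can be illustrated with Poisson statistics for the residual background. The one-second observation window below is an arbitrary choice for illustration only; the actual windows and procedure belong to the analysis described above.

```python
import math

def p_value(n_observed, bg_rate_hz, window_s):
    """Probability of >= n_observed events from Poisson background alone."""
    mu = bg_rate_hz * window_s  # expected background counts in the window
    below = sum(math.exp(-mu) * mu ** k / math.factorial(k)
                for k in range(n_observed))
    return 1.0 - below

# With the 22 mHz residual cosmic background over a hypothetical 1 s
# window, a single surviving event is unlikely to be background
p = p_value(1, 0.022, 1.0)
```

For $n_{\mathrm{observed}} = 1$ this reduces to $1 - e^{-\mu}$, about 2% here, showing why pushing the background rate down is what gives the detector its discriminating power.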
## 6 Conclusions and outlook
In this paper we have described two of the most recent techniques developed by
the ALPHA collaboration in our search for trapped antihydrogen. Evaporative
cooling of antiprotons has the potential to greatly increase the number of
low-energy, trappable atoms produced in our experiment. The use of our unique
annihilation vertex imaging detector to discriminate with high power between
annihilations and cosmic rays will be a vital tool to identify the first
trapped antihydrogen atoms. We have integrated both of these techniques into
our experiment and are hopeful of soon being able to report detection of
trapped antihydrogen.
###### Acknowledgements.
This work was supported by CNPq, FINEP/RENAFAE (Brazil), ISF (Israel), MEXT
(Japan), FNU (Denmark), VR (Sweden), NSERC, NRC/TRIUMF, AIF (Canada), DOE, NSF
(USA), EPSRC and the Leverhulme Trust (UK). We are also grateful to the AD
team for the delivery of a high-quality antiproton beam, and to CERN for its
technical support.
### Note added in proof
Since the preparation of this article, trapping of antihydrogen atoms has
been achieved by the ALPHA collaboration Andresen _et al._ (2010c).
## References
* Amoretti _et al._ (2002) M. Amoretti _et al._ (ATHENA), Nature 419, 456 (2002).
* Pritchard (1983) D. E. Pritchard, Phys. Rev. Lett. 51, 1336 (1983).
* Bertsche _et al._ (2006) W. Bertsche _et al._ (ALPHA), Nuc. Inst. Meth. A 566, 746 (2006).
* Fajans, Madsen, and Robicheaux (2008) J. Fajans, N. Madsen, and F. Robicheaux, Phys. Plasmas 15, 032108 (2008).
* Gilson and Fajans (2003) E. P. Gilson and J. Fajans, Phys. Rev. Lett. 90, 015001 (2003).
* Beck, Fajans, and Malmberg (1996) B. R. Beck, J. Fajans, and J. H. Malmberg, Phys. Plasmas 3, 1250 (1996).
* Eggleston _et al._ (1992) D. L. Eggleston _et al._ , Phys. Fluids B 4, 3432 (1992).
* Amoretti _et al._ (2003) M. Amoretti _et al._ , Phys. Rev. Lett. 91, 055001 (2003).
* Davis _et al._ (1995) K. B. Davis _et al._ , Phys. Rev. Lett. 74, 5202 (1995).
* Currell and Fussmann (2005) F. Currell and G. Fussmann, IEEE Trans. Plasma Sci. 33, 1763 (2005).
* Ketterle and van Druten (1995) W. Ketterle and N. J. van Druten, Adv. At. Mol. Opt. Phys. 37 (1995).
* Andresen _et al._ (2010a) G. B. Andresen _et al._ (ALPHA), Phys. Rev. Lett. 105, 013003 (2010a).
* Andresen _et al._ (2009) G. B. Andresen _et al._ (ALPHA), Rev. Sci. Inst. 80, 123701 (2009).
* O’Neil (1980) T. M. O’Neil, Phys. Fluids 23, 2216 (1980).
* Charlton (1990) M. Charlton, Phys. Lett. A 143, 143 (1990).
* Gabrielse _et al._ (1988) G. Gabrielse _et al._ , Phys. Lett. A 129, 38 (1988).
* Fujiwara _et al._ (2004) M. C. Fujiwara _et al._ (ATHENA), Phys. Rev. Lett. 92, 065005 (2004).
* Andresen _et al._ (2010b) G. B. Andresen _et al._ (ALPHA), Phys. Lett. B 685, 141 (2010b).
* Amsler _et al._ (2008) C. Amsler _et al._ (Particle Data Group), Phys. Lett. B 667, 1 (2008).
* Andresen _et al._ (2010c) G. B. Andresen _et al._ (ALPHA), Nature 468, 673 (2010c).